Updates from: 06/09/2021 03:10:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/conditional-access-user-flow.md
Title: Add Conditional Access to a user flow in Azure AD B2C
description: Learn how to add Conditional Access to your Azure AD B2C user flows. Configure multi-factor authentication (MFA) settings and Conditional Access policies in your user flows to enforce policies and remediate risky sign-ins.
Previously updated : 05/13/2021 Last updated : 06/03/2021
zone_pivot_groups: b2c-policy-type

# Add Conditional Access to user flows in Azure Active Directory B2C

[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]

Conditional Access can be added to your Azure Active Directory B2C (Azure AD B2C) user flows or custom policies to manage risky sign-ins to your applications. Azure Active Directory (Azure AD) Conditional Access is the tool used by Azure AD B2C to bring signals together, make decisions, and enforce organizational policies.

![Conditional access flow](media/conditional-access-user-flow/conditional-access-flow.png)

Automating risk assessment with policy conditions means risky sign-ins are identified immediately and then either remediated or blocked.

## Service overview
-Azure AD B2C evaluates each sign-in event and ensures that all policy requirements are met before granting the user access. During this **Evaluation** phase, the Conditional Access service evaluates the signals collected by Identity Protection risk detections during sign-in events. The outcome of this evaluation process is a set of claims that indicates whether the sign-in should be granted or blocked. The Azure AD B2C policy uses these claims to take an action within the user flow, such as blocking access or challenging the user with a specific remediation like multi-factor authentication (MFA). "Block access" overrides all other settings.
-
+Azure AD B2C evaluates each sign-in event and ensures that all policy requirements are met before granting the user access. During this **Evaluation** phase, the Conditional Access service evaluates the signals collected by Identity Protection risk detections during sign-in events. The outcome of this evaluation process is a set of claims that indicates whether the sign-in should be granted or blocked. The Azure AD B2C policy uses these claims to act within the user flow. An example is blocking access or challenging the user with a specific remediation like multi-factor authentication (MFA). "Block access" overrides all other settings.
::: zone pivot="b2c-custom-policy" The following example shows a Conditional Access technical profile that is used to evaluate the sign-in threat.- ```XML <TechnicalProfile Id="ConditionalAccessEvaluation"> <DisplayName>Conditional Access Provider</DisplayName>
The following example shows a Conditional Access technical profile that is used
... </TechnicalProfile> ```- To ensure that Identity Protection signals are evaluated properly, you'll want to call the `ConditionalAccessEvaluation` technical profile for all users, including both [local and social accounts](technical-overview.md#consumer-accounts). Otherwise, Identity Protection will indicate an incorrect degree of risk associated with users.- ::: zone-end- In the *Remediation* phase that follows, the user is challenged with MFA. Once complete, Azure AD B2C informs Identity Protection that the identified sign-in threat has been remediated and by which method. In this example, Azure AD B2C signals that the user has successfully completed the multi-factor authentication challenge.-
-The remediation may also happen through other channels. For example, when the account's password is reset, either by the administrator or by the user. You can check the the user *Risk state* in the [risky users report](identity-protection-investigate-risk.md#navigating-the-risky-users-report).
-
+The remediation may also happen through other channels. For example, when the account's password is reset, either by the administrator or by the user. You can check the user *Risk state* in the [risky users report](identity-protection-investigate-risk.md#navigating-the-risky-users-report).
::: zone pivot="b2c-custom-policy"- > [!IMPORTANT] > To remediate the risk successfully within the journey, make sure the *Remediation* technical profile is called after the *Evaluation* technical profile is executed. If *Evaluation* is invoked without *Remediation*, the risk state will be *At risk*.- When the *Evaluation* technical profile recommendation returns `Block`, the call to the *Evaluation* technical profile is not required. The risk state is set to *At risk*.- The following example shows a Conditional Access technical profile used to remediate the identified threat:- ```XML <TechnicalProfile Id="ConditionalAccessRemediation"> <DisplayName>Conditional Access Remediation</DisplayName>
The following example shows a Conditional Access technical profile used to remed
... </TechnicalProfile> ```- ::: zone-end- ## Components of the solution- These are the components that enable Conditional Access in Azure AD B2C:- - **User flow** or **custom policy** that guides the user through the sign-in and sign-up process. - **Conditional Access policy** that brings signals together to make decisions and enforce organizational policies. When a user signs into your application via an Azure AD B2C policy, the Conditional Access policy uses Azure AD Identity Protection signals to identify risky sign-ins and presents the appropriate remediation action. - **Registered application** that directs users to the appropriate Azure AD B2C user flow or custom policy. - [TOR Browser](https://www.torproject.org/download/) to simulate a risky sign-in.- ## Service limitations and considerations- When using the Azure AD Conditional Access, consider the following:- - Identity Protection is available for both local and social identities, such as Google or Facebook. For social identities, you need to manually activate Conditional Access. Detection is limited because social account credentials are managed by the external identity provider.-- In Azure AD B2C tenants, only a subset of [Azure AD Conditional Access](../active-directory/conditional-access/overview.md) policies are available.-
+- In Azure AD B2C tenants, only a subset of [Azure AD Conditional Access](../active-directory/conditional-access/overview.md) policies is available.
## Prerequisites

[!INCLUDE [active-directory-b2c-customization-prerequisites-custom-policy](../../includes/active-directory-b2c-customization-prerequisites-custom-policy.md)]

## Pricing tier

Azure AD B2C **Premium P2** is required to create risky sign-in policies. **Premium P1** tenants can create policies based on location, application, users, or groups. For more information, see [Change your Azure AD B2C pricing tier](billing.md#change-your-azure-ad-pricing-tier).

## Prepare your Azure AD B2C tenant

To add a Conditional Access policy, disable security defaults:

1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
3. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
4. Select **Properties**, and then select **Manage Security defaults**.

   ![Disable the security defaults](media/conditional-access-user-flow/disable-security-defaults.png)

5. Under **Enable Security defaults**, select **No**.

   ![Set the Enable security defaults toggle to No](media/conditional-access-user-flow/enable-security-defaults-toggle.png)

## Add a Conditional Access policy
-A Conditional Access policy is an if-then statement of assignments and access controls. A Conditional Access policy brings signals together to make decisions and enforce organizational policies. The logical operator between the assignments is *And*. The operator in each assignment is *Or*.
+A Conditional Access policy is an if-then statement of assignments and access controls. A Conditional Access policy brings signals together to make decisions and enforce organizational policies.
-![Conditional access assignments](media/conditional-access-user-flow/conditional-access-assignments.png)
+> [!TIP]
+> In this step, you configure the Conditional Access policy. We recommend using one of the following templates: [Template 1: Sign-in risk-based Conditional Access](#template-1-sign-in-risk-based-conditional-access), [Template 2: User risk-based Conditional Access](#template-2-user-risk-based-conditional-access), or [Template 3: Block locations with Conditional Access](#template-3-block-locations-with-conditional-access). You can configure the Conditional Access policy through the Azure portal or the Microsoft Graph API.
-To add a Conditional Access policy:
+The logical operator between the assignments is *And*. The operator in each assignment is *Or*.
+![Conditional access assignments](media/conditional-access-user-flow/conditional-access-assignments.png)
+To add a Conditional Access policy:
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Under **Security**, select **Conditional Access**. The **Conditional Access Policies** page opens.
1. Select **+ New policy**.
1. Enter a name for the policy, such as *Block risky sign-in*.
1. Under **Assignments**, choose **Users and groups**, and then select one of the following supported configurations:
- |Include |License | Notes |
- ||||
- |**All users** | P1, P2 |If you choose to include **All Users**, this policy will affect all of your users. To be sure not to lock yourself out, exclude your administrative account by choosing **Exclude**, selecting **Directory roles**, and then selecting **Global Administrator** in the list. You can also select **Users and Groups** and then select your account in the **Select excluded users** list. |
-
-1. Select **Cloud apps or actions**, and then **Select apps**. Browse for your [relying party application](tutorial-register-applications.md).
+| Include |License | Notes|
+||||
+|**All users** | P1, P2 | If you choose to include **All Users**, this policy will affect all of your users. To be sure not to lock yourself out, exclude your administrative account by choosing **Exclude**, selecting **Directory roles**, and then selecting **Global Administrator** in the list. You can also select **Users and Groups** and then select your account in the **Select excluded users** list. |
+1. Select **Cloud apps or actions**, and then **Select apps**. Browse for your [relying party application](tutorial-register-applications.md).
1. Select **Conditions**, and then select from the following conditions. For example, select **Sign-in risk** and **High**, **Medium**, and **Low** risk levels.
-
- |Condition |License |Notes |
- ||||
- |**User risk**|P2|User risk represents the probability that a given identity or account is compromised.|
- |**Sign-in risk**|P2|Sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner.|
- |**Device platforms**|Not supported| Characterized by the operating system that runs on a device. For more information, see [Device platforms](../active-directory/conditional-access/concept-conditional-access-conditions.md#device-platforms).|
- |**Locations**|P1, P2|Named locations may include the public IPv4 network information, country or region, or unknown areas that don't map to specific countries or regions. For more information, see [Locations](../active-directory/conditional-access/concept-conditional-access-conditions.md#locations). |
-
+
+|Condition|License |Notes |
+||||
+| **User risk** | P2 |User risk represents the probability that a given identity or account is compromised. |
+| **Sign-in risk** | P2 |Sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. |
+| **Device platforms** |Not supported |Characterized by the operating system that runs on a device. For more information, see [Device platforms](../active-directory/conditional-access/concept-conditional-access-conditions.md#device-platforms). |
+| **Locations** | P1, P2 |Named locations may include the public IPv4 network information, country or region, or unknown areas that don't map to specific countries or regions. For more information, see [Locations](../active-directory/conditional-access/concept-conditional-access-conditions.md#locations). |
+
1. Under **Access controls**, select **Grant**. Then select whether to block or grant access:
-
- |Option |License |Note |
- ||||
- |**Block access**|P1, P2| Prevents access based on the conditions specified in this conditional access policy.|
- |**Grant access** with **Require multi-factor authentication**|P1, P2|Based on the conditions specified in this conditional access policy, the user is required to go through Azure AD B2C multi-factor authentication.|
+
+|Option | License | Note |
+||||
+| **Block access** |P1, P2| Prevents access based on the conditions specified in this conditional access policy. |
+| **Grant access** with **Require multi-factor authentication** | P1, P2| Based on the conditions specified in this conditional access policy, the user is required to go through Azure AD B2C multi-factor authentication. |
1. Under **Enable policy**, select one of the following:
-
- |Option |License |Note |
- ||||
- |**Report-only**|P1, P2| Report-only allows administrators to evaluate the impact of Conditional Access policies before enabling them in their environment. We recommend you check policy with this state, and determine the impact to end users without requiring multi-factor authentication or blocking users. For more information, see [Review Conditional Access outcomes in the audit report](#review-conditional-access-outcomes-in-the-audit-report)|
- | **On**| P1, P2| The access policy is evaluated and not enforced. |
- | **Off** | P1, P2| The access policy is not activated and has no effect on the users. |
+
+| Option | License | Note |
+||||
+|**Report-only** | P1, P2 | Report-only allows administrators to evaluate the impact of Conditional Access policies before enabling them in their environment. We recommend you check the policy with this state, and determine the impact to end users without requiring multi-factor authentication or blocking users. For more information, see [Review Conditional Access outcomes in the audit report](#review-conditional-access-outcomes-in-the-audit-report). |
+|**On** | P1, P2 |The access policy is evaluated and enforced. |
+|**Off** | P1, P2 | The access policy is not activated and has no effect on the users. |
1. Enable your test Conditional Access policy by selecting **Create**.
-## Conditional Access Template 1: Sign-in risk-based Conditional Access
+## Template 1: Sign-in risk-based Conditional Access
Most users have a normal behavior that can be tracked. When they fall outside of this norm, it could be risky to allow them to just sign in. You may want to block that user, or maybe just ask them to perform multi-factor authentication to prove that they are really who they say they are.
-A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Azure AD B2C tenants with P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection sign-in risk detections](../active-directory/identity-protection/concept-identity-protection-risks.md#sign-in-risk). Please note the [limitations on Identity Protection detections for B2C](./identity-protection-investigate-risk.md?pivots=b2c-user-flow#service-limitations-and-considerations).
-
+A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Azure AD B2C tenants with P2 licenses can create Conditional Access policies incorporating [Azure AD Identity Protection sign-in risk detections](../active-directory/identity-protection/concept-identity-protection-risks.md#sign-in-risk). Note the [limitations on Identity Protection detections for B2C](./identity-protection-investigate-risk.md?pivots=b2c-user-flow#service-limitations-and-considerations).
If risk is detected, users can perform multi-factor authentication to self-remediate and close the risky sign-in event to prevent unnecessary noise for administrators.

Configure Conditional Access through the Azure portal or Microsoft Graph APIs to enable a sign-in risk-based Conditional Access policy requiring MFA when the sign-in risk is *medium* or *high*.
-### Enable with Conditional Access policy
+To configure your Conditional Access policy:
1. Sign in to the **Azure portal**.
2. Browse to **Azure AD B2C** > **Security** > **Conditional Access**.
Configure Conditional Access through the Azure portal or Microsoft Graph APIs to
4. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
5. Under **Assignments**, select **Users and groups**.
 1. Under **Include**, select **All users**.
- 2. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+ 2. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
 3. Select **Done**.
6. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
7. Under **Conditions** > **Sign-in risk**, set **Configure** to **Yes**. Under **Select the sign-in risk level this policy will apply to**:
Configure Conditional Access through the Azure portal or Microsoft Graph APIs to
9. Confirm your settings and set **Enable policy** to **On**.
10. Select **Create** to create and enable your policy.
-### Enable with Conditional Access APIs (optional)
+### Enable template 1 with Conditional Access APIs (optional)
Create a sign-in risk-based Conditional Access policy with Microsoft Graph APIs. For more information, see [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#graph-api).

The following template can be used to create a Conditional Access policy with display name "Template 1: Require MFA for medium+ sign-in risk" in report-only mode.

```json
{
    "displayName": "Template 1: Require MFA for medium+ sign-in risk",
    ...
}
```
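The hunk above elides the body of this template. By analogy with the Template 2 and Template 3 bodies shown later in this section, a plausible reconstruction looks like the following sketch; the risk levels, excluded-user GUID, and grant controls are assumptions mirrored from those templates rather than text from this commit.

```json
{
    "displayName": "Template 1: Require MFA for medium+ sign-in risk",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "signInRiskLevels": [
            "high",
            "medium"
        ],
        "applications": {
            "includeApplications": [
                "All"
            ]
        },
        "users": {
            "includeUsers": [
                "All"
            ],
            "excludeUsers": [
                "f753047e-de31-4c74-a6fb-c38589047723"
            ]
        }
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": [
            "mfa"
        ]
    }
}
```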
-## Add Conditional Access to a user flow
+## Template 2: User risk-based Conditional Access
-After you've added the Azure AD Conditional Access policy, enable Conditional Access in your user flow or custom policy. When you enable Conditional Access, you don't need to specify a policy name.
+Identity Protection can calculate what it believes is normal for a user's behavior and use that as a baseline for risk decisions. User risk is a calculation of the probability that an identity has been compromised. B2C tenants with P2 licenses can create Conditional Access policies incorporating user risk. When a user is detected as at risk, you can require that they securely change their password to remediate the risk and gain access to their account. We highly recommend setting up a user risk policy to require a secure password change so users can self-remediate.
-Multiple Conditional Access policies may apply to an individual user at any time. In this case, the most strict access control policy takes precedence. For example, if one policy requires MFA while the other blocks access, the user will be blocked.
+Learn more about [user risk in Identity Protection](../active-directory/identity-protection/concept-identity-protection-risks.md#user-risk), taking into account the [limitations on Identity Protection detections for B2C](identity-protection-investigate-risk.md#service-limitations-and-considerations).
-## Enable multi-factor authentication (optional)
+Configure Conditional Access through the Azure portal or Microsoft Graph APIs to enable a user risk-based Conditional Access policy requiring multi-factor authentication (MFA) and password change when user risk is *medium* or *high*.
-When adding Conditional Access to a user flow, consider using **Multi-factor authentication (MFA)**. Users can use a one-time code via SMS or voice, or a one-time password via email for multi-factor authentication. MFA settings are configured separately from Conditional Access settings. You can choose from these MFA options:
+To configure your user risk-based Conditional Access policy:
+1. Sign in to the **Azure portal**.
+2. Browse to **Azure AD B2C** > **Security** > **Conditional Access**.
+3. Select **New policy**.
+4. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+5. Under **Assignments**, select **Users and groups**.
+ 1. Under **Include**, select **All users**.
+ 2. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+ 3. Select **Done**.
+6. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+7. Under **Conditions** > **User risk**, set **Configure** to **Yes**. Under **Configure user risk levels needed for policy to be enforced**:
+ 1. Select **High** and **Medium**.
+ 2. Select **Done**.
+8. Under **Access controls** > **Grant**, select **Grant access**, **Require password change**, and select **Select**. **Require multi-factor authentication** will also be required by default.
+9. Confirm your settings and set **Enable policy** to **On**.
+10. Select **Create** to create and enable your policy.
+### Enable template 2 with Conditional Access APIs (optional)
+
+To create a user risk-based Conditional Access policy with Conditional Access APIs, refer to the documentation for [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#graph-api).
+
+The following template can be used to create a Conditional Access policy with display name "Template 2: Require secure password change for medium+ user risk" in report-only mode.
+```json
+{
+ "displayName": "Template 2: Require secure password change for medium+ user risk",
+ "state": "enabledForReportingButNotEnforced",
+ "conditions": {
+    "userRiskLevels": [
+      "high",
+      "medium"
+    ],
+ "applications": {
+ "includeApplications": [
+ "All"
+ ]
+ },
+ "users": {
+ "includeUsers": [
+ "All"
+ ],
+ "excludeUsers": [
+ "f753047e-de31-4c74-a6fb-c38589047723"
+ ]
+ }
+ },
+ "grantControls": {
+ "operator": "AND",
+ "builtInControls": [
+ "mfa",
+ "passwordChange"
+ ]
+ }
+}
+```
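As a usage sketch that isn't part of the commit above: a template body like this one is created by sending it to the Microsoft Graph Conditional Access policies endpoint covered in the linked Conditional Access APIs article. The `{access-token}` placeholder is an assumption; any Graph client granted the `Policy.ReadWrite.ConditionalAccess` permission can make the call.

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Authorization: Bearer {access-token}
Content-Type: application/json

{
    "displayName": "Template 2: Require secure password change for medium+ user risk",
    "state": "enabledForReportingButNotEnforced",
    ...
}
```

A successful request returns `201 Created` with the new policy object, including its `id`.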
+
+## Template 3: Block locations with Conditional Access
+
+With the location condition in Conditional Access, you can control access to your cloud apps based on the network location of a user. More information about the location condition in Conditional Access can be found in the article,
+[Using the location condition in a Conditional Access policy](../active-directory/conditional-access/location-condition.md).
+
+Configure Conditional Access through Azure portal or Microsoft Graph APIs to enable a Conditional Access policy blocking access to specific locations.
+
+### Define locations
+1. Sign in to the **Azure portal**.
+2. Browse to **Azure AD B2C** > **Security** > **Conditional Access** > **Named Locations**.
+3. Select **Countries location** or **IP ranges location**.
+4. Give your location a name.
+5. Provide the IP ranges or select the Countries/Regions for the location you are specifying. If you choose Countries/Regions, you can optionally choose to include unknown areas.
+6. Choose **Save**.
+
+To enable the Conditional Access policy:
+
+1. Sign in to the **Azure portal**.
+2. Browse to **Azure AD B2C** > **Security** > **Conditional Access**.
+3. Select **New policy**.
+4. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+5. Under **Assignments**, select **Users and groups**.
+ 1. Under **Include**, select **All users**.
+ 2. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+ 3. Select **Done**.
+6. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+7. Under **Conditions** > **Location**:
+ 1. Set **Configure** to **Yes**.
+ 2. Under **Include**, select **Selected locations**.
+ 3. Select the named location you created.
+ 4. Click **Select**.
+8. Under **Access controls**, select **Block access**, and then select **Select**.
+9. Confirm your settings and set **Enable policy** to **On**.
+10. Select **Create** to create and enable your policy.
+
+### Enable template 3 with Conditional Access APIs (optional)
+
+To create a location-based Conditional Access policy with Conditional Access APIs, refer to the documentation for [Conditional Access APIs](../active-directory/conditional-access/howto-conditional-access-apis.md#graph-api). To set up Named Locations, refer to the documentation for [Named Locations](/graph/api/resources/namedlocation).
+
+The following template can be used to create a Conditional Access policy with display name "Template 3: Block unallowed locations" in report-only mode.
+```json
+{
+ "displayName": "Template 3: Block unallowed locations",
+ "state": "enabledForReportingButNotEnforced",
+ "conditions": {
+ "applications": {
+ "includeApplications": [
+ "All"
+ ]
+ },
+ "users": {
+ "includeUsers": [
+ "All"
+ ],
+ "excludeUsers": [
+ "f753047e-de31-4c74-a6fb-c38589047723"
+ ]
+ },
+ "locations": {
+ "includeLocations": [
+ "b5c47916-b835-4c77-bd91-807ec08bf2a3"
+ ]
+ }
+ },
+ "grantControls": {
+ "operator": "OR",
+ "builtInControls": [
+ "block"
+ ]
+ }
+}
+```
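The GUID in `includeLocations` above is the `id` of a named location. As a sketch that isn't part of the commit, a country-based named location like the one created in the portal steps earlier can also be created through the Graph namedLocations endpoint referenced above; the display name and country codes here are illustrative assumptions:

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations
Authorization: Bearer {access-token}
Content-Type: application/json

{
    "@odata.type": "#microsoft.graph.countryNamedLocation",
    "displayName": "Unallowed locations",
    "countriesAndRegions": [
        "KP",
        "IR"
    ],
    "includeUnknownCountriesAndRegions": true
}
```

The `id` returned in the response is the value to place in the policy's `includeLocations` array.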
+## Add Conditional Access to a user flow
+After you've added the Azure AD Conditional Access policy, enable Conditional Access in your user flow or custom policy. When you enable Conditional Access, you don't need to specify a policy name.
+Multiple Conditional Access policies may apply to an individual user at any time. In this case, the strictest access control policy takes precedence. For example, if one policy requires MFA while the other blocks access, the user will be blocked.
+## Enable multi-factor authentication (optional)
+When adding Conditional Access to a user flow, consider using **Multi-factor authentication (MFA)**. Users can use a one-time code via SMS or voice, or a one-time password via email for multi-factor authentication. MFA settings are configured separately from Conditional Access settings. You can choose from these MFA options:
- **Off** - MFA is never enforced during sign-in, and users are not prompted to enroll in MFA during sign-up or sign-in.
- **Always on** - MFA is always required, regardless of your Conditional Access setup. During sign-up, users are prompted to enroll in MFA. During sign-in, if users aren't already enrolled in MFA, they're prompted to enroll.
- **Conditional** - During sign-up and sign-in, users are prompted to enroll in MFA (both new users and existing users who aren't enrolled in MFA). During sign-in, MFA is enforced only when an active Conditional Access policy evaluation requires it:
  - If the result is an MFA challenge with no risk, MFA is enforced. If the user isn't already enrolled in MFA, they're prompted to enroll.
  - If the result is an MFA challenge due to risk *and* the user is not enrolled in MFA, sign-in is blocked.

> [!NOTE]
> With general availability of Conditional Access in Azure AD B2C, users are now prompted to enroll in an MFA method during sign-up. Any sign-up user flows you created prior to general availability won't automatically reflect this new behavior, but you can include the behavior by creating new user flows.

::: zone pivot="b2c-user-flow"

To enable Conditional Access for a user flow, make sure the version supports Conditional Access. These user flow versions are labeled **Recommended**.

1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
1. Under **Policies**, select **User flows**. Then select the user flow.
1. Select **Properties** and make sure the user flow supports Conditional Access by looking for the setting labeled **Conditional Access**.
-
   ![Configure MFA and Conditional Access in Properties](media/conditional-access-user-flow/add-conditional-access.png)

1. In the **Multifactor authentication** section, select the desired **Type of method**, and then under **MFA enforcement**, select **Conditional**.
-
1. In the **Conditional access** section, select the **Enforce conditional access policies** check box.

1. Select **Save**.

::: zone-end

::: zone pivot="b2c-custom-policy"

## Add Conditional Access to your policy

1. Get the example of a conditional access policy on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies/conditional-access).
1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`.
1. Upload the policy files.
-
### Configure claim other than phone number to be used for MFA

In the Conditional Access policy above, the `DoesClaimExist` claim transformation method checks if a claim contains a value, for example if the `strongAuthenticationPhoneNumber` claim contains a phone number.

The claims transformation isn't limited to the `strongAuthenticationPhoneNumber` claim. Depending on the scenario, you can use any other claim. In the following XML snippet, the `strongAuthenticationEmailAddress` claim is checked instead. The claim you choose must have a valid value, otherwise the `IsMfaRegistered` claim will be set to `False`. When set to `False`, the Conditional Access policy evaluation returns a `Block` grant type, preventing the user from completing the user flow.

```XML
<ClaimsTransformation Id="IsMfaRegisteredCT" TransformationMethod="DoesClaimExist">
  <InputClaims>
    ...
  </OutputClaims>
</ClaimsTransformation>
```

## Test your custom policy

1. Select the `B2C_1A_signup_signin_with_ca` or `B2C_1A_signup_signin_with_ca_whatif` policy to open its overview page. Then select **Run user flow**. Under **Application**, select *webapp1*. The **Reply URL** should show `https://jwt.ms`.
1. Copy the URL under **Run user flow endpoint**.
1. To simulate a risky sign-in, open the [Tor Browser](https://www.torproject.org/download/) and use the URL you copied in the previous step to sign in to the registered app.
1. Enter the requested information in the sign-in page, and then attempt to sign in. The token is returned to `https://jwt.ms` and should be displayed to you. In the jwt.ms decoded token, you should see that the sign-in was blocked.

::: zone-end

::: zone pivot="b2c-user-flow"

## Test your user flow

1. Select the user flow you created to open its overview page, and then select **Run user flow**. Under **Application**, select *webapp1*. The **Reply URL** should show `https://jwt.ms`.
1. Copy the URL under **Run user flow endpoint**.
1. To simulate a risky sign-in, open the [Tor Browser](https://www.torproject.org/download/) and use the URL you copied in the previous step to sign in to the registered app.
1. Enter the requested information in the sign-in page, and then attempt to sign in. The token is returned to `https://jwt.ms` and should be displayed to you. In the jwt.ms decoded token, you should see that the sign-in was blocked.

::: zone-end

## Review Conditional Access outcomes in the audit report

To review the result of a Conditional Access event:

1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Select the **Directory + Subscription** icon in the portal toolbar, and then select the directory that contains your Azure AD B2C tenant.
3. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
4. Under **Activities**, select **Audit logs**.
5. Filter the audit log by setting **Category** to **B2C** and setting **Activity Resource Type** to **IdentityProtection**. Then select **Apply**.
6. Review audit activity for up to the last seven days. The following types of activity are included:
   - **Evaluate conditional access policies**: This audit log entry indicates that a Conditional Access evaluation was performed during an authentication.
   - **Remediate user**: This entry indicates that the grant or requirements of a Conditional Access policy were met by the end user, and this activity was reported to the risk engine to mitigate (reduce the risk of) the user.
7. Select an **Evaluate conditional access policy** log entry in the list to open the **Activity Details: Audit log** page, which shows the audit log identifiers, along with this information in the **Additional Details** section:
   - **ConditionalAccessResult**: The grant required by the conditional policy evaluation.
   - **AppliedPolicies**: A list of all the Conditional Access policies where the conditions were met and the policies are ON.
   - **ReportingPolicies**: A list of the Conditional Access policies that were set to report-only mode and where the conditions were met.
+
## Next steps

[Customize the user interface in an Azure AD B2C user flow](customize-ui-with-html.md)
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
To enable sign-in for users with an Azure AD account from a specific Azure AD or
If you want to get the `family_name` and `given_name` claims from Azure AD, you can configure optional claims for your application in the Azure portal UI or application manifest. For more information, see [How to provide optional claims to your Azure AD app](../active-directory/develop/active-directory-optional-claims.md).
-1. Sign in to the [Azure portal](https://portal.azure.com). Search for and select **Azure Active Directory**.
+1. Sign in to the [Azure portal](https://portal.azure.com) using your organizational Azure AD tenant. Search for and select **Azure Active Directory**.
1. From the **Manage** section, select **App registrations**.
1. Select the application you want to configure optional claims for in the list.
1. From the **Manage** section, select **Token configuration**.
active-directory-domain-services Troubleshoot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/troubleshoot-alerts.md
Previously updated : 07/09/2020 Last updated : 06/07/2021
This alert is generated when one of these required resources is deleted. If the
1. In the health page, select the alert with the ID *AADDS109*.
1. The alert has a timestamp for when it was first found. If that timestamp is less than 4 hours ago, the Azure platform may be able to automatically recreate the resource and resolve the alert by itself.
- If the alert is more than 4 hours old, the managed domain is in an unrecoverable state. [Delete the managed domain](delete-aadds.md) and then [create a replacement managed domain](tutorial-create-instance.md).
+ For different reasons, the alert may be older than 4 hours. In that case, you can [delete the managed domain](delete-aadds.md) and then [create a replacement managed domain](tutorial-create-instance.md) for an immediate fix, or you can open a support request to fix the instance. Depending on the nature of the problem, support may require a restore from backup.
+
## AADDS110: The subnet associated with your managed domain is full
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning.md
In Azure Active Directory (Azure AD), the term **app provisioning** refers to au
![architecture](./media/user-provisioning/arch-1.png)
-In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and more.
-
Azure AD to SaaS application provisioning refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and more. Azure AD supports provisioning users into SaaS applications as well as applications hosted on-premises or an IaaS solution such as a virtual machine. You may have a legacy application that relies on an LDAP user store or a SQL DB. The Azure AD provisioning service allows you to create, update, and delete users in on-premises applications without having to open up firewalls or deal with TCP ports.
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-phone-options.md
Previously updated : 01/22/2021 Last updated : 06/08/2021
With phone call verification during SSPR or Azure AD Multi-Factor Authentication
If you have problems with phone authentication for Azure AD, review the following troubleshooting steps: * ΓÇ£You've hit our limit on verification callsΓÇ¥ or ΓÇ£YouΓÇÖve hit our limit on text verification codesΓÇ¥ error messages during sign-in
- * Microsoft may limit repeated authentication attempts that are perform by the same user in a short period of time. This limitation does not apply to the Microsoft Authenticator or verification code. If you have hit these limits, you can use the Authenticator App, verification code or try to sign in again in a few minutes.
+ * Microsoft may limit repeated authentication attempts that are performed by the same user in a short period of time. This limitation does not apply to the Microsoft Authenticator or verification code. If you have hit these limits, you can use the Authenticator App or verification code, or try to sign in again in a few minutes.
* "Sorry, we're having trouble verifying your account" error message during sign-in * Microsoft may limit or block voice or SMS authentication attempts that are performed by the same user, phone number, or organization due to high number of failed voice or SMS authentication attempts. If you are experiencing this error, you can try another method, such as Authenticator App or verification code, or reach out to your admin for support. * Blocked caller ID on a single device.
active-directory Concept Sspr Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-sspr-howitworks.md
Previously updated : 12/07/2020 Last updated : 06/08/2021
# How it works: Azure AD self-service password reset
-Azure Active Directory (Azure AD) self-service password reset (SSPR) gives users the ability to change or reset their password, with no administrator or help desk involvement. If a user's account is locked or they forget their password, they can follow prompts to unblock themselves and get back to work. This ability reduces help desk calls and loss of productivity when a user can't sign in to their device or an application.
+Azure Active Directory (Azure AD) self-service password reset (SSPR) gives users the ability to change or reset their password, with no administrator or help desk involvement. If a user's account is locked or they forget their password, they can follow prompts to unblock themselves and get back to work. This ability reduces help desk calls and loss of productivity when a user can't sign in to their device or an application. We recommend this video on [how to enable and configure SSPR in Azure AD](https://www.youtube.com/watch?v=rA8TvhNcCvQ).
> [!IMPORTANT] > This conceptual article explains to an administrator how self-service password reset works. If you're an end user already registered for self-service password reset and need to get back into your account, go to [https://aka.ms/sspr](https://aka.ms/sspr).
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-sspr-policy.md
You can also use PowerShell cmdlets to remove the never-expires configuration or
This guidance applies to other providers, such as Intune and Microsoft 365, which also rely on Azure AD for identity and directory services. Password expiration is the only part of the policy that can be changed.

> [!NOTE]
-> Only passwords for user accounts that aren't synchronized through Azure AD Connect can be configured to not expire. For more information about directory synchronization, see [Connect AD with Azure AD](../hybrid/whatis-hybrid-identity.md).
+> By default, only passwords for user accounts that aren't synchronized through Azure AD Connect can be configured to not expire. For more information about directory synchronization, see [Connect AD with Azure AD](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-password-hash-synchronization#password-expiration-policy).
### Set or check the password policies by using PowerShell
active-directory Howto Mfa App Passwords https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-app-passwords.md
In this scenario, you use the following credentials:
## Allow users to create app passwords
-By default, users can't create app passwords. The app passwords feature must be enabled before users can use them. To give users the ability to create app passwords, complete the following steps:
+By default, users can't create app passwords. The app passwords feature must be enabled before users can use them. To give users the ability to create app passwords, an admin needs to complete the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com). 2. Search for and select **Azure Active Directory**, then choose **Users**.
By default, users can't create app passwords. The app passwords feature must be
When users complete their initial registration for Azure AD Multi-Factor Authentication, there's an option to create app passwords at the end of the registration process.
-Users can also create app passwords after registration. For more information and detailed steps for your users, see [What are app passwords in Azure AD Multi-Factor Authentication?](../user-help/multi-factor-authentication-end-user-app-passwords.md)
+Users can also create app passwords after registration. For more information and detailed steps for your users, see the following resources:
+* [What are app passwords in Azure AD Multi-Factor Authentication?](../user-help/multi-factor-authentication-end-user-app-passwords.md)
+* [Create app passwords from the Security info page](https://docs.microsoft.com/azure/active-directory/user-help/security-info-app-passwords)
## Next steps
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-mfasettings.md
If a user's device has been lost or stolen, you can block Azure AD Multi-Factor
### Block a user
-To block a user, complete the following steps:
+To block a user, complete the following steps, or watch [this short video](https://www.youtube.com/watch?v=WdeE1On4S1o&feature=youtu.be).
1. Browse to **Azure Active Directory** > **Security** > **MFA** > **Block/unblock users**.
1. Select **Add** to block a user.
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-nps-extension.md
The NPS extension is meant to work with your existing infrastructure. Make sure
### Licenses
-The NPS Extension for Azure AD Multi-Factor Authentication is available to customers with [licenses for Azure AD Multi-Factor Authentication](./concept-mfa-howitworks.md). Consumption-based licenses for Azure AD Multi-Factor Authentication, such as per user or per authentication licenses, aren't compatible with the NPS extension.
+The NPS Extension for Azure AD Multi-Factor Authentication is available to customers with [licenses for Azure AD Multi-Factor Authentication](./concept-mfa-howitworks.md) (included with Azure AD Premium P1 and Premium P2 or Enterprise Mobility + Security). Consumption-based licenses for Azure AD Multi-Factor Authentication, such as per user or per authentication licenses, aren't compatible with the NPS extension.
### Software
active-directory Howto Mfa Reporting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-reporting.md
Previously updated : 05/15/2020 Last updated : 06/08/2021
Get-MsolUser -All | Where-Object {$_.StrongAuthenticationMethods.Count -eq 0 -an
Identify users and output methods registered:

```powershell
-Get-MsolUser -All | Select-Object @{N='UserPrincipalName';E={$_.UserPrincipalName}},
-
-@{N='MFA Status';E={if ($_.StrongAuthenticationRequirements.State){$_.StrongAuthenticationRequirements.State} else {"Disabled"}}},
-
-@{N='MFA Methods';E={$_.StrongAuthenticationMethods.methodtype}} | Export-Csv -Path c:\MFA_Report.csv -NoTypeInformation
+Get-MsolUser -All | Select-Object @{N='UserPrincipalName';E={$_.UserPrincipalName}},@{N='MFA Status';E={if ($_.StrongAuthenticationRequirements.State){$_.StrongAuthenticationRequirements.State} else {"Disabled"}}},@{N='MFA Methods';E={$_.StrongAuthenticationMethods.methodtype}} | Export-Csv -Path c:\MFA_Report.csv -NoTypeInformation
```

## Downloaded activity reports result codes
active-directory Howto Password Ban Bad On Premises Agent Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-agent-versions.md
# Azure AD Password Protection agent version history
+## 1.2.176.0
+
+Release date: June 4, 2021
+
+* Minor bugfixes to issues which prevented the proxy and DC agents from running successfully in certain environments.
+
## 1.2.172.0
-Release date: February 22nd 2021
+Release date: February 22, 2021
It has been almost two years since the GA versions of the on-premises Azure AD Password Protection agents were released. A new update is now available - see change descriptions below. Thank you to everyone who has given us feedback on the product.
It is supported to run older and newer versions of the DC agent and proxy softwa
## 1.2.125.0
-Release date: March 22nd 2019
+Release date: March 22, 2019
* Fix minor typo errors in event log messages
* Update EULA agreement to final General Availability version
Release date: March 22nd 2019
## 1.2.116.0
-Release date: 3/13/2019
+Release date: March 13, 2019
* The Get-AzureADPasswordProtectionProxy and Get-AzureADPasswordProtectionDCAgent cmdlets now report software version and the current Azure tenant with the following limitations:
  * Software version and Azure tenant data are only available for DC agents and proxies running version 1.2.116.0 or later.
Release date: 3/13/2019
## 1.2.65.0
-Release date: February 1st 2019
+Release date: February 1, 2019
Changes:
Changes:
## 1.2.25.0
-Release date: November 1st 2018
+Release date: November 1, 2018
Fixes:
Changes:
## 1.2.10.0
-Release date: August 17th 2018
+Release date: August 17, 2018
Fixes:
Fixes:
## 1.1.10.3
-Release date:June 15th 2018
+Release date: June 15, 2018
Initial public preview release
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
The following more specific issues may occur with password writeback. If you hav
| Federated, pass-through authentication, or password-hash-synchronized users who attempt to reset their passwords see an error after attempting to submit their password. The error indicates that there was a service problem. <br> <br> In addition to this problem, during password reset operations, you might see an error that the management agent was denied access in your on-premises event logs. | If you see these errors in your event log, confirm that the Active Directory Management Agent (ADMA) account that was specified in the wizard at the time of configuration has the necessary permissions for password writeback. <br> <br> After this permission is given, it can take up to one hour for the permissions to trickle down via the `sdprop` background task on the domain controller (DC). <br> <br> For password reset to work, the permission needs to be stamped on the security descriptor of the user object whose password is being reset. Until this permission shows up on the user object, password reset continues to fail with an access denied message. |
| Federated, pass-through authentication, or password-hash-synchronized users who attempt to reset their passwords, see an error after they submit their password. The error indicates that there was a service problem. <br> <br> In addition to this problem, during password reset operations, you might see an error in your event logs from the Azure AD Connect service indicating an "Object could not be found" error. | This error usually indicates that the sync engine is unable to find either the user object in the Azure AD connector space or the linked metaverse (MV) or Azure AD connector space object. <br> <br> To troubleshoot this problem, make sure that the user is indeed synchronized from on-premises to Azure AD via the current instance of Azure AD Connect and inspect the state of the objects in the connector spaces and MV. Confirm that the Active Directory Certificate Services (AD CS) object is connected to the MV object via the "Microsoft.InfromADUserAccountEnabled.xxx" rule.|
| Federated, pass-through authentication, or password-hash-synchronized users who attempt to reset their passwords see an error after they submit their password. The error indicates that there was a service problem. <br> <br> In addition to this problem, during password reset operations, you might see an error in your event logs from the Azure AD Connect service that indicates that there's a "Multiple matches found" error. | This indicates that the sync engine detected that the MV object is connected to more than one AD CS object via "Microsoft.InfromADUserAccountEnabled.xxx". This means that the user has an enabled account in more than one forest. This scenario isn't supported for password writeback. |
-| Password operations fail with a configuration error. The application event log contains Azure AD Connect error 6329 with the text "0x8023061f (The operation failed because password synchronization is not enabled on this Management Agent)". | This error occurs if the Azure AD Connect configuration is changed to add a new Active Directory forest (or to remove and readd an existing forest) after the password writeback feature has already been enabled. Password operations for users in these recently added forests fail. To fix the problem, disable and then re-enable the password writeback feature after the forest configuration changes have been completed. |
+| Password operations fail with a configuration error. The application event log contains Azure AD Connect error 6329 with the text "0x8023061f (The operation failed because password synchronization is not enabled on this Management Agent)". | This error occurs if the Azure AD Connect configuration is changed to add a new Active Directory forest (or to remove and readd an existing forest) after the password writeback feature has already been enabled. Password operations for users in these recently added forests fail. To fix the problem, disable and then re-enable the password writeback feature after the forest configuration changes have been completed. |
+| SSPR_0029: We are unable to reset your password due to an error in your on-premises configuration. Please contact your admin and ask them to investigate. | Problem: Password writeback has been enabled following all of the required steps, but when attempting to change a password you receive "SSPR_0029: Your organization hasn't properly set up the on-premises configuration for password reset." Checking the event logs on the Azure AD Connect system shows that the management agent credential was denied access. Possible solution: Use RSOP on the Azure AD Connect system and your domain controllers to see if the policy "Network access: Restrict clients allowed to make remote calls to SAM" found under Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options is enabled. Edit the policy to include the MSOL_XXXXXXX management account as an allowed user. |
## Password writeback event log error codes
active-directory How To Gmsa Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/how-to-gmsa-cmdlets.md
# Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets
-The purpose of this document is to describe the Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets. These cmdlets allow you to have more granularity on the permissions that are applied on the service account (gmsa). By default, Azure AD Connect cloud sync applies all permissions similar to Azure AD Connect on the default gmsa or a custom gmsa.
+The purpose of this document is to describe the Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets. These cmdlets allow you to have more granularity on the permissions that are applied on the service account (gMSA). By default, Azure AD Connect cloud sync applies all permissions similar to Azure AD Connect on the default gMSA or a custom gMSA.
This document will cover the following cmdlets:
active-directory Quickstart Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-create-new-tenant.md
If you don't already have an Azure AD tenant or if you want to create a new one
You'll provide the following information to create your new tenant:

- **Organization name**
-- **Initial domain** - This domain is part of *.onmicrosoft.com. You can customize the domain later.
+- **Initial domain** - The initial domain `<domainname>.onmicrosoft.com` can't be edited or deleted. You can add a customized domain name later.
- **Country or region** > [!NOTE]
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-protocols-oidc.md
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `nonce` | Required | A value included in the request, generated by the app, that will be included in the resulting id_token value as a claim. The app can verify this value to mitigate token replay attacks. The value typically is a randomized, unique string that can be used to identify the origin of the request. |
| `response_mode` | Recommended | Specifies the method that should be used to send the resulting authorization code back to your app. Can be `form_post` or `fragment`. For web applications, we recommend using `response_mode=form_post`, to ensure the most secure transfer of tokens to your application. |
| `state` | Recommended | A value included in the request that also will be returned in the token response. It can be a string of any content you want. A randomly generated unique value typically is used to [prevent cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state also is used to encode information about the user's state in the app before the authentication request occurred, such as the page or view the user was on. |
-| `prompt` | Optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, and `consent`. The `prompt=login` claim forces the user to enter their credentials on that request, which negates single sign-on. The `prompt=none` claim is the opposite. This claim ensures that the user isn't presented with any interactive prompt at. If the request can't be completed silently via single sign-on, the Microsoft identity platform returns an error. The `prompt=consent` claim triggers the OAuth consent dialog after the user signs in. The dialog asks the user to grant permissions to the app. |
+| `prompt` | Optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `consent`, and `select_account`. The `prompt=login` parameter forces the user to enter their credentials on that request, which negates single sign-on. The `prompt=none` parameter is the opposite, and should be paired with a `login_hint` to indicate which user must be signed in. These parameters ensure that the user isn't presented with any interactive prompt at all. If the request can't be completed silently via single sign-on (because no user is signed in, the hinted user isn't signed in, or multiple users are signed in and no hint is provided), the Microsoft identity platform returns an error. The `prompt=consent` parameter triggers the OAuth consent dialog after the user signs in. The dialog asks the user to grant permissions to the app. Finally, `select_account` shows the user an account selector, negating silent SSO but allowing the user to pick which account they intend to sign in with, without requiring credential entry. You can't use `login_hint` and `select_account` together.|
| `login_hint` | Optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the username from an earlier sign-in by using the `preferred_username` claim. | | `domain_hint` | Optional | The realm of the user in a federated directory. This skips the email-based discovery process that the user goes through on the sign-in page, for a slightly more streamlined user experience. For tenants that are federated through an on-premises directory like AD FS, this often results in a seamless sign-in because of the existing login session. |
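For illustration, here's a hedged sketch of a silent sign-in request that pairs `prompt=none` with a `login_hint`. The client ID reuses the sample value above; the redirect URI, nonce, and hint are placeholder values.

```HTTP
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&response_type=id_token
&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
&response_mode=form_post
&scope=openid
&nonce=678910
&prompt=none
&login_hint=user%40contoso.com
```

If no session exists for the hinted user, the response contains an error such as `login_required` or `interaction_required` instead of a token.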
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on June 3rd, 2021.
+>This information last updated on June 7th, 2021.
| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) | | | | | | | | APP CONNECT IW | SPZA_IW | 8f0c5670-4e56-4892-b06d-91c085d7004f | SPZA (0bfc98ed-1dbc-4a97-b246-701754e48b17)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | APP CONNECT (0bfc98ed-1dbc-4a97-b246-701754e48b17)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Microsoft 365 Audio Conferencing | MCOMEETADV | 0c266dff-15dd-4b49-8397-2bb16070ed52 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40) | | AZURE ACTIVE DIRECTORY BASIC | AAD_BASIC | 2b9c8e7c-319c-43a2-a2a0-48c5c6161de7 | AAD_BASIC (c4da7f8a-5ee2-4c99-a7e1-87d2df57f6fe) | MICROSOFT AZURE ACTIVE DIRECTORY BASIC (c4da7f8a-5ee2-4c99-a7e1-87d2df57f6fe) |
-| AZURE ACTIVE DIRECTORY PREMIUM P1 | AAD_PREMIUM | 078d2b04-f1bd-4111-bbd4-b4b1b354cef4 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9) |
-| AZURE ACTIVE DIRECTORY PREMIUM P2 | AAD_PREMIUM_P2 | 84a661c4-e949-4bd2-a560-ed7766fcaf2b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998) |
+| AZURE ACTIVE DIRECTORY PREMIUM P1 | AAD_PREMIUM | 078d2b04-f1bd-4111-bbd4-b4b1b354cef4 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0) |
+| AZURE ACTIVE DIRECTORY PREMIUM P2 | AAD_PREMIUM_P2 | 84a661c4-e949-4bd2-a560-ed7766fcaf2b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0) |
| AZURE INFORMATION PROTECTION PLAN 1 | RIGHTSMANAGEMENT | c52ea49f-fe5d-4e95-93ba-1de91d380f89 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) | | COMMON AREA PHONE | MCOCAP | 295a8eb0-f78d-45c7-8b5b-1eed5ed02dff | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | | COMMUNICATIONS CREDITS | MCOPSTNC | 47794cd0-f0e5-45c5-9033-2eb6b5fc84e0 | MCOPSTNC (505e180f-f7e0-4b65-91d4-00d670bbd18c) | COMMUNICATIONS CREDITS (505e180f-f7e0-4b65-91d4-00d670bbd18c) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| DYNAMICS 365 FOR FINANCIALS BUSINESS EDITION | DYN365_FINANCIALS_BUSINESS_SKU | cc13a803-544e-4464-b4e4-6d6169a138fa | DYN365_FINANCIALS_BUSINESS (920656a2-7dd8-4c83-97b6-a356414dbd36)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR FINANCIALS (920656a2-7dd8-4c83-97b6-a356414dbd36) | | DYNAMICS 365 FOR SALES AND CUSTOMER SERVICE ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES_CUSTOMERSERVICE | 8edc2cf8-6438-4fa9-b6e3-aa1660c640cc | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | DYNAMICS 365 FOR SALES ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES | 1e1a282c-9c54-43a2-9310-98ef728faace | DYN365_ENTERPRISE_SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
+| DYNAMICS 365 FOR SUPPLY CHAIN MANAGEMENT | DYN365_SCM | f2e48cb3-9da0-42cd-8464-4a54ce198ad0 | DYN365_CDS_SUPPLYCHAINMANAGEMENT (b6a8b974-2956-4e14-ae81-f0384c363528)<br/>DYN365_REGULATORY_SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>D365_SCM (1224eae4-0d91-474a-8a52-27ec96a63fe7)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | COMMON DATA SERVICE FOR DYNAMICS 365 SUPPLY CHAIN MANAGEMENT (b6a8b974-2956-4e14-ae81-f0384c363528)<br/>DYNAMICS 365 FOR FINANCE AND OPERATIONS, ENTERPRISE EDITION - REGULATORY SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>DYNAMICS 365 FOR SUPPLY CHAIN MANAGEMENT (1224eae4-0d91-474a-8a52-27ec96a63fe7)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| DYNAMICS 365 FOR TEAM MEMBERS ENTERPRISE EDITION | DYN365_ENTERPRISE_TEAM_MEMBERS | 8e7a3d30-d97d-43ab-837c-d7701cef83dc | DYN365_Enterprise_Talent_Attract_TeamMember (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_Enterprise_Talent_Onboard_TeamMember (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYN365_ENTERPRISE_TEAM_MEMBERS (6a54b05e-4fab-40e7-9828-428db3b336fa)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>Dynamics_365_for_Retail_Team_members (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>Dynamics_365_for_Talent_Team_members (d5156635-0704-4f66-8803-93258f8b2678)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TEAM MEMBERS (6a54b05e-4fab-40e7-9828-428db3b336fa)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
+| DYNAMICS 365 P1 TRIAL FOR INFORMATION WORKERS | DYN365_ENTERPRISE_P1_IW | 338148b6-1b11-4102-afb9-f92b6cdc0f8d | DYN365_ENTERPRISE_P1_IW (056a5f80-b4e0-4983-a8be-7ad254a113c9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | DYNAMICS 365 P1 TRIAL FOR INFORMATION WORKERS (056a5f80-b4e0-4983-a8be-7ad254a113c9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| DYNAMICS 365 TALENT: ONBOARD | DYNAMICS_365_ONBOARDING_SKU | b56e7ccc-d5c7-421f-a23b-5c18bdbad7c0 | DYN365_CDS_DYN_APPS (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics_365_Talent_Onboard (048a552e-c849-4027-b54c-4c7ead26150a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | COMMON DATA SERVICE (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (048a552e-c849-4027-b54c-4c7ead26150a)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| DYNAMICS 365 TEAM MEMBERS | DYN365_TEAM_MEMBERS | 7ac9fe77-66b7-4e5e-9e46-10eed1cff547 | DYNAMICS_365_FOR_RETAIL_TEAM_MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYN365_ENTERPRISE_TALENT_ATTRACT_TEAMMEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_ENTERPRISE_TALENT_ONBOARD_TEAMMEMBER (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS_365_FOR_TALENT_TEAM_MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYN365_TEAM_MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYNAMICS 365 TEAM MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72) | | DYNAMICS 365 UNF OPS PLAN ENT EDITION | Dynamics_365_for_Operations | ccba3cfe-71ef-423a-bd87-b6df3dce59a9 | DDYN365_CDS_DYN_P2 (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYN365_TALENT_ENTERPRISE (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>Dynamics_365_for_Operations (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>Dynamics_365_for_Retail (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS_365_HIRING_FREE_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa) | COMMON DATA SERVICE (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYNAMICS 365 FOR TALENT (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>DYNAMICS 365 FOR_OPERATIONS (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>DYNAMICS 365 FOR RETAIL (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS 365 HIRING FREE PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW FOR DYNAMICS 365(b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS FOR DYNAMICS 365 (0b03f40b-c404-40c3-8651-2aceb74365fa) |
-| ENTERPRISE MOBILITY + SECURITY E3 | EMS | efccb6f7-5641-4e0e-bd10-b4976e1bf68e | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) |
+| ENTERPRISE MOBILITY + SECURITY E3 | EMS | efccb6f7-5641-4e0e-bd10-b4976e1bf68e | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| ENTERPRISE MOBILITY + SECURITY E5 | EMSPREMIUM | b05e124f-c7cc-45a0-a6aa-8cf78c946968 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>MICROSOFT CLOUD APP SECURITY (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>AZURE ADVANCED THREAT PROTECTION (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>AZURE INFORMATION PROTECTION PREMIUM P2 (5689bec4-755d-4753-8b61-40975025187c) | | EXCHANGE ONLINE (PLAN 1) | EXCHANGESTANDARD | 4b9405b0-7788-4568-add1-99614e613b69 | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c) | EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)| | EXCHANGE ONLINE (PLAN 2) | EXCHANGEENTERPRISE | 19ec0d23-8335-4cbd-94ac-6050e30712fa | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0) | EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0) |
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/google-federation.md
Previously updated : 04/30/2021 Last updated : 06/08/2021
Before Google puts these changes into place in the second half of 2021, Microsof
Applications that are migrated to an allowed web-view for authentication won't be affected, and users will be allowed to authenticate via Google as usual.
+If applications aren't migrated to an allowed web-view for authentication, affected Gmail users will see the following screen.
+
+![Google sign-in error if apps are not migrated to system browsers](media/google-federation/google-sign-in-error-ewv.png)
+ We will update this document as dates and further details are shared by Google. ### Distinguishing between CEF/Electron and embedded web-views
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
The following scenarios are not supported for staged rollout:
- Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for all versions, when the user's on-premises UPN is not routable. This scenario will fall back to the WS-Trust endpoint while in staged rollout mode, but will stop working when staged migration is complete and user sign-in no longer relies on the federation server.
+- If you have a non-persistent VDI setup with Windows 10, version 1903 or later, you must remain on a federated domain. Moving to a managed domain isn't supported on non-persistent VDI. For more information, see [Device identity and desktop virtualization](../devices/howto-device-identity-virtual-desktop-infrastructure.md).
+
+- If you use Windows Hello for Business hybrid certificate trust with certificates issued via your federation server acting as a Registration Authority, or if you have smart card users, the scenario isn't supported in a staged rollout.
+ >[!NOTE] >You still need to make the final cutover from federated to cloud authentication by using Azure AD Connect or PowerShell. Staged rollout doesn't switch domains from federated to managed. For more information about domain cutover, see [Migrate from federation to password hash synchronization](plan-migrate-adfs-password-hash-sync.md#step-3-change-the-sign-in-method-to-password-hash-synchronization-and-enable-seamless-sso) and [Migrate from federation to pass-through authentication](plan-migrate-adfs-pass-through-authentication.md#step-2-change-the-sign-in-method-to-pass-through-authentication-and-enable-seamless-sso).
active-directory How To Connect Sync Feature Preferreddatalocation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sync-feature-preferreddatalocation.md
description: Describes how to put your Microsoft 365 user resources close to the
documentationcenter: '' -+ editor: '' ms.assetid:
na ms.devlang: na Previously updated : 11/11/2019 Last updated : 06/08/2021
By setting the attribute **preferredDataLocation**, you can define a user's geo.
> >
-A list of all geos for Microsoft 365 can be found in [Where is your data located?](/microsoft-365/enterprise/o365-data-locations).
-
-The geos in Microsoft 365 available for Multi-Geo are:
-
-| Geo | preferredDataLocation value |
-| | |
-| Asia Pacific | APC |
-| Australia | AUS |
-| Canada | CAN |
-| European Union | EUR |
-| France | FRA |
-| India | IND |
-| Japan | JPN |
-| Korea | KOR |
-| South Africa | ZAF |
-| Switzerland | CHE |
-| United Arab Emirates | ARE |
-| United Kingdom | GBR |
-| United States | NAM |
-
-* If a geo is not listed in this table (for example, South America), then it cannot be used for Multi-Geo.
-
-* Not all Microsoft 365 workloads support the use of setting a user's geo.
+A list of all geos for Microsoft 365 can be found in [Where is your data located?](/microsoft-365/enterprise/o365-data-locations). Azure AD Connect supports all the geos in Microsoft 365.
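For reference, a hedged sketch of setting the attribute directly for a cloud-only user with the MSOnline PowerShell module; synchronized users instead receive the value from on-premises Active Directory through Azure AD Connect, and the UPN below is a placeholder.

```azurepowershell
# Hypothetical UPN; assigns the European Union (EUR) geo to a cloud-only user
Connect-MsolService
Set-MsolUser -UserPrincipalName "user@contoso.com" -PreferredDataLocation "EUR"
```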
+ ### Azure AD Connect support for synchronization
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
+
+ Title: Manage user assigned managed identities - Azure AD
+description: Create user assigned managed identities
+++
+editor:
++
+ms.devlang:
++ Last updated : 06/08/2021+
+zone_pivot_groups: identity-mi-methods
++
+# Manage user-assigned managed identities
++
+Managed identities for Azure resources eliminate the need to manage credentials in code. They allow you to get an Azure Active Directory token that your applications can use when accessing resources that support Azure Active Directory authentication. Azure manages the identity so you don't have to. There are two types of managed identities: system-assigned and user-assigned. The main difference between the two types is that system-assigned managed identities have their lifecycle linked to the resource where they're used, while user-assigned managed identities can be used on multiple resources. You can learn more about managed identities in the managed identities [overview](overview.md).
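As context for how an application consumes such a token, here's a minimal sketch, assuming the code runs on an Azure VM that has a managed identity assigned; the target resource URI is just an example.

```bash
# Request a token from the Azure Instance Metadata Service available inside Azure VMs.
# For a user-assigned identity, append &client_id=<CLIENT ID> to select it explicitly.
curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true
```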
++
+In this article, you learn how to create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal.
+
+## Prerequisites
+
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). **Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types)**.
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before continuing.
+
+## Create a user-assigned managed identity
+
+To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with the Azure subscription to create the user-assigned managed identity.
+2. In the search box, type *Managed Identities*, and under **Services**, click **Managed Identities**.
+3. Click **Add** and enter values in the following fields under the **Create user assigned managed identity** pane:
+ - **Subscription**: Choose the subscription to create the user-assigned managed identity under.
+ - **Resource group**: Choose a resource group to create the user-assigned managed identity in or click **Create new** to create a new resource group.
+ - **Region**: Choose a region to deploy the user-assigned managed identity, for example **West US**.
+ - **Name**: This is the name for your user-assigned managed identity, for example UAI1.
+ ![Create a user-assigned managed identity](media/how-to-manage-ua-identity-portal/create-user-assigned-managed-identity-portal.png)
+4. Click **Review + create** to review the changes.
+5. Click **Create**.
+
+## List user-assigned managed identities
+
+To list/read a user-assigned managed identity, your account needs the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) or [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with the Azure subscription to list the user-assigned managed identities.
+2. In the search box, type *Managed Identities*, and under Services, click **Managed Identities**.
+3. A list of the user-assigned managed identities for your subscription is returned. To see the details of a user-assigned managed identity, click its name.
+
+![List user-assigned managed identity](media/how-to-manage-ua-identity-portal/list-user-assigned-managed-identity-portal.png)
+
+## Delete a user-assigned managed identity
+
+To delete a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+Deleting a user-assigned managed identity doesn't remove it from the VM or resource it was assigned to. To remove the user-assigned managed identity from a VM, see [Remove a user-assigned managed identity from a VM](qs-configure-portal-windows-vm.md#remove-a-user-assigned-managed-identity-from-a-vm).
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with the Azure subscription to delete a user-assigned managed identity.
+2. Select the user-assigned managed identity and click **Delete**.
+3. In the confirmation box, choose **Yes**.
+
+![Delete user-assigned managed identity](media/how-to-manage-ua-identity-portal/delete-user-assigned-managed-identity-portal.png)
+
+## Assign a role to a user-assigned managed identity
+
+To assign a role to a user-assigned managed identity, your account needs the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role assignment.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using an account associated with the Azure subscription to list the user-assigned managed identities.
+2. In the search box, type *Managed Identities*, and under Services, click **Managed Identities**.
+3. A list of the user-assigned managed identities for your subscription is returned. Select the user-assigned managed identity that you want to assign a role to.
+4. Select **Access control (IAM)**, and then select **Add role assignment**.
+
+ ![User-assigned managed identity start](media/how-to-manage-ua-identity-portal/assign-role-screenshot1.png)
+
+5. In the Add role assignment blade, configure the following values, and then click **Save**:
+ - **Role** - the role to assign
+ - **Assign access to** - the resource to assign the user-assigned managed identity
+ - **Select** - the member to assign access to
+
+ ![User-assigned managed identity IAM](media/how-to-manage-ua-identity-portal/assign-role-screenshot2.png)
+++++
+In this article, you learn how to create, list, delete, or assign a role to a user-assigned managed identity using the Azure CLI.
+
+## Prerequisites
++
+> [!IMPORTANT]
+> To modify user permissions when using an app service principal with the CLI, you must grant the service principal additional permissions in the Azure AD Graph API, because portions of the CLI perform GET requests against the Graph API. Otherwise, you may receive an 'Insufficient privileges to complete the operation' message. To do this, go to the app registration in Azure Active Directory, select your app, select **API permissions**, scroll down and select **Azure Active Directory Graph**. From there, select **Application permissions**, and then add the appropriate permissions.
+
+## Create a user-assigned managed identity
+
+To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+Use the [az identity create](/cli/azure/identity#az_identity_create) command to create a user-assigned managed identity. The `-g` parameter specifies the resource group in which to create the user-assigned managed identity, and the `-n` parameter specifies its name. Replace the `<RESOURCE GROUP>` and `<USER ASSIGNED IDENTITY NAME>` parameter values with your own values:
++
+```azurecli-interactive
+az identity create -g <RESOURCE GROUP> -n <USER ASSIGNED IDENTITY NAME>
+```
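As a concrete sketch with hypothetical names, the following creates an identity and then reads back its resource ID, which is handy for later role assignments:

```azurecli-interactive
# Hypothetical resource group and identity names
az identity create -g myResourceGroup -n myUserAssignedIdentity
az identity show -g myResourceGroup -n myUserAssignedIdentity --query id -o tsv
```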
+## List user-assigned managed identities
+
+To list/read a user-assigned managed identity, your account needs the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) or [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+To list user-assigned managed identities, use the [az identity list](/cli/azure/identity#az_identity_list) command. Replace `<RESOURCE GROUP>` with your own value:
+
+```azurecli-interactive
+az identity list -g <RESOURCE GROUP>
+```
+
+In the JSON response, user-assigned managed identities have the value `"Microsoft.ManagedIdentity/userAssignedIdentities"` returned for the `type` key.
+
+`"type": "Microsoft.ManagedIdentity/userAssignedIdentities"`
+
+## Delete a user-assigned managed identity
+
+To delete a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+To delete a user-assigned managed identity, use the [az identity delete](/cli/azure/identity#az_identity_delete) command. The `-n` parameter specifies its name and the `-g` parameter specifies the resource group where the user-assigned managed identity was created. Replace the `<USER ASSIGNED IDENTITY NAME>` and `<RESOURCE GROUP>` parameter values with your own values:
+
+```azurecli-interactive
+az identity delete -n <USER ASSIGNED IDENTITY NAME> -g <RESOURCE GROUP>
+```
+> [!NOTE]
+> Deleting a user-assigned managed identity won't remove the reference from any resource it was assigned to. Remove those references from a VM or virtual machine scale set by using the `az vm identity remove` or `az vmss identity remove` command.
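A sketch of such a removal, using hypothetical names:

```azurecli-interactive
# Removes the user-assigned identity reference from a VM (the identity itself is unaffected)
az vm identity remove -g myResourceGroup -n myVM --identities myUserAssignedIdentity
```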
+
+## Next steps
+
+For a full list of Azure CLI identity commands, see [az identity](/cli/azure/identity).
+
+For information on how to assign a user-assigned managed identity to an Azure VM, see [Configure managed identities for Azure resources on an Azure VM using Azure CLI](qs-configure-cli-windows-vm.md#user-assigned-managed-identity).
++++
+In this article, you learn how to create, list, and delete a user-assigned managed identity using PowerShell.
+
+## Prerequisites
+
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). **Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types)**.
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before continuing.
+- To run the example scripts, you have two options:
+ - Use the [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open using the **Try It** button on the top right corner of code blocks.
+ - Run scripts locally with Azure PowerShell, as described in the next section.
+
+### Configure Azure PowerShell locally
+
+To use Azure PowerShell locally for this article (rather than using Cloud Shell), complete the following steps:
+
+1. Install [the latest version of Azure PowerShell](/powershell/azure/install-az-ps) if you haven't already.
+
+1. Sign in to Azure:
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+1. Install the [latest version of PowerShellGet](/powershell/scripting/gallery/installing-psget#for-systems-with-powershell-50-or-newer-you-can-install-the-latest-powershellget).
+
+ ```azurepowershell
+ Install-Module -Name PowerShellGet -AllowPrerelease
+ ```
+
+ You may need to `Exit` out of the current PowerShell session after you run this command for the next step.
+
+1. Install the prerelease version of the `Az.ManagedServiceIdentity` module to perform the user-assigned managed identity operations in this article:
+
+ ```azurepowershell
+ Install-Module -Name Az.ManagedServiceIdentity -AllowPrerelease
+ ```
+
+## Create a user-assigned managed identity
+
+To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+To create a user-assigned managed identity, use the `New-AzUserAssignedIdentity` command. The `-ResourceGroupName` parameter specifies the resource group in which to create the user-assigned managed identity, and the `-Name` parameter specifies its name. Replace the `<RESOURCE GROUP>` and `<USER ASSIGNED IDENTITY NAME>` parameter values with your own values:
++
+```azurepowershell-interactive
+New-AzUserAssignedIdentity -ResourceGroupName <RESOURCE GROUP> -Name <USER ASSIGNED IDENTITY NAME>
+```
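As a sketch with hypothetical names, the returned object exposes useful properties such as the identity's principal and client IDs:

```azurepowershell-interactive
# Create the identity and inspect its service principal and client IDs
$identity = New-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name myUserAssignedIdentity
$identity.PrincipalId
$identity.ClientId
```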
+
+## List user-assigned managed identities
+
+To list/read a user-assigned managed identity, your account needs the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) or [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+To list user-assigned managed identities, use the `Get-AzUserAssignedIdentity` command. The `-ResourceGroupName` parameter specifies the resource group where the user-assigned managed identity was created. Replace `<RESOURCE GROUP>` with your own value:
+
+```azurepowershell-interactive
+Get-AzUserAssignedIdentity -ResourceGroupName <RESOURCE GROUP>
+```
+
+In the response, user-assigned managed identities have the value `Microsoft.ManagedIdentity/userAssignedIdentities` returned for the `Type` key.
+
+`Type :Microsoft.ManagedIdentity/userAssignedIdentities`
+
+## Delete a user-assigned managed identity
+
+To delete a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+To delete a user-assigned managed identity, use the `Remove-AzUserAssignedIdentity` command. The `-ResourceGroupName` parameter specifies the resource group where the user-assigned identity was created and the `-Name` parameter specifies its name. Replace the `<RESOURCE GROUP>` and `<USER ASSIGNED IDENTITY NAME>` parameter values with your own values:
+
+```azurepowershell-interactive
+Remove-AzUserAssignedIdentity -ResourceGroupName <RESOURCE GROUP> -Name <USER ASSIGNED IDENTITY NAME>
+```
+
+> [!NOTE]
+> Deleting a user-assigned managed identity won't remove the reference from any resource it was assigned to. Identity assignments need to be removed separately.
+
+## Next steps
+
+For a full list of the Azure PowerShell managed identities for Azure resources commands, and more details about them, see [Az.ManagedServiceIdentity](/powershell/module/az.managedserviceidentity#managed_service_identity).
+++++
+In this article, you create a user-assigned managed identity using an Azure Resource Manager template.
+
+It is not possible to list and delete a user-assigned managed identity using an Azure Resource Manager template. See the following articles to list and delete a user-assigned managed identity:
+
+- [List user-assigned managed identity](how-to-manage-ua-identity-cli.md#list-user-assigned-managed-identities)
+- [Delete user-assigned managed identity](how-to-manage-ua-identity-cli.md#delete-a-user-assigned-managed-identity)
+
+## Prerequisites
+
+- If you are unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). **Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types)**.
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before continuing.
+
+## Template creation and editing
+
+As with the Azure portal and scripting, Azure Resource Manager templates provide the ability to deploy new or modified resources defined by an Azure resource group. Several options are available for template editing and deployment, both local and portal-based, including:
+
+- Using a [custom template from the Azure Marketplace](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-from-custom-template), which allows you to create a template from scratch, or base it on an existing common or [quickstart template](https://azure.microsoft.com/documentation/templates/).
+- Deriving from an existing resource group, by exporting a template from either [the original deployment](../../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates), or from the [current state of the deployment](../../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates).
+- Using a local [JSON editor (such as VS Code)](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md), and then uploading and deploying by using PowerShell or CLI.
+- Using the Visual Studio [Azure Resource Group project](../../azure-resource-manager/templates/create-visual-studio-deployment-project.md) to both create and deploy a template.
+
+## Create a user-assigned managed identity
+
+To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+To create a user-assigned managed identity, use the following template. Replace the `<USER ASSIGNED IDENTITY NAME>` value with your own value:
++
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "resourceName": {
+ "type": "string",
+ "metadata": {
+ "description": "<USER ASSIGNED IDENTITY NAME>"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
+ "name": "[parameters('resourceName')]",
+ "apiVersion": "2018-11-30",
+ "location": "[resourceGroup().location]"
+ }
+ ],
+ "outputs": {
+ "identityName": {
+ "type": "string",
+ "value": "[parameters('resourceName')]"
+ }
+ }
+}
+```
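As a hedged sketch, assuming the template above is saved locally as `azuredeploy.json`, it can be deployed to a resource group with the Azure CLI (the names here are placeholders):

```azurecli-interactive
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters resourceName=myUserAssignedIdentity
```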
+## Next steps
+
+For information on how to assign a user-assigned managed identity to an Azure VM using an Azure Resource Manager template, see [Configure managed identities for Azure resources on an Azure VM using a template](qs-configure-template-windows-vm.md).
+++++++
+In this article, you learn how to create, list, and delete a user-assigned managed identity using cURL to make REST API calls.
+
+## Prerequisites
+
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). **Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types)**.
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before continuing.
+- You can run all the commands in this article either in the cloud or locally:
+ - To run in the cloud, use the [Azure Cloud Shell](../../cloud-shell/overview.md).
+ - To run locally, install [curl](https://curl.haxx.se/download.html) and the [Azure CLI](/cli/azure/install-azure-cli).
+
+## Obtain a bearer access token
+
+1. If running locally, sign in to Azure through the Azure CLI:
+
+ ```
+ az login
+ ```
+
+1. Obtain an access token using [az account get-access-token](/cli/azure/account#az_account_get_access_token).
+
+ ```azurecli-interactive
+ az account get-access-token
+ ```
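As a small convenience sketch, you can capture just the token into a shell variable for the curl calls that follow:

```azurecli-interactive
# Store only the accessToken field for reuse as the Bearer token
export ACCESS_TOKEN=$(az account get-access-token --query accessToken --output tsv)
```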
+
+## Create a user-assigned managed identity
+
+To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
++
+```bash
+curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER ASSIGNED IDENTITY NAME>?api-version=2015-08-31-preview' -X PUT -d '{"location": "<LOCATION>"}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
+```
+
+```HTTP
+PUT https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER ASSIGNED IDENTITY NAME>?api-version=2015-08-31-preview HTTP/1.1
+```
+
+**Request headers**
+
+|Request header |Description |
+|||
+|*Content-Type* | Required. Set to `application/json`. |
+|*Authorization* | Required. Set to a valid `Bearer` access token. |
+
+**Request body**
+
+|Name |Description |
+|||
+|location | Required. Resource location. |
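For orientation, an illustrative, trimmed response body for a successful create; the GUIDs are placeholders:

```json
{
  "id": "/subscriptions/<SUBSCRIPTION ID>/resourcegroups/<RESOURCE GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER ASSIGNED IDENTITY NAME>",
  "location": "<LOCATION>",
  "name": "<USER ASSIGNED IDENTITY NAME>",
  "properties": {
    "clientId": "00000000-0000-0000-0000-000000000000",
    "principalId": "00000000-0000-0000-0000-000000000000",
    "tenantId": "00000000-0000-0000-0000-000000000000"
  },
  "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
}
```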
+
+## List user-assigned managed identities
+
+To list/read a user-assigned managed identity, your account needs the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) or [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+```bash
+curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities?api-version=2015-08-31-preview' -H "Authorization: Bearer <ACCESS TOKEN>"
+```
+
+```HTTP
+GET https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities?api-version=2015-08-31-preview HTTP/1.1
+```
+
+|Request header |Description |
+|||
+|*Content-Type* | Required. Set to `application/json`. |
+|*Authorization* | Required. Set to a valid `Bearer` access token. |
+
+## Delete a user-assigned managed identity
+
+To delete a user-assigned managed identity, your account needs the [Managed Identity Contributor](../../role-based-access-control/built-in-roles.md#managed-identity-contributor) role assignment.
+
+> [!NOTE]
+> Deleting a user-assigned managed identity won't remove the reference from any resource it was assigned to. To remove a user-assigned managed identity from a VM using cURL, see [Remove a user-assigned identity from an Azure VM](qs-configure-rest-vm.md#remove-a-user-assigned-managed-identity-from-an-azure-vm).
+
+```bash
+curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER ASSIGNED IDENTITY NAME>?api-version=2015-08-31-preview' -X DELETE -H "Authorization: Bearer <ACCESS TOKEN>"
+```
+
+```HTTP
+DELETE https://management.azure.com/subscriptions/80c696ff-5efa-4909-a64d-f1b616f423ca/resourceGroups/TestRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER ASSIGNED IDENTITY NAME>?api-version=2015-08-31-preview HTTP/1.1
+```
+|Request header |Description |
+|||
+|*Content-Type* | Required. Set to `application/json`. |
+|*Authorization* | Required. Set to a valid `Bearer` access token. |
+
+## Next steps
+
+For information on how to assign a user-assigned managed identity to an Azure VM or virtual machine scale set using cURL, see [Configure managed identities for Azure resources on an Azure VM using REST API calls](qs-configure-rest-vm.md#user-assigned-managed-identity) and [Configure managed identities for Azure resources on a virtual machine scale set using REST API calls](qs-configure-rest-vmss.md#user-assigned-managed-identity).
+++
active-directory How To Manage Ua Identity Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-arm.md
Last updated 12/15/2020 + # Create, list, and delete a user-assigned managed identity using Azure Resource Manager
active-directory How To Manage Ua Identity Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md
Last updated 04/17/2020
+ # Create, list, or delete a user-assigned managed identity using the Azure CLI
active-directory How To Manage Ua Identity Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md
Last updated 08/26/2020 + # Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal
active-directory How To Manage Ua Identity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-powershell.md
Last updated 12/02/2020
+ # Create, list, or delete a user-assigned managed identity using Azure PowerShell
active-directory How To Manage Ua Identity Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-rest.md
Last updated 12/02/2020 + # Create, list, or delete a user-assigned managed identity using REST API calls
active-directory How To Use Vm Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sdk.md
ms.devlang: na
na Previously updated : 11/03/2020 Last updated : 06/07/2021
This article provides a list of SDK samples, which demonstrate use of their resp
| | -- | | .NET | [Deploy an Azure Resource Manager template from a Windows VM using managed identities for Azure resources](https://github.com/Azure-Samples/windowsvm-msi-arm-dotnet) | | .NET Core | [Call Azure services from a Linux VM using managed identities for Azure resources](https://github.com/Azure-Samples/linuxvm-msi-keyvault-arm-dotnet/) |
+| Go | [Azure identity client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ManagedIdentityCredential) |
| Node.js | [Manage resources using managed identities for Azure resources](https://azure.microsoft.com/resources/samples/resources-node-manage-resources-with-msi/) | | Python | [Use managed identities for Azure resources to authenticate simply from inside a VM](https://azure.microsoft.com/resources/samples/resource-manager-python-manage-resources-with-msi/) | | Ruby | [Manage resources from a VM with managed identities for Azure resources enabled](https://github.com/Azure-Samples/resources-ruby-manage-resources-with-msi/) |
active-directory Ethicspoint Incident Management Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ethicspoint-incident-management-tutorial.md
Previously updated : 02/06/2019 Last updated : 05/31/2021 # Tutorial: Azure Active Directory integration with EthicsPoint Incident Management (EPIM)
-In this tutorial, you learn how to integrate EthicsPoint Incident Management (EPIM) with Azure Active Directory (Azure AD).
-Integrating EthicsPoint Incident Management (EPIM) with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate EthicsPoint Incident Management (EPIM) with Azure Active Directory (Azure AD). When you integrate EthicsPoint Incident Management (EPIM) with Azure AD, you can:
-* You can control in Azure AD who has access to EthicsPoint Incident Management (EPIM).
-* You can enable your users to be automatically signed-in to EthicsPoint Incident Management (EPIM) (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to EthicsPoint Incident Management (EPIM).
+* Enable your users to be automatically signed-in to EthicsPoint Incident Management (EPIM) with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with EthicsPoint Incident Management (EPIM), you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* EthicsPoint Incident Management (EPIM) single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* EthicsPoint Incident Management (EPIM) single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* EthicsPoint Incident Management (EPIM) supports **SP** initiated SSO
+* EthicsPoint Incident Management (EPIM) supports **SP** initiated SSO.
-## Adding EthicsPoint Incident Management (EPIM) from the gallery
+## Add EthicsPoint Incident Management (EPIM) from the gallery
To configure the integration of EthicsPoint Incident Management (EPIM) into Azure AD, you need to add EthicsPoint Incident Management (EPIM) from the gallery to your list of managed SaaS apps.
-**To add EthicsPoint Incident Management (EPIM) from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **EthicsPoint Incident Management (EPIM)**, select **EthicsPoint Incident Management (EPIM)** from result panel then click **Add** button to add the application.
-
- ![EthicsPoint Incident Management (EPIM) in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **EthicsPoint Incident Management (EPIM)** in the search box.
+1. Select **EthicsPoint Incident Management (EPIM)** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with EthicsPoint Incident Management (EPIM) based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in EthicsPoint Incident Management (EPIM) needs to be established.
+## Configure and test Azure AD SSO for EthicsPoint Incident Management (EPIM)
-To configure and test Azure AD single sign-on with EthicsPoint Incident Management (EPIM), you need to complete the following building blocks:
+Configure and test Azure AD SSO with EthicsPoint Incident Management (EPIM) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in EthicsPoint Incident Management (EPIM).
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure EthicsPoint Incident Management (EPIM) Single Sign-On](#configure-ethicspoint-incident-management-epim-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create EthicsPoint Incident Management (EPIM) test user](#create-ethicspoint-incident-management-epim-test-user)** - to have a counterpart of Britta Simon in EthicsPoint Incident Management (EPIM) that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with EthicsPoint Incident Management (EPIM), perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure EthicsPoint Incident Management (EPIM) SSO](#configure-ethicspoint-incident-management-epim-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create EthicsPoint Incident Management (EPIM) test user](#create-ethicspoint-incident-management-epim-test-user)** - to have a counterpart of B.Simon in EthicsPoint Incident Management (EPIM) that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with EthicsPoint Incident Management (EPIM), perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **EthicsPoint Incident Management (EPIM)** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **EthicsPoint Incident Management (EPIM)** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![EthicsPoint Incident Management (EPIM) Domain and URLs single sign-on information](common/sp-identifier-reply.png)
-
- a. In the **Sign-on URL** text box, type a URL using the following pattern:
-
- ```http
- https://<companyname>.navexglobal.com
- https://<companyname>.ethicspointvp.com
- ```
+ a. In the **Identifier** box, type a URL using the following pattern:
+ `https://<COMPANY_NAME>.navexglobal.com/adfs/services/trust`
- b. In the **Identifier** box, type a URL using the following pattern:
- `https://<companyname>.navexglobal.com/adfs/services/trust`
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<SERVER_NAME>.navexglobal.com/adfs/ls/`
- c. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<servername>.navexglobal.com/adfs/ls/`
+ c. In the **Sign-on URL** text box, type a URL using one of the following patterns:
+ | Sign-on URL |
+ |--|
+ | `https://<COMPANY_NAME>.navexglobal.com` |
+ | `https://<COMPANY_NAME>.ethicspointvp.com` |
+
> [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL, Identifier and Reply URL. Contact [EthicsPoint Incident Management (EPIM) Client support team](https://www.navexglobal.com/company/contact-us) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign-on URL. Contact [EthicsPoint Incident Management (EPIM) Client support team](https://www.navexglobal.com/company/contact-us) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
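+
+ For reference, the Identifier and Reply URL correspond to the `entityID` and the AssertionConsumerService location in the service provider's SAML metadata. The following is a minimal, illustrative sketch built from the placeholder values above; it is not EPIM's actual metadata:
+
+ ```XML
+ <EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
+     entityID="https://<COMPANY_NAME>.navexglobal.com/adfs/services/trust">
+   <SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
+     <!-- Reply URL: where Azure AD posts the SAML response -->
+     <AssertionConsumerService index="0"
+         Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
+         Location="https://<SERVER_NAME>.navexglobal.com/adfs/ls/" />
+   </SPSSODescriptor>
+ </EntityDescriptor>
+ ```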
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
To configure Azure AD single sign-on with EthicsPoint Incident Management (EPIM)
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure EthicsPoint Incident Management (EPIM) Single Sign-On
-
-To configure single sign-on on **EthicsPoint Incident Management (EPIM)** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [EthicsPoint Incident Management (EPIM) support team](https://www.navexglobal.com/company/contact-us). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to EthicsPoint Incident Management (EPIM).
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **EthicsPoint Incident Management (EPIM)**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **EthicsPoint Incident Management (EPIM)**.
-
- ![The EthicsPoint Incident Management (EPIM) link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to EthicsPoint Incident Management (EPIM).
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **EthicsPoint Incident Management (EPIM)**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure EthicsPoint Incident Management (EPIM) SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **EthicsPoint Incident Management (EPIM)** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [EthicsPoint Incident Management (EPIM) support team](https://www.navexglobal.com/company/contact-us). They configure this setting so that the SAML SSO connection is set up properly on both sides.
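+
+The **Federation Metadata XML** describes the Azure AD side of the trust. As a rough, illustrative sketch (the real file is much longer and carries the full signing certificate; `<TENANT_ID>` is a placeholder), the elements the support team typically needs look like this:
+
+```XML
+<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
+    entityID="https://sts.windows.net/<TENANT_ID>/">
+  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
+    <!-- Signing certificate the application uses to validate SAML responses -->
+    <KeyDescriptor use="signing">
+      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
+        <X509Data><X509Certificate>MIIC...</X509Certificate></X509Data>
+      </KeyInfo>
+    </KeyDescriptor>
+    <!-- Corresponds to the Login URL copied from the Azure portal -->
+    <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
+        Location="https://login.microsoftonline.com/<TENANT_ID>/saml2" />
+  </IDPSSODescriptor>
+</EntityDescriptor>
+```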
### Create EthicsPoint Incident Management (EPIM) test user In this section, you create a user called Britta Simon in EthicsPoint Incident Management (EPIM). Work with [EthicsPoint Incident Management (EPIM) support team](https://www.navexglobal.com/company/contact-us) to add the users in the EthicsPoint Incident Management (EPIM) platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the EthicsPoint Incident Management (EPIM) tile in the Access Panel, you should be automatically signed in to the EthicsPoint Incident Management (EPIM) for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal. This will redirect to the EthicsPoint Incident Management (EPIM) Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the EthicsPoint Incident Management (EPIM) Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the EthicsPoint Incident Management (EPIM) tile in My Apps, you are redirected to the EthicsPoint Incident Management (EPIM) Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure EthicsPoint Incident Management (EPIM), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Invision Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/invision-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
e. In the **SAML Certificate** textbox, open the downloaded **Certificate (Base64)** into Notepad, copy the content and paste it into SAML Certificate textbox.
- f. In the **Name ID Format** textbox, use `Unspecified` for the **Name ID Format**.
+ f. In the **Name ID Format** textbox, use `urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified` as the **Name ID Format** (see the illustrative fragment after these steps).
g. Select **SHA-256** from the dropdown for the **HASH Algorithm**.
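+
+ As an illustrative fragment (standard SAML, not InVision-specific documentation), this setting corresponds to the `Format` attribute on the `NameID` element in the SAML response that Azure AD issues:
+
+ ```XML
+ <Subject xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
+   <!-- The Name ID Format configured in step f -->
+   <NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">B.Simon@contoso.com</NameID>
+ </Subject>
+ ```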
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure InVision, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure InVision, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Iris Intranet Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/iris-intranet-tutorial.md
Previously updated : 03/25/2019 Last updated : 06/04/2021 # Tutorial: Azure Active Directory integration with Iris Intranet
-In this tutorial, you learn how to integrate Iris Intranet with Azure Active Directory (Azure AD).
-Integrating Iris Intranet with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Iris Intranet with Azure Active Directory (Azure AD). When you integrate Iris Intranet with Azure AD, you can:
-* You can control in Azure AD who has access to Iris Intranet.
-* You can enable your users to be automatically signed-in to Iris Intranet (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Iris Intranet.
+* Enable your users to be automatically signed-in to Iris Intranet with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Iris Intranet, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Iris Intranet single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Iris Intranet single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Iris Intranet supports **SP** initiated SSO
+* Iris Intranet supports **SP** initiated SSO.
-* Iris Intranet supports **just-in-time** user provisioning
+* Iris Intranet supports **just-in-time** user provisioning.
-## Adding Iris Intranet from the gallery
+## Add Iris Intranet from the gallery
To configure the integration of Iris Intranet into Azure AD, you need to add Iris Intranet from the gallery to your list of managed SaaS apps.
-**To add Iris Intranet from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Iris Intranet**, select **Iris Intranet** from result panel then click **Add** button to add the application.
-
- ![Iris Intranet in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Iris Intranet** in the search box.
+1. Select **Iris Intranet** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Iris Intranet based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Iris Intranet needs to be established.
+## Configure and test Azure AD SSO for Iris Intranet
-To configure and test Azure AD single sign-on with Iris Intranet, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Iris Intranet using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Iris Intranet.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Iris Intranet Single Sign-On](#configure-iris-intranet-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Iris Intranet test user](#create-iris-intranet-test-user)** - to have a counterpart of Britta Simon in Iris Intranet that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Iris Intranet, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Iris Intranet SSO](#configure-iris-intranet-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Iris Intranet test user](#create-iris-intranet-test-user)** - to have a counterpart of B.Simon in Iris Intranet that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Iris Intranet, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Iris Intranet** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Iris Intranet** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Iris Intranet Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.irisintranet.com`
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.irisintranet.com`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.irisintranet.com`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Iris Intranet Client support team](mailto:support@triptic.nl) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Iris Intranet Client support team](mailto:support@triptic.nl) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
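+
+ Because Iris Intranet uses SP-initiated SSO, sign-in starts with the service provider sending a SAML `AuthnRequest` to Azure AD. A minimal, illustrative sketch assuming the placeholder values above (`<TENANT_ID>` stands in for your Azure AD tenant):
+
+ ```XML
+ <samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
+     xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
+     ID="_example-request-id" Version="2.0" IssueInstant="2021-06-04T12:00:00Z"
+     Destination="https://login.microsoftonline.com/<TENANT_ID>/saml2"
+     AssertionConsumerServiceURL="https://<SUBDOMAIN>.irisintranet.com">
+   <!-- The Issuer must match the Identifier (Entity ID) configured above -->
+   <saml:Issuer>https://<SUBDOMAIN>.irisintranet.com</saml:Issuer>
+ </samlp:AuthnRequest>
+ ```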
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer. ![The Certificate download link](common/copy-metadataurl.png)
-### Configure Iris Intranet Single Sign-On
-
-To configure single sign-on on **Iris Intranet** side, you need to send the **App Federation Metadata Url** to [Iris Intranet support team](mailto:support@triptic.nl). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Iris Intranet.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Iris Intranet.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Iris Intranet**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Iris Intranet**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Iris Intranet SSO
-2. In the applications list, select **Iris Intranet**.
-
- ![The Iris Intranet link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Iris Intranet** side, you need to send the **App Federation Metadata Url** to the [Iris Intranet support team](mailto:support@triptic.nl). They configure this setting so that the SAML SSO connection is set up properly on both sides.
### Create Iris Intranet test user In this section, a user called Britta Simon is created in Iris Intranet. Iris Intranet supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Iris Intranet, a new one is created after authentication.
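+
+With just-in-time provisioning, the new account is created from the claims in the first SAML assertion for that user. By default, Azure AD emits claims roughly like the illustrative fragment below; exactly which attributes Iris Intranet consumes is up to the vendor, so treat the mapping as an assumption:
+
+```XML
+<AttributeStatement xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
+  <!-- Default Azure AD claims carried in the SAML token -->
+  <Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname">
+    <AttributeValue>B.</AttributeValue>
+  </Attribute>
+  <Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname">
+    <AttributeValue>Simon</AttributeValue>
+  </Attribute>
+  <Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress">
+    <AttributeValue>B.Simon@contoso.com</AttributeValue>
+  </Attribute>
+</AttributeStatement>
+```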
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Iris Intranet tile in the Access Panel, you should be automatically signed in to the Iris Intranet for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal. This will redirect to the Iris Intranet Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the Iris Intranet Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Iris Intranet tile in My Apps, you are redirected to the Iris Intranet Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Iris Intranet, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Panorama9 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/panorama9-tutorial.md
Previously updated : 03/25/2019 Last updated : 06/07/2021 # Tutorial: Azure Active Directory integration with Panorama9
-In this tutorial, you learn how to integrate Panorama9 with Azure Active Directory (Azure AD).
-Integrating Panorama9 with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Panorama9 with Azure Active Directory (Azure AD). When you integrate Panorama9 with Azure AD, you can:
-* You can control in Azure AD who has access to Panorama9.
-* You can enable your users to be automatically signed-in to Panorama9 (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Panorama9.
+* Enable your users to be automatically signed-in to Panorama9 with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Panorama9, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Panorama9 single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Panorama9 single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Panorama9 supports **SP** initiated SSO
+* Panorama9 supports **SP** initiated SSO.
-## Adding Panorama9 from the gallery
+## Add Panorama9 from the gallery
To configure the integration of Panorama9 into Azure AD, you need to add Panorama9 from the gallery to your list of managed SaaS apps.
-**To add Panorama9 from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Panorama9**, select **Panorama9** from result panel then click **Add** button to add the application.
-
- ![Panorama9 in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Panorama9 based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Panorama9 needs to be established.
-
-To configure and test Azure AD single sign-on with Panorama9, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Panorama9** in the search box.
+1. Select **Panorama9** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Panorama9 Single Sign-On](#configure-panorama9-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Panorama9 test user](#create-panorama9-test-user)** - to have a counterpart of Britta Simon in Panorama9 that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for Panorama9
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with Panorama9 using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Panorama9.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with Panorama9, perform the following steps:
-To configure Azure AD single sign-on with Panorama9, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Panorama9 SSO](#configure-panorama9-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Panorama9 test user](#create-panorama9-test-user)** - to have a counterpart of B.Simon in Panorama9 that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **Panorama9** application integration page, select **Single sign-on**.
+## Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **Panorama9** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Panorama9 Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL:
+ a. In the **Sign on URL** text box, type the URL:
`https://dashboard.panorama9.com/saml/access/3262` b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://www.panorama9.com/saml20/<tenant-name>`
+ `https://www.panorama9.com/saml20/<TENANT_NAME>`
> [!NOTE] > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Panorama9 Client support team](https://support.panorama9.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
To configure Azure AD single sign-on with Panorama9, perform the following steps
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
- b. Azure AD Identifier
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- c. Logout URL
+### Assign the Azure AD test user
-### Configure Panorama9 Single Sign-On
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Panorama9.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Panorama9**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected (an illustrative role claim fragment follows these steps).
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
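+If you assign a role other than Default Access, Azure AD includes it in the SAML assertion as a role claim. A rough, illustrative fragment (the role value `User` is a placeholder, not a Panorama9-defined role):
+
+```XML
+<!-- Emitted only when an app role is assigned to the user -->
+<Attribute xmlns="urn:oasis:names:tc:SAML:2.0:assertion"
+    Name="http://schemas.microsoft.com/ws/2008/06/identity/claims/role">
+  <AttributeValue>User</AttributeValue>
+</Attribute>
+```
+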
+## Configure Panorama9 SSO
1. In a different web browser window, sign in to your Panorama9 company site as an administrator. 2. In the toolbar on the top, click **Manage**, and then click **Extensions**.
- ![Extensions](./media/panorama9-tutorial/ic790023.png "Extensions")
+ ![Extensions](./media/panorama9-tutorial/toolbar.png "Extensions")
3. On the **Extensions** dialog, click **Single Sign-On**.
- ![Single Sign-On](./media/panorama9-tutorial/ic790024.png "Single Sign-On")
+ ![Single Sign-On](./media/panorama9-tutorial/extension.png "Single Sign-On")
4. In the **Settings** section, perform the following steps:
- ![Settings](./media/panorama9-tutorial/ic790025.png "Settings")
+ ![Settings](./media/panorama9-tutorial/configuration.png "Settings")
a. In the **Identity provider URL** textbox, paste the value of **Login URL**, which you have copied from the Azure portal.
To configure Azure AD single sign-on with Panorama9, perform the following steps
5. Click **Save**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Panorama9.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Panorama9**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Panorama9**.
-
- ![The Panorama9 link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
- ### Create Panorama9 test user In order to enable Azure AD users to sign in to Panorama9, they must be provisioned into Panorama9.
In the case of Panorama9, provisioning is a manual task.
2. In the menu on the top, click **Manage**, and then click **Users**.
- ![Screenshot that shows the "Manage" and "Users" tabs selected.](./media/panorama9-tutorial/ic790027.png "Users")
+ ![Screenshot that shows the "Manage" and "Users" tabs selected.](./media/panorama9-tutorial/user.png "Users")
3. In the Users section, click **+** to add a new user.
- ![Users](./media/panorama9-tutorial/ic790028.png "Users")
+ ![Users](./media/panorama9-tutorial/new-user.png "Users")
4. Go to the User data section, type the email address of a valid Azure Active Directory user you want to provision into the **Email** textbox.
In the case of Panorama9, provisioning is a manual task.
> [!NOTE] > The Azure Active Directory account holder receives an email and follows a link to confirm their account before it becomes active.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Panorama9 tile in the Access Panel, you should be automatically signed in to the Panorama9 for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal. This will redirect to the Panorama9 Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Panorama9 Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Panorama9 tile in My Apps, you are redirected to the Panorama9 Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Panorama9, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Petrovue Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/petrovue-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with PetroVue | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and PetroVue.
++++++++ Last updated : 06/04/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with PetroVue
+
+In this tutorial, you'll learn how to integrate PetroVue with Azure Active Directory (Azure AD). When you integrate PetroVue with Azure AD, you can:
+
+* Control in Azure AD who has access to PetroVue.
+* Enable your users to be automatically signed-in to PetroVue with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* PetroVue single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* PetroVue supports **SP** initiated SSO.
+
+## Adding PetroVue from the gallery
+
+To configure the integration of PetroVue into Azure AD, you need to add PetroVue from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **PetroVue** in the search box.
+1. Select **PetroVue** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for PetroVue
+
+Configure and test Azure AD SSO with PetroVue using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in PetroVue.
+
+To configure and test Azure AD SSO with PetroVue, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure PetroVue SSO](#configure-petrovue-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create PetroVue test user](#create-petrovue-test-user)** - to have a counterpart of B.Simon in PetroVue that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **PetroVue** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.petrolink.net/petrovue/rtv`
+
+ b. In the **Identifier (Entity ID)** text box, type one of the following values:
+
+ | Identifier |
+ |--|
+ | `PV4` |
+ | `PetroVue` |
+
+ > [!NOTE]
+ > The Sign on URL value is not real. Update it with the actual Sign on URL. Contact [PetroVue Client support team](mailto:ops@petrolink.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
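+ Unlike URL-based identifiers, this Identifier is a fixed string. Whichever value you configure becomes the audience of the SAML token that Azure AD issues, roughly as in this illustrative fragment:
+
+ ```XML
+ <Conditions xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
+   <AudienceRestriction>
+     <!-- Matches the Identifier (Entity ID) configured above -->
+     <Audience>PV4</Audience>
+   </AudienceRestriction>
+ </Conditions>
+ ```
+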
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up PetroVue** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username in the format `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to PetroVue.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **PetroVue**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure PetroVue SSO
+
+To configure single sign-on on the **PetroVue** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [PetroVue support team](mailto:ops@petrolink.com). They configure this setting so that the SAML SSO connection is set up properly on both sides.
+
+### Create PetroVue test user
+
+In this section, you create a user called Britta Simon in PetroVue. Work with [PetroVue support team](mailto:ops@petrolink.com) to add the users in the PetroVue platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This will redirect to the PetroVue Sign-on URL, where you can initiate the login flow.
+
+* Go to the PetroVue Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the PetroVue tile in My Apps, you are redirected to the PetroVue Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
++
+## Next steps
+
+Once you configure PetroVue, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
++
active-directory Pexip Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/pexip-tutorial.md
Previously updated : 02/07/2019 Last updated : 05/31/2021 # Tutorial: Azure Active Directory integration with Pexip
-In this tutorial, you learn how to integrate Pexip with Azure Active Directory (Azure AD).
-Integrating Pexip with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Pexip with Azure Active Directory (Azure AD). When you integrate Pexip with Azure AD, you can:
-* You can control in Azure AD who has access to Pexip.
-* You can enable your users to be automatically signed-in to Pexip (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Pexip.
+* Enable your users to be automatically signed-in to Pexip with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Pexip, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Pexip single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Pexip single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Pexip supports **SP** initiated SSO
-
-## Adding Pexip from the gallery
-
-To configure the integration of Pexip into Azure AD, you need to add Pexip from the gallery to your list of managed SaaS apps.
-
-**To add Pexip from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* Pexip supports **SP** initiated SSO.
- ![The New application button](common/add-new-app.png)
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-4. In the search box, type **Pexip**, select **Pexip** from result panel then click **Add** button to add the application.
+## Add Pexip from the gallery
- ![Pexip in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Pexip based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Pexip needs to be established.
-
-To configure and test Azure AD single sign-on with Pexip, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Pexip Single Sign-On](#configure-pexip-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Pexip test user](#create-pexip-test-user)** - to have a counterpart of Britta Simon in Pexip that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of Pexip into Azure AD, you need to add Pexip from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Pexip** in the search box.
+1. Select **Pexip** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with Pexip, perform the following steps:
+## Configure and test Azure AD SSO for Pexip
-1. In the [Azure portal](https://portal.azure.com/), on the **Pexip** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with Pexip using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Pexip.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with Pexip, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Pexip SSO](#configure-pexip-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Pexip test user](#create-pexip-test-user)** - to have a counterpart of B.Simon in Pexip that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Pexip** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![Pexip Domain and URLs single sign-on information](common/sp-signonurl.png)
+4. On the **Basic SAML Configuration** section, perform the following step:
In the **Sign-on URL** text box, type the URL: `https://my.videxio.com`
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
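The values copied in this step (Login URL, Azure AD Identifier, Logout URL) can also be read straight from your tenant's federation metadata. The following is a minimal sketch, not part of the official tutorial: the metadata URL is a placeholder for the App Federation Metadata URL shown in the portal, and only Python's standard library is used.

```python
# Minimal sketch: fetch Azure AD federation metadata and print the values
# you would otherwise copy by hand from the "Set up Pexip" section.
# The URL below is a placeholder; substitute your tenant's
# App Federation Metadata URL from the Azure portal.
import urllib.request
import xml.etree.ElementTree as ET

METADATA_URL = ("https://login.microsoftonline.com/<tenant-id>"
                "/federationmetadata/2007-06/federationmetadata.xml")

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}

with urllib.request.urlopen(METADATA_URL) as resp:
    root = ET.parse(resp).getroot()

print("Azure AD Identifier:", root.get("entityID"))

idp = root.find("md:IDPSSODescriptor", NS)
for sso in idp.findall("md:SingleSignOnService", NS):
    print("Login URL:", sso.get("Location"))
for slo in idp.findall("md:SingleLogoutService", NS):
    print("Logout URL:", slo.get("Location"))
```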
-### Configure Pexip Single Sign-On
-
-To configure single sign-on on **Pexip** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Pexip support team](https://support.videxio.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Pexip.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Pexip.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Pexip**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Pexip**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Pexip SSO
-2. In the applications list, select **Pexip**.
-
- ![The Pexip link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Pexip** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Pexip support team](https://support.videxio.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Pexip test user

In this section, you create a user called B.Simon in Pexip. Work with the [Pexip support team](https://support.videxio.com) to add the users in the Pexip platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Pexip tile in the Access Panel, you should be automatically signed in to the Pexip for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the Pexip Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Pexip Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Pexip tile in My Apps, you'll be redirected to the Pexip Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Pexip, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Secure Deliver Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/secure-deliver-provisioning-tutorial.md
+
+Title: 'Tutorial: Configure SECURE DELIVER for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to SECURE DELIVER.
+writer: Zhchia
+ms.assetid: 20bc4dc5-49b3-4f23-bd41-1a36815f9f49
+Last updated : 06/02/2021
+# Tutorial: Configure SECURE DELIVER for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both SECURE DELIVER and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [SECURE DELIVER](https://www.Contoso.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in SECURE DELIVER
+> * Remove users in SECURE DELIVER when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and SECURE DELIVER
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/securedeliver-tutorial) to SECURE DELIVER (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and SECURE DELIVER](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure SECURE DELIVER to support provisioning with Azure AD
+
+1. The Tenant URL is `https://fcapi.i-securedeliver.jp/sdms/v2/scim`. This value will be entered in the **Tenant URL** field in the Provisioning tab of your SECURE DELIVER application in the Azure portal.
+
+2. Reach out to the [SECURE DELIVER support team](mailto:iw-sd-support@fujifilm.com) to get your Secret Token. This value will be entered in the **Secret Token** field in the Provisioning tab of your SECURE DELIVER application in the Azure portal.
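Before entering the token in the portal, you may want to confirm that the endpoint accepts it. The following is an optional, minimal sketch under two assumptions: that the endpoint exposes the standard SCIM 2.0 `/ServiceProviderConfig` resource, and that `<secret-token>` stands in for the value obtained from SECURE DELIVER support.

```python
# Optional smoke test (assumption: the endpoint exposes the standard
# SCIM 2.0 /ServiceProviderConfig resource). The token is a placeholder
# for the value obtained from SECURE DELIVER support.
import urllib.request

TENANT_URL = "https://fcapi.i-securedeliver.jp/sdms/v2/scim"
SECRET_TOKEN = "<secret-token>"  # placeholder

req = urllib.request.Request(
    TENANT_URL + "/ServiceProviderConfig",
    headers={
        "Authorization": f"Bearer {SECRET_TOKEN}",
        "Accept": "application/scim+json",
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 means the token was accepted
```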
+
+## Step 3. Add SECURE DELIVER from the Azure AD application gallery
+
+Add SECURE DELIVER from the Azure AD application gallery to start managing provisioning to SECURE DELIVER. If you have previously set up SECURE DELIVER for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to SECURE DELIVER, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to SECURE DELIVER
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in SECURE DELIVER based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for SECURE DELIVER in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **SECURE DELIVER**.
+
+ ![The SECURE DELIVER link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your SECURE DELIVER Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to SECURE DELIVER. If the connection fails, ensure your SECURE DELIVER account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to SECURE DELIVER**.
+
+9. Review the user attributes that are synchronized from Azure AD to SECURE DELIVER in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SECURE DELIVER for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the SECURE DELIVER API supports filtering users based on that attribute. Select the **Save** button to commit any changes. A sketch of the approximate SCIM payload produced by these mappings follows the table.
+
+ |Attribute|Type|Supported For Filtering|
+ ||||
+ |userName|String|&check;|
+ |displayName|String|
+ |emails[type eq "work"].value|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
+
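For orientation only, here is roughly what a SCIM 2.0 user resource built from the mappings above looks like. This is a hand-written sketch based on the SCIM specification, not a captured request, and every value in it is hypothetical.

```python
# Illustrative only: approximate SCIM 2.0 user resource corresponding to
# the attribute mappings in the table above. All values are hypothetical.
ENTERPRISE_EXT = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"

scim_user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        ENTERPRISE_EXT,
    ],
    "userName": "B.Simon@contoso.com",  # matching attribute
    "displayName": "B.Simon",
    "emails": [{"type": "work", "value": "B.Simon@contoso.com"}],
    ENTERPRISE_EXT: {
        "manager": {"value": "<manager-id>"},  # Reference attribute
    },
}
```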
+10. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for SECURE DELIVER, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users and/or groups that you would like to provision to SECURE DELIVER by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Sigma Computing Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sigma-computing-provisioning-tutorial.md
+
+Title: 'Tutorial: Configure Sigma Computing for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Sigma Computing.
+writer: Zhchia
+ms.assetid: 6108a4de-4420-4baa-bc2f-1c39a1ebe81d
+Last updated : 06/02/2021
+# Tutorial: Configure Sigma Computing for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Sigma Computing and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Sigma Computing](https://www.sigmacomputing.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Sigma Computing
+> * Remove users in Sigma Computing when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Sigma Computing
+> * Provision groups and group memberships in Sigma Computing
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/sigma-computing-tutorial) to Sigma Computing (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An admin account in your Sigma organization.
+* An existing [SSO](https://docs.microsoft.com/azure/active-directory/saas-apps/sigma-computing-tutorial) integration with Sigma Computing.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Sigma Computing](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Sigma Computing to support provisioning with Azure AD
+
+1. Log in to your Sigma account.
+
+2. Navigate to the **Admin Portal** by selecting **Administration** from the user menu.
+
+3. In the left panel, click **Authentication** to open your organization's Authentication page.
+
+4. Ensure the **Authentication Method** is **SAML** only.
+
+5. Click the **Setup** button under **Account Type and Team Provisioning** to open the Provisioning modal.
+
+ ![Role](media/sigma-computing-provisioning-tutorial/sigma-role-and-team-provisioning.png)
+
+6. Read through the notes provided in the getting started section of the Provisioning modal. Check the confirmation box, and click **Next** to continue.
+
+7. Enter a Token name and click **Next**.
+
+ ![Next](media/sigma-computing-provisioning-tutorial/sigma-create-token.png)
+
+8. Sigma will provide you with a **Bearer Token** and **Directory Base URL**. Copy and save these values in a secure location. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Sigma Computing application in the Azure portal. Click **Done**.
+
+ ![Sigma](media/sigma-computing-provisioning-tutorial/sigma-copy-keys.png)
+
+## Step 3. Add Sigma Computing from the Azure AD application gallery
+
+Add Sigma Computing from the Azure AD application gallery to start managing provisioning to Sigma Computing. If you have previously set up Sigma Computing for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Sigma Computing, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to Sigma Computing
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Sigma Computing based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Sigma Computing in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Sigma Computing**.
+
+ ![The Sigma Computing link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Sigma Computing Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Sigma Computing. If the connection fails, ensure your Sigma Computing account has Admin permissions and try again.
+
+ ![Auth](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Sigma Computing**.
+
+9. Review the user attributes that are synchronized from Azure AD to Sigma Computing in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Sigma Computing for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Sigma Computing API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ ||||
+ |userName|String|&check;|
+ |userType|String|
+ |active|Boolean|
+ |emails[type eq "work"].value|String|
+ |name.givenName|String|
+ |name.familyName|String|
++
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Sigma Computing**.
+
+11. Review the group attributes that are synchronized from Azure AD to Sigma Computing in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Sigma Computing for update operations. Select the **Save** button to commit any changes. A sketch of the approximate SCIM group payload follows the table.
+
+ |Attribute|Type|Supported For Filtering|
+ ||||
+ |displayName|String|&check;|
+ |members|Reference|
+
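As with users, the group mappings translate into a SCIM 2.0 group resource. The following is an illustrative sketch based on the SCIM specification; the group name and member IDs are hypothetical.

```python
# Illustrative only: approximate SCIM 2.0 group resource corresponding to
# the group attribute mappings above. All values are hypothetical.
scim_group = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
    "displayName": "Sigma Analysts",  # matching attribute
    "members": [                      # Reference attribute
        {"value": "<user-id-1>"},
        {"value": "<user-id-2>"},
    ],
}
```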
+12. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Sigma Computing, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to Sigma Computing by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Tangoanalytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tangoanalytics-tutorial.md
Previously updated : 03/07/2019 Last updated : 06/07/2021

# Tutorial: Azure Active Directory integration with Tango Analytics
-In this tutorial, you learn how to integrate Tango Analytics with Azure Active Directory (Azure AD).
-Integrating Tango Analytics with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Tango Analytics with Azure Active Directory (Azure AD). When you integrate Tango Analytics with Azure AD, you can:
-* You can control in Azure AD who has access to Tango Analytics.
-* You can enable your users to be automatically signed-in to Tango Analytics (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Tango Analytics.
+* Enable your users to be automatically signed-in to Tango Analytics with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Tango Analytics, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Tango Analytics single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Tango Analytics single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Tango Analytics supports **IDP** initiated SSO
-
-## Adding Tango Analytics from the gallery
-
-To configure the integration of Tango Analytics into Azure AD, you need to add Tango Analytics from the gallery to your list of managed SaaS apps.
-
-**To add Tango Analytics from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* Tango Analytics supports **IDP** initiated SSO.
-4. In the search box, type **Tango Analytics**, select **Tango Analytics** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![Tango Analytics in the results list](common/search-new-app.png)
+## Add Tango Analytics from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Tango Analytics based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Tango Analytics needs to be established.
-
-To configure and test Azure AD single sign-on with Tango Analytics, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Tango Analytics Single Sign-On](#configure-tango-analytics-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Tango Analytics test user](#create-tango-analytics-test-user)** - to have a counterpart of Britta Simon in Tango Analytics that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of Tango Analytics into Azure AD, you need to add Tango Analytics from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Tango Analytics** in the search box.
+1. Select **Tango Analytics** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Tango Analytics
-To configure Azure AD single sign-on with Tango Analytics, perform the following steps:
+Configure and test Azure AD SSO with Tango Analytics using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Tango Analytics.
-1. In the [Azure portal](https://portal.azure.com/), on the **Tango Analytics** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Tango Analytics, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Tango Analytics SSO](#configure-tango-analytics-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Tango Analytics test user](#create-tango-analytics-test-user)** - to have a counterpart of B.Simon in Tango Analytics that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Tango Analytics** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
- ![Tango Analytics Domain and URLs single sign-on information](common/idp-intiated.png)
- a. In the **Identifier** text box, type the value: `TACORE_SSO`
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type the URL:
`https://mts.tangoanalytics.com/saml2/sp/acs/post`
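The Reply URL is the service provider's Assertion Consumer Service (ACS) endpoint: Azure AD posts a base64-encoded SAML response there. For troubleshooting, a `SAMLResponse` form value captured from that POST can be decoded locally. A minimal sketch follows; the captured value is a placeholder, and this is a debugging aid, not part of the tutorial.

```python
# Minimal sketch: decode a captured SAMLResponse form value (placeholder)
# to inspect the assertion Azure AD posts to the Reply URL (ACS endpoint).
# With the POST binding, the value is plain base64-encoded XML.
import base64

saml_response_b64 = "<captured SAMLResponse form value>"  # placeholder

xml_text = base64.b64decode(saml_response_b64).decode("utf-8")
print(xml_text[:500])  # inspect issuer, audience, NameID, and so on
```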
- > [!NOTE]
- > The Reply URL value is not real. Update this with the actual Reply URL. Contact [Tango Analytics Client support team](mailto:support@tangoanalytics.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.

    ![The Certificate download link](common/metadataxml.png)
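If the support team asks for the raw signing certificate rather than the whole metadata file, it can be read out of the downloaded XML. A minimal sketch, assuming the file was saved as `federationmetadata.xml` (the filename is an assumption):

```python
# Minimal sketch: print the Base64 signing certificate embedded in the
# downloaded Federation Metadata XML. The filename is an assumption.
import xml.etree.ElementTree as ET

DSIG_CERT = "{http://www.w3.org/2000/09/xmldsig#}X509Certificate"

tree = ET.parse("federationmetadata.xml")
for cert in tree.getroot().iter(DSIG_CERT):
    print(cert.text.strip())  # the Base64 certificate body
```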
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Tango Analytics Single Sign-On
-
-To configure single sign-on on **Tango Analytics** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Tango Analytics support team](mailto:support@tangoanalytics.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Tango Analytics.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Tango Analytics.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Tango Analytics**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Tango Analytics**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Tango Analytics SSO
-2. In the applications list, select **Tango Analytics**.
-
- ![The Tango Analytics link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **Tango Analytics** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Tango Analytics support team](mailto:support@tangoanalytics.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Tango Analytics test user

In this section, you create a user called B.Simon in Tango Analytics. Work with the [Tango Analytics support team](mailto:support@tangoanalytics.com) to add the users in the Tango Analytics platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the Tango Analytics tile in the Access Panel, you should be automatically signed in to the Tango Analytics for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional Resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Tango Analytics for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Tango Analytics tile in My Apps, you should be automatically signed in to the Tango Analytics for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Tango Analytics, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Thirdpartytrust Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/thirdpartytrust-tutorial.md
Previously updated : 03/27/2019 Last updated : 05/31/2021

# Tutorial: Azure Active Directory integration with ThirdPartyTrust
-In this tutorial, you learn how to integrate ThirdPartyTrust with Azure Active Directory (Azure AD).
-Integrating ThirdPartyTrust with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate ThirdPartyTrust with Azure Active Directory (Azure AD). When you integrate ThirdPartyTrust with Azure AD, you can:
-* You can control in Azure AD who has access to ThirdPartyTrust.
-* You can enable your users to be automatically signed-in to ThirdPartyTrust (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to ThirdPartyTrust.
+* Enable your users to be automatically signed-in to ThirdPartyTrust with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with ThirdPartyTrust, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* ThirdPartyTrust single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* ThirdPartyTrust single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* ThirdPartyTrust supports **SP** and **IDP** initiated SSO
-
-## Adding ThirdPartyTrust from the gallery
-
-To configure the integration of ThirdPartyTrust into Azure AD, you need to add ThirdPartyTrust from the gallery to your list of managed SaaS apps.
-
-**To add ThirdPartyTrust from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* ThirdPartyTrust supports **SP** and **IDP** initiated SSO.
-4. In the search box, type **ThirdPartyTrust**, select **ThirdPartyTrust** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![ThirdPartyTrust in the results list](common/search-new-app.png)
+## Add ThirdPartyTrust from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with ThirdPartyTrust based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in ThirdPartyTrust needs to be established.
-
-To configure and test Azure AD single sign-on with ThirdPartyTrust, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure ThirdPartyTrust Single Sign-On](#configure-thirdpartytrust-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create ThirdPartyTrust test user](#create-thirdpartytrust-test-user)** - to have a counterpart of Britta Simon in ThirdPartyTrust that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of ThirdPartyTrust into Azure AD, you need to add ThirdPartyTrust from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **ThirdPartyTrust** in the search box.
+1. Select **ThirdPartyTrust** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for ThirdPartyTrust
-To configure Azure AD single sign-on with ThirdPartyTrust, perform the following steps:
+Configure and test Azure AD SSO with ThirdPartyTrust using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ThirdPartyTrust.
-1. In the [Azure portal](https://portal.azure.com/), on the **ThirdPartyTrust** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with ThirdPartyTrust, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure ThirdPartyTrust SSO](#configure-thirdpartytrust-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create ThirdPartyTrust test user](#create-thirdpartytrust-test-user)** - to have a counterpart of B.Simon in ThirdPartyTrust that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **ThirdPartyTrust** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:
- ![ThirdPartyTrust Domain and URLs single sign-on information](common/idp-identifier.png)
-
- In the **Identifier** text box, type a URL:
+ In the **Identifier** text box, type the URL:
    `https://api.thirdpartytrust.com/sai3/saml/metadata`

5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![image](common/both-preintegrated-signon.png)
-
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
    `https://api.thirdpartytrust.com/sai3/test`

6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure ThirdPartyTrust Single Sign-On
-
-To configure single sign-on on **ThirdPartyTrust** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [ThirdPartyTrust support team](mailto:support@thirdpartytrust.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to ThirdPartyTrust.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to ThirdPartyTrust.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **ThirdPartyTrust**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **ThirdPartyTrust**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure ThirdPartyTrust SSO
-2. In the applications list, select **ThirdPartyTrust**.
-
- ![The ThirdPartyTrust link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+To configure single sign-on on **ThirdPartyTrust** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [ThirdPartyTrust support team](mailto:support@thirdpartytrust.com). They set this setting to have the SAML SSO connection set properly on both sides.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create ThirdPartyTrust test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called B.Simon in ThirdPartyTrust. Work with the [ThirdPartyTrust support team](mailto:support@thirdpartytrust.com) to add the users in the ThirdPartyTrust platform. Users must be created and activated before you use single sign-on.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create ThirdPartyTrust test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in ThirdPartyTrust. Work with [ThirdPartyTrust support team](mailto:support@thirdpartytrust.com) to add the users in the ThirdPartyTrust platform. Users must be created and activated before you use single sign-on.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in the Azure portal. This will redirect to the ThirdPartyTrust Sign-on URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the ThirdPartyTrust Sign-on URL directly and initiate the login flow from there.
-When you click the ThirdPartyTrust tile in the Access Panel, you should be automatically signed in to the ThirdPartyTrust for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the ThirdPartyTrust instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the ThirdPartyTrust tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the ThirdPartyTrust instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure ThirdPartyTrust, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Xaitporter Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/xaitporter-tutorial.md
Previously updated : 05/03/2019 Last updated : 05/31/2021 # Tutorial: Azure Active Directory integration with XaitPorter
-In this tutorial, you learn how to integrate XaitPorter with Azure Active Directory (Azure AD).
-Integrating XaitPorter with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate XaitPorter with Azure Active Directory (Azure AD). When you integrate XaitPorter with Azure AD, you can:
-* You can control in Azure AD who has access to XaitPorter.
-* You can enable your users to be automatically signed-in to XaitPorter (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to XaitPorter.
+* Enable your users to be automatically signed-in to XaitPorter with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with XaitPorter, you need the following items: * An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
-* XaitPorter single sign-on enabled subscription
+* XaitPorter single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* XaitPorter supports **SP** initiated SSO
+* XaitPorter supports **SP** initiated SSO.
-## Adding XaitPorter from the gallery
+## Add XaitPorter from the gallery
To configure the integration of XaitPorter into Azure AD, you need to add XaitPorter from the gallery to your list of managed SaaS apps.
-**To add XaitPorter from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **XaitPorter** in the search box.
+1. Select **XaitPorter** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- ![The New application button](common/add-new-app.png)
+## Configure and test Azure AD SSO for XaitPorter
-4. In the search box, type **XaitPorter**, select **XaitPorter** from result panel then click **Add** button to add the application.
+Configure and test Azure AD SSO with XaitPorter using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in XaitPorter.
- ![XaitPorter in the results list](common/search-new-app.png)
+To configure and test Azure AD SSO with XaitPorter, perform the following steps:
-## Configure and test Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure XaitPorter SSO](#configure-xaitporter-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create XaitPorter test user](#create-xaitporter-test-user)** - to have a counterpart of B.Simon in XaitPorter that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you configure and test Azure AD single sign-on with XaitPorter based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in XaitPorter needs to be established.
+## Configure Azure AD SSO
-To configure and test Azure AD single sign-on with XaitPorter, you need to complete the following building blocks:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure XaitPorter Single Sign-On](#configure-xaitporter-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create XaitPorter test user](#create-xaitporter-test-user)** - to have a counterpart of Britta Simon in XaitPorter that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+1. In the Azure portal, on the **XaitPorter** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-### Configure Azure AD single sign-on
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with XaitPorter, perform the following steps:
-
-1. In the [Azure portal](https://portal.azure.com/), on the **XaitPorter** application integration page, select **Single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
+4. On the **Basic SAML Configuration** section, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.xaitporter.com`
- ![Single sign-on select mode](common/select-saml-option.png)
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.xaitporter.com/saml/login`
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [XaitPorter Client support team](https://www.xait.com/support/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![The Certificate download link](common/copy-metadataurl.png)
- ![XaitPorter Domain and URLs single sign-on information](common/sp-identifier.png)
+6. Provide the **IP address** or the **App Federation Metadata Url** to the [XaitPorter Client support team](https://www.xait.com/support/), so that XaitPorter can add it to the approved list on their side and ensure the IP address is reachable from your XaitPorter instance.
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<subdomain>.xaitporter.com/saml/login`
+### Create an Azure AD test user
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<subdomain>.xaitporter.com`
+In this section, you'll create a test user in the Azure portal called B.Simon.
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [XaitPorter Client support team](https://www.xait.com/support/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
-5. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+### Assign the Azure AD test user
- ![The Certificate download link](common/copy-metadataurl.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to XaitPorter.
-6. Provide the **IP address** or the **App Federation Metadata Url** to the [SmartRecruiters support team](https://www.smartrecruiters.com/about-us/contact-us/), so that XaitPorter can ensure that IP address is reachable from your XaitPorter instance configuring approved list at their side.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **XaitPorter**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you expect a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure XaitPorter Single Sign-On
+## Configure XaitPorter SSO
1. To automate the configuration within XaitPorter, you need to install the **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
To configure Azure AD single sign-on with XaitPorter, perform the following step
4. Click on **Admin**.
- ![Screenshot shows Admin selected in the XaitPorter site.](./media/xaitporter-tutorial/user1.png)
+ ![Screenshot shows Admin selected in the XaitPorter site.](./media/xaitporter-tutorial/admin.png)
5. Select **Manage Single Sign-On** from the **System Setup** dropdown list.
- ![Screenshot shows Manage Single Sign-On selected from System Setup.](./media/xaitporter-tutorial/user2.png)
+ ![Screenshot shows Manage Single Sign-On selected from System Setup.](./media/xaitporter-tutorial/user.png)
6. In the **MANAGE SINGLE SIGN-ON** section, perform the following steps:
- ![Screenshot shows the MANAGE SINGLE SIGN-ON section where you can perform these steps.](./media/xaitporter-tutorial/user3.png)
+ ![Screenshot shows the MANAGE SINGLE SIGN-ON section where you can perform these steps.](./media/xaitporter-tutorial/authentication.png)
a. Select **Enable Single Sign-On Authentication**.
To configure Azure AD single sign-on with XaitPorter, perform the following step
d. Click **OK**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to XaitPorter.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **XaitPorter**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **XaitPorter**.
-
- ![The XaitPorter link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
- ### Create XaitPorter test user In this section, you create a user called B.Simon in XaitPorter. Work with [XaitPorter Client support team](https://www.xait.com/support/) to add the users in the XaitPorter platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the XaitPorter tile in the Access Panel, you should be automatically signed in to the XaitPorter for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the XaitPorter Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the XaitPorter Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the XaitPorter tile in My Apps, you're redirected to the XaitPorter Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure XaitPorter, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Xmatters Ondemand Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/xmatters-ondemand-tutorial.md
Previously updated : 11/19/2020 Last updated : 06/07/2021 # Tutorial: Azure Active Directory integration with xMatters OnDemand
-In this tutorial, you learn how to integrate xMatters OnDemand with Azure Active Directory (Azure AD).
-Integrating xMatters OnDemand with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate xMatters OnDemand with Azure Active Directory (Azure AD). When you integrate xMatters OnDemand with Azure AD, you can:
-* You can control in Azure AD who has access to xMatters OnDemand.
-* You can enable your users to be automatically signed-in to xMatters OnDemand (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
+* Control in Azure AD who has access to xMatters OnDemand.
+* Enable your users to be automatically signed-in to xMatters OnDemand with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with xMatters OnDemand, you need the following items: * An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
-* xMatters OnDemand single sign-on enabled subscription
+* xMatters OnDemand single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* xMatters OnDemand supports **IDP** initiated SSO
+* xMatters OnDemand supports **IDP** initiated SSO.
-## Adding xMatters OnDemand from the gallery
+## Add xMatters OnDemand from the gallery
To configure the integration of xMatters OnDemand into Azure AD, you need to add xMatters OnDemand from the gallery to your list of managed SaaS apps.
To configure the integration of xMatters OnDemand into Azure AD, you need to add
1. In the **Add from the gallery** section, type **xMatters OnDemand** in the search box. 1. Select **xMatters OnDemand** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for xMatters OnDemand Configure and test Azure AD SSO with xMatters OnDemand using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in xMatters OnDemand.
To configure and test Azure AD SSO with xMatters OnDemand, perform the following
1. **[Create xMatters OnDemand test user](#create-xmatters-ondemand-test-user)** - to have a counterpart of B.Simon in xMatters OnDemand that is linked to the Azure AD representation of the user. 3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal. 1. In the Azure portal, on the **xMatters OnDemand** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using one of the following patterns: | Identifier | | - |
- | `https://<companyname>.au1.xmatters.com.au/` |
- | `https://<companyname>.cs1.xmatters.com/` |
- | `https://<companyname>.xmatters.com/` |
+ | `https://<COMPANY_NAME>.au1.xmatters.com.au/` |
+ | `https://<COMPANY_NAME>.cs1.xmatters.com/` |
+ | `https://<COMPANY_NAME>.xmatters.com/` |
| `https://www.xmatters.com` |
- | `https://<companyname>.xmatters.com.au/` |
+ | `https://<COMPANY_NAME>.xmatters.com.au/` |
b. In the **Reply URL** text box, type a URL using one of the following patterns: | Reply URL | | - |
- | `https://<companyname>.au1.xmatters.com.au` |
- | `https://<companyname>.xmatters.com/sp/<instancename>` |
- | `https://<companyname>.cs1.xmatters.com/sp/<instancename>` |
- | `https://<companyname>.au1.xmatters.com.au/<instancename>` |
+ | `https://<COMPANY_NAME>.au1.xmatters.com.au` |
+ | `https://<COMPANY_NAME>.xmatters.com/sp/<INSTANCE_NAME>` |
+ | `https://<COMPANY_NAME>.cs1.xmatters.com/sp/<INSTANCE_NAME>` |
+ | `https://<COMPANY_NAME>.au1.xmatters.com.au/<INSTANCE_NAME>` |
> [!NOTE] > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [xMatters OnDemand Client support team](https://www.xmatters.com/company/contact-us/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button. - ## Configure xMatters OnDemand SSO
-1. In a different web browser window, sign in to your XMatters OnDemand company site as an administrator.
+1. In a different web browser window, sign in to your xMatters OnDemand company site as an administrator.
2. Click on **Admin**, and then click **Company Details**.
- ![Admin page](./media/xmatters-ondemand-tutorial/admin.png "Admin")
+ ![Admin page](./media/xmatters-ondemand-tutorial/admin.png "Admin page")
3. On the **SAML Configuration** page, perform the following steps:
- ![SAML configuration section ](./media/xmatters-ondemand-tutorial/saml-configuration.png "SAML configuration")
+ ![SAML configuration section ](./media/xmatters-ondemand-tutorial/saml-configuration.png "SAML configuration section")
a. Select **Enable SAML**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create xMatters OnDemand test user
-1. Sign in to your **XMatters OnDemand** tenant.
+1. Sign in to your **xMatters OnDemand** tenant.
2. Go to the **Users Icon** > **Users** and then click **Add Users**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Add a User](./media/xmatters-ondemand-tutorial/add-user-2.png "Add a User") --
-### Test SSO
+## Test SSO
In this section, you test your Azure AD single sign-on configuration with the following options.
-* Click on Test this application in Azure portal and you should be automatically signed in to the xMatters OnDemand for which you set up the SSO
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the xMatters OnDemand instance for which you set up SSO.
* You can use Microsoft My Apps. When you click the xMatters OnDemand tile in My Apps, you should be automatically signed in to the xMatters OnDemand instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-files-csi.md
After editing and saving the file, create the storage class with the [kubectl ap
```console $ kubectl apply -f nfs-sc.yaml
-storageclass.storage.k8s.io/azurefile-csi created
+storageclass.storage.k8s.io/azurefile-csi-nfs created
``` ### Create a deployment with an NFS-backed file share
-You can deploy an example [stateful set](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/statefulset.yaml) that saves timestamps into a file `data.txt` by deploying the following command with the [kubectl apply][kubectl-apply] command:
+You can deploy an example [stateful set](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/deploy/example/nfs/statefulset.yaml) that saves timestamps into a file `data.txt` by running the following [kubectl apply][kubectl-apply] command:
```console
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/statefulset.yaml
+$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/deploy/example/nfs/statefulset.yaml
statefulset.apps/statefulset-azurefile created ```
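To confirm the stateful set is writing timestamps, you can list the created objects and read the file back. The `/mnt/azurefile/data.txt` path below is an assumption based on the upstream example; adjust it to match your manifest:

```console
$ kubectl get statefulset,pvc
$ kubectl exec statefulset-azurefile-0 -- tail /mnt/azurefile/data.txt
```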
aks Kubernetes Walkthrough Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-powershell.md
Title: 'Quickstart: Deploy an AKS cluster by using PowerShell'
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 03/15/2021 Last updated : 03/15/2021
# Quickstart: Deploy an Azure Kubernetes Service cluster using PowerShell Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
-* Deploy an AKS cluster using PowerShell.
-* Run a multi-container application with a web front-end and a Redis instance in the cluster.
+* Deploy an AKS cluster using PowerShell.
+* Run a multi-container application with a web front-end and a Redis instance in the cluster.
* Monitor the health of the cluster and pods that run your application. To learn more about creating a Windows Server node pool, see
Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
## Create a resource group
-An [Azure resource group](../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you will be prompted to specify a location. This location is:
+An [Azure resource group](../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you will be prompted to specify a location. This location is:
* The storage location of your resource group metadata.
-* Where your resources will run in Azure if you don't specify another region during resource creation.
+* Where your resources will run in Azure if you don't specify another region during resource creation.
The following example creates a resource group named **myResourceGroup** in the **eastus** region.
ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resource
## Create AKS cluster
-1. Generate an SSH key pair using the `ssh-keygen` command-line utility.
+1. Generate an SSH key pair using the `ssh-keygen` command-line utility.
* For more details, see [Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md).
-1. Create an AKS cluster using the [New-AzAks][new-azaks] cmdlet. Azure Monitor for containers is enabled by default.
+1. Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] cmdlet. Azure Monitor for containers is enabled by default.
- The following example creates a cluster named **myAKSCluster** with one node.
+ The following example creates a cluster named **myAKSCluster** with one node.
```azurepowershell-interactive New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1
After a few minutes, the command completes and returns information about the clu
## Connect to the cluster
-To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
1. Install `kubectl` locally using the `Install-AzAksKubectl` cmdlet:
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
## Run the application
-A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]: * The sample Azure Vote Python applications.
-* A Redis instance.
+* A Redis instance.
Two [Kubernetes Services][kubernetes-service] are also created: * An internal service for the Redis instance.
To see the Azure Vote app in action, open a web browser to the external IP addre
![Voting app deployed in Azure Kubernetes Service](./media/kubernetes-walkthrough-powershell/voting-app-deployed-in-azure-kubernetes-service.png)
-View the cluster nodes' and pods' health metrics captured by Azure Monitor for containers in the Azure portal.
+View the cluster nodes' and pods' health metrics captured by Azure Monitor for containers in the Azure portal.
## Delete the cluster
Remove-AzResourceGroup -Name myResourceGroup
> [!NOTE] > When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion][sp-delete].
->
+>
> If you used a managed identity, the identity is managed by the platform and does not require removal. ## Get the code
To learn more about AKS, and walk through a complete code to deployment example,
[kubernetes-concepts]: concepts-clusters-workloads.md [install-azure-powershell]: /powershell/azure/install-az-ps [new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
-[new-azaks]: /powershell/module/az.aks/new-azaks
+[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
[import-azakscredential]: /powershell/module/az.aks/import-azakscredential [kubernetes-deployment]: concepts-clusters-workloads.md#deployments-and-yaml-manifests [kubernetes-service]: concepts-network.md#services
aks Windows Container Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/windows-container-powershell.md
Title: Create a Windows Server container on an AKS cluster by using PowerShell
description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 03/12/2021 Last updated : 03/12/2021
If you choose to use PowerShell locally, this article requires that you install
module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell][install-azure-powershell]. You also must install the Az.Aks PowerShell module:
+[Install Azure PowerShell][install-azure-powershell]. You also must install the Az.Aks PowerShell module:
```azurepowershell-interactive Install-Module Az.Aks
Use the `ssh-keygen` command-line utility to generate an SSH key pair. For more
To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use a network policy that uses [Azure CNI][azure-cni-about] (advanced) network plugin. For more detailed information to help plan out the required subnet ranges and network considerations, see
-[configure Azure CNI networking][use-advanced-networking]. Use the [New-AzAks][new-azaks] cmdlet
+[configure Azure CNI networking][use-advanced-networking]. Use the [New-AzAksCluster][new-azakscluster] cmdlet
below to create an AKS cluster named **myAKSCluster**. The following example creates the necessary network resources if they don't exist.
The above command creates a new node pool named **npwin** and adds it to the **m
creating a node pool to run Windows Server containers, the default value for **VmSize** is **Standard_D2s_v3**. If you choose to set the **VmSize** parameter, check the list of [restricted VM sizes][restricted-vm-sizes]. The minimum recommended size is **Standard_D2s_v3**. The
-previous command also uses the default subnet in the default vnet created when running `New-AzAks`.
+previous command also uses the default subnet in the default vnet created when running `New-AzAksCluster`.
## Connect to the cluster
Kubernetes cluster tutorial.
[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup [azure-cni-about]: concepts-network.md#azure-cni-advanced-networking [use-advanced-networking]: configure-azure-cni.md
-[new-azaks]: /powershell/module/az.aks/new-azaks
+[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
[restricted-vm-sizes]: quotas-skus-regions.md#restricted-vm-sizes [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [kubernetes-deployment]: concepts-clusters-workloads.md#deployments-and-yaml-manifests
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-aad.md
After the changes are saved, users in the specified Azure AD instance can sign i
After you enable access for users in an Azure AD tenant, you can add Azure AD groups into API Management. As a result, you can control product visibility using Azure AD groups.
-To add an external Azure AD group into APIM, you must first complete the previous section. Additionally, the application you registered must be granted access to the Microsoft Graph API with `Directory.Read.All` permission by following these steps:
+To add an external Azure AD group into APIM, you must first complete the previous section. By default, the application you registered has access to the Microsoft Graph API with the required `User.Read` Delegated permission. You must also give the application access to the Microsoft Graph API and Azure Active Directory Graph API with the `Directory.Read.All` Application permission by following these steps:
-1. Go back to your App Registration that was created in the previous section.
-2. Select **API Permissions**, and then click **+Add a permission**.
-3. In the **Request API Permissions** pane, select the **Microsoft APIs** tab, scroll down and then select the **Azure Active Directory Graph** tile. Select **Application permissions**, search for **Directory**, and then select the **Directory.Read.All** permission.
-4. Click **Add permissions** at the bottom of the pane, and then click **Grant admin consent for {tenantname}** so that you grant access for all users in this directory.
+1. Go back to your app registration that was created in the previous section.
+2. Select **API Permissions**, and then select **Add a permission**.
+3. In the **Request API Permissions** pane, select the **Microsoft APIs** tab, and then select the **Microsoft Graph** tile. Select **Application permissions** and search for **Directory**. Select the **Directory.Read.All** permission, and then select **Add permissions** at the bottom of the pane.
+4. Select **Add a permission**.
+5. In the **Request API Permissions** pane, select the **Microsoft APIs** tab, scroll down, and then select the **Azure Active Directory Graph** tile. Select **Application permissions** and search for **Directory**. Select the **Directory.Read.All** permission, and then select **Add permissions**.
+6. Select **Grant admin consent for {tenantname}** so that you grant access for all users in this directory.
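If you script your setup, the Microsoft Graph half of this grant can be approximated with the AzureAD PowerShell module. This is a hedged sketch, not part of the article's portal steps; the app display name is a placeholder, and the Azure Active Directory Graph grant (appId `00000002-0000-0000-c000-000000000000`) follows the same pattern:

```azurepowershell-interactive
# Hedged sketch: grant the Directory.Read.All application permission
# on Microsoft Graph to your registered app's service principal.
Connect-AzureAD

$appSp   = Get-AzureADServicePrincipal -Filter "displayName eq '<YOUR-APP-NAME>'"
$graphSp = Get-AzureADServicePrincipal -Filter "appId eq '00000003-0000-0000-c000-000000000000'"  # Microsoft Graph

# Look up the Directory.Read.All app role instead of hard-coding its GUID.
$role = $graphSp.AppRoles | Where-Object { $_.Value -eq "Directory.Read.All" }

New-AzureADServiceAppRoleAssignment -ObjectId $appSp.ObjectId `
    -PrincipalId $appSp.ObjectId `
    -ResourceId $graphSp.ObjectId `
    -Id $role.Id
```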
Now you can add external Azure AD groups from the **Groups** tab of your API Management instance. 1. Select the **Groups** tab. 2. Select the **Add AAD group** button.
- !["Add AAD group" button](./media/api-management-howto-aad/api-management-with-aad008.png)
+
+ !["Add A A D group" button](./media/api-management-howto-aad/api-management-with-aad008.png)
3. Select the group that you want to add. 4. Press the **Select** button.
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-internal-vnet.md
Title: Use Azure API Management with internal virtual networks
+ Title: Connect to an internal virtual network using Azure API Management
description: Learn how to set up and configure Azure API Management on an internal virtual network
editor: ''
Previously updated : 04/12/2021 Last updated : 06/08/2021
-# Using Azure API Management service with an internal virtual network
-With Azure Virtual Networks, Azure API Management can manage APIs not accessible on the internet. A number of VPN technologies are available to make the connection. API Management can be deployed in two main modes inside a virtual network:
-* External
-* Internal
+# Connect to an internal virtual network using Azure API Management
+With Azure Virtual Networks (VNETs), Azure API Management can manage internet-inaccessible APIs, using several VPN technologies to make the connection. You can deploy API Management in either [external](./api-management-using-with-vnet.md) or internal mode. In this article, you'll learn how to deploy API Management in internal VNET mode.
-When API Management deploys in internal virtual network mode, all the service endpoints (the proxy gateway, the Developer portal, direct management, and Git) are only visible within a virtual network that you control the access to. None of the service endpoints are registered on the public DNS server.
+When API Management deploys in internal VNET mode, the following service endpoints are accessible only from within a VNET whose access you control.
+* The proxy gateway
+* The developer portal
+* Direct management
+* Git
> [!NOTE]
-> Because there are no DNS entries for the service endpoints, these endpoints will not be accessible until [DNS is configured](#apim-dns-configuration) for the virtual network.
+> None of the service endpoints are registered on the public DNS. The service endpoints will remain inaccessible until you [configure DNS](#apim-dns-configuration) for the VNET.
-Using API Management in internal mode, you can achieve the following scenarios:
+Use API Management in internal mode to:
-* Make APIs hosted in your private datacenter securely accessible by third parties outside of it by using site-to-site or Azure ExpressRoute VPN connections.
+* Make APIs hosted in your private datacenter securely accessible by third parties, using site-to-site or Azure ExpressRoute VPN connections.
* Enable hybrid cloud scenarios by exposing your cloud-based APIs and on-premises APIs through a common gateway.
-* Manage your APIs hosted in multiple geographic locations by using a single gateway endpoint.
+* Manage your APIs hosted in multiple geographic locations, using a single gateway endpoint.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
Using API Management in internal mode, you can achieve the following scenarios:
## Prerequisites
-To perform the steps described in this article, you must have:
-
-+ **An active Azure subscription**.
-
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
++ **An active Azure subscription**. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] + **An Azure API Management instance**. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). [!INCLUDE [api-management-public-ip-for-vnet](../../includes/api-management-public-ip-for-vnet.md)]
-When an API Management service is deployed in a virtual network, a [list of ports](./api-management-using-with-vnet.md#required-ports) are used and need to be opened.
+When an API Management service is deployed in a VNET, it uses a [list of ports](./api-management-using-with-vnet.md#required-ports) that need to be opened.
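For example, control plane traffic on port 3443 from the **ApiManagement** service tag must be allowed into the subnet. The following Az PowerShell sketch assumes a hypothetical NSG named `apim-subnet-nsg` attached to the API Management subnet:

```azurepowershell-interactive
# Hedged sketch: allow inbound API Management control plane traffic on port 3443.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Name "apim-subnet-nsg"

$nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowApimControlPlane" `
    -Priority 100 -Direction Inbound -Access Allow -Protocol Tcp `
    -SourceAddressPrefix ApiManagement -SourcePortRange * `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange 3443 |
  Set-AzNetworkSecurityGroup
```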
+
+## <a name="enable-vpn"> </a>Creating an API Management in an internal VNET
+The API Management service in an internal virtual network is hosted behind an internal load balancer. The load balancer SKU depends on the management API version used to create the service. For more information, see [Azure Load Balancer SKUs](../load-balancer/skus.md).
-## <a name="enable-vpn"> </a>Creating an API Management in an internal virtual network
-The API Management service in an internal virtual network is hosted behind an internal load balancer Basic SKU if the service is created with client API version 2020-12-01. For service created with clients having API version 2021-01-01-preview and having a public IP address from the customer's subscription, it is hosted behind an internal load balancer Standard SKU. For more information, see [Azure Load Balancer SKUs](../load-balancer/skus.md).
+| API version | Hosted behind |
+| -- | - |
+| 2020-12-01 | An internal load balancer in the Basic SKU |
+| 2021-01-01-preview, with a public IP address from your subscription | An internal load balancer in the Standard SKU |
-### Enable a virtual network connection using the Azure portal
+### Enable a VNET connection using the Azure portal
-1. Browse to your Azure API Management instance in the [Azure portal](https://portal.azure.com/).
+1. Navigate to your Azure API Management instance in the [Azure portal](https://portal.azure.com/).
1. Select **Virtual network**. 1. Configure the **Internal** access type. For detailed steps, see [Enable VNET connectivity using the Azure portal](api-management-using-with-vnet.md#enable-vnet-connectivity-using-the-azure-portal).
- ![Menu for setting up an Azure API Management in an internal virtual network][api-management-using-internal-vnet-menu]
+ ![Menu for setting up an Azure API Management in an internal VNET][api-management-using-internal-vnet-menu]
4. Select **Save**.
-After the deployment succeeds, you should see **private** virtual IP address and **public** virtual IP address of your API Management service on the overview blade. The **private** virtual IP address is a load balanced IP address from within the API Management delegated subnet over which `gateway`, `portal`, `management` and `scm` endpoints can be accessed. The **public** virtual IP address is used **only** for control plane traffic to `management` endpoint over port 3443 and can be locked down to the [ApiManagement][ServiceTags] service tag.
+After successful deployment, you should see your API Management service's **private** virtual IP address and **public** virtual IP address on the **Overview** blade.
-![API Management dashboard with an internal virtual network configured][api-management-internal-vnet-dashboard]
+| Virtual IP address | Description |
+| -- | -- |
+| **Private virtual IP address** | A load balanced IP address from within the API Management-delegated subnet, over which you can access `gateway`, `portal`, `management`, and `scm` endpoints. |
+| **Public virtual IP address** | Used *only* for control plane traffic to `management` endpoint over `port 3443`. Can be locked down to the [ApiManagement][ServiceTags] service tag. |
+
+![API Management dashboard with an internal VNET configured][api-management-internal-vnet-dashboard]
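You can also read both addresses with Az PowerShell; a small sketch, assuming the resource names used in this article:

```azurepowershell-interactive
# Sketch: retrieve the private and public VIPs after deployment.
$apim = Get-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "contosointernalvnet"
$apim.PrivateIPAddresses   # load-balanced private VIP inside the delegated subnet
$apim.PublicIPAddresses    # public VIP, used only for management traffic on port 3443
```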
> [!NOTE]
-> The Test console available on the Azure Portal will not work for **Internal** VNET deployed service, as the Gateway Url is not registered on the Public DNS. You should instead use the Test Console provided on the **Developer portal**.
+> Since the Gateway URL is not registered on the public DNS, the test console available on the Azure portal will not work for **Internal** VNET deployed service. Instead, use the test console provided on the **Developer portal**.
-### <a name="deploy-apim-internal-vnet"> </a>Deploy API Management into Virtual Network
+### <a name="deploy-apim-internal-vnet"> </a>Deploy API Management into VNET
-You can also enable virtual network connectivity by using the following methods.
+You can also enable VNET connectivity by using the following methods.
### API version 2020-12-01
You can also enable virtual network connectivity by using the following methods.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-create-with-internal-vnet%2Fazuredeploy.json)
-* Azure PowerShell cmdlets - [Create](/powershell/module/az.apimanagement/new-azapimanagement) or [update](/powershell/module/az.apimanagement/update-azapimanagementregion) an API Management instance in a virtual network
+* Azure PowerShell cmdlets - [Create](/powershell/module/az.apimanagement/new-azapimanagement) or [update](/powershell/module/az.apimanagement/update-azapimanagementregion) an API Management instance in a VNET
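As a hedged illustration of the PowerShell path, the following sketch creates a new instance in internal mode. All names and the subnet resource ID are placeholders; substitute your own:

```azurepowershell-interactive
# Hedged sketch: create an API Management instance in internal VNET mode.
$subnetId = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNET/subnets/apim-subnet"
$vnet = New-AzApiManagementVirtualNetwork -SubnetResourceId $subnetId

New-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "contosointernalvnet" `
    -Location "East US" -Organization "Contoso" -AdminEmail "admin@contoso.com" `
    -Sku "Developer" -VirtualNetwork $vnet -VpnType "Internal"
```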
## <a name="apim-dns-configuration"></a>DNS configuration
-When API Management is in external virtual network mode, the DNS is managed by Azure. For internal virtual network mode, you have to manage your own DNS. Configuring an Azure DNS private zone and linking it to the virtual network API Management service is deployed into is the recommended option. Learn how to [set up a private zone in Azure DNS](../dns/private-dns-getstarted-portal.md).
+In external VNET mode, Azure manages the DNS. For internal VNET mode, you have to manage your own DNS. We recommend:
+1. Configuring an Azure DNS private zone.
+1. Linking the Azure DNS private zone to the VNET into which you've deployed your API Management service.
+
+Learn how to [set up a private zone in Azure DNS](../dns/private-dns-getstarted-portal.md).
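A hedged Az PowerShell sketch of that recommendation follows; the zone name, VNET ID, and IP address are placeholders drawn from this article's examples:

```azurepowershell-interactive
# Hedged sketch: private DNS zone, VNET link, and an A record for the gateway host name.
$rg = "myResourceGroup"
New-AzPrivateDnsZone -ResourceGroupName $rg -Name "azure-api.net"

$vnetId = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/$rg/providers/Microsoft.Network/virtualNetworks/myVNET"
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName $rg -ZoneName "azure-api.net" `
    -Name "apim-vnet-link" -VirtualNetworkId $vnetId

New-AzPrivateDnsRecordSet -ResourceGroupName $rg -ZoneName "azure-api.net" `
    -Name "contosointernalvnet" -RecordType A -Ttl 3600 `
    -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address "10.1.0.5")
```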
> [!NOTE]
-> API Management service does not listen to requests coming from IP addresses. It only responds to requests to the host name configured on its service endpoints. These endpoints include gateway, the Azure portal and the Developer portal, direct management endpoint, and Git.
+> API Management service does not listen to requests coming from IP addresses. It only responds to requests to the host name configured on its service endpoints. These endpoints include:
+> * Gateway
+> * The Azure portal
+> * The developer portal
+> * Direct management endpoint
+> * Git
### Access on default host names
-When you create an API Management service, named "contosointernalvnet" for example, the following service endpoints are configured by default:
-
- * Gateway or proxy: contosointernalvnet.azure-api.net
-
- * The Developer portal: contosointernalvnet.portal.azure-api.net
-
- * The new Developer portal: contosointernalvnet.developer.azure-api.net
-
- * Direct management endpoint: contosointernalvnet.management.azure-api.net
-
- * Git: contosointernalvnet.scm.azure-api.net
-
-To access these API Management service endpoints, you can create a virtual machine in a subnet connected to the virtual network in which API Management is deployed. Assuming the internal virtual IP address for your service is 10.1.0.5, you can map the hosts file, %SystemDrive%\drivers\etc\hosts, as follows:
-
- * 10.1.0.5 contosointernalvnet.azure-api.net
-
- * 10.1.0.5 contosointernalvnet.portal.azure-api.net
-
- * 10.1.0.5 contosointernalvnet.developer.azure-api.net
-
- * 10.1.0.5 contosointernalvnet.management.azure-api.net
-
- * 10.1.0.5 contosointernalvnet.scm.azure-api.net
+When you create an API Management service (`contosointernalvnet`, for example), the following service endpoints are configured by default:
+
+| Endpoint | Default host name |
+| -- | -- |
+| Gateway or proxy | `contosointernalvnet.azure-api.net` |
+| Developer portal | `contosointernalvnet.portal.azure-api.net` |
+| The new developer portal | `contosointernalvnet.developer.azure-api.net` |
+| Direct management endpoint | `contosointernalvnet.management.azure-api.net` |
+| Git | `contosointernalvnet.scm.azure-api.net` |
+
+To access these API Management service endpoints, you can create a virtual machine in a subnet connected to the VNET in which API Management is deployed. Assuming the internal virtual IP address for your service is 10.1.0.5, you can map the hosts file, `%SystemRoot%\System32\drivers\etc\hosts`, as follows:
+
+| Internal virtual IP address | Host name |
+| -- | -- |
+| 10.1.0.5 | `contosointernalvnet.azure-api.net` |
+| 10.1.0.5 | `contosointernalvnet.portal.azure-api.net` |
+| 10.1.0.5 | `contosointernalvnet.developer.azure-api.net` |
+| 10.1.0.5 | `contosointernalvnet.management.azure-api.net` |
+| 10.1.0.5 | `contosointernalvnet.scm.azure-api.net` |
You can then access all the service endpoints from the virtual machine you created.
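On the test virtual machine, you can append these mappings with a short PowerShell loop (run from an elevated session); a sketch using the example values above:

```powershell
# Sketch: append the API Management host names to the VM's hosts file.
$ip = "10.1.0.5"
$hostsFile = "$env:SystemRoot\System32\drivers\etc\hosts"
$hostNames = @(
    "contosointernalvnet.azure-api.net",
    "contosointernalvnet.portal.azure-api.net",
    "contosointernalvnet.developer.azure-api.net",
    "contosointernalvnet.management.azure-api.net",
    "contosointernalvnet.scm.azure-api.net"
)
foreach ($hostName in $hostNames) {
    Add-Content -Path $hostsFile -Value "$ip`t$hostName"
}
```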
-If you use a custom DNS server in a virtual network, you can also create A DNS records and access these endpoints from anywhere in your virtual network.
+If you use a custom DNS server in a VNET, you can also create DNS A-records and access these endpoints from anywhere in your VNET.
### Access on custom domain names
-1. If you don't want to access the API Management service with the default host names, you can set up custom domain names for all your service endpoints as shown in the following image:
+If you don't want to access the API Management service with the default host names:
+
+1. Set up custom domain names for all your service endpoints, as shown in the following image:
![Setting up a custom domain for API Management][api-management-custom-domain-name]
-2. Then you can create records in your DNS server to access the endpoints that are only accessible from within your virtual network.
+2. Create records in your DNS server to access the endpoints accessible from within your VNET.
## <a name="routing"> </a> Routing
-* A load balanced *private* virtual IP address from the subnet range will be reserved and used to access the API Management service endpoints from within the virtual network. This *private* IP address can be found on the Overview blade for the service in the Azure portal. This address must be registered with the DNS servers used by the virtual network.
-* A load balanced *public* IP address (VIP) will also be reserved to provide access to the management service endpoint over port 3443. This *public* IP address can be found on the Overview blade for the service in the Azure portal. The *public* IP address is used only for control plane traffic to the `management` endpoint over port 3443 and can be locked down to the [ApiManagement][ServiceTags] service tag.
-* IP addresses from the subnet IP range (DIP) will be assigned to each VM in the service and will be used to access resources within the virtual network. A public IP address (VIP) will be used to access resources outside the virtual network. If IP restriction lists are used to secure resources within the virtual network, the entire range for the subnet where the API Management service is deployed must be specified to grant or restrict access from the service.
+* A load balanced *private* virtual IP address from the subnet range will be reserved for access to the API Management service endpoints from within the VNET.
+ * Find this private IP address on the service's Overview blade in the Azure portal.
+ * Register this address with the DNS servers used by the VNET.
+* A load balanced *public* IP address (VIP) will also be reserved to provide access to the management service endpoint over `port 3443`.
+ * Find this public IP address on the service's Overview blade in the Azure portal.
+ * Only use the *public* IP address for control plane traffic to the `management` endpoint over `port 3443`.
+ * This IP address can be locked down to the [ApiManagement][ServiceTags] service tag.
+* DIP addresses will be assigned to each virtual machine in the service and used to access resources *within* the VNET. A VIP address will be used to access resources *outside* the VNET. If IP restriction lists secure resources within the VNET, you must specify the entire subnet range where the API Management service is deployed to grant or restrict access from the service.
* The load balanced public and private IP addresses can be found on the Overview blade in the Azure portal.
-* The IP addresses assigned for public and private access may change if the service is removed from and then added back into the virtual network. If this happens, it may be necessary to update DNS registrations, routing rules, and IP restriction lists within the virtual network.
+* If you remove or add the service in the VNET, the IP addresses assigned for public and private access may change. You may need to update DNS registrations, routing rules, and IP restriction lists within the VNET.
## <a name="related-content"> </a>Related content To learn more, see the following articles:
-* [Common network configuration problems while setting up Azure API Management in a virtual network][Common network configuration problems]
-* [Virtual network FAQs](../virtual-network/virtual-networks-faq.md)
+* [Common network configuration problems while setting up Azure API Management in a VNET][Common network configuration problems]
+* [VNET FAQs](../virtual-network/virtual-networks-faq.md)
* [Creating a record in DNS](/previous-versions/windows/it-pro/windows-2000-server/bb727018(v=technet.10)) [api-management-using-internal-vnet-menu]: ./media/api-management-using-with-internal-vnet/updated-api-management-using-with-internal-vnet.png
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-vnet.md
Title: How to use Azure API Management with virtual networks
-description: Learn how to setup a connection to a virtual network in Azure API Management and access web services through it.
+ Title: Connect to a virtual network using Azure API Management
+description: Learn how to set up a connection to a virtual network in Azure API Management and access web services through it.
Previously updated : 05/28/2021 Last updated : 06/08/2021
-# How to use Azure API Management with virtual networks
-Azure Virtual Networks (VNETs) allow you to place any of your Azure resources in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies. To learn more about Azure Virtual Networks start with the information here: [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md).
+# Connect to a virtual network using Azure API Management
+With Azure Virtual Networks (VNETs), you can place any of your Azure resources in a non-internet-routable network to which you control access. You can then connect VNETs to your on-premises networks using various VPN technologies. To learn more about Azure VNETs, start with the information in the [Azure Virtual Network Overview](../virtual-network/virtual-networks-overview.md).
-Azure API Management can be deployed inside the virtual network (VNET), so it can access backend services within the network. The developer portal and API gateway, can be configured to be accessible either from the Internet or only within the virtual network.
+Azure API Management can be deployed inside the VNET to access backend services within the network. You can configure the developer portal and API gateway to be accessible either from the internet or only within the VNET.
+
+This article explains VNET connectivity options, settings, limitations, and troubleshooting steps for your API Management instance. For configurations specific to the internal mode, where the developer portal and API gateway are accessible only within the VNET, see [Connect to an internal virtual network using Azure API Management](./api-management-using-with-internal-vnet.md).
> [!NOTE]
> The API import document URL must be hosted on a publicly accessible internet address.
Azure API Management can be deployed inside the virtual network (VNET), so it ca
## Prerequisites
-To perform the steps described in this article, you must have:
-
-+ **An active Azure subscription.**
-
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
++ **An active Azure subscription.** [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
++ **An API Management instance.** For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
To perform the steps described in this article, you must have:
1. Choose your API Management instance.
1. Select **Virtual network**.
-1. Configure the API Management instance to be deployed inside a Virtual network.
+1. Configure the API Management instance to be deployed inside a VNET.
- :::image type="content" source="media/api-management-using-with-vnet/api-management-menu-vnet.png" alt-text="Select virtual network in Azure portal.":::
+ :::image type="content" source="media/api-management-using-with-vnet/api-management-menu-vnet.png" alt-text="Select VNET in Azure portal.":::
1. Select the desired access type:
- * **Off**: This is the default. API Management is not deployed into a virtual network.
+ * **Off**: Default type. API Management is not deployed into a VNET.
- * **External**: The API Management gateway and developer portal are accessible from the public internet via an external load balancer. The gateway can access resources within the virtual network.
+ * **External**: The API Management gateway and developer portal are accessible from the public internet via an external load balancer. The gateway can access resources within the VNET.
![Public peering][api-management-vnet-public]
- * **Internal**: The API Management gateway and developer portal are accessible only from within the virtual network via an internal load balancer. The gateway can access resources within the virtual network.
+ * **Internal**: The API Management gateway and developer portal are accessible only from within the VNET via an internal load balancer. The gateway can access resources within the VNET.
![Private peering][api-management-vnet-private]
-1. If you selected **External** or **Internal**, you will see a list of all locations (regions) where your API Management service is provisioned. Choose a **Location**, and then pick its **Virtual network**, **Subnet**, and **IP address**. The virtual network list is populated with Resource Manager virtual networks available in your Azure subscriptions that are set up in the region you are configuring.
-
+1. If you selected **External** or **Internal**, you will see a list of all locations (regions) where your API Management service is provisioned.
+1. Choose a **Location**.
+1. Pick **Virtual network**, **Subnet**, and **IP address**.
+ * The VNET list is populated with Resource Manager VNETs available in your Azure subscriptions, set up in the region you are configuring.
- :::image type="content" source="media/api-management-using-with-vnet/api-management-using-vnet-select.png" alt-text="Virtual network settings in the portal.":::
+ :::image type="content" source="media/api-management-using-with-vnet/api-management-using-vnet-select.png" alt-text="VNET settings in the portal.":::
- > [!IMPORTANT]
- > * When your client uses **API version 2020-12-01 or earlier** to deploy an Azure API Management instance in a Resource Manager VNET, the service must be in a dedicated subnet that contains no resources except Azure API Management instances. If an attempt is made to deploy an Azure API Management instance to a Resource Manager VNET subnet that contains other resources, the deployment will fail.
- > * When your client uses **API version 2021-01-01-preview or later** to deploy an Azure API Management instance in a virtual network, only a Resource Manager virtual network is supported. Additionally, the subnet used may contain other resources. You don't have to use a subnet dedicated to API Management instances.
+ > [!IMPORTANT]
+ > * **If using API version 2020-12-01 or earlier to deploy an Azure API Management instance in a Resource Manager VNET:**
+ > The service must be in a dedicated subnet that contains only Azure API Management instances. Attempting to deploy an Azure API Management instance to a Resource Manager VNET subnet that contains other resources will cause the deployment to fail.
+ >
+ > * **If using API version 2021-01-01-preview or later to deploy an Azure API Management instance in a VNET:**
+ > Only a Resource Manager VNET is supported, but the subnet used may contain other resources. You don't have to use a subnet dedicated to API Management instances.
-1. Select **Apply**. The **Virtual network** page of your API Management instance is updated with your new virtual network and subnet choices.
+1. Select **Apply**. The **Virtual network** page of your API Management instance is updated with your new VNET and subnet choices.
-1. Continue configuring virtual network settings for the remaining locations of your API Management instance.
+1. Continue configuring VNET settings for the remaining locations of your API Management instance.
-7. In the top navigation bar, select **Save**, and then select **Apply network configuration**.
+7. In the top navigation bar, select **Save**, then select **Apply network configuration**.
It can take 15 to 45 minutes to update the API Management instance.

> [!NOTE]
-> With clients using API version 2020-12-01 and earlier, the VIP address of the API Management instance will change each time the VNET is enabled or disabled. The VIP address will also change when API Management is moved from **External** to **Internal** virtual network, or vice versa.
+> With clients using API version 2020-12-01 and earlier, the VIP address of the API Management instance will change when:
+> * The VNET is enabled or disabled.
+> * API Management is moved from **External** to **Internal** virtual network, or vice versa.
> [!IMPORTANT]
-> If you remove API Management from a VNET or change the one it is deployed in, the previously used VNET can remain locked for up to six hours. During this period it will not be possible to delete the VNET or deploy a new resource to it. This behavior is true for clients using API version 2018-01-01 and earlier. Clients using API version 2019-01-01 and later, the VNET is freed up as soon as the associated API Management service is deleted.
+> * **If you are using API version 2018-01-01 and earlier:**
+> The VNET will lock for up to six hours if you remove API Management from a VNET or change the VNET. During these six hours, you can't delete the VNET or deploy a new resource to it.
+>
+> * **If you are using API version 2019-01-01 and later:**
+> The VNET is available as soon as the associated API Management service is deleted.
### <a name="deploy-apim-external-vnet"> </a>Deploy API Management into External VNET
-You can also enable virtual network connectivity by using the following methods.
+You can also enable VNET connectivity by using the following methods.
### API version 2021-01-01-preview
You can also enable virtual network connectivity by using the following methods.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-create-with-external-vnet%2Fazuredeploy.json)
-* Azure PowerShell cmdlets - [Create](/powershell/module/az.apimanagement/new-azapimanagement) or [update](/powershell/module/az.apimanagement/update-azapimanagementregion) an API Management instance in a virtual network
+* Azure PowerShell cmdlets - [Create](/powershell/module/az.apimanagement/new-azapimanagement) or [update](/powershell/module/az.apimanagement/update-azapimanagementregion) an API Management instance in a VNET
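As a sketch (assuming hypothetical resource, subnet, and organization names), creating an instance in external VNET mode with the Az PowerShell module might look like this:

```azurepowershell-interactive
# All names and the subnet resource ID below are placeholders.
$subnetId = "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/Microsoft.Network/virtualNetworks/contoso-vnet/subnets/apim-subnet"
$vnetConfig = New-AzApiManagementVirtualNetwork -SubnetResourceId $subnetId

New-AzApiManagement -ResourceGroupName "contoso-rg" -Name "contoso-apim" -Location "West US" `
    -Organization "Contoso" -AdminEmail "admin@contoso.com" -Sku "Premium" `
    -VirtualNetwork $vnetConfig -VpnType "External"
```

Use `-VpnType "Internal"` instead to deploy in internal mode.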
-## <a name="connect-vnet"> </a>Connect to a web service hosted within a virtual Network
-After your API Management service is connected to the VNET, accessing backend services within it is no different than accessing public services. Just type in the local IP address or the host name (if a DNS server is configured for the VNET) of your web service into the **Web service URL** field when creating a new API or editing an existing one.
+## <a name="connect-vnet"> </a>Connect to a web service hosted within a virtual network
+Once you've connected your API Management service to the VNET, you'll be able to access backend services within it just as you do public services. When creating or editing an API, type the local IP address or the host name (if a DNS server is configured for the VNET) of your web service into the **Web service URL** field.
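For example, here's a hedged sketch (the context, API names, and backend IP are assumptions) of creating an API whose backend is reachable only inside the VNET:

```azurepowershell-interactive
# Placeholder names and IP; the backend at 10.0.1.15 is assumed reachable from the VNET.
$context = New-AzApiManagementContext -ResourceGroupName "contoso-rg" -ServiceName "contoso-apim"

New-AzApiManagementApi -Context $context -ApiId "internal-orders" -Name "Orders API" `
    -ServiceUrl "http://10.0.1.15/orders" -Protocols @("https") -Path "orders"
```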
![Add API from VPN][api-management-setup-vpn-add-api]

## <a name="network-configuration-issues"> </a>Common Network Configuration Issues
-Following is a list of common misconfiguration issues that can occur while deploying API Management service into a Virtual Network.
+Common misconfiguration issues that can occur while deploying API Management service into a VNET include:
-* **Custom DNS server setup**: The API Management service depends on several Azure services. When API Management is hosted in a VNET with a custom DNS server, it needs to resolve the hostnames of those Azure services. Please follow [this](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) guidance on custom DNS setup. See the ports table below and other network requirements for reference.
+* **Custom DNS server setup:**
+ The API Management service depends on several Azure services. When API Management is hosted in a VNET with a custom DNS server, it needs to resolve the hostnames of those Azure services.
+ * For guidance on custom DNS setup, see [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
+ * For reference, see the [ports table](#required-ports) and network requirements.
-> [!IMPORTANT]
-> If you plan to use a Custom DNS Server(s) for the VNET, you should set it up **before** deploying an API Management service into it. Otherwise you need to update the API Management service each time you change the DNS Server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/2019-12-01/apimanagementservice/applynetworkconfigurationupdates)
+ > [!IMPORTANT]
+  > If you plan to use custom DNS server(s) for the VNET, set them up **before** deploying an API Management service into it. Otherwise, you'll need to update the API Management service each time you change the DNS server(s) by running the [Apply Network Configuration Operation](/rest/api/apimanagement/2019-12-01/apimanagementservice/applynetworkconfigurationupdates).
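  As a sketch (the subscription, resource group, and service names are placeholders), you could trigger that operation with the generic `Invoke-AzResourceAction` cmdlet:

  ```azurepowershell-interactive
  # Placeholder resource ID; triggers the apply network configuration operation.
  Invoke-AzResourceAction -Action "applynetworkconfigurationupdates" `
      -ResourceId "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/Microsoft.ApiManagement/service/contoso-apim" `
      -ApiVersion "2019-12-01" -Force
  ```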
-* **Ports required for API Management**: Inbound and Outbound traffic into the Subnet in which API Management is deployed can be controlled using [Network Security Group][Network Security Group]. If any of these ports are unavailable, API Management may not operate properly and may become inaccessible. Having one or more of these ports blocked is another common misconfiguration issue when using API Management with a VNET.
+* **Ports required for API Management:**
+ You can control inbound and outbound traffic into the subnet in which API Management is deployed by using [network security groups][network security groups]. If any of the following ports are unavailable, API Management may not operate properly and may become inaccessible. Blocked ports are another common misconfiguration issue when using API Management with a VNET.
<a name="required-ports"> </a>

When an API Management service instance is hosted in a VNET, the ports in the following table are used.
-| Source / Destination Port(s) | Direction | Transport protocol | [Service Tags](../virtual-network/network-security-groups-overview.md#service-tags) <br> Source / Destination | Purpose (\*) | Virtual Network type |
+| Source / Destination Port(s) | Direction | Transport protocol | [Service Tags](../virtual-network/network-security-groups-overview.md#service-tags) <br> Source / Destination | Purpose (\*) | VNET type |
|---|---|---|---|---|---|
| * / [80], 443 | Inbound | TCP | INTERNET / VIRTUAL_NETWORK | Client communication to API Management | External |
| * / 3443 | Inbound | TCP | ApiManagement / VIRTUAL_NETWORK | Management endpoint for Azure portal and PowerShell | External & Internal |
When an API Management service instance is hosted in a VNET, the ports in the fo
| * / 5671, 5672, 443 | Outbound | TCP | VIRTUAL_NETWORK / EventHub | Dependency for [Log to Event Hub policy](api-management-howto-log-event-hubs.md) and monitoring agent | External & Internal |
| * / 445 | Outbound | TCP | VIRTUAL_NETWORK / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) | External & Internal |
| * / 443, 12000 | Outbound | TCP | VIRTUAL_NETWORK / AzureCloud | Health and Monitoring Extension | External & Internal |
-| * / 1886, 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md) and [Application Insights](api-management-howto-app-insights.md) | External & Internal |
+| * / 1886, 443 | Outbound | TCP | VIRTUAL_NETWORK / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) | External & Internal |
| * / 25, 587, 25028 | Outbound | TCP | VIRTUAL_NETWORK / INTERNET | Connect to SMTP Relay for sending e-mails | External & Internal |
| * / 6381 - 6383 | Inbound & Outbound | TCP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Access Redis Service for [Cache](api-management-caching-policies.md) policies between machines | External & Internal |
| * / 4290 | Inbound & Outbound | UDP | VIRTUAL_NETWORK / VIRTUAL_NETWORK | Sync Counters for [Rate Limit](api-management-access-restriction-policies.md#LimitCallRateByKey) policies between machines | External & Internal |
| * / * | Inbound | TCP | AZURE_LOAD_BALANCER / VIRTUAL_NETWORK | Azure Infrastructure Load Balancer | External & Internal |

>[!IMPORTANT]
-> The Ports for which the *Purpose* is **bold** are required for API Management service to be deployed successfully. Blocking the other ports however will cause **degradation** in the ability to use and **monitor the running service and provide the committed SLA**.
+> Bold items in the *Purpose* column are required for API Management service to be deployed successfully. Blocking the other ports, however, will cause **degradation** in the ability to use and **monitor the running service and provide the committed SLA**.
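For instance, here's a minimal sketch (the NSG name, resource group, and rule priority are assumptions) of allowing the required inbound management traffic on `port 3443` from the `ApiManagement` service tag:

```azurepowershell-interactive
# Placeholder NSG and resource group names; adjust the priority to fit your rule set.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "contoso-rg" -Name "apim-subnet-nsg"

$nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-APIM-Management" -Access Allow `
    -Protocol Tcp -Direction Inbound -Priority 110 `
    -SourceAddressPrefix "ApiManagement" -SourcePortRange "*" `
    -DestinationAddressPrefix "VirtualNetwork" -DestinationPortRange "3443" | Set-AzNetworkSecurityGroup
```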
-+ **TLS functionality**: To enable TLS/SSL certificate chain building and validation the API Management service needs Outbound network connectivity to ocsp.msocsp.com, mscrl.microsoft.com and crl.microsoft.com. This dependency is not required, if any certificate you upload to API Management contain the full chain to the CA root.
++ **TLS functionality:**
+ To enable TLS/SSL certificate chain building and validation, the API Management service needs outbound network connectivity to `ocsp.msocsp.com`, `mscrl.microsoft.com`, and `crl.microsoft.com`. This dependency is not required if any certificate you upload to API Management contains the full chain to the CA root.
-+ **DNS Access**: Outbound access on port 53 is required for communication with DNS servers. If a custom DNS server exists on the other end of a VPN gateway, the DNS server must be reachable from the subnet hosting API Management.
++ **DNS Access:**
+ Outbound access on `port 53` is required for communication with DNS servers. If a custom DNS server exists on the other end of a VPN gateway, the DNS server must be reachable from the subnet hosting API Management.
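  To sanity-check this from a VM in the same subnet, you might run something like the following (the DNS server IP and host name are hypothetical):

  ```azurepowershell-interactive
  # 10.0.0.4 stands in for your custom DNS server; backend.contoso.internal is a placeholder name.
  Test-NetConnection -ComputerName "10.0.0.4" -Port 53
  Resolve-DnsName -Name "backend.contoso.internal" -Server "10.0.0.4"
  ```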
-+ **Metrics and Health Monitoring**: Outbound network connectivity to Azure Monitoring endpoints, which resolve under the following domains. As shown in the table, these URLs are represented under the AzureMonitor service tag for use with Network Security Groups.
++ **Metrics and Health Monitoring:**
+  Outbound network connectivity to Azure Monitoring endpoints, which resolve under the following domains, is required. These URLs are represented under the AzureMonitor service tag for use with network security groups.
| Azure Environment | Endpoints |
|---|---|
When an API Management service instance is hosted in a VNET, the ports in the fo
+ **Regional Service Tags**: NSG rules allowing outbound connectivity to Storage, SQL, and Event Hubs service tags may use the regional versions of those tags corresponding to the region containing the API Management instance (for example, Storage.WestUS for an API Management instance in the West US region). In multi-region deployments, the NSG in each region should allow traffic to the service tags for that region and the primary region.

  > [!IMPORTANT]
- > To enable publishing the [developer portal](api-management-howto-developer-portal.md) for an API Management instance in a virtual network, ensure that you also allow outbound connectivity to blob storage in the West US region. For example, use the **Storage.WestUS** service tag in an NSG rule. Connectivity to blob storage in the West US region is currently required to publish the developer portal for any API Management instance.
+ > Enable publishing the [developer portal](api-management-howto-developer-portal.md) for an API Management instance in a VNET by allowing outbound connectivity to blob storage in the West US region. For example, use the **Storage.WestUS** service tag in an NSG rule. Currently, connectivity to blob storage in the West US region is required to publish the developer portal for any API Management instance.
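  A sketch of such an NSG rule (the NSG name, resource group, and priority are assumptions; the `Storage.WestUS` regional service tag is the key part):

  ```azurepowershell-interactive
  # Placeholder names; allows outbound HTTPS to blob storage in the West US region.
  $nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "contoso-rg" -Name "apim-subnet-nsg"

  $nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-Storage-WestUS" -Access Allow `
      -Protocol Tcp -Direction Outbound -Priority 200 `
      -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
      -DestinationAddressPrefix "Storage.WestUS" -DestinationPortRange "443" | Set-AzNetworkSecurityGroup
  ```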
-+ **SMTP Relay**: Outbound network connectivity for the SMTP Relay, which resolves under the host `smtpi-co1.msn.com`, `smtpi-ch1.msn.com`, `smtpi-db3.msn.com`, `smtpi-sin.msn.com` and `ies.global.microsoft.com`
++ **SMTP Relay:**
+  Outbound network connectivity for the SMTP Relay, which resolves under the hosts `smtpi-co1.msn.com`, `smtpi-ch1.msn.com`, `smtpi-db3.msn.com`, `smtpi-sin.msn.com`, and `ies.global.microsoft.com`.
-+ **Developer portal CAPTCHA**: Outbound network connectivity for the developer portal's CAPTCHA, which resolves under the hosts `client.hip.live.com` and `partner.hip.live.com`.
++ **Developer portal CAPTCHA:**
+ Outbound network connectivity for the developer portal's CAPTCHA, which resolves under the hosts `client.hip.live.com` and `partner.hip.live.com`.
-+ **Azure portal Diagnostics**: To enable the flow of diagnostic logs from Azure portal when using the API Management extension from inside a Virtual Network, outbound access to `dc.services.visualstudio.com` on port 443 is required. This helps in troubleshooting issues you might face when using extension.
++ **Azure portal Diagnostics:**
+  When using the API Management extension from inside a VNET, outbound access to `dc.services.visualstudio.com` on `port 443` is required to enable the flow of diagnostic logs from the Azure portal. This access helps in troubleshooting issues you might face when using the extension.
-+ **Azure Load Balancer**: Allowing Inbound request from Service Tag `AZURE_LOAD_BALANCER` is not a requirement for the `Developer` SKU, since we only deploy one unit of Compute behind it. But Inbound from [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md) becomes critical when scaling to higher SKU like `Premium`, as failure of Health Probe from Load Balancer, fails a deployment.
++ **Azure Load Balancer:**
+  You're not required to allow inbound requests from service tag `AZURE_LOAD_BALANCER` for the `Developer` SKU, since only one compute unit is deployed behind it. However, inbound connectivity from [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md) becomes critical when scaling to a higher SKU, like `Premium`, because failure of the health probe from the load balancer then fails a deployment.
-+ **Application Insights**: If [Azure Application Insights](api-management-howto-app-insights.md) monitoring is enabled on API Management, then we need to allow outbound connectivity to the [Telemetry endpoint](../azure-monitor/app/ip-addresses.md#outgoing-ports) from the Virtual Network.
++ **Application Insights:**
+ If you've enabled [Azure Application Insights](api-management-howto-app-insights.md) monitoring on API Management, allow outbound connectivity to the [Telemetry endpoint](../azure-monitor/app/ip-addresses.md#outgoing-ports) from the VNET.
-+ **Force Tunneling Traffic to On-premises Firewall Using Express Route or Network Virtual Appliance**: A common customer configuration is to define their own default route (0.0.0.0/0) which forces all traffic from the API Management delegated subnet to flow through an on-premises firewall or to a Network virtual appliance. This traffic flow invariably breaks connectivity with Azure API Management because the outbound traffic is either blocked on-premises, or NAT'd to an unrecognizable set of addresses that no longer work with various Azure endpoints. The solution requires you to do a couple of things:
++ **Force Tunneling Traffic to On-premises Firewall Using Express Route or Network Virtual Appliance:**
+ Commonly, you configure and define your own default route (0.0.0.0/0), forcing all traffic from the API Management-delegated subnet to flow through an on-premises firewall or to a network virtual appliance. This traffic flow breaks connectivity with Azure API Management, since outbound traffic is either blocked on-premises, or NAT'd to an unrecognizable set of addresses no longer working with various Azure endpoints. You can solve this issue via a couple of methods:
- * Enable service endpoints on the subnet in which the API Management service is deployed. [Service Endpoints][ServiceEndpoints] need to be enabled for Azure Sql, Azure Storage, Azure EventHub and Azure ServiceBus. Enabling endpoints directly from API Management delegated subnet to these services allows them to use the Microsoft Azure backbone network providing optimal routing for service traffic. If you use Service Endpoints with a forced tunneled Api Management, the above Azure services traffic isn't forced tunneled. The other API Management service dependency traffic is forced tunneled and can't be lost or the API Management service would not function properly.
+ * Enable [service endpoints][ServiceEndpoints] on the subnet in which the API Management service is deployed for:
+   * Azure SQL
+   * Azure Storage
+   * Azure Event Hubs
+   * Azure Service Bus
+   * Azure Key Vault
+
+  By enabling endpoints directly from the API Management-delegated subnet to these services, you can use the Microsoft Azure backbone network, which provides optimal routing for service traffic. If you use service endpoints with a force tunneled API Management, traffic to the above Azure services isn't force tunneled. The other API Management service dependency traffic is force tunneled and can't be lost; otherwise, the API Management service would not function properly.
- * All the control plane traffic from Internet to the management endpoint of your API Management service are routed through a specific set of Inbound IPs hosted by API Management. When the traffic is force tunneled the responses will not symmetrically map back to these Inbound source IPs. To overcome the limitation, we need to add the following user-defined routes ([UDRs][UDRs]) to steer traffic back to Azure by setting the destination of these host routes to "Internet". The set of Inbound IPs for control Plane traffic is documented [Control Plane IP Addresses](#control-plane-ips)
+ * All the control plane traffic from the internet to the management endpoint of your API Management service is routed through a specific set of inbound IPs, hosted by API Management. When the traffic is force tunneled, the responses will not symmetrically map back to these inbound source IPs. To overcome the limitation, add user-defined routes ([UDRs][UDRs]) for these inbound IPs with the next hop type set to "Internet", steering the traffic back to Azure. The set of inbound IPs for control plane traffic is documented in [Control Plane IP Addresses](#control-plane-ips).
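    As a hedged sketch (the route table name is a placeholder, and the example address must be replaced with the control plane IPs documented below for your region):

    ```azurepowershell-interactive
    # Placeholder route table and IP; repeat Add-AzRouteConfig for each documented control plane IP.
    $routeTable = Get-AzRouteTable -ResourceGroupName "contoso-rg" -Name "apim-route-table"

    Add-AzRouteConfig -RouteTable $routeTable -Name "apim-control-plane-1" `
        -AddressPrefix "<control-plane-ip>/32" -NextHopType "Internet"

    Set-AzRouteTable -RouteTable $routeTable
    ```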
- * For other API Management service dependencies which are force tunneled, there should be a way to resolve the hostname and reach out to the endpoint. These include
+ * For other force tunneled API Management service dependencies, resolve the hostname and reach out to the endpoint. These include:
    - Metrics and Health Monitoring
    - Azure portal Diagnostics
    - SMTP Relay
    - Developer portal CAPTCHA

## <a name="troubleshooting"> </a>Troubleshooting
-* **Initial Setup**: When the initial deployment of API Management service into a subnet does not succeed, it is advised to first deploy a virtual machine into the same subnet. Next remote desktop into the virtual machine and validate that there is connectivity to one of each resource below in your Azure subscription
+* **Unsuccessful initial deployment of API Management service into a subnet:**
+ * Deploy a virtual machine into the same subnet.
+ * Remote desktop into the virtual machine and validate connectivity to one of each of the following resources in your Azure subscription:
    * Azure Storage blob
    * Azure SQL Database
    * Azure Storage Table

    > [!IMPORTANT]
- > After you have validated the connectivity, make sure to remove all the resources deployed in the subnet, before deploying API Management into the subnet.
+ > After validating the connectivity, remove all the resources in the subnet before deploying API Management into the subnet.
-* **Verify network connectivity status**: After deploying API Management into the subnet, use the portal to check the connectivity of your instance to dependencies such as Azure Storage. In the portal, in the left-hand menu, under **Deployment and infrastructure**, select **Network connectivity status**.
+* **Verify network connectivity status:**
+ * After deploying API Management into the subnet, use the portal to check the connectivity of your instance to dependencies, such as Azure Storage.
+ * In the portal, in the left-hand menu, under **Deployment and infrastructure**, select **Network connectivity status**.
:::image type="content" source="media/api-management-using-with-vnet/verify-network-connectivity-status.png" alt-text="Verify network connectivity status in the portal":::
- * Select **Required** to review the connectivity to required Azure services for API Management. A failure indicates that the instance is unable to perform core operations to manage APIs.
- * Select **Optional** to review the connectivity to optional services. Any failure indicates only that the specific functionality will not work (for example, SMTP). A failure may lead to degradation in the ability to use and monitor the API Management instance and provide the committed SLA.
+ | Filter | Description |
+ | -- | -- |
+ | **Required** | Select to review the required Azure services connectivity for API Management. Failure indicates that the instance is unable to perform core operations to manage APIs |
+ | **Optional** | Select to review the optional services connectivity. Failure indicates only that the specific functionality will not work (for example, SMTP). Failure may lead to degradation in using and monitoring the API Management instance and providing the committed SLA. |
-To address connectivity issues, review [Common network configuration issues](#network-configuration-issues) and fix required network settings.
+ To address connectivity issues, review [Common network configuration issues](#network-configuration-issues) and fix required network settings.
-* **Incremental Updates**: When making changes to your network, refer to [NetworkStatus API](/rest/api/apimanagement/2019-12-01/networkstatus), to verify that the API Management service has not lost access to any of the critical resources, which it depends upon. The connectivity status should be updated every 15 minutes.
+* **Incremental Updates:**
+ When making changes to your network, refer to [NetworkStatus API](/rest/api/apimanagement/2019-12-01/networkstatus) to verify that the API Management service has not lost access to critical resources. The connectivity status should be updated every 15 minutes.
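  One way to query it (the subscription, resource group, and service names are placeholders) is with `Invoke-AzRestMethod`:

  ```azurepowershell-interactive
  # Placeholder path segments; returns the connectivity status of the service's dependencies.
  Invoke-AzRestMethod -Method GET `
      -Path "/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/Microsoft.ApiManagement/service/contoso-apim/networkstatus?api-version=2019-12-01"
  ```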
-* **Resource Navigation Links**: When deploying into Resource Manager style vnet subnet, API Management reserves the subnet, by creating a resource navigation Link. If the subnet already contains a resource from a different provider, deployment will **fail**. Similarly, when you move an API Management service to a different subnet or delete it, we will remove that resource navigation link.
+* **Resource Navigation Links:**
+ When deploying into a Resource Manager VNET subnet with API version 2020-12-01 and earlier, API Management reserves the subnet by creating a resource navigation link. If the subnet already contains a resource from a different provider, deployment will **fail**. Similarly, when you delete an API Management service, or move it to a different subnet, the resource navigation link will be removed.
## <a name="subnet-size"> </a> Subnet Size Requirement
-Azure reserves some IP addresses within each subnet, and these addresses can't be used. The first and last IP addresses of the subnets are reserved for protocol conformance, along with three more addresses used for Azure services. For more information, see [Are there any restrictions on using IP addresses within these subnets?](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets)
+Azure reserves some IP addresses within each subnet, which can't be used. The first and last IP addresses of the subnets are reserved for protocol conformance. Three more addresses are used for Azure services. For more information, see [Are there any restrictions on using IP addresses within these subnets?](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets).
-In addition to the IP addresses used by the Azure VNET infrastructure, each Api Management instance in the subnet uses two IP addresses per unit of Premium SKU or one IP address for the Developer SKU. Each instance reserves an additional IP address for the external load balancer. When deploying into Internal virtual network, it requires an additional IP address for the internal load balancer.
+In addition to the IP addresses used by the Azure VNET infrastructure, each API Management instance in the subnet uses:
+* Two IP addresses per unit of Premium SKU, or
+* One IP address for the Developer SKU.
-Given the calculation above the minimum size of the subnet, in which API Management can be deployed is /29 that gives three usable IP addresses.
+Each instance reserves an extra IP address for the external load balancer. When deploying into [internal VNET](./api-management-using-with-internal-vnet.md), the instance requires an extra IP address for the internal load balancer.
-Each additional scale unit of API Management requires two more IP addresses.
+Given the calculation above, the minimum size of the subnet in which API Management can be deployed is /29, which gives three usable IP addresses. Each extra scale unit of API Management requires two more IP addresses.
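As a rough worked example (assuming one Premium scale unit; counts taken from the paragraphs above):

```text
/29 subnet:                 8 addresses
Azure reserved:            -5 (first, last, plus 3 for Azure services)
Usable:                     3

1 Premium unit:             2 IPs
External load balancer:     1 IP
Total (external mode):      3 IPs -> fits in a /29

Internal load balancer:    +1 IP (internal mode only)
Total (internal mode):      4 IPs -> needs at least a /28
```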
## <a name="routing"> </a> Routing
-+ A load balanced public IP address (VIP) will be reserved to provide access to all service endpoints.
-+ An IP address from a subnet IP range (DIP) will be used to access resources within the vnet and a public IP address (VIP) will be used to access resources outside the vnet.
-+ Load balanced public IP address can be found on the Overview/Essentials blade in the Azure portal.
++ A load balanced public IP address (VIP) will be reserved to provide access to all service endpoints and resources outside the VNET.
+ + Load balanced public IP addresses can be found on the **Overview/Essentials** blade in the Azure portal.
++ An IP address from a subnet IP range (DIP) will be used to access resources within the VNET. ## <a name="limitations"> </a>Limitations
-* For clients using API version 2020-12-01 and earlier, a subnet containing API Management instances cannot contain any other Azure resource types.
+* For API version 2020-12-01 and earlier, a subnet containing API Management instances can't contain any other Azure resource types.
* The subnet and the API Management service must be in the same subscription. * A subnet containing API Management instances cannot be moved across subscriptions.
-* For multi-region API Management deployments configured in Internal virtual network mode, users are responsible for managing the load balancing across multiple regions, as they own the routing.
-* Connectivity from a resource in a globally peered VNET in another region to API Management service in Internal mode will not work due to platform limitation. For more information, see [Resources in one virtual network cannot communicate with Azure internal load balancer in peered virtual network](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints)
+* For multi-region API Management deployments configured in internal VNET mode, users own the routing and are responsible for managing the load balancing across multiple regions.
+* Due to platform limitations, connectivity between a resource in a globally peered VNET in another region and an API Management service in internal mode will not work. For more information, see [Resources in one virtual network cannot communicate with Azure internal load balancer in peered virtual network](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints).
## <a name="control-plane-ips"> </a> Control Plane IP Addresses
-The IP Addresses are divided by **Azure Environment**. When allowing inbound requests IP address marked with **Global** must be allowed along with the **Region** specific IP Address.
+The IP Addresses are divided by **Azure Environment**. When allowing inbound requests, IP addresses marked with **Global** must be permitted, along with the **Region**-specific IP address.
| **Azure Environment**| **Region**| **IP address**|
|--|--|--|
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-dotnetcore.md
Title: "Quickstart: Deploy an ASP.NET web app"
description: Learn how to run web apps in Azure App Service by deploying your first ASP.NET app. ms.assetid: b1e6bd58-48d1-4007-9d6c-53fd6db061e3 Previously updated : 03/30/2021 Last updated : 06/08/2021 zone_pivot_groups: app-service-ide adobe-target: true
target cross-platform with .NET Core 3.1 or .NET 5.0.
In this quickstart, you'll learn how to create and deploy your first ASP.NET web app to [Azure App Service](overview.md). App Service supports various versions of .NET apps, and provides a highly scalable, self-patching web hosting service. ASP.NET web apps are cross-platform and can be hosted on Linux or Windows. When you're finished, you'll have an Azure resource group consisting of an App Service hosting plan and an App Service with a deployed web application.
-> [!TIP]
-> .NET Core 3.1 is the current long-term support (LTS) release of .NET. For more information, see [.NET support policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
-
## Prerequisites

:::zone target="docs" pivot="development-environment-vs"
In this quickstart, you'll learn how to create and deploy your first ASP.NET web
## Create an ASP.NET web app
+> [!TIP]
+> .NET Core 3.1 is the current long-term support (LTS) release of .NET. For more information, see [.NET support policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+
:::zone target="docs" pivot="development-environment-vs"

### [.NET Core 3.1](#tab/netcore31)
You'll see the updated ASP.NET Framework 4.8 web app displayed in the page.
To manage your web app, go to the [Azure portal](https://portal.azure.com), and search for and select **App Services**. On the **App Services** page, select the name of your web app. The **Overview** page for your web app contains options for basic management like browse, stop, start, restart, and delete. The left menu provides further pages for configuring your app. <!-- ## Clean up resources - H2 added from the next three includes
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/configuration-infrastructure.md
Previously updated : 05/26/2021 Last updated : 06/08/2021
Application Gateway (Standard or WAF) SKU can support up to 32 instances (32 ins
Application Gateway (Standard_v2 or WAF_v2 SKU) can support up to 125 instances (125 instance IP addresses + 1 private front-end IP + 5 Azure reserved) ΓÇô so a minimum subnet size of /24 is required. > [!IMPORTANT]
-> Starting mid-late May 2021, a minimum subnet size of /24 (256 IPs) per Application Gateway v2 SKU (Standard_v2 or WAF_v2) will be required for new deployments. Existing deployments will not be affected by this requirement but are encouraged to move to a subnet with at least 256 IPs per v2 gateway. This requirement will ensure the subnet has sufficient IP addresses for the gateway to undergo maintenance updates without impact on available capacity.
+> Starting mid-late May 2021, a minimum subnet size of /24 (256 IPs) per Application Gateway v2 SKU (Standard_v2 or WAF_v2) will be required for new deployments. Existing deployments will not be affected by this requirement but are encouraged to move to a subnet with at least 256 IPs per v2 gateway. This requirement will ensure the subnet has sufficient IP addresses for the gateway to undergo maintenance updates without impact on available capacity.
+
+> [!TIP]
+> It is possible to change the subnet of an existing Application Gateway within the same virtual network. You can do this using Azure PowerShell or Azure CLI. For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway).
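A hedged PowerShell sketch of that subnet change (the gateway, VNET, and subnet names are hypothetical; the gateway must be stopped first and the target subnet must meet the sizing guidance above):

```azurepowershell-interactive
# Placeholder names throughout.
$gw = Get-AzApplicationGateway -ResourceGroupName "contoso-rg" -Name "contoso-appgw"
Stop-AzApplicationGateway -ApplicationGateway $gw

$vnet = Get-AzVirtualNetwork -ResourceGroupName "contoso-rg" -Name "contoso-vnet"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "appgw-subnet-2"

# Point the gateway's existing IP configuration at the new subnet, then apply and restart.
$gw = Set-AzApplicationGatewayIPConfiguration -ApplicationGateway $gw `
    -Name $gw.GatewayIPConfigurations[0].Name -Subnet $subnet
$gw = Set-AzApplicationGateway -ApplicationGateway $gw
Start-AzApplicationGateway -ApplicationGateway $gw
```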
## Network security groups
For this scenario, use NSGs on the Application Gateway subnet. Put the following
## Next steps -- [Learn about front-end IP address configuration](configuration-front-end-ip.md).
+- [Learn about front-end IP address configuration](configuration-front-end-ip.md).
attestation Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/audit-logs.md
For each of these events, Azure Attestation collects the following information:
Audit logs are provided in JSON format. Here is an example of what an audit log may look like.

```json
-{"operationName":"SetCurrentPolicy","resultType":"Success","resultDescription":null,"auditEventCategory":["ApplicationManagement"],"nCloud":null,"requestId":null,"callerIpAddress":null,"callerDisplayName":null,"callerIdentities":[{"callerIdentityType":"ObjectID","callerIdentity":"<some object ID>"},{"callerIdentityType":"TenantId","callerIdentity":"<some tenant ID>"}],"targetResources":[{"targetResourceType":"Environment","targetResourceName":"PublicCloud"},{"targetResourceType":"ServiceRegion","targetResourceName":"EastUS2"},{"targetResourceType":"ServiceRole","targetResourceName":"AttestationRpType"},{"targetResourceType":"ServiceRoleInstance","targetResourceName":"<some service role instance>"},{"targetResourceType":"ResourceId","targetResourceName":"/subscriptions/<some subscription ID>/resourceGroups/<some resource group name>/providers/Microsoft.Attestation/attestationProviders/<some instance name>"},{"targetResourceType":"ResourceRegion","targetResourceName":"EastUS2"}],"ifxAuditFormat":"Json","env_ver":"2.1","env_name":"#Ifx.AuditSchema","env_time":"2020-11-23T18:23:29.9427158Z","env_epoch":"MKZ6G","env_seqNum":1277,"env_popSample":0.0,"env_iKey":null,"env_flags":257,"env_cv":"##00000000-0000-0000-0000-000000000000_00000000-0000-0000-0000-000000000000_00000000-0000-0000-0000-000000000000","env_os":null,"env_osVer":null,"env_appId":null,"env_appVer":null,"env_cloud_ver":"1.0","env_cloud_name":null,"env_cloud_role":null,"env_cloud_roleVer":null,"env_cloud_roleInstance":null,"env_cloud_environment":null,"env_cloud_location":null,"env_cloud_deploymentUnit":null}
+{
+ "operationName": "SetCurrentPolicy",
+ "resultType": "Success",
+ "resultDescription": null,
+ "auditEventCategory": [
+ "ApplicationManagement"
+ ],
+ "nCloud": null,
+ "requestId": null,
+ "callerIpAddress": null,
+ "callerDisplayName": null,
+ "callerIdentities": [
+ {
+ "callerIdentityType": "ObjectID",
+ "callerIdentity": "<some object ID>"
+ },
+ {
+ "callerIdentityType": "TenantId",
+ "callerIdentity": "<some tenant ID>"
+ }
+ ],
+ "targetResources": [
+ {
+ "targetResourceType": "Environment",
+ "targetResourceName": "PublicCloud"
+ },
+ {
+ "targetResourceType": "ServiceRegion",
+ "targetResourceName": "EastUS2"
+ },
+ {
+ "targetResourceType": "ServiceRole",
+ "targetResourceName": "AttestationRpType"
+ },
+ {
+ "targetResourceType": "ServiceRoleInstance",
+ "targetResourceName": "<some service role instance>"
+ },
+ {
+ "targetResourceType": "ResourceId",
+ "targetResourceName": "/subscriptions/<some subscription ID>/resourceGroups/<some resource group name>/providers/Microsoft.Attestation/attestationProviders/<some instance name>"
+ },
+ {
+ "targetResourceType": "ResourceRegion",
+ "targetResourceName": "EastUS2"
+ }
+ ],
+ "ifxAuditFormat": "Json",
+ "env_ver": "2.1",
+ "env_name": "#Ifx.AuditSchema",
+ "env_time": "2020-11-23T18:23:29.9427158Z",
+ "env_epoch": "MKZ6G",
+ "env_seqNum": 1277,
+ "env_popSample": 0.0,
+ "env_iKey": null,
+ "env_flags": 257,
+ "env_cv": "##00000000-0000-0000-0000-000000000000_00000000-0000-0000-0000-000000000000_00000000-0000-0000-0000-000000000000",
+ "env_os": null,
+ "env_osVer": null,
+ "env_appId": null,
+ "env_appVer": null,
+ "env_cloud_ver": "1.0",
+ "env_cloud_name": null,
+ "env_cloud_role": null,
+ "env_cloud_roleVer": null,
+ "env_cloud_roleInstance": null,
+ "env_cloud_environment": null,
+ "env_cloud_location": null,
+ "env_cloud_deploymentUnit": null
+}
```

## Access Audit Logs
attestation Azure Diagnostic Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/azure-diagnostic-monitoring.md
The Trusted Platform Module (TPM) endpoint service is enabled in the diagnostic
Connect-AzAccount
- Set-AzContext -Subscription <Subscription id>
+ Set-AzContext -Subscription "<Subscription id>"
- $attestationProviderName=<Name of the attestation provider>
+ $attestationProviderName="<Name of the attestation provider>"
- $attestationResourceGroup=<Name of the resource Group>
+ $attestationResourceGroup="<Name of the resource Group>"
$attestationProvider=Get-AzAttestation -Name $attestationProviderName -ResourceGroupName $attestationResourceGroup
- $storageAccount=New-AzStorageAccount -ResourceGroupName $attestationProvider.ResourceGroupName -Name <Storage Account Name> -SkuName Standard_LRS -Location <Location>
+ $storageAccount=New-AzStorageAccount -ResourceGroupName $attestationProvider.ResourceGroupName -Name "<Storage Account Name>" -SkuName Standard_LRS -Location "<Location>"
Set-AzDiagnosticSetting -ResourceId $attestationProvider.Id -StorageAccountId $storageAccount.Id -Enabled $true
attestation Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/private-endpoint-powershell.md
Create a resource group with [New-AzResourceGroup](/powershell/module/az.resourc
```azurepowershell-interactive
## Connect to your Azure account subscription and create a resource group in a desired location. ##
Connect-AzAccount
-Set-AzSubscription ΓÇ£mySubscriptionΓÇ¥
-$rg = ΓÇ£CreateAttestationPrivateLinkTutorial-rgΓÇ¥
-$loc= "eastusΓÇ¥
+Set-AzSubscription "mySubscription"
+$rg = "CreateAttestationPrivateLinkTutorial-rg"
+$loc= "eastus"
New-AzResourceGroup -Name $rg -Location $loc
```
New-AzVM -ResourceGroupName $rg -Location $loc -VM $vmConfig
$attestationProviderName = "myattestationprovider"
$attestationProvider = New-AzAttestation -Name $attestationProviderName -ResourceGroupName $rg -Location $loc
$attestationProviderId = $attestationProvider.Id
+```
+## Access the attestation provider from local machine ##
+Enter `nslookup <provider-name>.attest.azure.net`. Replace **\<provider-name>** with the name of the attestation provider instance you created in the previous steps.
+```azurepowershell-interactive
## Access the attestation provider from local machine ##
-Enter nslookup <provider-name>.attest.azure.net. Replace <provider-name> with the name of the attestation provider instance you created in the previous steps.
-
-You'll receive a message similar to what is displayed below:
-
-## PowerShell copy. ##
nslookup myattestationprovider.eus.attest.azure.net
+<# You'll receive a message similar to what is displayed below:
Server: cdns01.comcast.net
Address: 2001:558:feed::1

Name: eus.service.attest.azure.net
Address: 20.62.219.160
Aliases: myattestationprovider.eus.attest.azure.net
         attesteusatm.trafficmanager.net
+#>
```

## Create private endpoint
In this section, you'll create the private endpoint and connection using:
$privateEndpointConnection = New-AzPrivateLinkServiceConnection -Name "myConnection" -PrivateLinkServiceId $attestationProviderId -GroupID "Standard"

## Disable private endpoint network policy ##
- $vnet.Subnets[0].PrivateEndpointNetworkPolicies = "Disabled"
+$vnet.Subnets[0].PrivateEndpointNetworkPolicies = "Disabled"
$vnet | Set-AzVirtualNetwork

## Create private endpoint
In this section, you'll use the virtual machine you created in the previous step
8. Open Windows PowerShell on the server after you connect.
-9. Enter `nslookup <provider-name>.attest.azure.net`. Replace **\<provider-name>** with the name of the attestation provider instance you created in the previous steps. You'll receive a message similar to what is displayed below:
-
- ```powershell
+9. Enter `nslookup <provider-name>.attest.azure.net`. Replace **\<provider-name>** with the name of the attestation provider instance you created in the previous steps:
+ ```azurepowershell-interactive
    ## Access the attestation provider from local machine ##
    nslookup myattestationprovider.eus.attest.azure.net
+
+ <# You'll receive a message similar to what is displayed below:
+
    Server: cdns01.comcast.net
    Address: 2001:558:feed::1

    cdns01.comcast.net can't find myattestationprovider.eus.attest.azure.net: Non-existent domain
+
+ #>
+
    ## Access the attestation provider from the VM created in the same virtual network as the private endpoint. ##
    nslookup myattestationprovider.eus.attest.azure.net
+
+ <# You'll receive a message similar to what is displayed below:
+
    Server: UnKnown
    Address: 168.63.129.16

    Non-authoritative answer:
    Name: myattestationprovider.eastus.test.attest.azure.net
+
+ #>
```
attestation Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-template.md
Last updated 05/20/2021
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy To Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.attestation%2Fattestation-provider-create%2Fazuredeploy.json)
+[![Deploy To Azure 1](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.svg?sanitize=true)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.attestation%2Fattestation-provider-create%2Fazuredeploy.json)
## Prerequisites
If you don't have an Azure subscription, create a [free account](https://azure.m
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-attestation-provider-create).
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/attestation-provider-create/).
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.attestation/attestation-provider-create/azuredeploy.json":::
Azure resources defined in the template:
1. Select the following image to sign in to Azure and open the template.
- [![Deploy To Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.attestation%2Fattestation-provider-create%2Fazuredeploy.json)
+ [![Deploy To Azure 2](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.attestation%2Fattestation-provider-create%2Fazuredeploy.json)
1. Select or enter the following values.
automanage Automanage Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-virtual-machines.md
After onboarding your virtual machines to Azure Automanage, each best practice s
Azure Automanage also automatically monitors for drift and corrects for it when detected. What this means is if your virtual machine is onboarded to Azure Automanage, we'll not only configure it per Azure best practices, but we'll monitor your machine to ensure that it continues to comply with those best practices across its entire lifecycle. If your virtual machine does drift or deviate from those practices (for example, if a service is offboarded), we will correct it and pull your machine back into the desired state.
+Automanage doesn't store or process customer data outside the geography in which your VMs are located. For example, in the Southeast Asia region, Automanage does not store or process data outside of Southeast Asia.
+
## Prerequisites

There are several prerequisites to consider before trying to enable Azure Automanage on your virtual machines.
azure-arc Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-overview.md
# Azure Arc enabled SQL Managed Instance Overview
-Azure Arc enabled SQL Managed Instance is an Azure SQL data service that can created on the infrastructure of your choice.
+Azure Arc enabled SQL Managed Instance is an Azure SQL data service that can be created on the infrastructure of your choice.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
azure-cache-for-redis Quickstart Create Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
Last updated 02/08/2021
# Quickstart: Create a Redis Enterprise cache

Azure Cache for Redis' Enterprise tiers provide fully integrated and managed [Redis Enterprise](https://redislabs.com/redis-enterprise/) on Azure. These new tiers are:

* Enterprise, which uses volatile memory (DRAM) on a virtual machine to store data
* Enterprise Flash, which uses both volatile and non-volatile memory (NVMe or SSD) to store data.
Azure Cache for Redis' Enterprise tiers provide fully integrated and managed [Re
You'll need an Azure subscription before you begin. If you don't have one, create an [account](https://azure.microsoft.com/). For more information, see [special considerations for Enterprise tiers](cache-overview.md#special-considerations-for-enterprise-tiers).

## Create a cache

1. To create a cache, sign in to the Azure portal and select **Create a resource**.
1. On the **New** page, select **Databases** and then select **Azure Cache for Redis**.
-
+ :::image type="content" source="media/cache-create/new-cache-menu.png" alt-text="Select Azure Cache for Redis":::
-
+ 1. On the **New Redis Cache** page, configure the settings for your new cache.
-
+ | Setting | Suggested value | Description |
+ | ------- | --------------- | ----------- |
- | **Subscription** | Drop down and select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
- | **Resource group** | Drop down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.<Azure region>.redisenterprise.cache.azure.net*. |
+ | **Subscription** | Drop down and select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
+ | **Resource group** | Drop down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
+ | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.<Azure region>.redisenterprise.cache.azure.net*. |
| **Location** | Drop down and select a location. | Enterprise tiers are available in selected Azure regions. |
| **Cache type** | Drop down and select an *Enterprise* or *Enterprise Flash* tier and a size. | The tier determines the size, performance, and features that are available for the cache. |
-
+ :::image type="content" source="media/cache-create/enterprise-tier-basics.png" alt-text="Enterprise tier Basics tab":::
- > [!IMPORTANT]
+ > [!IMPORTANT]
   > Be sure to select **Terms** before you proceed.
   >
1. Select **Next: Networking** and skip.
-1. Select **Next: Advanced** and set **Clustering policy** to **Enterprise** for a non-clustered cache. Enable **Non-TLS access only** if you plan to connect to the new cache without using TLS. This is not recommended, however.
+1. Select **Next: Advanced** and set **Clustering policy** to **Enterprise** for a non-clustered cache. Enable **Non-TLS access only** if you plan to connect to the new cache without using TLS. Disabling TLS is **not** recommended, however.
:::image type="content" source="media/cache-create/enterprise-tier-advanced.png" alt-text="Screenshot that shows the Enterprise tier Advanced tab.":::
- > [!NOTE]
+ > [!NOTE]
> Redis Enterprise supports two clustering policies. Use the **Enterprise** policy to access
   > your cache using the regular Redis API, and the **OSS** policy to access it using the OSS Cluster API.
   >
- > [!NOTE]
+ > [!NOTE]
> You can't change modules after you create the cache instance. The setting is create-only. >
-
+ 1. Select **Next: Tags** and skip.

1. Select **Next: Review + create**.

   :::image type="content" source="media/cache-create/enterprise-tier-summary.png" alt-text="Enterprise tier Review + Create tab":::
-1. Review the settings and click **Create**.
-
+1. Review the settings and select **Create**.
+ Cache creation takes some time to complete. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
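Once the cache is running, you can verify TLS connectivity from code. The following sketch is illustrative rather than part of this quickstart: it assumes the Node.js `ioredis` package, uses placeholder values for the host name and access key, and assumes the Enterprise tiers' usual client port of 10000.

```JavaScript
// Hedged sketch: confirm a TLS connection to a new Enterprise-tier cache.
// Host, port, and key are placeholders; replace them with your cache's values.
const Redis = require('ioredis');

const cache = new Redis({
  host: '<DNS name>.<Azure region>.redisenterprise.cache.azure.net',
  port: 10000,              // Enterprise tiers typically listen on port 10000
  password: '<access key>',
  tls: {}                   // keep TLS enabled (see the Non-TLS note in the Advanced step)
});

cache.ping().then(reply => {
  console.log(reply);       // "PONG" confirms the cache accepts TLS connections
  cache.quit();
});
```

## Next steps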
In this quickstart, you learned how to create an Enterprise tier instance of Azu
> [!div class="nextstepaction"] > [Create an ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)-
azure-cache-for-redis Quickstart Create Redis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis.md
Title: 'Quickstart: Create an open-source Redis cache'
-description: In this quickstart, learn how to create an instance of Azure Cache for Redis in Basic, Standard or Premium tier
+description: In this quickstart, learn how to create an instance of Azure Cache for Redis in Basic, Standard, or Premium tier
Last updated 02/08/2021
# Quickstart: Create an open-source Redis cache
-Azure Cache for Redis provides fully managed [open-source Redis](https://redis.io/) within Azure. You can start with an Azure Cache for Redis instance of any tier (Basic, Standard or Premium) and size, and scale it to meet your application's performance needs. This quickstart demonstrates how to use the Azure portal to create a new Azure Cache for Redis.
+Azure Cache for Redis provides fully managed [open-source Redis](https://redis.io/) within Azure. You can start with an Azure Cache for Redis instance of any tier (Basic, Standard, or Premium) and size, and scale it to meet your application's performance needs. This quickstart demonstrates how to use the Azure portal to create a new Azure Cache for Redis.
## Prerequisites

You'll need an Azure subscription before you begin. If you don't have one, create a [free account](https://azure.microsoft.com/free/) first.

## Create a cache

[!INCLUDE [redis-cache-create](../../includes/redis-cache-create.md)]

## Next steps
azure-functions Functions Custom Handlers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-custom-handlers.md
# Azure Functions custom handlers
-Every Functions app is executed by a language-specific handler. While Azure Functions supports many [language handlers](./supported-languages.md) by default, there are cases where you may want to use other languages or runtimes.
+Every Functions app is executed by a language-specific handler. While Azure Functions features many [language handlers](./supported-languages.md) by default, there are cases where you may want to use other languages or runtimes.
Custom handlers are lightweight web servers that receive events from the Functions host. Any language that supports HTTP primitives can implement a custom handler. Custom handlers are best suited for situations where you want to: -- Implement a function app in a language that's not currently supported, such as Go or Rust.-- Implement a function app in a runtime that's not currently supported, such as Deno.
+- Implement a function app in a language that's not currently offered out of the box, such as Go or Rust.
+- Implement a function app in a runtime that's not currently available by default, such as Deno.
With custom handlers, you can use [triggers and input and output bindings](./functions-triggers-bindings.md) via [extension bundles](./functions-bindings-register.md).
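To make that contract concrete, here's a minimal custom handler sketch. It's written in JavaScript purely for brevity; in practice you'd use a language that isn't already a first-class Functions language, and the same HTTP contract applies. The function name `hello` is an assumption; `FUNCTIONS_CUSTOMHANDLER_PORT` is the environment variable the Functions host sets for the handler process.

```JavaScript
// Minimal custom handler: a plain web server the Functions host forwards
// invocations to. The host POSTs JSON to /<functionName> and expects a JSON
// reply containing Outputs, Logs, and (optionally) ReturnValue.
const http = require('http');

const port = process.env.FUNCTIONS_CUSTOMHANDLER_PORT || 3000;

const server = http.createServer((req, res) => {
  let body = '';
  req.on('data', chunk => (body += chunk));
  req.on('end', () => {
    // Assumed function name: an HTTP-triggered function called "hello".
    const reply = {
      Outputs: { res: { statusCode: 200, body: 'Hello from a custom handler' } },
      Logs: [`Invocation received for ${req.url}`]
    };
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify(reply));
  });
});

server.listen(port, () => console.log(`Custom handler listening on port ${port}`));
```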
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
The following API Management **features are not currently available** in Azure G
### [App Service](../app-service/overview.md)
+The following App Service **resources are not currently available** in Azure Government:
+
+- App Service Certificate
+- App Service Managed Certificate
+- App Service Domain
+ The following App Service **features are not currently available** in Azure Government: -- Resource
- - App Service Certificate
- Deployment - Deployment options: only Local Git Repository and External Repository are available - Development tools
Learn more about Azure Government:
Start using Azure Government: - [Guidance for developers](./documentation-government-developer-guide.md)-- [Connect with the Azure Government portal](./documentation-government-get-started-connect-with-portal.md)
+- [Connect with the Azure Government portal](./documentation-government-get-started-connect-with-portal.md)
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/creator-indoor-maps.md
For more information, see [Drawing package warnings and errors](drawing-conversi
Azure Maps Creator provides the following services that support map creation: -- [Dataset service](/rest/api/maps/v2/dataset/createpreview).-- [Tileset service](/rest/api/maps/v2/tileset/createpreview).
+- [Dataset service](/rest/api/maps/v2/dataset).
+- [Tileset service](/rest/api/maps/v2/tileset).
Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset. - [Feature State service](/rest/api/maps/v2/featurestate). Use the Feature State service to support dynamic map styling. Applications can use dynamic map styling to reflect real-time events on spaces provided by the IoT system. ### Datasets
-A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted Drawing package. After you create a dataset with the [Dataset service](/rest/api/maps/v2/dataset/createpreview), you can create any number of [tilesets](#tilesets) or [feature statesets](#feature-statesets).
+A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted Drawing package. After you create a dataset with the [Dataset service](/rest/api/maps/v2/dataset), you can create any number of [tilesets](#tilesets) or [feature statesets](#feature-statesets).
-At any time, developers can use the [Dataset service](/rest/api/maps/v2/dataset/createpreview) to add or remove facilities to an existing dataset. For more information about how to update an existing dataset using the API, see the append options in [Dataset service](/rest/api/maps/v2/dataset/createpreview). For an example of how to update a dataset, see [Data maintenance](#data-maintenance).
+At any time, developers can use the [Dataset service](/rest/api/maps/v2/dataset) to add or remove facilities to an existing dataset. For more information about how to update an existing dataset using the API, see the append options in [Dataset service](/rest/api/maps/v2/dataset). For an example of how to update a dataset, see [Data maintenance](#data-maintenance).
### Tilesets
-A tileset is a collection of vector data that represents a set of uniform grid tiles. Developers can use the [Tileset service](/rest/api/maps/v2/tileset/createpreview) to create tilesets from a dataset.
+A tileset is a collection of vector data that represents a set of uniform grid tiles. Developers can use the [Tileset service](/rest/api/maps/v2/tileset) to create tilesets from a dataset.
To reflect different content stages, you can create multiple tilesets from the same dataset. For example, you can make one tileset with furniture and equipment, and another tileset without furniture and equipment. You might choose to generate one tileset with the most recent data updates, and another tileset without the most recent data updates.
As you begin to develop solutions for indoor maps, you can discover ways to inte
The following example shows how to update a dataset, create a new tileset, and delete an old tileset: 1. Follow steps in the [Upload a Drawing package](#upload-a-drawing-package) and [Convert a Drawing package](#convert-a-drawing-package) sections to upload and convert the new Drawing package.
-2. Use the [Dataset Create API](/rest/api/maps/v2/dataset/createpreview) to append the converted data to the existing dataset.
-3. Use the [Tileset Create API](/rest/api/maps/v2/tileset/createpreview) to generate a new tileset out of the updated dataset.
+2. Use the [Dataset Create API](/rest/api/maps/v2/dataset) to append the converted data to the existing dataset.
+3. Use the [Tileset Create API](/rest/api/maps/v2/tileset) to generate a new tileset out of the updated dataset, as sketched in the example after this list.
4. Save the new **tilesetId** for the next step. 5. To enable the visualization of the updated campus dataset, update the tileset identifier in your application. If the old tileset is no longer used, you can delete it.
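Expressed as code, steps 2 and 3 are two REST calls plus status polling. The following JavaScript sketch is illustrative only: the query parameter used to append to an existing dataset is an assumption (check the append options in the Dataset service), and `fetch` requires Node.js 18+ or a browser.

```JavaScript
// Illustrative update workflow: append converted data to a dataset, then
// create a new tileset from it. Both POSTs start long-running operations and
// return a status URL in the Operation-Location response header.
const base = 'https://us.atlas.microsoft.com';

async function updateDatasetAndTileset(subscriptionKey, datasetId, conversionId) {
  // Step 2: append the newly converted drawing data to the existing dataset.
  // (datasetId as an append parameter is assumed; verify against the API.)
  const datasetOp = await fetch(
    `${base}/datasets?api-version=2.0&conversionId=${conversionId}` +
    `&datasetId=${datasetId}&subscription-key=${subscriptionKey}`,
    { method: 'POST' });
  console.log('Dataset status URL:', datasetOp.headers.get('Operation-Location'));

  // Step 3: after the dataset operation succeeds, generate a new tileset.
  const tilesetOp = await fetch(
    `${base}/tilesets?api-version=2.0&datasetId=${datasetId}` +
    `&subscription-key=${subscriptionKey}`,
    { method: 'POST' });
  console.log('Tileset status URL:', tilesetOp.headers.get('Operation-Location'));
}
```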
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-create-store-locator.md
Title: 'Tutorial: Create a store locator application using Azure Maps | Microsoft Azure Maps'
-description: Tutorial on how to create store locator web applications. Use the Azure Maps Web SDK to create a webpage, query the search service, and display results on a map.
+ Title: 'Tutorial: Use Microsoft Azure Maps to create store locator web applications'
+description: Tutorial on how to use Microsoft Azure Maps to create store locator web applications.
Previously updated : 08/11/2020 Last updated : 06/07/2021 -+
-# Tutorial: Create a store locator by using Azure Maps
+# Tutorial: Use Azure Maps to create a store locator
-This tutorial guides you through the process of creating a simple store locator by using Azure Maps. Store locators are common. Many of the concepts that are used in this type of application are applicable to many other types of applications. Offering a store locator to customers is a must for most businesses that sell directly to consumers. In this tutorial, you learn how to:
+This tutorial guides you through the process of creating a simple store locator using Azure Maps. In this tutorial, you'll learn how to:
> [!div class="checklist"] > * Create a new webpage by using the Azure Map Control API.
This tutorial guides you through the process of creating a simple store locator
<a id="Intro"></a>
-Jump ahead to the [live store locator example](https://azuremapscodesamples.azurewebsites.net/?sample=Simple%20Store%20Locator) or [source code](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator).
- ## Prerequisites 1. [Make an Azure Maps account in Gen 1 (S1) or Gen 2 pricing tier](quick-demo-map-app.md#create-an-azure-maps-account). 2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
-For more information on authentication in Azure Maps, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+For more information about Azure Maps authentication, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+
+This tutorial uses the [Visual Studio Code](https://code.visualstudio.com/) application, but you can use a different coding environment.
+
+## Sample code
+
+In this tutorial, we'll create a store locator for a fictional company called Contoso Coffee. The tutorial also includes some tips about extending the store locator with other optional functionality.
+
+You can view the [Live store locator sample here](https://azuremapscodesamples.azurewebsites.net/?sample=Simple%20Store%20Locator).
+
+To more easily follow and engage with this tutorial, you'll need to download the following resources:
+
+* [Full source code for simple store locator sample](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator)
+* [Store location data to import into the store locator dataset](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data)
+* [Map images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/images)
+
+## Store locator features
+
+This section lists the features that are supported in the Contoso Coffee store locator application.
+
+### User interface features
+
+* Store logo on the header
+* Map supports panning and zooming
+* A My Location button to search over the user's current location.
+* Page layout adjusts based on the width of the device screen
+* A search box and a search button
-## Design
+### Functionality features
-Before you jump into the code, it's a good idea to begin with a design. Your store locator can be as simple or complex as you want it to be. In this tutorial, we create a simple store locator. We include some tips along the way to help you extend some functionalities if you choose to. We create a store locator for a fictional company called Contoso Coffee. The following figure shows a wireframe of the general layout of the store locator we build in this tutorial:
+* A `keypress` event added to the search box triggers a search when the user presses **Enter**.
+* When the map moves, the distance from the center of the map to each location is calculated. The results list updates to display the closest locations at the top of the list.
+* When the user selects a result in the results list, the map is centered over the selected location and information about the location appears in a pop-up window.
+* When the user selects a specific location, the map triggers a pop-up window.
+* When the user zooms out, locations are grouped in clusters. Each cluster is represented by a circle with a number inside the circle. Clusters form and separate as the user changes the zoom level.
+* Selecting a cluster zooms in two levels on the map and centers over the location of the cluster.
-![Wireframe of a store locator application for Contoso Coffee shop locations](./media/tutorial-create-store-locator/SimpleStoreLocatorWireframe.png)
+## Store locator design
-To maximize the usefulness of this store locator, we include a responsive layout that adjusts when a user's screen width is smaller than 700 pixels wide. A responsive layout makes it easy to use the store locator on a small screen, like on a mobile device. Here's a wireframe of a small-screen layout:
+The following figure shows a wireframe of the general layout of our store locator. You can view the live wireframe [here](https://azuremapscodesamples.azurewebsites.net/?sample=Simple%20Store%20Locator).
-![Wireframe of the Contoso Coffee store locator application on a mobile device](./media/tutorial-create-store-locator/SimpleStoreLocatorMobileWireframe.png)</
-The wireframes show a fairly straightforward application. The application has a search box, a list of nearby stores, and a map that has some markers, such as symbols. And, it has a pop-up window that displays additional information when the user selects a marker. In more detail, here are the features we build into this store locator in this tutorial:
+To maximize the usefulness of this store locator, we include a responsive layout that adjusts when a user's screen width is smaller than 700 pixels wide. A responsive layout makes it easy to use the store locator on a small screen, like on a mobile device. Here's a wireframe of the small-screen layout:
-* All locations from the imported tab-delimited data file are loaded on the map.
-* The user can pan and zoom the map, perform a search, and select the My Location GPS button.
-* The page layout adjusts based on the width of the device screen.
-* A header shows the store logo.
-* The user can use a search box and search button to search for a location, such as an address, postal code, or city.
-* A `keypress` event added to the search box triggers a search if the user presses Enter. This functionality often is overlooked, but it creates a better user experience.
-* When the map moves, the distance to each location from the center of the map is calculated. The results list is updated to display the closest locations at the top of the map.
-* When you select a result in the results list, the map is centered over the selected location and information about the location appears in a pop-up window.
-* Selecting a specific location on the map also triggers a pop-up window.
-* When the user zooms out, locations are grouped in clusters. Clusters are represented by a circle with a number inside the circle. Clusters form and separate as the user changes the zoom level.
-* Selecting a cluster zooms in on the map two levels and centers over the location of the cluster.
<a id="create a data-set"></a> ## Create the store location dataset
-Before we develop a store locator application, we need to create a dataset of the stores we want to display on the map. In this tutorial, we use a dataset for a fictitious coffee shop called Contoso Coffee. The dataset for this simple store locator is managed in an Excel workbook. The dataset contains 10,213 Contoso Coffee coffee shop locations spread across nine countries/regions: the United States, Canada, the United Kingdom, France, Germany, Italy, the Netherlands, Denmark, and Spain. Here's a screenshot of what the data looks like:
+This section describes how to create a dataset of the stores that you want to display on the map. The dataset for the Contoso Coffee locator is created inside an Excel workbook. The dataset contains 10,213 Contoso Coffee coffee shop locations spread across nine countries or regions: the United States, Canada, the United Kingdom, France, Germany, Italy, the Netherlands, Denmark, and Spain. Here's a screenshot of what the data looks like:
-![Screenshot of the store locator data in an Excel workbook](./media/tutorial-create-store-locator/StoreLocatorDataSpreadsheet.png)
-You can [download the Excel workbook](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data).
+To view the full dataset, [download the Excel workbook here](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data).
Looking at the screenshot of the data, we can make the following observations: * Location information is stored by using the **AddressLine**, **City**, **Municipality** (county), **AdminDivision** (state/province), **PostCode** (postal code), and **Country** columns.
-* The **Latitude** and **Longitude** columns contain the coordinates for each Contoso Coffee coffee shop location. If you don't have coordinates information, you can use the Search services in Azure Maps to determine the location coordinates.
-* Some additional columns contain metadata related to the coffee shops: a phone number, Boolean columns, and store opening and closing times in 24-hour format. The Boolean columns are for Wi-Fi and wheelchair accessibility. You can create your own columns that contain metadata that's more relevant to your location data.
+* The **Latitude** and **Longitude** columns contain the coordinates for each Contoso Coffee location. If you don't have coordinates information, you can use the Search services in Azure Maps to determine the location coordinates, as sketched below.
+* Some other columns contain metadata that's related to the coffee shops: a phone number, Boolean columns, and store opening and closing times in 24-hour format. The Boolean columns are for Wi-Fi and wheelchair accessibility. You can create your own columns that contain metadata that's more relevant to your location data.
> [!NOTE] > Azure Maps renders data in the spherical Mercator projection "EPSG:3857" but reads data in "EPSG:4326" that use the WGS84 datum.
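If your own data lacks coordinates, the Search service can geocode addresses for you. Here's a minimal sketch that uses the services module loaded later in this tutorial; the address and timeout values are placeholders, and `searchURL` is the service client the tutorial creates during initialization.

```JavaScript
// Sketch: forward-geocode a street address to [longitude, latitude].
searchURL.searchAddress(
  atlas.service.Aborter.timeout(10000),  // 10-second timeout (illustrative)
  '8724 Main St, Seattle, WA'            // placeholder address
).then(results => {
  const fc = results.geojson.getFeatures();
  console.log(fc.features[0].geometry.coordinates); // [lon, lat] of best match
});
```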
-There are many ways to expose the dataset to the application. One approach is to load the data into a database and expose a web service that queries the data. You can then send the results to the user's browser. This option is ideal for large datasets or for datasets that are updated frequently. However, this option requires more development work and has a higher cost.
+## Load the store location dataset
-Another approach is to convert this dataset into a flat text file that the browser can easily parse. The file itself can be hosted with the rest of the application. This option keeps things simple, but it's a good option only for smaller datasets because the user downloads all the data. We use the flat text file for this dataset because the data file size is smaller than 1 MB.
+ The Contoso Coffee shop locator dataset is small, so we'll convert the Excel worksheet into a tab-delimited text file. This file can then be downloaded by the browser when the application loads.
-To convert the workbook to a flat text file, save the workbook as a tab-delimited file. Each column is delimited by a tab character, which makes the columns easy to parse in our code. You could use comma-separated value (CSV) format, but that option requires more parsing logic. Any field that has a comma around it would be wrapped with quotation marks. To export this data as a tab-delimited file in Excel, select **Save As**. In the **Save as type** drop-down list, select **Text (Tab delimited)(*.txt)**. Name the file *ContosoCoffee.txt*.
+ >[!TIP]
+>If your dataset is too large for client download, or is updated frequently, you might consider storing your dataset in a database. After your data is loaded into a database, you can then set up a web service that accepts queries for the data, and then sends the results to the user's browser.
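As a rough illustration of that database-backed approach, the web service could expose a single endpoint that returns only nearby stores. Everything here is hypothetical: the framework (Express), the route, and the `findStoresNear` helper are placeholders, not part of this tutorial.

```JavaScript
// Hypothetical sketch: serve only the stores near a point instead of
// shipping the entire dataset to the browser.
const express = require('express');
const app = express();

app.get('/api/stores', async (req, res) => {
  const { lon, lat, radiusMeters = 5000 } = req.query;
  // findStoresNear is a placeholder for a spatial query against your
  // database (for example, SQL Server spatial types or PostGIS).
  const stores = await findStoresNear(Number(lon), Number(lat), Number(radiusMeters));
  res.json(stores); // the browser downloads only the nearby stores
});

app.listen(3000);
```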
-![Screenshot of the Save as type dialog box](./media/tutorial-create-store-locator/SaveStoreDataAsTab.png)
+### Convert data to tab-delimited text file
-If you open the text file in Notepad, it looks similar to the following figure:
+To convert the Contoso Coffee shop location data from an Excel workbook into a flat text file:
-![Screenshot of a Notepad file that shows a tab-delimited dataset](./media/tutorial-create-store-locator/StoreDataTabFile.png)
+1. [Download the Excel workbook](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data).
+
+2. Save the workbook to your hard drive.
+
+3. Start Excel.
+
+4. Open the downloaded workbook.
+
+5. Select **Save As**.
+
+6. In the **Save as type** drop-down list, select **Text (Tab delimited)(*.txt)**.
+
+7. Name the file *ContosoCoffee*.
++
+If you open the text file in Notepad, it looks similar to the following text:
+ ## Set up the project
-To create the project, you can use [Visual Studio](https://visualstudio.microsoft.com) or the code editor of your choice. In your project folder, create three files: *https://docsupdatetracker.net/index.html*, *index.css*, and *index.js*. These files define the layout, style, and logic for the application. Create a folder named *data* and add *ContosoCoffee.txt* to the folder. Create another folder named *images*. We use 10 images in this application for icons, buttons, and markers on the map. You can [download these images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data). Your project folder should now look like the following figure:
+1. Open the Visual Studio Code app.
+
+2. Select **File**, and then select **Open Workspace...**.
+
+3. Create a new folder and name it "ContosoCoffee".
-![Screenshot of the Simple Store Locator project folder](./media/tutorial-create-store-locator/StoreLocatorVSProject.png)
+4. Select **CONTOSOCOFFEE** in the explorer.
-## Create the user interface
+5. Create the following three files that define the layout, style, and logic for the application:
-To create the user interface, add code to *https://docsupdatetracker.net/index.html*:
+ * *index.html*
+ * *index.css*
+ * *index.js*
-1. Add the following `meta` tags to the `head` of *https://docsupdatetracker.net/index.html*. The `charset` tag defines the character set (UTF-8). The value of `http-equiv` tells Internet Explorer and Microsoft Edge to use the latest browser versions. And, the last `meta` tag specifies a viewport that works well for responsive layouts.
+6. Create a folder named *data*.
+
+7. Add *ContosoCoffee.txt* to the *data* folder.
+
+8. Create another folder named *images*.
+
+9. If you haven't already, [download these 10 images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/images).
+
+10. Add the downloaded images to the *images* folder.
+
+ Your workspace folder should now look like the following screenshot:
+
+ :::image type="content" source="./media/tutorial-create-store-locator/store-locator-workspace.png" alt-text="Screenshot of the Simple Store Locator workspace folder.":::
+
+## Create the HTML
+
+To create the HTML:
+
+1. Add the following `meta` tags to the `head` of *index.html*:
```HTML <meta charset="utf-8">
To create the user interface, add code to *https://docsupdatetracker.net/index.html*:
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> ```
-1. Add references to the Azure Maps web control JavaScript and CSS files:
+2. Add references to the Azure Maps web control JavaScript and CSS files:
```HTML <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> ```
-1. Add a reference to the Azure Maps Services module. The module is a JavaScript library that wraps the Azure Maps REST services and makes them easy to use in JavaScript. The module is useful for powering search functionality.
+3. Add a reference to the Azure Maps Services module. The module is a JavaScript library that wraps the Azure Maps REST services and makes them easy to use in JavaScript. The module is useful for powering search functionality.
```HTML <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script> ```
-1. Add references to *index.js* and *index.css*:
+4. Add references to *index.js* and *index.css*.
```HTML <link rel="stylesheet" href="index.css" type="text/css"> <script src="index.js"></script> ```
-1. In the body of the document, add a `header` tag. Inside the `header` tag, add the logo and company name.
+5. In the body of the document, add a `header` tag. Inside the `header` tag, add the logo and company name.
```HTML <header>
To create the user interface, add code to *https://docsupdatetracker.net/index.html*:
</header> ```
-1. Add a `main` tag and create a search panel that has a text box and search button. Also, add `div` references for the map, the list panel, and the My Location GPS button.
+6. Add a `main` tag and create a search panel that has a text box and search button. Also, add `div` references for the map, the list panel, and the My Location GPS button.
```HTML <main>
To create the user interface, add code to *https://docsupdatetracker.net/index.html*:
</main> ```
-When you're finished, *https://docsupdatetracker.net/index.html* should look like [this example https://docsupdatetracker.net/index.html file](https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/https://docsupdatetracker.net/index.html).
+After you finish, *index.html* should look like [this example index.html file](https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/index.html).
+
+## Define the CSS Styles
+
+The next step is to define the CSS styles. CSS styles define how the application components are laid out and the application's appearance.
-The next step is to define the CSS styles. CSS styles define how the application components are laid out and the application's appearance. Open *index.css* and add the following code to it. The `@media` style defines alternate style options to use when the screen width is smaller than 700 pixels.
+1. Open *index.css*.
+
+2. Add the following CSS code:
+
+ >[!NOTE]
+ > The `@media` style defines alternate style options to use when the screen width is smaller than 700 pixels.
```CSS html, body {
The next step is to define the CSS styles. CSS styles define how the application
margin: 0; font-family: Gotham, Helvetica, sans-serif; overflow-x: hidden;
- }
+ }
header { width: calc(100vw - 10px);
The next step is to define the CSS styles. CSS styles define how the application
margin-right: 5px; }
- /* Adjust the layout of the page when the screen width is less than 700 pixels. */
+ /* Adjust the layout of the page when the screen width is less than 700 pixels. */
@media screen and (max-width: 700px) { .searchPanel { width: 100vw;
The next step is to define the CSS styles. CSS styles define how the application
} ```
-Run the application now, you'll see the header, search box, and search button. But, the map isn't visible because it hasn't been loaded yet. If you try to do a search, nothing happens. We need to set up the JavaScript logic, which is described in the next section. This logic accesses all the functionality of the store locator.
+Run the application. You'll see the header, search box, and search button. However, the map isn't visible because it hasn't been loaded yet. If you try to do a search, nothing happens. We need to set up the JavaScript logic, which is described in the next section. This logic accesses all the functionality of the store locator.
-## Wire the application with JavaScript
+## Add JavaScript code
-Everything is now set up in the user interface. We still need to add the JavaScript to load and parse the data, and then render the data on the map. To get started, open *index.js* and add code to it, as described in the following steps.
+The JavaScript code in the Contoso Coffee shop locator app enables the following processes:
+
+1. Adds an [event listener](/javascript/api/azure-maps-control/atlas.map#events) called `ready` to wait until the page has completed its loading process. When page loading is complete, the event handler creates more event listeners to monitor the loading of the map and to give functionality to the search and **My Location** buttons.
+
+2. When the user selects the search button, or types a location in the search box and then presses **Enter**, a fuzzy search against the user's query starts. The code passes an array of country/region ISO 2 values to the `countrySet` option to limit the search results to those countries/regions. Limiting the countries/regions searched helps increase the accuracy of the results that are returned.
+
+3. Once the search is finished, the first location result is used as the center focus of the map camera. When the user selects the My Location button, the code retrieves the user's location by using the *HTML5 Geolocation API* that's built into the browser. After retrieving the location, the code centers the map over the user's location. The essence of both behaviors is sketched below.
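Here's a simplified sketch of those two behaviors. It's an illustration rather than the tutorial's full code: the element IDs and zoom level are assumptions, and error handling is omitted.

```JavaScript
// Simplified sketch of the search button and My Location behaviors.
document.getElementById('searchBtn').onclick = () => {
  const query = document.getElementById('searchTbx').value;
  searchURL.searchFuzzy(atlas.service.Aborter.timeout(10000), query, {
    countrySet: ['US', 'CA', 'GB', 'FR', 'DE', 'IT', 'NL', 'DK', 'ES']
  }).then(results => {
    // Center the camera on the first result's bounding box.
    const fc = results.geojson.getFeatures();
    map.setCamera({ bounds: fc.features[0].bbox, padding: 40 });
  });
};

document.getElementById('myLocationBtn').onclick = () => {
  // HTML5 Geolocation API: ask the browser for the user's position.
  navigator.geolocation.getCurrentPosition(position => {
    map.setCamera({
      center: [position.coords.longitude, position.coords.latitude],
      zoom: 15
    });
  });
};
```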
-1. Add global options to make settings easier to update. Define the variables for the map, pop up window, data source, icon layer, and HTML marker. Set the HTML marker to indicate the center of a search area. And, define an instance of the Azure Maps search service client.
+To add the JavaScript:
+
+1. Open *index.js*.
+
+2. Add global options to make settings easier to update. Define the variables for the map, pop-up window, data source, icon layer, and HTML marker. Set the HTML marker to indicate the center of a search area. And, define an instance of the Azure Maps search service client.
```JavaScript //The maximum zoom level to cluster data point data on the map.
Everything is now set up in the user interface. We still need to add the JavaScr
var map, popup, datasource, iconLayer, centerMarker, searchURL; ```
-1. Add code to *index.js*. The following code initializes the map. We added an [event listener](/javascript/api/azure-maps-control/atlas.map#events) to wait until the page is finished loading. Then, we wired up events to monitor the loading of the map, and give functionality to the search button and the My location button.
-
- When the user selects the search button, or types a location in the search box then presses enter, a fuzzy search against the user's query is initiated. Pass in an array of country/region ISO 2 values to the `countrySet` option to limit the search results to those countries/regions. Limiting the countries/regions to search helps increase the accuracy of the results that are returned.
-
- Once the search is finished, take the first result and set the map camera over that area. When the user selects the My Location button, retrieve the user's location using the HTML5 Geolocation API. This API is built into the browser. Then, center the map over their location.
+3. Add the following initialization code. Make sure to replace `<Your Azure Maps Key>` with your primary subscription key.
> [!Tip]
   > When you use pop-up windows, it's best to create a single `Popup` instance and reuse the instance by updating its content and position. For every `Popup` instance you add to your code, multiple DOM elements are added to the page. The more DOM elements there are on a page, the more things the browser has to keep track of. If there are too many items, the browser might become slow.

   ```JavaScript
   function initialize() {
       //Initialize a map instance.
       map = new atlas.Map('myMap', {
Everything is now set up in the user interface. We still need to add the JavaScr
window.onload = initialize; ```
-1. In the map's `ready` event listener, add a zoom control and an HTML marker to display the center of a search area.
+4. In the map's `ready` event listener, add a zoom control and an HTML marker to display the center of a search area.
```JavaScript //Add a zoom control to the map.
Everything is now set up in the user interface. We still need to add the JavaScr
map.markers.add(centerMarker); ```
-1. In the map's `ready` event listener, add a data source. Then, make a call to load and parse the dataset. Enable clustering on the data source. Clustering on the data source groups overlapping points together in a cluster. The clusters separate into individual points as the user zooms in. This behavior provides a better user experience and improves performance.
+5. In the map's `ready` event listener, add a data source. Then, make a call to load and parse the dataset. Enable clustering on the data source. Clustering on the data source groups overlapping points together in a cluster. As the user zooms in, the clusters separate into individual points. This behavior provides a better user experience and improves performance.
```JavaScript //Create a data source, add it to the map, and then enable clustering.
Everything is now set up in the user interface. We still need to add the JavaScr
loadStoreData(); ```
-1. After you load the dataset in the map's `ready` event listener, define a set of layers to render the data. A bubble layer is used to render clustered data points. A symbol layer is used to render the number of points in each cluster above the bubble layer. A second symbol layer renders a custom icon for individual locations on the map.
+6. After the dataset loads in the map's `ready` event listener, define a set of layers to render the data. A bubble layer renders clustered data points. A symbol layer renders the number of points in each cluster above the bubble layer. A second symbol layer renders a custom icon for individual locations on the map.
Add `mouseover` and `mouseout` events to the bubble and icon layers to change the mouse cursor when the user hovers over a cluster or icon on the map. Add a `click` event to the cluster bubble layer. This `click` event zooms in the map two levels and centers the map over a cluster when the user selects any cluster. Add a `click` event to the icon layer. This `click` event displays a pop-up window that shows the details of a coffee shop when a user selects an individual location icon. Add an event to the map to monitor when the map is finished moving. When this event fires, update the items in the list panel.
Everything is now set up in the user interface. We still need to add the JavaScr
}); ```
-1. When the coffee shop dataset is loaded, it must first be downloaded. Then, the text file must be split into lines. The first line contains the header information. To make the code easier to follow, we parse the header into an object, which we can then use to look up the cell index of each property. After the first line, loop through the remaining lines and create a point feature. Add the point feature to the data source. Finally, update the list panel.
+7. When the coffee shop dataset is loaded, it must first be downloaded. Then, the text file must be split into lines. The first line contains the header information. To make the code easier to follow, we parse the header into an object, which we can then use to look up the cell index of each property. After the first line, loop through the remaining lines and create a point feature. Add the point feature to the data source. Finally, update the list panel.
```JavaScript function loadStoreData() {
Everything is now set up in the user interface. We still need to add the JavaScr
} ```
-1. When the list panel is updated, the distance is calculated. This distance is from the center of the map to all point features in the current map view. The features are then sorted by distance. HTML is generated to display each location in the list panel.
+8. When the list panel is updated, the distance from the center of the map to every point feature in the current map view is calculated. The features are then sorted by distance, and HTML is generated to display each location in the list panel. A sketch of the distance calculation follows the code below.
```JavaScript var listItemTemplate = '<div class="listItem" onclick="itemSelected(\'{id}\')"><div class="listItem-title">{title}</div>{city}<br />Open until {closes}<br />{distance} miles away</div>';
Everything is now set up in the user interface. We still need to add the JavaScr
} ```
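The distance value used by that template can be computed with the Web SDK's spatial math helpers. A hedged sketch of the idea (the real sample stores the rounded distance on each feature's properties):

```JavaScript
// Sketch: measure the distance from the map center to each point feature,
// then sort ascending so the closest stores render first in the list panel.
const center = map.getCamera().center;
const features = datasource.toJson().features;

features.forEach(f => {
  f.properties.distance = Math.round(
    atlas.math.getDistanceTo(center, f.geometry.coordinates, 'miles') * 100) / 100;
});
features.sort((a, b) => a.properties.distance - b.properties.distance);
```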
-1. When the user selects an item in the list panel, the shape to which the item is related is retrieved from the data source. A pop-up window is generated that's based on the property information stored in the shape. The map is centered over the shape. If the map is less than 700 pixels wide, the map view is offset so the pop-up window is visible.
+9. When the user selects an item in the list panel, the shape to which the item is related is retrieved from the data source. A pop-up window is generated that's based on the property information stored in the shape. The map centers over the shape. If the map is less than 700 pixels wide, the map view is offset so the pop-up window is visible.
```JavaScript //When a user selects a result in the side panel, look up the shape by its ID value and display the pop-up window.
Everything is now set up in the user interface. We still need to add the JavaScr
var center = shape.getCoordinates(); var offset;
- //If the map is less than 700 pixels wide, then the layout is set for small screens.
+ //If the map is less than 700 pixels wide, then the layout is set for small screens.
if (map.getCanvas().width < 700) { //When the map is small, offset the center of the map relative to the shape so that there is room for the popup to appear. offset = [0, -80];
Now, you have a fully functional store locator. In a web browser, open the *inde
The first time a user selects the My Location button, the browser displays a security warning that asks for permission to access the user's location. If the user agrees to share their location, the map zooms in on the user's location, and nearby coffee shops are shown.
-![Screenshot of the browser's request to access the user's location](./media/tutorial-create-store-locator/GeolocationApiWarning.png)
+![Screenshot of the browser's request to access the user's location](./media/tutorial-create-store-locator/geolocation-api-warning.png)
When you zoom in close enough in an area that has coffee shop locations, the clusters separate into individual locations. Select one of the icons on the map or select an item in the side panel to see a pop-up window. The pop-up shows information for the selected location.
-![Screenshot of the finished store locator](./media/tutorial-create-store-locator/FinishedSimpleStoreLocator.png)
+![Screenshot of the finished store locator](./media/tutorial-create-store-locator/finished-simple-store-locator.png)
-If you resize the browser window to less than 700 pixels wide or open the application on a mobile device, the layout changes to be better suited for smaller screens.
+If you resize the browser window to less than 700 pixels wide or open the application on a mobile device, the layout changes to be better suited for smaller screens.
-![Screenshot of the small-screen version of the store locator](./media/tutorial-create-store-locator/FinishedSimpleStoreLocatorSmallScreen.png)
+![Screenshot of the small-screen version of the store locator](./media/tutorial-create-store-locator/finished-simple-store-locator-mobile.png)
In this tutorial, you learned how to create a basic store locator by using Azure Maps. The store locator you create in this tutorial might have all the functionality you need. You can add features to your store locator or use more advance features for a more custom user experience:
In this tutorial, you learned how to create a basic store locator by using Azure
* Add [support for multiple languages](https://azuremapscodesamples.azurewebsites.net/?sample=Map%20Localization). * Allow the user to [filter locations along a route](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Data%20Along%20Route). * Add the ability to [set filters](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Symbols%20by%20Property).
- * Add support to specify an initial search value by using a query string. When you include this option in your store locator, users can bookmark and share searches. It also provides an easy method for you to pass searches to this page from another page.
+ * Add support to specify an initial search value by using a query string. When you include this option in your store locator, users can then bookmark and share searches. It also provides an easy method for you to pass searches to this page from another page.
* Deploy your store locator as an [Azure App Service Web App](../app-service/quickstart-html.md). * Store your data in a database and search for nearby locations. To learn more, see the [SQL Server spatial data types overview](/sql/relational-databases/spatial/spatial-data-types-overview?preserve-view=true&view=sql-server-2017) and [Query spatial data for the nearest neighbor](/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor?preserve-view=true&view=sql-server-2017).
-You can [View full source code](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator), [View live sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Simple%20Store%20Locator) and learn more about the coverage and capabilities of Azure Maps by using [Zoom levels and tile grid](zoom-levels-and-tile-grid.md). You can also [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md) to apply to your business logic.
+You can [view the full source code here](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator). [View the live sample](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Simple%20Store%20Locator) and learn more about the coverage and capabilities of Azure Maps by using [Zoom levels and tile grid](zoom-levels-and-tile-grid.md). You can also [use data-driven style expressions](data-driven-style-expressions-web-sdk.md) and apply them to your business logic.
## Clean up resources
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-creator-indoor-maps.md
To upload the Drawing package:
15. Select **Send**.
-16. In the response window, select the **Headers** tab.
+16. In the response window, select the **Headers** tab.
17. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the Drawing package upload.
The following JSON fragment displays a sample conversion warning:
## Create a dataset
-A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset, use the [Dataset Create API](/rest/api/maps/v2/dataset/createpreview). The Dataset Create API takes the `conversionId` for the converted Drawing package and returns a `datasetId` of the created dataset.
+A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset, use the [Dataset Create API](/rest/api/maps/v2/dataset). The Dataset Create API takes the `conversionId` for the converted Drawing package and returns a `datasetId` of the created dataset.
To create a dataset:
To create a dataset:
5. Select the **POST** HTTP method.
-6. Enter the following URL to the [Dataset API](/rest/api/maps/v2/dataset/createpreview). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{conversionId`} with the `conversionId` obtained in [Check Drawing package conversion status](#check-the-drawing-package-conversion-status)):
+6. Enter the following URL to the [Dataset API](/rest/api/maps/v2/dataset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{conversionId}` with the `conversionId` obtained in [Check Drawing package conversion status](#check-the-drawing-package-conversion-status)):
```http https://us.atlas.microsoft.com/datasets?api-version=2.0&conversionId={conversionId}&type=facility&subscription-key={Azure-Maps-Primary-Subscription-key}
To create a tileset:
5. Select the **POST** HTTP method.
-6. Enter the following URL to the [Tileset API](/rest/api/maps/v2/tileset/createpreview). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key), and `{datasetId`} with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status):
+6. Enter the following URL to the [Tileset API](/rest/api/maps/v2/tileset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{datasetId}` with the `datasetId` obtained in [Check dataset creation status](#check-the-dataset-creation-status)):
```http https://us.atlas.microsoft.com/tilesets?api-version=2.0&datasetID={datasetId}&subscription-key={Azure-Maps-Primary-Subscription-key}
azure-monitor Export Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/export-stream-analytics.md
Last updated 01/08/2019
[Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) is the ideal tool for processing data [exported from Application Insights](export-telemetry.md). Stream Analytics can pull data from a variety of sources. It can transform and filter the data, and then route it to a variety of sinks.
-In this example, we'll create an adaptor that takes data from Application Insights, renames and processes some of the fields, and pipes it into Power BI.
+In this example, we'll create an adaptor that takes data from Application Insights using continuous export, renames and processes some of the fields, and pipes it into Power BI.
> [!WARNING] > There are much better and easier [recommended ways to display Application Insights data in Power BI](./export-power-bi.md). The path illustrated here is just an example to illustrate how to process exported data.
+> [!IMPORTANT]
+> Continuous export has been deprecated and is only supported for classic Application Insights resources. [Migrate to a workspace-based Application Insights resource](convert-classic-resource.md) to use [diagnostic settings](export-telemetry.md#diagnostic-settings-based-export) for exporting telemetry.
++ ![Block diagram for export through SA to PBI](./media/export-stream-analytics/020.png) ## Create storage in Azure
Continuous export always outputs data to an Azure Storage account, so you need t
## Start continuous export to Azure storage
-[Continuous export](export-telemetry.md) moves data from Application Insights into Azure storage.
+[Continuous export](export-telemetry.md) moves data from Application Insights into Azure storage.
1. In the Azure portal, browse to the Application Insights resource you created for your application.
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-security.md
Contact us with any questions, suggestions, or issues about any of the following
## Sending data securely using TLS 1.2
-To insure the security of data in transit to Log Analytics, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
+To ensure the security of data in transit to Log Analytics, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**, and the industry is quickly moving to abandon support for these older protocols.
The [PCI Security Standards Council](https://www.pcisecuritystandards.org/) has set a [deadline of June 30th, 2018](https://www.pcisecuritystandards.org/pdfs/PCI_SSC_Migrating_from_SSL_and_Early_TLS_Resource_Guide.pdf) to disable older versions of TLS/SSL and upgrade to more secure protocols. Once Azure drops legacy support, if your agents cannot communicate over at least TLS 1.2 you would not be able to send data to Log Analytics.
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 06/02/2021 Last updated : 06/08/2021
let workspaceHasSecurityCenter = false; // Specify if the workspace has Azure S
let PerNodePrice = 15.; // Enter your monthly price per monitored node let PerNodeOveragePrice = 2.30; // Enter your price per GB for data overage in the Per Node pricing tier let PerGBPrice = 2.30; // Enter your price per GB in the Pay-as-you-go pricing tier
-let CarRes100Price = 196.; // Enter your price for the 100 GB/day commitment tier
-let CarRes200Price = 368.; // Enter your price for the 200 GB/day commitment tier
-let CarRes300Price = 540.; // Enter your price for the 300 GB/day commitment tier
-let CarRes400Price = 704.; // Enter your price for the 400 GB/day commitment tier
-let CarRes500Price = 865.; // Enter your price for the 500 GB/day commitment tier
+let CommitmentTier100Price = 196.; // Enter your price for the 100 GB/day commitment tier
+let CommitmentTier200Price = 368.; // Enter your price for the 200 GB/day commitment tier
+let CommitmentTier300Price = 540.; // Enter your price for the 300 GB/day commitment tier
+let CommitmentTier400Price = 704.; // Enter your price for the 400 GB/day commitment tier
+let CommitmentTier500Price = 865.; // Enter your price for the 500 GB/day commitment tier
+let CommitmentTier1000Price = 1700.; // Enter your price for the 1000 GB/day commitment tier
+let CommitmentTier2000Price = 3320.; // Enter your price for the 2000 GB/day commitment tier
+let CommitmentTier5000Price = 8050.; // Enter your price for the 5000 GB/day commitment tier
// let SecurityDataTypes=dynamic(["SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent", "Update", "UpdateSummary"]); let StartDate = startofday(datetime_add("Day",-1*daysToEvaluate,now()));
union *
| extend billableGB = iff(workspaceHasSecurityCenter, (NonSecurityDataGB + max_of(SecurityDataGB - 0.5*ASCnodesPerDay, 0.)), DataGB ) | extend PerGBDailyCost = billableGB * PerGBPrice
-| extend CapRes100DailyCost = CarRes100Price + max_of(billableGB - 100, 0.)* PerGBPrice
-| extend CapRes200DailyCost = CarRes200Price + max_of(billableGB - 200, 0.)* PerGBPrice
-| extend CapRes300DailyCost = CarRes300Price + max_of(billableGB - 300, 0.)* PerGBPrice
-| extend CapRes400DailyCost = CarRes400Price + max_of(billableGB - 400, 0.)* PerGBPrice
-| extend CapResLevel500AndAbove = max_of(floor(billableGB, 100),500)
-| extend CapRes500AndAboveDailyCost = CarRes500Price*CapResLevel500AndAbove/500 + max_of(billableGB - CapResLevel500AndAbove, 0.)* PerGBPrice
+| extend CommitmentTier100DailyCost = CommitmentTier100Price + max_of(billableGB - 100, 0.)* CommitmentTier100Price/100.
+| extend CommitmentTier200DailyCost = CommitmentTier200Price + max_of(billableGB - 200, 0.)* CommitmentTier200Price/200.
+| extend CommitmentTier300DailyCost = CommitmentTier300Price + max_of(billableGB - 300, 0.)* CommitmentTier300Price/300.
+| extend CommitmentTier400DailyCost = CommitmentTier400Price + max_of(billableGB - 400, 0.)* CommitmentTier400Price/400.
+| extend CommitmentTier500DailyCost = CommitmentTier500Price + max_of(billableGB - 500, 0.)* CommitmentTier500Price/500.
+| extend CommitmentTier1000DailyCost = CommitmentTier1000Price + max_of(billableGB - 1000, 0.)* CommitmentTier1000Price/1000.
+| extend CommitmentTier2000DailyCost = CommitmentTier2000Price + max_of(billableGB - 2000, 0.)* CommitmentTier2000Price/2000.
+| extend CommitmentTier5000DailyCost = CommitmentTier5000Price + max_of(billableGB - 5000, 0.)* CommitmentTier5000Price/5000.
| extend MinCost = min_of(
- PerNodeDailyCost,PerGBDailyCost,CapRes100DailyCost,CapRes200DailyCost,
- CapRes300DailyCost, CapRes400DailyCost, CapRes500AndAboveDailyCost)
+ PerNodeDailyCost,PerGBDailyCost,CommitmentTier100DailyCost,CommitmentTier200DailyCost,
+ CommitmentTier300DailyCost, CommitmentTier400DailyCost, CommitmentTier500DailyCost, CommitmentTier1000DailyCost, CommitmentTier2000DailyCost, CommitmentTier5000DailyCost)
| extend Recommendation = case( MinCost == PerNodeDailyCost, "Per node tier", MinCost == PerGBDailyCost, "Pay-as-you-go tier",
- MinCost == CapRes100DailyCost, "Cap**ommitment tier (100 GB/day)",
- MinCost == CapRes200DailyCost, "Commitment tier (200 GB/day)",
- MinCost == CapRes300DailyCost, "Commitment tier (300 GB/day)",
- MinCost == CapRes400DailyCost, "Commitment tier (400 GB/day)",
- MinCost == CapRes500AndAboveDailyCost, strcat("Commitment tier (",CapResLevel500AndAbove," GB/day)"),
+ MinCost == CommitmentTier100DailyCost, "Commitment tier (100 GB/day)",
+ MinCost == CommitmentTier200DailyCost, "Commitment tier (200 GB/day)",
+ MinCost == CommitmentTier300DailyCost, "Commitment tier (300 GB/day)",
+ MinCost == CommitmentTier400DailyCost, "Commitment tier (400 GB/day)",
+ MinCost == CommitmentTier500DailyCost, "Commitment tier (500 GB/day)",
+ MinCost == CommitmentTier1000DailyCost, "Commitment tier (1000 GB/day)",
+ MinCost == CommitmentTier2000DailyCost, "Commitment tier (2000 GB/day)",
+ MinCost == CommitmentTier5000DailyCost, "Commitment tier (5000 GB/day)",
"Error" ) | project day, nodesPerDay, ASCnodesPerDay, NonSecurityDataGB, SecurityDataGB, OverageGB, AvgGbPerNode, PerGBDailyCost, PerNodeDailyCost,
- CapRes100DailyCost, CapRes200DailyCost, CapRes300DailyCost, CapRes400DailyCost, CapRes500AndAboveDailyCost, Recommendation
+ CommitmentTier100DailyCost, CommitmentTier200DailyCost, CommitmentTier300DailyCost, CommitmentTier400DailyCost, CommitmentTier500DailyCost, CommitmentTier1000DailyCost, CommitmentTier2000DailyCost, CommitmentTier5000DailyCost, Recommendation
| sort by day asc
//| project day, Recommendation // Comment this line to see details
| sort by day asc
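For readers who want to sanity-check the arithmetic, here's a minimal Python sketch of the commitment-tier formula used in the query above. The prices are placeholders (assumptions, not real list prices), and the sketch covers only the commitment tiers; the full query also compares the per-node and pay-as-you-go options.

```python
# Illustrative sketch of the commitment-tier arithmetic in the KQL query above:
# daily cost = tier price + overage above the tier size, where overage is billed
# at the tier's effective per-GB rate (price / tier size).
# Prices below are placeholders, not actual Azure Monitor list prices.
TIER_PRICES = {100: 196.0, 200: 368.0, 300: 540.0, 400: 704.0,
               500: 865.0, 1000: 1700.0, 2000: 3320.0, 5000: 8050.0}

def commitment_tier_daily_cost(billable_gb, tier_gb):
    """Mirror of: CommitmentTierNDailyCost = Price + max_of(billableGB - N, 0.) * Price/N."""
    price = TIER_PRICES[tier_gb]
    return price + max(billable_gb - tier_gb, 0.0) * price / tier_gb

def cheapest_commitment_tier(billable_gb):
    """Return (tier, daily cost) for the cheapest commitment tier on a given day."""
    costs = {tier: commitment_tier_daily_cost(billable_gb, tier) for tier in TIER_PRICES}
    best = min(costs, key=costs.get)
    return best, costs[best]

print(cheapest_commitment_tier(350.0))  # the query also weighs per-node and pay-as-you-go costs
```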
azure-monitor Workbooks Grid Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/workbooks-grid-visualizations.md
The author can customize the width of any column in the grid using the *Custom Column Width* field in the column settings.
![Screenshot of column settings with the custom column width field indicated in a red box](./media/workbooks-grid-visualizations/custom-column-width-setting.png)
-If the field is left black, then the width will be automatically determined based on the number of characters in the column and the number of visible columns. The default unit is "ch" (characters).
+If the field is left blank, then the width will be automatically determined based on the number of characters in the column and the number of visible columns. The default unit is "ch" (characters).
Selecting the blue **(Current Width)** button in the label will fill the text field with the selected column's current width. If a value is present in the custom width field with no unit of measurement, then the default will be used.
Combining fr, %, px, and ch widths is possible and works similarly to the previous examples.
## Next steps

* Learn how to create a [tree in workbooks](workbooks-tree-visualizations.md).
-* Learn how to create [workbook text parameters](workbooks-text.md).
+* Learn how to create [workbook text parameters](workbooks-text.md).
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/whats-new.md
Title: "Azure Monitor docs: What's new for April 2021"
-description: "What's new in the Azure Monitor docs for April 2021"
+ Title: "Azure Monitor docs: What's new for May, 2021"
+description: "What's new in the Azure Monitor docs for May, 2021."
Previously updated : 05/01/2021 Last updated : 06/03/2021
-# Azure Monitor docs: What's new for April 2021
+# Azure Monitor docs: What's new for May 2021
-Welcome to what's new in the Azure Monitor docs for April 2021. This article lists some of the significant changes to docs during this period.
+Welcome to what's new in the Azure Monitor docs for May 2021. This article lists some of the major changes to docs during this period.
-## Agents
+## General
**Updated articles**

-- [Configure data collection for the Azure Monitor agent (preview)](agents/data-collection-rule-azure-monitor-agent.md)
-- [Overview of Azure Monitor agents](agents/agents-overview.md)
-- [Collect Windows and Linux performance data sources with Log Analytics agent](agents/data-sources-performance-counters.md)
+- [Azure Monitor Frequently Asked Questions](faq.md)
+- [Azure Monitor partner integrations](partners.md)
## Alerts

**Updated articles**

-- [Action rules (preview)](alerts/alerts-action-rules.md)
-- [Create, view, and manage log alerts using Azure Monitor](alerts/alerts-log.md)
-- [Troubleshoot problems in IT Service Management Connector](alerts/itsmc-troubleshoot-overview.md)
+- [Log alerts in Azure Monitor](alerts/alerts-unified-log.md)
## Application Insights

**New articles**

-- [Sampling overrides (preview) - Azure Monitor Application Insights for Java](app/java-standalone-sampling-overrides.md)
-- [Configuring JMX metrics](app/java-jmx-metrics-configuration.md)
+- [Private testing](app/availability-private-test.md)
**Updated articles**

-- [Application Insights for web pages](app/javascript.md)
-- [Configuration options - Azure Monitor Application Insights for Java](app/java-standalone-config.md)
-- [Quickstart: Start monitoring your website with Azure Monitor Application Insights](app/website-monitoring.md)
-- [Visualizations for Application Change Analysis (preview)](app/change-analysis-visualizations.md)
-- [Use Application Change Analysis (preview) in Azure Monitor](app/change-analysis.md)
-- [Application Insights API for custom events and metrics](app/api-custom-events-metrics.md)
+- [Release annotations for Application Insights](app/annotations.md)
+- [Application Insights logging with .NET](app/ilogger.md)
+- [Diagnose exceptions in web apps with Application Insights](app/asp-net-exceptions.md)
+- [Application Monitoring for Azure App Service](app/azure-web-apps.md)
+- [What is auto-instrumentation or codeless attach - Azure Monitor Application Insights?](app/codeless-overview.md)
- [Java codeless application monitoring Azure Monitor Application Insights](app/java-in-process-agent.md)
-- [Enable Snapshot Debugger for .NET apps in Azure App Service](app/snapshot-debugger-appservice.md)
-- [Enable Snapshot Debugger for .NET and .NET Core apps in Azure Functions](app/snapshot-debugger-function-app.md)
-- [Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](app/snapshot-debugger-troubleshoot.md)
-- [Release notes for Azure Web App extension for Application Insights](app/web-app-extension-release-notes.md)
-- [Set up Azure Monitor for your Python application](app/opencensus-python.md)
- [Upgrading from Application Insights Java 2.x SDK](app/java-standalone-upgrade-from-2x.md)
-- [Use Stream Analytics to process exported data from Application Insights](app/export-stream-analytics.md)
-- [Troubleshooting guide: Azure Monitor Application Insights for Java](app/java-standalone-troubleshoot.md)
+- [Quickstart: Get started with Application Insights in a Java web project](app/java-2x-get-started.md)
+- [Adding the JVM arg - Azure Monitor Application Insights for Java](app/java-standalone-arguments.md)
+- [Create and run custom availability tests using Azure Functions](app/availability-azure-functions.md)
+- [Set up Azure Monitor for your Python application](app/opencensus-python.md)
## Containers

**Updated articles**

-- [Troubleshooting Container insights](containers/container-insights-troubleshoot.md)
-- [How to view Kubernetes logs, events, and pod metrics in real-time](containers/container-insights-livedata-overview.md)
-- [How to query logs from Container insights](containers/container-insights-log-search.md)
-- [Configure PV monitoring with Container insights](containers/container-insights-persistent-volumes.md)
-- [Monitor your Kubernetes cluster performance with Container insights](containers/container-insights-analyze.md)
-- [Configure Azure Red Hat OpenShift v3 with Container insights](containers/container-insights-azure-redhat-setup.md)
-- [Configure Azure Red Hat OpenShift v4.x with Container insights](containers/container-insights-azure-redhat4-setup.md)
-- [Enable monitoring of Azure Arc enabled Kubernetes cluster](containers/container-insights-enable-arc-enabled-clusters.md)
-- [Configure hybrid Kubernetes clusters with Container insights](containers/container-insights-hybrid-setup.md)
-- [Recommended metric alerts (preview) from Container insights](containers/container-insights-metric-alerts.md)
-- [Enable Container insights](containers/container-insights-onboard.md)
-- [Container insights overview](containers/container-insights-overview.md)
-- [Configure scraping of Prometheus metrics with Container insights](containers/container-insights-prometheus-integration.md)
+- [Configure agent data collection for Container insights](containers/container-insights-agent-config.md)
## Essentials

**Updated articles**

-- [Advanced features of the Azure metrics explorer](essentials/metrics-charts.md)
-- [Application Insights log-based metrics](essentials/app-insights-metrics.md)
-- [Getting started with Azure Metrics Explorer](essentials/metrics-getting-started.md)
-
-## General
-
-**Updated articles**
-
-- [Azure Monitor Frequently Asked Questions](faq.md)
-- [Azure Monitor docs: What's new for February 1, 2021 - February 28, 2021](whats-new.md)
-- [Azure Monitor for existing Operations Manager customers](azure-monitor-operations-manager.md)
-- [Deploy Azure Monitor at scale using Azure Policy](deploy-scale.md)
-- [Deploy Azure Monitor](deploy.md)
+- [Supported metrics with Azure Monitor](essentials/metrics-supported.md)
+- [Supported categories for Azure Resource Logs](essentials/resource-logs-categories.md)
## Insights

**Updated articles**

-- [Azure Monitor Network Insights](insights/network-insights-overview.md)
-- [Wire Data 2.0 (Preview) solution in Azure Monitor (Retired)](insights/wire-data.md)
-- [Monitor your SQL deployments with SQL insights (preview)](insights/sql-insights-overview.md)
+- [Monitoring your key vault service with Key Vault insights](insights/key-vault-insights-overview.md)
+- [Monitoring your storage service with Azure Monitor Storage insights](insights/storage-insights-overview.md)
## Logs
-**Updated articles**
-
-- [Create a Log Analytics workspace in the Azure portal](logs/quick-create-workspace.md)
-- [Manage usage and costs with Azure Monitor Logs](logs/manage-cost-storage.md)
-- [Log data ingestion time in Azure Monitor](logs/data-ingestion-time.md)
-
-## Virtual Machines
-
**New articles**

-- [Troubleshoot VM insights](vm/vminsights-troubleshoot.md)
+- [Log Analytics Workspace Insights (preview)](logs/log-analytics-workspace-insights-overview.md)
+- [Using queries in Azure Monitor Log Analytics](logs/queries.md)
+- [Query packs in Azure Monitor Logs (preview)](logs/query-packs.md)
+- [Save a query in Azure Monitor Log Analytics (preview)](logs/save-query.md)
**Updated articles**

-- [Create interactive reports VM insights with workbooks](vm/vminsights-workbooks.md)
-- [Enable VM insights overview](vm/vminsights-enable-overview.md)
-- [Troubleshoot Azure Monitor for VMs guest health (preview)](vm/vminsights-health-troubleshoot.md)
-- [Monitoring Azure virtual machines with Azure Monitor](vm/monitor-vm-azure.md)
-- [Integrate System Center Operations Manager with VM insights Map feature](vm/service-map-scom.md)
-- [How to create alerts from VM insights](vm/vminsights-alerts.md)
-- [Configure Log Analytics workspace for VM insights](vm/vminsights-configure-workspace.md)
-- [Enable VM insights by using Azure Policy](vm/vminsights-enable-policy.md)
-- [Enable VM insights using Resource Manager templates](vm/vminsights-enable-resource-manager.md)
-- [VM insights Generally Available (GA) Frequently Asked Questions](vm/vminsights-ga-release-faq.md)
-- [Enable VM insights guest health (preview)](vm/vminsights-health-enable.md)
-- [Disable monitoring of your VMs in VM insights](vm/vminsights-optout.md)
-- [Overview of VM insights](vm/vminsights-overview.md)
-- [How to chart performance with VM insights](vm/vminsights-performance.md)
-
-## Visualizations
-
-**Updated articles**
-
-- [Programmatically manage workbooks](visualize/workbooks-automate.md)
-
-## Community contributors
-
-The following people contributed to the Azure Monitor docs during this period. Thank you! Learn how to contribute by following the links under "Get involved" in the [what's new landing page](index.yml).
+- [Log Analytics workspace data export in Azure Monitor (preview)](logs/logs-data-export.md)
-- [Amrinder-Singh29](https://github.com/Amrinder-Singh29) (1)
-- [artemious7](https://github.com/artemious7) - Artem (1)
-- [burnhamrobertp](https://github.com/burnhamrobertp) - Robert Burnham (1)
-- [kchopein](https://github.com/kchopein) - KchoPein! (1)
-- [kmadof](https://github.com/kmadof) - Krzysztof Madej (1)
-- [stversch](https://github.com/stversch) - Steve Verschaeve - MSFT (1)
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 05/25/2021 Last updated : 06/08/2021 # FAQs About Azure NetApp Files
No, Azure NetApp Files does not currently support dual stack (IPv4 and IPv6) VNets.
### Can the network traffic between the Azure VM and the storage be encrypted?
-Data traffic between NFSv4.1 clients and Azure NetApp Files volumes can be encrypted using Kerberos with AES-256 encryption. See [Configure NFSv4.1 Kerberos encryption for Azure NetApp Files](configure-kerberos-encryption.md) for details.
+Azure NetApp Files data traffic is inherently secure by design, as it does not provide a public endpoint, and data traffic stays within the customer-owned VNet. Data-in-flight is not encrypted by default. However, data traffic from an Azure VM (running an NFS or SMB client) to Azure NetApp Files is as secure as any other Azure-VM-to-VM traffic.
-Data traffic between NFSv3 or SMB3 clients to Azure NetApp Files volumes is not encrypted. However, the traffic from an Azure VM (running an NFS or SMB client) to Azure NetApp Files is as secure as any other Azure-VM-to-VM traffic. This traffic is local to the Azure data-center network.
+The NFSv3 protocol does not support encryption, so this data-in-flight cannot be encrypted. However, NFSv4.1 and SMB3 data-in-flight encryption can optionally be enabled. Data traffic between NFSv4.1 clients and Azure NetApp Files volumes can be encrypted using Kerberos with AES-256 encryption. See [Configure NFSv4.1 Kerberos encryption for Azure NetApp Files](configure-kerberos-encryption.md) for details. Data traffic between SMB3 clients and Azure NetApp Files volumes can be encrypted using the AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1 connections. See [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md) for details.
### Can the storage be encrypted at rest?
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-resources.md
The following limitations apply to tags:
>
> * Azure Front Door doesn't support the use of `#` in the tag name.
>
- > * Azure Automation and Azure CDN only support 15 tags on resources.
+ > * The following Azure resources only support 15 tags:
+ > * Azure Automation
+ > * Azure CDN
+ > * Azure DNS (Zone and A records)
+ > * Azure Private DNS (Zone, A records, and virtual network link)
+
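To illustrate how an automation script might respect these limits before submitting a deployment, here's a hedged Python sketch. The helper and the resource-type strings are illustrative assumptions, not an official Azure SDK API; the general 50-tag fallback reflects the standard Resource Manager per-resource maximum.

```python
# Hypothetical helper (not part of any Azure SDK) that guards against the
# 15-tag limit called out in the note above.
FIFTEEN_TAG_LIMIT_TYPES = {
    "Microsoft.Automation/automationAccounts",  # Azure Automation
    "Microsoft.Cdn/profiles",                   # Azure CDN
    "Microsoft.Network/dnsZones",               # Azure DNS
    "Microsoft.Network/privateDnsZones",        # Azure Private DNS
}

def validate_tag_count(resource_type, tags):
    """Raise if `tags` exceeds the tag limit for the given resource type."""
    limit = 15 if resource_type in FIFTEEN_TAG_LIMIT_TYPES else 50  # 50 is the general ARM limit
    if len(tags) > limit:
        raise ValueError(f"{resource_type} supports at most {limit} tags; got {len(tags)}")

validate_tag_count("Microsoft.Cdn/profiles", {f"costCenter{i}": "cc" for i in range(12)})  # passes
```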
## Next steps
azure-resource-manager Deployment History Deletions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-history-deletions.md
Title: Deployment history deletions description: Describes how Azure Resource Manager automatically deletes deployments from the deployment history. Deployments are deleted when the history is close to exceeding the limit of 800. Previously updated : 03/23/2021 Last updated : 06/04/2021 # Automatic deletions from deployment history
Every time you deploy a template, information about the deployment is written to the deployment history.
Azure Resource Manager automatically deletes deployments from your history as you near the limit. Automatic deletion is a change from past behavior. Previously, you had to manually delete deployments from the deployment history to avoid getting an error. This change was implemented on August 6, 2020.
-**Automatic deletions are supported for resource group deployments. Currently, deployments in the history for [subscription](deploy-to-subscription.md), [management group](deploy-to-management-group.md), and [tenant](deploy-to-tenant.md) deployments aren't automatically deleted.**
+**Automatic deletions are supported for resource group and subscription deployments. Currently, deployments in the history for [management group](deploy-to-management-group.md) and [tenant](deploy-to-tenant.md) deployments aren't automatically deleted.**
> [!NOTE]
> Deleting a deployment from the history doesn't affect any of the resources that were deployed.

## When deployments are deleted
-Deployments are deleted from your history when you exceed 775 deployments. Azure Resource Manager deletes deployments until the history is down to 750. The oldest deployments are always deleted first.
+Deployments are deleted from your history when you exceed 700 deployments. Azure Resource Manager deletes deployments until the history is down to 600. The oldest deployments are always deleted first.
-> [!NOTE]
-> The starting number (775) and the ending number (750) are subject to change.
->
+> [!IMPORTANT]
> If your resource group is already at the 800 limit, your next deployment fails with an error. The automatic deletion process starts immediately. You can try your deployment again after a short wait. In addition to deployments, you also trigger deletions when you run the [what-if operation](template-deploy-what-if.md) or validate a deployment.
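As a back-of-the-envelope illustration of the trimming described above, here's a minimal Python sketch. The 700/600 thresholds mirror the text and, as the article notes, are subject to change.

```python
def trim_deployment_history(history, trigger_at=700, trim_to=600):
    """Simulate automatic deployment-history trimming.

    `history` is a list of deployments ordered oldest-first. Once the count
    exceeds `trigger_at`, the oldest entries are deleted until only `trim_to`
    remain. Thresholds are illustrative and subject to change.
    """
    if len(history) > trigger_at:
        history = history[-trim_to:]  # keep only the newest `trim_to` deployments
    return history

# Example: a history of 701 deployments is trimmed back to 600, oldest first.
deployments = [f"deployment-{i}" for i in range(701)]
deployments = trim_deployment_history(deployments)
assert len(deployments) == 600 and deployments[0] == "deployment-101"
```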
azure-sql-edge Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/release-notes.md
Last updated 11/24/2020
This article describes what's new and what has changed with every new build of Azure SQL Edge.
+## Azure SQL Edge 1.0.4
+
+SQL engine build 15.0.2000.1558
+
+### What's new?
+
+- PREDICT support for ONNX
+ - Improvements in handling of null data in PREDICT for ONNX
+
## Azure SQL Edge 1.0.3

SQL engine build 15.0.2000.1554
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes.md
-+ ms.devlang: Previously updated : 04/17/2021 Last updated : 06/03/2021 # What's new in Azure SQL Database & SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
The following features are enabled in the SQL Managed Instance deployment model
If an instance participates in an [auto-failover group](./auto-failover-group-overview.md), changing the instance's [connection type](../managed-instance/connection-types-overview.md) does not take effect for the connections established through the failover group listener endpoint.
-**Workaround**: Drop and recreate auto-failover group afer changing the connection type.
+**Workaround**: Drop and recreate auto-failover group after changing the connection type.
### Procedure sp_send_dbmail may transiently fail when @query parameter is used
-Procedure sp_send_dbmail may transiently fail when `@query` parameter is used. When this issue occurs, every second execution of procedure sp_send_dbmail fails with error `Msg 22050, Level 16, State 1` and message `Failed to initialize sqlcmd library with error number -2147467259`. To be able to see this error properly, the procedure should be called with default value 0 for the parameter `@exclude_query_output`, otherwise the error will not be propagated.
-This problem is caused by a known bug related to how sp_send_dbmail is using impersonation and connection pooling.
-To work around this issue wrap code for sending email into a retry logic that relies on output parameter `@mailitem_id`. If the execution fails, then parameter value will be NULL, indicating sp_send_dbmail should be called one more time to successfully send an email. Here is an example this retry logic.
+Procedure `sp_send_dbmail` may transiently fail when `@query` parameter is used. When this issue occurs, every second execution of procedure sp_send_dbmail fails with error `Msg 22050, Level 16, State 1` and message `Failed to initialize sqlcmd library with error number -2147467259`. To be able to see this error properly, the procedure should be called with default value 0 for the parameter `@exclude_query_output`, otherwise the error will not be propagated.
+This problem is caused by a known bug related to how `sp_send_dbmail` is using impersonation and connection pooling.
+To work around this issue, wrap the code for sending email in retry logic that relies on the output parameter `@mailitem_id`. If the execution fails, the parameter value will be NULL, indicating `sp_send_dbmail` should be called one more time to successfully send an email. Here is an example of this retry logic.
```sql
CREATE PROCEDURE send_dbmail_with_retry AS BEGIN
END
```
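For illustration, here's a hedged Python sketch of the same retry pattern driven from a client application. It assumes the `pyodbc` package, and the connection string, mail profile, and recipient values are placeholders, not values from the article.

```python
import pyodbc

# Placeholder connection string -- replace with your managed instance details.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=<server>;"
            "DATABASE=msdb;UID=<user>;PWD=<password>")

SEND_BATCH = """
DECLARE @id INT;
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = ?, @recipients = ?, @subject = ?, @query = ?,
    @mailitem_id = @id OUTPUT;
SELECT @id;
"""

def send_dbmail_with_retry(cursor, profile, recipients, subject, query, max_attempts=3):
    """Call sp_send_dbmail, retrying while the @mailitem_id output comes back NULL."""
    for attempt in range(max_attempts):
        mailitem_id = cursor.execute(SEND_BATCH, profile, recipients, subject, query).fetchval()
        if mailitem_id is not None:
            return mailitem_id  # a NULL output parameter signals the transient failure
    raise RuntimeError("sp_send_dbmail failed; @mailitem_id was NULL on every attempt")

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    send_dbmail_with_retry(conn.cursor(), "my_profile", "ops@example.com",
                           "Nightly report", "SELECT TOP 10 name FROM sys.objects")
```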
### Distributed transactions can be executed after removing Managed Instance from Server Trust Group
-[Server Trust Groups](../managed-instance/server-trust-group-overview.md) are used to establish trust between Managed Instances that is prerequisite for executing [distributed transactions](./elastic-transactions-overview.md). After removing Managed Instance from Server Trust Group or deleting the group you still might be able to execute distributed transactions. There is a workaround you can apply to be sure that distributed transactions are disabled and that is [user-initiated manual failover](../managed-instance/user-initiated-failover.md) on Managed Instance.
+[Server Trust Groups](../managed-instance/server-trust-group-overview.md) are used to establish trust between Managed Instances, which is a prerequisite for executing [distributed transactions](./elastic-transactions-overview.md). After removing Managed Instance from Server Trust Group or deleting the group, you still might be able to execute distributed transactions. To ensure that distributed transactions are disabled, perform a [user-initiated manual failover](../managed-instance/user-initiated-failover.md) on Managed Instance.
### Distributed transactions cannot be executed after Managed Instance scaling operation

Managed Instance scaling operations that include changing service tier or number of vCores will reset Server Trust Group settings on the backend and disable running [distributed transactions](./elastic-transactions-overview.md). As a workaround, delete and create a new [Server Trust Group](../managed-instance/server-trust-group-overview.md) in the Azure portal.
-### BULK INSERT and BACKUP/RESTORE statements cannot use Managed Identity to access Azure storage
+### BULK INSERT and BACKUP/RESTORE statements should use SAS Key to access Azure storage
-Bulk insert, BACKUP, and RESTORE statements, and OPENROWSET function cannot use `DATABASE SCOPED CREDENTIAL` with Managed Identity to authenticate to Azure storage. As a workaround, switch to SHARED ACCESS SIGNATURE authentication. The following example will not work on Azure SQL (both Database and Managed Instance):
+Using `DATABASE SCOPED CREDENTIAL` syntax with Managed Identity to authenticate to Azure storage is not currently supported. Microsoft recommends using a [shared access signature](../../storage/common/storage-sas-overview.md) for the [database scoped credential](/sql/t-sql/statements/create-credential-transact-sql#d-creating-a-credential-using-a-sas-token) when accessing Azure storage for bulk insert, `BACKUP` and `RESTORE` statements, or the `OPENROWSET` function. For example:
```sql
-CREATE DATABASE SCOPED CREDENTIAL msi_cred WITH IDENTITY = 'Managed Identity';
+CREATE DATABASE SCOPED CREDENTIAL sas_cred WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
+ SECRET = '******srt=sco&sp=rwac&se=2017-02-01T00:55:34Z&st=2016-12-29T16:55:34Z***************';
GO
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
- WITH ( TYPE = BLOB_STORAGE, LOCATION = 'https://****************.blob.core.windows.net/curriculum', CREDENTIAL= msi_cred );
+ WITH ( TYPE = BLOB_STORAGE, LOCATION = 'https://****************.blob.core.windows.net/invoices', CREDENTIAL= sas_cred );
GO
BULK INSERT Sales.Invoices FROM 'inv-2017-12-08.csv' WITH (DATA_SOURCE = 'MyAzureBlobStorage');
```
-**Workaround**: Use [Shared Access Signature to authenticate to storage](/sql/t-sql/statements/bulk-insert-transact-sql#f-importing-data-from-a-file-in-azure-blob-storage).
+For another example of using `BULK INSERT` with a SAS key, see [Shared Access Signature to authenticate to storage](/sql/t-sql/statements/bulk-insert-transact-sql#f-importing-data-from-a-file-in-azure-blob-storage).
### Service Principal cannot access Azure AD and AKV
-In some circumstances there might exist an issue with Service Principal used to access Azure AD and Azure Key Vault (AKV) services. As a result, this issue impacts usage of Azure AD authentication and Transparent Database Encryption (TDE) with SQL Managed Instance. This might be experienced as an intermittent connectivity issue, or not being able to run statements such are CREATE LOGIN/USER FROM EXTERNAL PROVIDER or EXECUTE AS LOGIN/USER. Setting up TDE with customer-managed key on a new Azure SQL Managed Instance might also not work in some circumstances.
+In some circumstances, there might exist an issue with the Service Principal used to access Azure AD and Azure Key Vault (AKV) services. As a result, this issue impacts usage of Azure AD authentication and Transparent Database Encryption (TDE) with SQL Managed Instance. This might be experienced as an intermittent connectivity issue, or not being able to run statements such as `CREATE LOGIN/USER FROM EXTERNAL PROVIDER` or `EXECUTE AS LOGIN/USER`. Setting up TDE with a customer-managed key on a new Azure SQL Managed Instance might also not work in some circumstances.
-**Workaround**: To prevent this issue from occurring on your SQL Managed Instance before executing any update commands, or in case you have already experienced this issue after update commands, go to Azure portal, access SQL Managed Instance [Active Directory admin blade](./authentication-aad-configure.md?tabs=azure-powershell#azure-portal). Verify if you can see the error message "Managed Instance needs a Service Principal to access Azure Active Directory. Click here to create a Service Principal". In case you have encountered this error message, click on it, and follow the step-by-step instructions provided until this error have been resolved.
+**Workaround**: To prevent this issue from occurring on your SQL Managed Instance before executing any update commands, or in case you have already experienced this issue after update commands, go to the Azure portal and access the SQL Managed Instance [Active Directory admin page](./authentication-aad-configure.md?tabs=azure-powershell#azure-portal). Verify whether you can see the error message "Managed Instance needs a Service Principal to access Azure Active Directory. Click here to create a Service Principal". If you have encountered this error message, click on it, and follow the step-by-step instructions provided until the error has been resolved.
### Restoring manual backup without CHECKSUM might fail
If a failover group spans across instances in different Azure subscriptions or r
### SQL Agent roles need explicit EXECUTE permissions for non-sysadmin logins
-If non-sysadmin logins are added to any [SQL Agent fixed database roles](/sql/ssms/agent/sql-server-agent-fixed-database-roles), there exists an issue in which explicit EXECUTE permissions need to be granted to the master stored procedures for these logins to work. If this issue is encountered, the error message "The EXECUTE permission was denied on the object <object_name> (Microsoft SQL Server, Error: 229)" will be shown.
+If non-sysadmin logins are added to any [SQL Agent fixed database roles](/sql/ssms/agent/sql-server-agent-fixed-database-roles), there exists an issue in which explicit EXECUTE permissions need to be granted to three stored procedures in the master database for these logins to work. If this issue is encountered, the error message "The EXECUTE permission was denied on the object <object_name> (Microsoft SQL Server, Error: 229)" will be shown.
**Workaround**: Once you add logins to a SQL Agent fixed database role (SQLAgentUserRole, SQLAgentReaderRole, or SQLAgentOperatorRole), for each of the logins added to these roles, execute the below T-SQL script to explicitly grant EXECUTE permissions to the stored procedures listed.

```tsql
USE [master]
GO
-CREATE USER [login_name] FOR LOGIN [login_name]
+CREATE USER [login_name] FOR LOGIN [login_name];
GO
-GRANT EXECUTE ON master.dbo.xp_sqlagent_enum_jobs TO [login_name]
-GRANT EXECUTE ON master.dbo.xp_sqlagent_is_starting TO [login_name]
-GRANT EXECUTE ON master.dbo.xp_sqlagent_notify TO [login_name]
+GRANT EXECUTE ON master.dbo.xp_sqlagent_enum_jobs TO [login_name];
+GRANT EXECUTE ON master.dbo.xp_sqlagent_is_starting TO [login_name];
+GRANT EXECUTE ON master.dbo.xp_sqlagent_notify TO [login_name];
```

### SQL Agent jobs can be interrupted by Agent process restart
You can [identify the number of remaining files](https://medium.com/azure-sqldb-
Several system views, performance counters, error messages, XEvents, and error log entries display GUID database identifiers instead of the actual database names. Don't rely on these GUID identifiers because they're replaced with actual database names in the future.
-**Workaround**: Use sys.databases view to resolve the actual database name from the physical database name, specified in the form of GUID database identifiers:
+**Workaround**: Use the `sys.databases` view to resolve the actual database name from the physical database name, specified in the form of GUID database identifiers:
```tsql
SELECT name as ActualDatabaseName, physical_database_name as GUIDDatabaseIdentifier
FROM sys.databases
-WHERE database_id > 4
+WHERE database_id > 4;
```

### Error logs aren't persisted
azure-sql Ledger How To Access Acl Digest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/ledger-how-to-access-acl-digest.md
This article shows you how to access an [Azure SQL Database ledger](ledger-overv
- Python 2.7, 3.5.3, or later
- Have an existing Azure SQL Database with ledger enabled. See [Quickstart: Create an Azure SQL Database with ledger enabled](ledger-create-a-single-database-with-ledger-enabled.md) if you haven't already created an Azure SQL Database.
-- [Azure Confidential Ledger client library for Python](https://github.com/Azure/azure-sdk-for-python/blob/b42651ae4791aca8c9fbe282832b81badf798aa9/sdk/confidentialledger/azure-confidentialledger/README.md#create-a-client)
+- [Azure Confidential Ledger client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/confidentialledger/azure-confidentialledger)
- A running instance of [Azure Confidential Ledger](/azure/confidential-ledger/).

## How does the integration work?

Azure SQL server periodically calculates the digests of the [ledger database(s)](ledger-overview.md#ledger-database) and stores them in Azure Confidential Ledger. At any time, a user can validate the integrity of the data by downloading the digests from Azure Confidential Ledger and comparing them to the digests stored in Azure SQL Database ledger. The following steps explain the process.
-## Step 1 - Find the Digest location
+## 1. Find the Digest location
> [!NOTE]
> The query will return more than one row if multiple Azure Confidential Ledger instances were used to store the digest. For each row, repeat steps 2 through 6 to download the digests from all instances of Azure Confidential Ledger.
Using the [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms), run the following query:

```sql
SELECT * FROM sys.database_ledger_digest_locations WHERE path like '%.confidential-ledger.azure.com%'
```
-## Step 2 - Determine the Subledgerid
+## 2. Determine the Subledgerid
We're interested in the value in the path column from the query output. It consists of two parts, namely the `host name` and the `subledgerid`. As an example, in the Url `https://contoso-ledger.confidential-ledger.azure.com/sqldbledgerdigests/ledgersvr2/ledgerdb/2021-04-13T21:20:51.0000000`, the `host name` is `https://contoso-ledger.confidential-ledger.azure.com` and the `subledgerid` is `sqldbledgerdigests/ledgersvr2/ledgerdb/2021-04-13T21:20:51.0000000`. We'll use it in Step 4 to download the digests.
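As a small illustration, the split can be done with Python's standard library; the URL below is the example from the paragraph above.

```python
from urllib.parse import urlparse

# Example digest location path from the query output in step 1.
path = ("https://contoso-ledger.confidential-ledger.azure.com"
        "/sqldbledgerdigests/ledgersvr2/ledgerdb/2021-04-13T21:20:51.0000000")

parsed = urlparse(path)
host_name = f"{parsed.scheme}://{parsed.netloc}"
subledger_id = parsed.path.lstrip("/")

print(host_name)     # https://contoso-ledger.confidential-ledger.azure.com
print(subledger_id)  # sqldbledgerdigests/ledgersvr2/ledgerdb/2021-04-13T21:20:51.0000000
```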
-## Step 3 - Obtain an Azure AD token
+## 3. Obtain an Azure AD token
-The Azure Confidential Ledger API accepts an Azure Active Directory (Azure AD) Bearer token as the caller identity. This identity needs access to ACL via Azure Resource Manager during provisioning. The user who had enabled ledger in SQL Database is automatically given administrator access to Azure Confidential Ledger. To obtain a token, the user needs to authenticate using [Azure CLI](/cli/azure/install-azure-cli) with the same account that was used with Azure portal. Once the user has authenticated, they can use [DefaultAzureCredentials()](/dotnet/api/azure.identity.defaultazurecredential) to retrieve a bearer token and call Azure Confidential Ledger API.
+The Azure Confidential Ledger API accepts an Azure Active Directory (Azure AD) Bearer token as the caller identity. This identity needs access to ACL via Azure Resource Manager during provisioning. The user who had enabled ledger in SQL Database is automatically given administrator access to Azure Confidential Ledger. To obtain a token, the user needs to authenticate using [Azure CLI](/cli/azure/install-azure-cli) with the same account that was used with Azure portal. Once the user has authenticated, they can use [AzureCliCredential](/python/api/azure-identity/azure.identity.azureclicredential) to retrieve a bearer token and call Azure Confidential Ledger API.
Log in to Azure AD using the identity with access to ACL.
az login
Retrieve the Bearer token.

```python
-from azure.identity import DefaultAzureCredential
-credential = DefaultAzureCredential()
+from azure.identity import AzureCliCredential
+credential = AzureCliCredential()
```
-## Step 4 - Download the digests from Azure Confidential Ledger
+## 4. Download the digests from Azure Confidential Ledger
-The following Python script downloads the digests from Azure Confidential Ledger. The script uses the [Azure Confidential Ledger client library for Python.](https://github.com/Azure/azure-sdk-for-python/blob/b42651ae4791aca8c9fbe282832b81badf798aa9/sdk/confidentialledger/azure-confidentialledger/README.md#create-a-client)
+The following Python script downloads the digests from Azure Confidential Ledger. The script uses the [Azure Confidential Ledger client library for Python.](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/confidentialledger/azure-confidentialledger)
```python
-from azure.identity import DefaultAzureCredential
+from azure.identity import AzureCliCredential
from azure.confidentialledger import ConfidentialLedgerClient from azure.confidentialledger.identity_service import ConfidentialLedgerIdentityServiceClient
ledger_tls_cert_file_name = f"{ledger_id}_certificate.pem"
with open(ledger_tls_cert_file_name, "w") as cert_file:
    cert_file.write(network_identity.ledger_tls_certificate)
-credential = DefaultAzureCredential()
+credential = AzureCliCredential()
ledger_client = ConfidentialLedgerClient(
    endpoint=ledger_host_url,
    credential=credential,
else:
print("\n***No more digests were found for the supplied SubledgerID.") ```
-## Step 5 - Download the Digests from the SQL Server
+## 5. Download the Digests from the SQL Server
> [!NOTE]
> This is a way to confirm that the hashes stored in the Azure SQL Database ledger have not changed over time. For a complete audit of the integrity of the Azure SQL Database ledger, see [How to verify a ledger table to detect tampering](ledger-verify-database.md).
Using [SSMS](/sql/ssms/download-sql-server-management-studio-ssms), run the following query:

```sql
SELECT * FROM sys.database_ledger_blocks
```
-## Step 6 - Comparison
+## 6. Comparison
Compare the digest retrieved from the Azure Confidential Ledger to the digest returned from your SQL database, using the `block_id` as the key. For example, the digest of `block_id` = `1` is the value of the `previous_block_hash` column in the `block_id` = `2` row. Similarly, for `block_id` = `3`, it's the value of the `previous_block_hash` column in the `block_id` = `4` row. A mismatch in the hash value is an indicator of potential data tampering.
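Here's a hedged Python sketch of this comparison. It assumes `acl_digests` maps `block_id` to the digest downloaded in step 4, and `sql_rows` holds `(block_id, previous_block_hash)` pairs from `sys.database_ledger_blocks` in step 5; both names and the sample values are illustrative.

```python
def verify_digests(acl_digests, sql_rows):
    """Compare ACL digests with hashes recorded in sys.database_ledger_blocks.

    The hash of block N is stored as previous_block_hash on block N + 1, so a
    row (block_id, previous_block_hash) attests to block block_id - 1.
    """
    sql_hashes = {block_id - 1: prev_hash for block_id, prev_hash in sql_rows}
    mismatches = [
        block_id
        for block_id, digest in acl_digests.items()
        if block_id in sql_hashes and sql_hashes[block_id] != digest
    ]
    if mismatches:
        print(f"Potential tampering detected for blocks: {mismatches}")
    else:
        print("All compared digests match.")

# Illustrative data only.
verify_digests(
    acl_digests={1: "0xABC", 3: "0xDEF"},
    sql_rows=[(2, "0xABC"), (4, "0xDEF")],
)
```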
If data tampering is suspected, see [How to verify a ledger table to detect tamp
- [Digest management and database verification](ledger-digest-management-and-database-verification.md)
- [Append-only ledger tables](ledger-append-only-ledger-tables.md)
- [Updatable ledger tables](ledger-updatable-ledger-tables.md)
-- [How to verify a ledger table to detect tampering](ledger-verify-database.md)
+- [How to verify a ledger table to detect tampering](ledger-verify-database.md)
azure-sql Management Operations Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/management-operations-overview.md
Previously updated : 07/10/2020 Last updated : 06/08/2021 # Overview of Azure SQL Managed Instance management operations
Management operations consist of multiple steps. With [Operations API introduced
|Old SQL instance cleanup |Removing old SQL process from the virtual cluster |

> [!NOTE]
-> As a result of scaling instances, underlying virtual cluster will go through process of releasing unused capacity and possible capacity defragmentation, which could impact instances that did not participate in creation / scaling operations.
+> Once instance scaling is completed, the underlying virtual cluster goes through a process of releasing unused capacity and possible capacity defragmentation. This could impact instances in the same subnet that did not participate in the scaling operation, causing their failover.
## Management operations cross-impact
azure-sql Db2 To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/db2-to-sql-database-guide.md
For additional assistance, see the following resources, which were developed in
|Asset |Description |
|---|---|
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
-|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/microsoft/DataMigrationTeam/blob/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 zOS data assets discovery and assessment package](https://www.microsoft.com/download/details.aspx?id=103108)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://www.microsoft.com/download/details.aspx?id=103109)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
The Data SQL Engineering team developed these resources. This team's core charte
- [Cloud Migration Resources](https://azure.microsoft.com/migration/resources)
- To assess the application access layer, see [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit).
-- For details on how to perform data access layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+- For details on how to perform data access layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Db2 To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/db2-to-managed-instance-guide.md
For additional assistance, see the following resources, which were developed in
|Asset |Description |
|---|---|
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
-|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/microsoft/DataMigrationTeam/blob/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 zOS data assets discovery and assessment package](https://www.microsoft.com/download/details.aspx?id=103108)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including \*.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://www.microsoft.com/download/details.aspx?id=103109)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
The Data SQL Engineering team developed these resources. This team's core charte
- [Best practices for costing and sizing workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs)
- To assess the application access layer, see [Data Access Migration Toolkit](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit).
-- For details on how to perform data access layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+- For details on how to perform data access layer A/B testing, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Db2 To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/db2-to-sql-on-azure-vm-guide.md
For additional assistance, see the following resources, which were developed in
|Asset |Description |
|---|---|
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process.|
-|[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
-|[IBM Db2 LUW inventory scripts and artifacts](https://github.com/microsoft/DataMigrationTeam/tree/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
+|[Db2 zOS data assets discovery and assessment package](https://www.microsoft.com/download/details.aspx?id=103108)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including \*.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.|
+|[IBM Db2 LUW inventory scripts and artifacts](https://www.microsoft.com/download/details.aspx?id=103109)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
After migration, review the [Post-migration validation and optimization guide](/
For Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios, see [Data migration services and tools](../../../dms/dms-tools-matrix.md).
-For video content, see [Overview of the migration journey](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/).
+For video content, see [Overview of the migration journey](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/).
azure-sql Sql Agent Extension Manually Register Single Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md
To upgrade the extension to full mode, run the following Azure PowerShell code snippet:
$vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>

# Register with SQL IaaS Agent extension in full mode
- Update-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -SqlManagementType Full
+ Update-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -SqlManagementType Full -Location $vm.Location
```
azure-video-analyzer Computer Vision For Spatial Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/computer-vision-for-spatial-analysis.md
You will need this key and endpoint URI in your deployment manifest files to dep
1. Clone the repo from this location: [https://github.com/Azure-Samples/azure-video-analyzer-iot-edge-csharp](https://github.com/Azure-Samples/azure-video-analyzer-iot-edge-csharp).
1. In Visual Studio Code, open the folder where the repo has been downloaded.
1. In Visual Studio Code, go to the src/cloud-to-device-console-app folder. There, create a file and name it *appsettings.json*. This file will contain the settings needed to run the program.
-1. Get the `IotHubConnectionString` from the edge device by following these steps:
-
- - go to your IoT Hub in Azure portal and click on `Shared access policies` in the left navigation pane.
- - Click on `iothubowner` get the shared access keys.
- - Copy the `Connection String - primary key` and paste it in the input box on the VSCode.
-
- The connection string will look like: <br/>`HostName=xxx.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=xxx`
-
-1. Copy the below contents into the file. Make sure you replace the variables.
-
+1. Copy the contents of the appsettings.json file from Azure portal. The text should look like the following code.
```json { "IoThubConnectionString": "HostName=<IoTHubName>.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<SharedAccessKey>",
You will need this key and endpoint URI in your deployment manifest files to dep
```

1. Go to the src/edge folder and create a file named .env.
-1. Copy the contents of the env file from Azure portal. The text should look like the following code.
+1. Copy the contents of the env.txt file from Azure portal. The text should look like the following code.
```env
SUBSCRIPTION_ID="<Subscription ID>"
There are a few things you need to pay attention to in the deployment template f
1. `IpcMode` in `avaedge` and `spatialanalysis` module createOptions should be the same and set to **host**.
1. For the RTSP simulator to work, ensure that you have set up the Volume Bounds when using an Azure Stack Edge device.
- 1. [Connect to the SMB share](../../databox-online/azure-stack-edge-deploy-add-shares.md#connect-to-an-smb-share) and copy the [sample stairwell video file](https://lvamedia.blob.core.windows.net/public/2018-03-05.10-27-03.10-30-01.admin.G329.mp4) to the Local share.
+ 1. [Connect to the SMB share](../../databox-online/azure-stack-edge-deploy-add-shares.md#connect-to-an-smb-share) and copy the [sample stairwell video file](https://lvamedia.blob.core.windows.net/public/2018-03-05.10-27-03.10-30-01.admin.G329.mkv) to the Local share.
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWDRJd]
In operations.json:
{ "opName": "pipelineTopologySet", "opParams": {
- "topologyUrl": "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-count-operation-topology.json"
+ "pipelineTopologyUrl": "https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/spatial-analysis/person-count-operation-topology.json"
    }
},
```
In operations.json:
"parameters": [ { "name": "rtspUrl",
- "value": " rtsp://rtspsim:554/media/stairwell.mkv"
+ "value": " rtsp://rtspsim:554/media/2018-03-05.10-27-03.10-30-01.admin.G329.mkv"
}, { "name": "rtspUserName",
In operations.json:
],
```
-Run a debug session and follow **TERMINAL** instructions, it will set pipelineTopology, set livePipeline, activate livePipeline, and finally delete the resources.
+Run a debug session by selecting F5 and follow the **TERMINAL** instructions. It will set pipelineTopology, set livePipeline, activate livePipeline, and finally delete the resources.
## Interpret results
Sample output for personZoneEvent (from `SpatialAnalysisPersonZoneCrossingOperat
</details>
-## Video Player
+## Playing back the recording
-You can use a video player to view the generated video including the inferences (bounding boxes) as shown below:
+You can examine the Video Analyzer video resource that was created by the live pipeline by logging in to the Azure portal and viewing the video.
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/spatial-analysis/inference.png" alt-text="Bounding boxes":::
+1. Open your web browser, and go to the [Azure portal](https://portal.azure.com/). Enter your credentials to sign in to the portal. The default view is your service dashboard.
+1. Locate your Video Analyzers account among the resources you have in your subscription, and open the account pane.
+1. Select **Videos** in the **Video Analyzers** list.
+1. You'll find a video listed with the name `personcount`. This is the name chosen in your pipeline topology file.
+1. Select the video.
+1. On the video details page, click the **Play** icon.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/spatial-analysis/sa-video-playback.png" alt-text="Screenshot of video playback":::
+
+1. To view the inference metadata as bounding boxes on the video, click the **bounding box** icon.
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/record-stream-inference-data-with-video/bounding-box.png" alt-text="Bounding box icon":::
+
+> [!NOTE]
+> Because the source of the video was a container simulating a camera feed, the time stamps in the video are related to when you activated the live pipeline and when you deactivated it.
## Troubleshooting
azure-video-analyzer Record Stream Inference Data With Video https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/record-stream-inference-data-with-video.md
You can examine the Video Analyzer video resource that was created by the live p
1. You'll find a video listed with the name `sample-cvr-with-inference-metadata`. This is the name chosen in your pipeline topology file.
1. Select the video.
1. On the video details page, click the **Play** icon.
-
+1. To view the inference metadata as bounding boxes on the video, click the **bounding box** icon (circled in red)
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/record-stream-inference-data-with-video/video-playback.png" alt-text="Screenshot of video playback":::
-
-1. To view the inference metadata as bounding boxes on the video, click the **bounding box** icon
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/record-stream-inference-data-with-video/bounding-box.png" alt-text="Bounding box icon":::
> [!NOTE]
> Because the source of the video was a container simulating a camera feed, the time stamps in the video are related to when you activated the live pipeline and when you deactivated it.
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-security-integration.md
Title: Protect your Azure VMware Solution VMs with Azure Security Center integration
-description: Protect your Azure VMware Solution VMs with Azure's native security tools from the Azure Security Center dashboard.
+ Title: Integrate Azure Security Center with Azure VMware Solution
+description: Learn how to protect your Azure VMware Solution VMs with Azure's native security tools from the Azure Security Center dashboard.
Previously updated : 02/12/2021 Last updated : 06/14/2021
-# Protect your Azure VMware Solution VMs with Azure Security Center integration
+# Integrate Azure Security Center with Azure VMware Solution
-Azure native security tools provide protection for a hybrid environment of Azure, Azure VMware Solution, and on-premises virtual machines (VMs). This article shows you how to set up Azure tools for hybrid environment security. You'll use these tools to identify and address various threats.
+Azure Security Center provides advanced threat protection across your Azure VMware Solution and on-premises virtual machines (VMs). It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution. You can define security policies in Azure Security Center. For more information, see [Working with security policies](../security-center/tutorial-security-policy.md).
-## Azure native services
-
-Here's a quick summary of Azure native
-
-- **Log Analytics workspace:** Log Analytics workspace is a unique environment to store log data. Each workspace has its own data repository and configuration. Data sources and solutions are configured to store their data in a specific workspace.
-- **Azure Security Center:** Azure Security Center is a unified infrastructure security management system. It strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises.
-- **Azure Sentinel:** Azure Sentinel is a cloud-native, security information event management (SIEM) solution. It provides security analytics, alert detection, and automated threat response across an environment.
-
-## Topology
+Azure Security Center offers many features, including:
+- File integrity monitoring
+- Fileless attack detection
+- Operating system patch assessment
+- Security misconfigurations assessment
+- Endpoint protection assessment
+The diagram shows the integrated security monitoring architecture for Azure VMware Solution VMs.
+
:::image type="content" source="media/azure-security-integration/azure-integrated-security-architecture.png" alt-text="Diagram showing the architecture of Azure Integrated Security." border="false":::
-The Log Analytics agent enables collection of log data from Azure, Azure VMware Solution, and on-premises VMs. The log data is sent to Azure Monitor Logs and is stored in a Log Analytics workspace. You can deploy the Log Analytics agent using Arc enabled servers [VM extensions support](../azure-arc/servers/manage-vm-extensions.md) for new and existing VMs.
-
-Once the logs are collected by the Log Analytics workspace, you can configure the Log Analytics workspace with Azure Security Center. Azure Security Center will assess the vulnerability status of Azure VMware Solution VMs and raise an alert for any critical vulnerability. For instance, it assesses missing operating system patches, security misconfigurations, and [endpoint protection](../security-center/security-center-services.md).
-
-You can configure the Log Analytics workspace with Azure Sentinel for alert detection, threat visibility, hunting, and threat response. In the preceding diagram, Azure Security Center is connected to Azure Sentinel using Azure Security Center connector. Azure Security Center will forward the environment vulnerability to Azure Sentinel to create an incident and map with other threats. You can also create the scheduled rules query to detect unwanted activity and convert it to the incidents.
-
-## Benefits
--- Azure native services can be used for hybrid environment security in Azure, Azure VMware Solution, and on-premises services.-- Using a Log Analytics workspace, you can collect the data or the logs to a single point and present the same data to different Azure native services.-- Azure Security Center offers many features, including:
- - File integrity monitoring
- - Fileless attack detection
- - Operating system patch assessment
- - Security misconfigurations assessment
- - Endpoint protection assessment
-- Azure Sentinel allows you to:
- - Collect data at cloud scale across all users, devices, applications, and infrastructure, both on premises and in multiple clouds.
- - Detect previously undetected threats.
- - Investigate threats with artificial intelligence and hunt for suspicious activities at scale.
- - Respond to incidents rapidly with built-in orchestration and automation of common tasks.
-
-## Create a Log Analytics workspace
-You'll need a Log Analytics workspace to collect data from various sources. For more information, see [Create a Log Analytics workspace from the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
+## Prerequisites
-## Deploy Security Center and configure Azure VMware Solution VMs
+- [Plan for optimized use of Security Center](../security-center/security-center-planning-and-operations-guide.md).
-Azure Security Center is a pre-configured tool that doesn't require deployment. In the Azure portal, search for **Security Center** and select it.
+- [Review the supported platforms in Security Center](../security-center/security-center-os-coverage.md).
-### Enable Azure Defender
+- [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) to collect data from various sources (a minimal scripted sketch follows this list).
-Azure Defender extends Azure Security Center's advanced threat protection across your hybrid workloads, both on premises and in the cloud. So to protect your Azure VMware Solution VMs, you'll need to enable Azure Defender.
+- [Enable Azure Security Center in your subscription](../security-center/security-center-get-started.md).
-1. In Security Center, select **Getting started**.
+ >[!NOTE]
+ >Azure Security Center is a pre-configured tool that doesn't require deployment, but you'll need to enable it in the Azure portal.
-2. Select the **Upgrade** tab and then select your subscription or workspace.
+- [Enable Azure Defender](../security-center/enable-azure-defender.md).
-3. Select **Upgrade** to enable Azure Defender.
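If you'd rather script the workspace prerequisite from this list, the following is a minimal sketch using the Azure SDK for Python. The subscription ID, resource group, workspace name, region, and pricing tier are placeholder assumptions, and the operation name can vary slightly between SDK versions (`create_or_update` in older releases).

```python
# Minimal sketch: create the prerequisite Log Analytics workspace with the
# Azure SDK for Python. All names, the region, and the pricing tier below are
# placeholder assumptions; adjust retention to your needs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

client = LogAnalyticsManagementClient(DefaultAzureCredential(), "<subscription-id>")

workspace = client.workspaces.begin_create_or_update(
    "myResourceGroup",
    "myAvsWorkspace",
    {
        "location": "eastus",
        "sku": {"name": "PerGB2018"},  # pay-as-you-go pricing tier
        "retention_in_days": 30,
    },
).result()

print(workspace.customer_id)  # workspace ID that agents use to connect
```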
## Add Azure VMware Solution VMs to Security Center
Azure Defender extends Azure Security Center's advanced threat protection across
4. On the **Prerequisites** tab, select **Next**.
-5. On the **Resource details** tab, fill in the following details:
+5. On the **Resource details** tab, fill in the following details and then select **Next: Tags**.
- Subscription
- Resource group
- Region
- Operating system
- Proxy Server details
- Then select **Next: Tags**.
-
6. On the **Tags** tab, select **Next**.

7. On the **Download and run script** tab, select **Download**.
Azure Defender extends Azure Security Center's advanced threat protection across
## View recommendations and passed assessments
+Recommendations and passed assessments provide you with the security health details of your resources.
+ 1. In Azure Security Center, select **Inventory** from the left pane.

2. For Resource type, select **Servers - Azure Arc**.
Azure Defender extends Azure Security Center's advanced threat protection across
## Deploy an Azure Sentinel workspace
-Azure Sentinel is built on top of a Log Analytics workspace. Your first step in onboarding Azure Sentinel is to select the Log Analytics workspace you wish to use for that purpose.
+Azure Sentinel is built on top of a Log Analytics workspace, so you'll just need to select the Log Analytics workspace you want to use.
1. In the Azure portal, search for **Azure Sentinel**, and select it.
Azure Sentinel is built on top of a Log Analytics workspace. Your first step in
3. Select the Log Analytics workspace and select **Add**.
-## Enable data collector for security events on Azure VMware Solution VMs
-
-Now you're ready to connect Azure Sentinel with your data sources, in this case, security events.
+## Enable data collector for security events
1. On the Azure Sentinel workspaces page, select the configured workspace.
Now you're ready to connect Azure Sentinel with your data sources, in this case,
4. On the connector page, select the events you wish to stream and then select **Apply Changes**.
- :::image type="content" source="media/azure-security-integration/select-events-you-want-to-stream.png" alt-text="Screenshot of Security Events page in Azure Sentinel where you can select which events to stream.":::
+ :::image type="content" source="media/azure-security-integration/select-events-you-want-to-stream.png" alt-text="Screenshot of Security Events page in Azure Sentinel where you can select which events to stream.":::
## Connect Azure Sentinel with Azure Security Center
After connecting data sources to Azure Sentinel, you can create rules to generat
3. Select **+Create** and on the drop-down, select **Scheduled query rule**.
-4. On the **General** tab, enter the required information.
+4. On the **General** tab, enter the required information and then select **Next: Set rule logic**.
- Name
- Description
- Tactics
- Severity
- - Status
- Select **Next: Set rule logic >**.
+ - Status
-5. On the **Set rule logic** tab, enter the required information.
+5. On the **Set rule logic** tab, enter the required information and then select **Next**.
- Rule query (here showing our example query)
After connecting data sources to Azure Sentinel, you can create rules to generat
```
- Map entities
- Query scheduling
- Alert threshold
- Event grouping
- Suppression
- Select **Next**.
-6. On the **Incident settings** tab, enable **Create incidents from alerts triggered by this analytics rule** and select **Next: Automated response >**.
+6. On the **Incident settings** tab, enable **Create incidents from alerts triggered by this analytics rule** and select **Next: Automated response**.
:::image type="content" source="media/azure-security-integration/create-new-analytic-rule-wizard.png" alt-text="Screenshot of the Analytic rule wizard for creating a new rule in Azure Sentinel. Shows Create incidents from alerts triggered by this rule as enabled.":::
-7. Select **Next: Review >**.
+7. Select **Next: Review**.
8. On the **Review and create** tab, review the information and select **Create**.
-After the third failed attempt to sign in to Windows server, the created rule triggers an incident for every unsuccessful attempt.
+>[!TIP]
+>After the third failed attempt to sign in to Windows server, the created rule triggers an incident for every unsuccessful attempt.
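To illustrate the kind of detection the tip above describes, here's a minimal sketch that runs a failed sign-in query against the Sentinel workspace with the `azure-monitor-query` package. The query, the workspace ID, and the three-attempt threshold are assumptions for this example, not the article's exact rule query; Windows event ID 4625 records a failed sign-in attempt.

```python
# Minimal sketch: query the Log Analytics workspace that backs Azure Sentinel
# for repeated failed sign-ins. Workspace ID and threshold are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
SecurityEvent
| where EventID == 4625
| summarize FailedAttempts = count() by Account, Computer
| where FailedAttempts >= 3
"""

response = client.query_workspace("<workspace-id>", query, timespan=timedelta(hours=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```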
## View alerts
You can view generated incidents with Azure Sentinel. You can also assign incide
2. Under Threat Management, select **Incidents**.
-3. Select an incident. You can then assign the incident to a team for resolution.
+3. Select an incident and then assign it to a team for resolution.
:::image type="content" source="media/azure-security-integration/assign-incident.png" alt-text="Screenshot of Azure Sentinel Incidents page with incident selected and option to assign the incident for resolution.":::
- After resolving the issue, you can close it.
+>[!TIP]
+>After resolving the issue, you can close it.
## Hunt security threats with queries

You can create queries or use the available pre-defined queries in Azure Sentinel to identify threats in your environment. The following steps run a pre-defined query.
-1. Go to the Azure Sentinel overview page.
+1. On the Azure Sentinel overview page, under Threat management, select **Hunting**. A list of pre-defined queries is displayed.
-2. Under Threat management, select **Hunting**. A list of pre-defined queries is displayed.
+ >[!TIP]
+ >You can also create a new query by selecting **+New Query**.
+ >
+ >:::image type="content" source="media/azure-security-integration/create-new-query.png" alt-text="Screenshot of Azure Sentinel Hunting page with + New Query highlighted.":::
3. Select a query and then select **Run Query**.

4. Select **View Results** to check the results.
-### Create a new query
-
-1. Under Threat management, select **Hunting** and then **+New Query**.
-
- :::image type="content" source="media/azure-security-integration/create-new-query.png" alt-text="Screenshot of Azure Sentinel Hunting page with + New Query highlighted.":::
-
-2. Fill in the following information to create a custom query.
- - Name
- - Description
- - Custom query
- - Enter Mapping
- - Tactics
-
-3. Select **Create**. You can then select the created query, **Run Query**, and **View Results**.
## Next steps

Now that you've covered how to protect your Azure VMware Solution VMs, you may want to learn about:

-- Using the [Azure Defender dashboard](../security-center/azure-defender-dashboard.md)
+- [Using the Azure Defender dashboard](../security-center/azure-defender-dashboard.md)
- [Advanced multistage attack detection in Azure Sentinel](../azure-monitor/logs/quick-create-workspace.md)
-- [Monitor and manage Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
+- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md)
azure-vmware Concepts Monitor Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-monitor-protection.md
+
+ Title: Concepts - Monitor and protection
+description: Learn about the Azure native services that help secure and protect your Azure VMware Solution workloads.
+ Last updated : 06/14/2021++
+# Monitor and protect Azure VMware Solution workloads
+
+Microsoft Azure native services let you monitor, manage, and protect your virtual machines (VMs) on Azure VMware Solution and on-premises VMs. The Azure native services that you can integrate with Azure VMware Solution include:
+
+- **Log Analytics workspace** is a unique environment to store log data. Each workspace has its own data repository and configuration. Data sources and solutions are configured to store their data in a specific workspace. Easily deploy the Log Analytics agent using Azure Arc enabled servers VM extension support for new and existing VMs.
+- **Azure Security Center** is a unified infrastructure security management system. It strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises. It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution. You can define security policies in Azure Security Center. For more information, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).
+- **[Azure Monitor](../azure-monitor/vm/vminsights-enable-overview.md)** is a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It requires no deployment. With Azure Monitor, you can monitor guest operating system performance and discover and map application dependencies for Azure VMware Solution or on-premises VMs. Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions. Collect data and logs to a single point and present that data to different Azure native services.
+- **Azure Arc** extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms. [Azure Arc enabled servers](../azure-arc/servers/overview.md) enables you to manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or other cloud provider. You can attach a Kubernetes cluster hosted in your Azure VMware Solution environment using [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md).
+- **[Azure Update Management](../automation/update-management/overview.md)** in Azure Automation manages operating system updates for your Windows and Linux machines in a hybrid environment. It monitors patching compliance and forwards patching deviation alerts to Azure Monitor for remediation. Azure Update Management must connect to your Log Analytics workspace to use stored data to assess the status of updates on your VMs.
+
++
+## Topology
+
+The diagram shows the integrated monitoring architecture for Azure VMware Solution VMs.
++
+The Log Analytics agent enables collection of log data from Azure, Azure VMware Solution, and on-premises VMs. The log data is sent to Azure Monitor Logs and stored in a Log Analytics workspace. You can deploy the Log Analytics agent using Arc enabled servers [VM extensions support](../azure-arc/servers/manage-vm-extensions.md) for new and existing VMs.
+
+Once the logs are collected by the Log Analytics workspace, you can configure the Log Analytics workspace with Azure Security Center. Azure Security Center assesses the vulnerability status of Azure VMware Solution VMs and raises an alert for any critical vulnerability. For instance, it assesses missing operating system patches, security misconfigurations, and [endpoint protection](../security-center/security-center-services.md).
+
+You can configure the Log Analytics workspace with Azure Sentinel for alert detection, threat visibility, hunting, and threat response. In the preceding diagram, Azure Security Center is connected to Azure Sentinel using the Azure Security Center connector. Azure Security Center forwards environment vulnerabilities to Azure Sentinel to create incidents and map them with other threats. You can also create scheduled query rules to detect unwanted activity and convert it to incidents.
++
+## Next steps
+
+Now that you've covered Azure VMware Solution monitoring and protection concepts, you may want to learn about:
+
+- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md)
+- [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md)
+- [Automation account authentication](../automation/automation-security-overview.md)
+- [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md) and [Azure Monitor](../azure-monitor/overview.md)
+- [Azure Security Center planning](../security-center/security-center-planning-and-operations-guide.md) and [Supported platforms for Security Center](../security-center/security-center-os-coverage.md)
++
azure-vmware Configure L2 Stretched Vmware Hcx Networks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-l2-stretched-vmware-hcx-networks.md
DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch network when the DHCP server is in the on-premises datacenter.
:::image type="content" source="media/manage-dhcp/hcx-find-destination-network.png" alt-text="Screenshot of a network extension in VMware vSphere Client" lightbox="media/manage-dhcp/hcx-find-destination-network.png":::
-1. In the Azure VMware Solution NSX-T Manager, select **Networking** > **Segments** > **Segment Profiles**.
+1. In NSX-T Manager, select **Networking** > **Segments** > **Segment Profiles**.
1. Select **Add Segment Profile** and then **Segment Security**.
DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch netwo
:::image type="content" source="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png" alt-text="Screenshot showing the BPDU Filter toggled on and the DHCP toggles off" lightbox="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png":::
- :::image type="content" source="media/manage-dhcp/edit-segment-security.png" alt-text="Screenshot of the Segment Security field" lightbox="media/manage-dhcp/edit-segment-security.png":::
+ :::image type="content" source="media/manage-dhcp/edit-segment-security.png" alt-text="Screenshot of the Segment Security field" lightbox="media/manage-dhcp/edit-segment-security.png":::
azure-vmware Configure Nsx Network Components Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-nsx-network-components-azure-portal.md
You'll have four options to configure NSX-T components in the Azure VMware Solution:
- **DNS** – Create a DNS forwarder to send DNS requests to a designated DNS server for resolution.

>[!IMPORTANT]
->You'll still have access to the NSX-T Manager console, where you can use the advanced settings mentioned and other NSX-T features.
+>You can still use NSX-T Manager for the advanced settings mentioned and other NSX-T features.
## Prerequisites Virtual machines (VMs) created or migrated to the Azure VMware Solution private cloud should be attached to a network segment.
azure-vmware Deploy Vm Content Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-vm-content-library.md
Now that the content library has been created, you can add an ISO image to deplo
Now that you've covered creating a content library to deploy VMs in Azure VMware Solution, you may want to learn about: - [How to migrate VM workloads to your private cloud](tutorial-deploy-vmware-hcx.md)-- [Monitor and manage Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
+- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md)
<!-- LINKS - external-->
azure-vmware Integrate Azure Native Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/integrate-azure-native-services.md
+
+ Title: Integrate and deploy Azure native services
+description: Learn how to integrate and deploy Microsoft Azure native tools to monitor and manage your Azure VMware Solution workloads.
+ Last updated : 06/14/2021++
+# Integrate and deploy Azure native services
+
+Microsoft Azure native services let you monitor, manage, and protect your virtual machines (VMs) in a hybrid environment (Azure, Azure VMware Solution, and on-premises). For more information, see [Supported features for VMs](../security-center/security-center-services.md).
+
+The Azure native services that you can integrate with Azure VMware Solution include:
+
+- **Log Analytics workspace:** Each workspace has its own data repository and configuration for storing log data. Data sources and solutions are configured to store their data in a specific workspace. Easily deploy the Log Analytics agent using Azure Arc enabled servers VM extension support for new and existing VMs.
+- **Azure Security Center:** Unified infrastructure security management system that strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises.
+- **Azure Sentinel:** A cloud-native, security information and event management (SIEM) solution that provides security analytics, alert detection, and automated threat response across an environment. Azure Sentinel is built on top of a Log Analytics workspace.
+- **Azure Arc:** Extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms.
+- **Azure Update Management:** Manages operating system updates for your Windows and Linux machines in a hybrid environment.
+- **Azure Monitor:** Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It requires no deployment.
+
+In this article, you'll integrate Azure native services in your Azure VMware Solution private cloud. You'll also learn how to use the tools to manage your VMs throughout their lifecycle.
++
+## Enable Azure Update Management
+
+[Azure Update Management](../automation/update-management/overview.md) in Azure Automation manages operating system updates for your Windows and Linux machines in a hybrid environment. It monitors patching compliance and forwards patching deviation alerts to Azure Monitor for remediation. Azure Update Management must connect to your Log Analytics workspace to use stored data to assess the status of updates on your VMs.
+
+1. [Create an Azure Automation account](../automation/automation-create-standalone-account.md).
+
+ >[!TIP]
+ >You can [use an Azure Resource Manager (ARM) template to create an Automation account](../automation/quickstart-create-automation-account-template.md). Using an ARM template takes fewer steps compared to other deployment methods. A minimal SDK-based sketch also follows these steps.
+
+1. [Enable Update Management from an Automation account](../automation/update-management/enable-from-automation-account.md). This links your Log Analytics workspace to your Automation account. It also enables Update Management for Azure and non-Azure VMs.
+
+ - If you have a workspace, select **Update management**. Then select the Log Analytics workspace and Automation account, and select **Enable**. The setup takes up to 15 minutes to complete.
+
+ - If you want to create a new Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md). You can also create a workspace with [CLI](../azure-monitor/logs/quick-create-workspace-cli.md), [PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md), or [Azure Resource Manager template](../azure-monitor/logs/resource-manager-workspace.md).
+
+1. Once you've enabled Update Management, you can [deploy updates on VMs and review the results](../automation/update-management/deploy-updates.md).
++
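As referenced in the tip above, here's a minimal SDK-based sketch of creating the Automation account with the `azure-mgmt-automation` package. The account name, resource group, region, and SKU are placeholder assumptions, and operation-group and parameter shapes can vary between SDK versions.

```python
# Minimal sketch: create the Automation account from step 1 with the Azure SDK
# for Python. All resource names and the region are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.automation import AutomationClient

client = AutomationClient(DefaultAzureCredential(), "<subscription-id>")

account = client.automation_account.create_or_update(
    "myResourceGroup",
    "myAutomationAccount",
    {"location": "eastus", "sku": {"name": "Basic"}},
)

print(account.state)  # e.g. "Ok" once the account is provisioned
```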
+## Enable Azure Security Center
+
+Azure Security Center provides advanced threat protection across your Azure VMware Solution and on-premises virtual machines (VMs). It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution.
+
+Azure Security Center offers many features, including:
+- File integrity monitoring
+- Fileless attack detection
+- Operating system patch assessment
+- Security misconfigurations assessment
+- Endpoint protection assessment
+
+>[!NOTE]
+>Azure Security Center is a pre-configured tool that doesn't require deployment, but you'll need to enable it in the Azure portal.
+
+To enable Azure Security Center, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).
+
+## Onboard VMs to Azure Arc enabled servers
+
+Azure Arc extends Azure management to any infrastructure, including Azure VMware Solution and on-premises. [Azure Arc enabled servers](../azure-arc/servers/overview.md) lets you manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or another cloud provider.
+
+For information on enabling Azure Arc enabled servers for multiple Windows or Linux VMs, see [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md).
+
+## Onboard hybrid Kubernetes clusters with Arc enabled Kubernetes
+
+[Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md) lets you attach a Kubernetes cluster hosted in your Azure VMware Solution environment.
+
+For more information, see [Create an Azure Arc-enabled onboarding Service Principal](../azure-arc/kubernetes/create-onboarding-service-principal.md).
+
+## Deploy the Log Analytics agent
+
+You can monitor Azure VMware Solution VMs through the Log Analytics agent. Machines connected to the Log Analytics workspace use the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers. When data is available, the agent sends it to Azure Monitor Logs for processing. Azure Monitor Logs applies logic to the received data, records it, and makes it available for analysis.
+
+Deploy the Log Analytics agent by using [Azure Arc enabled servers VM extension support](../azure-arc/servers/manage-vm-extensions.md).
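For reference, the sketch below deploys the agent as a machine extension by calling the ARM REST API directly from Python. The resource names, region, API version, and workspace ID/key are placeholder assumptions; the extension schema shown follows the Arc enabled servers VM extension documentation.

```python
# Minimal sketch: deploy the Log Analytics agent to an Arc enabled server as a
# machine extension through the ARM REST API. Resource names, the region, the
# API version, and the workspace ID/key are placeholder assumptions.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/myResourceGroup/providers/Microsoft.HybridCompute"
    "/machines/myAvsVm/extensions/OMSAgentForLinux?api-version=2021-05-20"
)

body = {
    "location": "eastus",
    "properties": {
        "publisher": "Microsoft.EnterpriseCloud.Monitoring",
        "type": "OmsAgentForLinux",  # use "MicrosoftMonitoringAgent" for Windows
        "settings": {"workspaceId": "<workspace-id>"},
        "protectedSettings": {"workspaceKey": "<workspace-key>"},
    },
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json()["properties"]["provisioningState"])
```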
+
+## Enable Azure Monitor
+
+[Azure Monitor](../azure-monitor/overview.md) is a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. Some of the added benefits of Azure Monitor include:
+
+ - Seamless monitoring
+
+ - Better infrastructure visibility
+
+ - Instant notifications
+
+ - Automatic resolution
+
+ - Cost efficiency
+
+You can collect data from different sources to monitor and analyze. For more information, see [Sources of monitoring data for Azure Monitor](../azure-monitor/agents/data-sources.md). You can also collect different types of data for analysis, visualization, and alerting. For more information, see [Azure Monitor data platform](../azure-monitor/data-platform.md).
+
+You can monitor guest operating system performance and discover and map application dependencies for Azure VMware Solution or on-premises VMs. You can also create alert rules to identify issues in your environment, like high use of resources, missing patches, low disk space, and heartbeat of your VMs. You can set an automated response to detected events by sending an alert to IT Service Management (ITSM) tools. Alert detection notification can also be sent via email.
++
+1. [Design your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md)
+
+1. [Enable Azure Monitor for VMs overview](../azure-monitor/vm/vminsights-enable-overview.md)
+
+1. [Configure Log Analytics workspace for Azure Monitor for VMs](../azure-monitor/vm/vminsights-configure-workspace.md).
+
+1. Create alert rules to identify issues in your environment:
+ - [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md).
+ - [Create, view, and manage log alerts using Azure Monitor](../azure-monitor/alerts/alerts-log.md).
+ - [Action rules](../azure-monitor/alerts/alerts-action-rules.md) to set automated actions and notifications.
+ - [Connect Azure to ITSM tools using IT Service Management Connector](../azure-monitor/alerts/itsmc-overview.md).
azure-vmware Lifecycle Management Of Azure Vmware Solution Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/lifecycle-management-of-azure-vmware-solution-vms.md
- Title: Monitor and manage Azure VMware Solution VMs
-description: Learn to manage all aspects of the lifecycle of your Azure VMware Solution VMs with Microsoft Azure native tools.
- Previously updated : 05/04/2021--
-# Monitor and manage Azure VMware Solution VMs
---
-Microsoft Azure native tools allow you to monitor and manage your virtual machines (VMs) in the Azure environment. Yet they also allow you to monitor and manage your VMs on Azure VMware Solution and your on-premises VMs. In this article, we'll look at the integrated monitoring architecture Azure offers, and how you can use its native tools to manage your Azure VMware Solution VMs throughout their lifecycle.
-
-## Benefits
--- Azure native services can be used to manage your VMs in a hybrid environment (Azure, Azure VMware Solution, and on-premises).-- Integrated monitoring and visibility of your Azure, Azure VMware Solution, and on-premises VMs.-- With Azure Update Management in Azure Automation, you can manage operating system updates for both your Windows and Linux machines. -- Azure Security Center provides advanced threat protection, including:
- - File integrity monitoring
- - Fileless security alerts
- - Operating system patch assessment
- - Security misconfigurations assessment
- - Endpoint protection assessment
-- Easily deploy the Log Analytics agent using Azure Arc enabled servers VM extension support for new and existing VMs. -- Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions. Collect data and logs to a single point and present that data to different Azure native services. -- Added benefits of Azure Monitor include:
- - Seamless monitoring
- - Better infrastructure visibility
- - Instant notifications
- - Automatic resolution
- - Cost efficiency
-
-## Integrated Azure monitoring architecture
-
-The diagram shows the integrated monitoring architecture for Azure VMware Solution VMs.
-
-![Integrated Azure monitoring architecture](media/lifecycle-management-azure-vmware-solutions-virtual-machines/integrated-azure-monitoring-architecture.png)
-
-## Before you start
-
-If you are new to Azure or unfamiliar with any of the services previously mentioned, review the following articles:
--- [Automation account authentication overview](../automation/automation-security-overview.md)-- [Designing your Azure Monitor Logs deployment](../azure-monitor/logs/design-logs-deployment.md) and [Azure Monitor](../azure-monitor/overview.md)-- [Planning](../security-center/security-center-planning-and-operations-guide.md) and [Supported platforms](../security-center/security-center-os-coverage.md) for Azure Security Center-- [Enable Azure Monitor for VMs overview](../azure-monitor/vm/vminsights-enable-overview.md)-- [What is Azure Arc enabled servers?](../azure-arc/servers/overview.md) and [What is Azure Arc enabled Kubernetes?](../azure-arc/kubernetes/overview.md)-- [Update Management overview](../automation/update-management/overview.md)-
-## Integrate and deploy Azure native services
-
-### Enable Azure Update Management
-
-Azure Update Management in Azure Automation manages operating system updates for your Windows and Linux machines in a hybrid environment. It monitors patching compliance and forwards patching deviation alerts to Azure Monitor for remediation. Azure Update Management must connect to your Log Analytics workspace to use stored data to assess the status of updates on your VMs.
-
-1. Before you can add Log Analytics to Azure Update Management, you first need to [Create an Azure Automation account](../automation/automation-create-standalone-account.md). If you prefer to create your account using a template, see [Create an Automation account using an Azure Resource Manager template](../automation/quickstart-create-automation-account-template.md).
-
-2. **Log Analytics workspace** enables log collection and performance counter collection using the Log Analytics agent or extensions. To create your Log Analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md). If you prefer, you can also create a workspace via [CLI](../azure-monitor/logs/quick-create-workspace-cli.md), [PowerShell](../azure-monitor/logs/powershell-workspace-configuration.md), or [Azure Resource Manager template](../azure-monitor/logs/resource-manager-workspace.md).
-
-3. To enable Azure Update Management for your VMs, see [Enable Update Management from an Automation account](../automation/update-management/enable-from-automation-account.md). In the process, you will link your Log Analytics workspace with your automation account.
-
-4. Once you've added VMs to Azure Update Management, you can [Deploy updates on VMs and review results](../automation/update-management/deploy-updates.md).
-
-### Enable Azure Security Center
-
-Azure Security Center provides advanced threat protection across your hybrid workloads in the cloud and on premises. It will assess the vulnerability of Azure VMware Solution VMs and raise alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution.
-
-Azure Security Center does not require deployment. For more information, see a list of [Supported features for virtual machines](../security-center/security-center-services.md).
-
-1. To add Azure VMware Solution VMs and non-Azure VMs to Security Center, see [Quickstart: Setting up Azure Security Center](../security-center/security-center-get-started.md).
-
-2. After adding Azure VMware Solution VMs or VMs from a non-Azure environment, enable Azure Defender in Security Center. Security Center will assess the VMs for potential security issues. It also provides recommendations in the Overview tab. For more information, see [Security recommendations in Azure Security Center](../security-center/security-center-recommendations.md).
-
-3. You can define security policies in Azure Security Center. For information on configuring your security policies, see [Working with security policies](../security-center/tutorial-security-policy.md).
-
-### Onboard VMs to Azure Arc enabled servers
-
-Azure Arc extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms.
--- For information on enabling Azure Arc enabled servers for multiple Windows or Linux VMs, see [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md).-
-### Onboard hybrid Kubernetes clusters with Arc enabled Kubernetes
-
-You can attach a Kubernetes cluster hosted in your Azure VMware Solution environment using Azure Arc enabled Kubernetes.
--- For more information, see [Create an Azure Arc-enabled onboarding Service Principal](../azure-arc/kubernetes/create-onboarding-service-principal.md).-
-### Deploy the Log Analytics agent
-
-Azure VMware Solution VMs can be monitored through the Log Analytics agent (also referred to as Microsoft Monitoring Agent (MMA) or OMS Linux agent). You already created a Log Analytics workspace while enabling Azure Automation Update Management.
--- Deploy the Log Analytics agent by using [Azure Arc enabled servers VM extension support](../azure-arc/servers/manage-vm-extensions.md).-
-### Enable Azure Monitor
-
-Azure Monitor is a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It requires no deployment. With Azure Monitor, you can monitor guest operating system performance and discover and map application dependencies for Azure VMware Solution or on-premises VMs.
--- Azure Monitor allows you to collect data from different sources to monitor and analyze. For more information, see [Sources of monitoring data for Azure Monitor](../azure-monitor/agents/data-sources.md).--- Collect different types of data for analysis, visualization, and alerting. For more information, see [Azure Monitor data platform](../azure-monitor/data-platform.md).--- To configure Azure Monitor with your Log Analytics workspace, see [Configure Log Analytics workspace for Azure Monitor for VMs](../azure-monitor/vm/vminsights-configure-workspace.md).--- You can create alert rules to identify issues in your environment, like high use of resources, missing patches, low disk space, and heartbeat of your VMs. You can also set an automated response to detected events by sending an alert to IT Service Management (ITSM) tools. Alert detection notification can also be sent via email. To create such rules, see:
- - [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md).
- - [Create, view, and manage log alerts using Azure Monitor](../azure-monitor/alerts/alerts-log.md).
- - [Action rules](../azure-monitor/alerts/alerts-action-rules.md) to set automated actions and notifications.
- - [Connect Azure to ITSM tools using IT Service Management Connector](../azure-monitor/alerts/itsmc-overview.md).
azure-vmware Netapp Files With Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
Title: Integrate Azure NetApp Files with Azure VMware Solution description: Use Azure NetApp Files with Azure VMware Solution VMs to migrate and sync data across on-premises servers, Azure VMware Solution VMs, and cloud infrastructures. Previously updated : 02/10/2021 Last updated : 06/08/2021 # Integrate Azure NetApp Files with Azure VMware Solution
-In this article, we'll walk through the steps of integrating Azure NetApp Files with Azure VMware Solution-based workloads. The guest operating system will run inside virtual machines (VMs) accessing Azure NetApp Files volumes.
+[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure service for migrating and running the most demanding enterprise file workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes. In this article, you'll set up, test, and verify the Azure NetApp Files volume as a file share for Azure VMware Solution workloads using the Network File System (NFS) protocol. The guest operating system runs inside virtual machines (VMs) accessing Azure NetApp Files volumes.
-## Azure NetApp Files overview
+Azure NetApp Files and Azure VMware Solution are created in the same Azure region. Azure NetApp Files is available in many [Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=netapp,azure-vmware&regions=all) and supports cross-region replication. For information on Azure NetApp Files configuration methods, see [Storage hierarchy of Azure NetApp Files](../azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md).
-[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure service for migration and running the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes.
-
-### Features
-(Services where Azure NetApp Files are used.)
+Services where Azure NetApp Files is used:
- **Active Directory connections**: Azure NetApp Files supports [Active Directory Domain Services and Azure Active Directory Domain Services](../azure-netapp-files/create-active-directory-connections.md#decide-which-domain-services-to-use).
In this article, we'll walk through the steps of integrating Azure NetApp Files
- **Azure VMware Solution**: Azure NetApp Files shares can be mounted from VMs that are created in the Azure VMware Solution environment.
-Azure NetApp Files is available in many Azure regions and supports cross-region replication. For information on Azure NetApp Files configuration methods, see [Storage hierarchy of Azure NetApp Files](../azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md).
-
-## Reference architecture
-The following diagram illustrates a connection via Azure ExpressRoute to an Azure VMware Solution private cloud. The Azure VMware Solution environment accesses the Azure NetApp Files share mounted on Azure VMware Solution VMs.
+The diagram shows a connection through Azure ExpressRoute to an Azure VMware Solution private cloud. The Azure VMware Solution environment accesses the Azure NetApp Files share mounted on Azure VMware Solution VMs.
![Diagram showing NetApp Files for Azure VMware Solution architecture.](media/net-app-files/net-app-files-topology.png)
-This article covers instructions to set up, test, and verify the Azure NetApp Files volume as a file share for Azure VMware Solution VMs. In this scenario, we've used the NFS protocol. Azure NetApp Files and Azure VMware Solution are created in the same Azure region.
## Prerequisites
This article covers instructions to set up, test, and verify the Azure NetApp Fi
> * Linux VM on Azure VMware Solution > * Windows VMs on Azure VMware Solution
-## Regions supported
-List of supported regions can be found at [Azure Products by Region](https://azure.microsoft.com/global-infrastructure/services/?products=netapp,azure-vmware&regions=all).
+## Create and mount Azure NetApp Files volumes
-## Verify pre-configured Azure NetApp Files
+You'll create and mount Azure NetApp Files volumes onto Azure VMware Solution VMs.
+
+1. [Create a NetApp account](../azure-netapp-files/azure-netapp-files-create-netapp-account.md).
+
+1. [Set up a capacity pool](../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md).
+
+1. [Create an SMB volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md).
-Follow the step-by-step instructions in the following articles to create and Mount Azure NetApp Files volumes onto Azure VMware Solution VMs.
+1. [Create an NFS volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md).
-- [Create a NetApp account](../azure-netapp-files/azure-netapp-files-create-netapp-account.md)-- [Set up a capacity pool](../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md)-- [Create an SMB volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md)-- [Create an NFS volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md)-- [Delegate a subnet to Azure NetApp Files](../azure-netapp-files/azure-netapp-files-delegate-subnet.md)
+1. [Delegate a subnet to Azure NetApp Files](../azure-netapp-files/azure-netapp-files-delegate-subnet.md).
-The following steps include verification of the pre-configured Azure NetApp Files created in Azure on Azure NetApp Files Premium service level.
+
+## Verify pre-configured Azure NetApp Files
+
+You'll verify the pre-configured Azure NetApp Files created in Azure on the Azure NetApp Files Premium service level.
1. In the Azure portal, under **STORAGE**, select **Azure NetApp Files**. A list of your configured Azure NetApp Files will show.
- :::image type="content" source="media/net-app-files/azure-net-app-files-list.png" alt-text="Screenshot showing list of pre-configured Azure NetApp Files.":::
+ :::image type="content" source="media/net-app-files/azure-net-app-files-list.png" alt-text="Screenshot showing list of pre-configured Azure NetApp Files.":::
2. Select a configured NetApp Files account to view its settings. For example, select **Contoso-anf2**. 3. Select **Capacity pools** to verify the configured pool.
- :::image type="content" source="media/net-app-files/net-app-settings.png" alt-text="Screenshot showing options to view capacity pools and volumes of a configured NetApp Files account.":::
+ :::image type="content" source="media/net-app-files/net-app-settings.png" alt-text="Screenshot showing options to view capacity pools and volumes of a configured NetApp Files account.":::
- The Capacity pools page opens showing the capacity and service level. In this example, the storage pool is configured as 4 TiB with a Premium service level.
+ The Capacity pools page opens showing the capacity and service level. In this example, the storage pool is configured as 4 TiB with a Premium service level.
4. Select **Volumes** to view volumes created under the capacity pool. (See preceding screenshot.) 5. Select a volume to view its configuration.
- :::image type="content" source="media/net-app-files/azure-net-app-volumes.png" alt-text="Screenshot showing volumes created under the capacity pool.":::
+ :::image type="content" source="media/net-app-files/azure-net-app-volumes.png" alt-text="Screenshot showing volumes created under the capacity pool.":::
- A window opens showing the configuration details of the volume.
+ A window opens showing the configuration details of the volume.
- :::image type="content" source="media/net-app-files/configuration-of-volume.png" alt-text="Screenshot showing configuration details of a volume.":::
+ :::image type="content" source="media/net-app-files/configuration-of-volume.png" alt-text="Screenshot showing configuration details of a volume.":::
- You can see that anfvolume has a size of 200 GiB and is in capacity pool anfpool1. It's exported as an NFS file share via 10.22.3.4:/ANFVOLUME. One private IP from the Azure Virtual Network (VNet) was created for Azure NetApp Files and the NFS path to mount on the VM.
+ You can see that anfvolume has a size of 200 GiB and is in capacity pool anfpool1. It's exported as an NFS file share via 10.22.3.4:/ANFVOLUME. One private IP from the Azure Virtual Network (VNet) was created for Azure NetApp Files and the NFS path to mount on the VM.
- To learn about Azure NetApp Files volume performance by size or "Quota," see [Performance considerations for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-performance-considerations.md).
+ To learn about Azure NetApp Files volume performance by size or "Quota," see [Performance considerations for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-performance-considerations.md).
## Verify pre-configured Azure VMware Solution VM share mapping

To make your Azure NetApp Files share accessible to your Azure VMware Solution VM, you'll need to understand SMB and NFS share mapping. Only after configuring the SMB or NFS volumes can you mount them as documented here.

-- SMB share: Create an Active Directory connection before deploying an SMB volume. The specified domain controllers must be accessible by the delegated subnet of Azure NetApp Files for a successful connection. Once the Active Directory is configured within the Azure NetApp Files account, it will appear as a selectable item while creating SMB volumes.
-
-- NFS share: Azure NetApp Files contributes to creating the volumes using NFS or dual protocol (NFS and SMB). A volume's capacity consumption counts against its pool's provisioned capacity. NFS can be mounted to the Linux server by using the command lines or /etc/fstab entries.
-
-## Use Cases of Azure NetApp Files with Azure VMware Solution
+- **SMB share:** Create an Active Directory connection before deploying an SMB volume. The specified domain controllers must be accessible by the delegated subnet of Azure NetApp Files for a successful connection. Once the Active Directory is configured within the Azure NetApp Files account, it will appear as a selectable item while creating SMB volumes.
-The following are just a few compelling Azure NetApp Files use cases.
-- Horizon profile management-- Citrix profile management-- Remote Desktop Services profile management-- File shares on Azure VMware Solution
+- **NFS share:** Azure NetApp Files contributes to creating the volumes using NFS or dual protocol (NFS and SMB). A volume's capacity consumption counts against its pool's provisioned capacity. NFS can be mounted to the Linux server by using command lines or /etc/fstab entries (see the sketch after this list).
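Here's the minimal mount sketch referenced above, using the example export 10.22.3.4:/ANFVOLUME from the earlier volume walkthrough. The mount point and mount options are assumptions, and the script must run as root on a Linux VM with an NFS client installed.

```python
# Minimal sketch: mount the Azure NetApp Files NFS export from the example
# above on a Linux VM in the Azure VMware Solution environment. The mount
# point and options are assumptions; run as root with nfs-utils installed.
import subprocess

export = "10.22.3.4:/ANFVOLUME"  # NFS path shown on the volume's configuration page
mount_point = "/mnt/anfvolume"

subprocess.run(["mkdir", "-p", mount_point], check=True)
subprocess.run(
    [
        "mount", "-t", "nfs",
        "-o", "rw,hard,rsize=65536,wsize=65536,vers=3,tcp",
        export, mount_point,
    ],
    check=True,
)
print(f"Mounted {export} at {mount_point}")
```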
## Next steps
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reserved-instance.md
Now that you've covered reserved instance of Azure VMware Solution, you may want
- [Creating an Azure VMware Solution assessment](../migrate/how-to-create-azure-vmware-solution-assessment.md).
- [Configure DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md).
-- [Monitor and manage Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
+- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md).
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/rotate-cloudadmin-credentials.md
In this step, you'll update HCX Connector with the updated credentials.
Now that you've covered resetting vCenter Server and NSX-T Manager credentials for Azure VMware Solution, you may want to learn about:

- [Configure NSX network components in Azure VMware Solution](configure-nsx-network-components-azure-portal.md)
-- [Monitor and manage Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
+- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md)
- [Deploy disaster recovery of virtual machines using Azure VMware Solution](disaster-recovery-for-virtual-machines.md)
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-network-checklist.md
The subnets:
| Private cloud management | `/26` | `10.10.0.0/26` |
| HCX Mgmt Migrations | `/26` | `10.10.0.64/26` |
| Global Reach Reserved | `/26` | `10.10.0.128/26` |
-| ExpressRoute Reserved | `/27` | `10.10.0.192/27` |
+| NSX-T DNS Service | `/32` | `10.10.0.192/32` |
+| Reserved | `/32` | `10.10.0.193/32` |
+| Reserved | `/32` | `10.10.0.194/32` |
+| Reserved | `/32` | `10.10.0.195/32` |
+| Reserved | `/30` | `10.10.0.196/30` |
+| Reserved | `/29` | `10.10.0.200/29` |
+| Reserved | `/28` | `10.10.0.208/28` |
| ExpressRoute peering | `/27` | `10.10.0.224/27` |
| ESXi Management | `/25` | `10.10.1.0/25` |
| vMotion Network | `/25` | `10.10.1.128/25` |
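As a quick sanity check of this address plan, the sketch below verifies that a sample of these subnets nests inside the private cloud block; treating 10.10.0.0/22 as that block is an assumption inferred from the table.

```python
# Minimal sketch: check that the example subnets above nest inside the private
# cloud's address block (10.10.0.0/22 is an assumption based on the table).
import ipaddress

block = ipaddress.ip_network("10.10.0.0/22")
subnets = [
    "10.10.0.0/26",    # Private cloud management
    "10.10.0.64/26",   # HCX Mgmt Migrations
    "10.10.0.128/26",  # Global Reach Reserved
    "10.10.0.192/32",  # NSX-T DNS Service
    "10.10.0.224/27",  # ExpressRoute peering
    "10.10.1.0/25",    # ESXi Management
    "10.10.1.128/25",  # vMotion Network
]

for cidr in subnets:
    subnet = ipaddress.ip_network(cidr)
    assert subnet.subnet_of(block), f"{subnet} is outside {block}"
    print(subnet, "->", subnet.num_addresses, "addresses")
```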
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-delete-vault.md
Title: Delete a Microsoft Azure Recovery Services vault description: In this article, learn how to remove dependencies and then delete an Azure Backup Recovery Services vault. Previously updated : 04/26/2021 Last updated : 06/07/2021 # Delete an Azure Backup Recovery Services vault
backup Disk Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-support-matrix.md
You can use [Azure Backup](./backup-overview.md) to protect Azure Disks. This ar
## Supported regions
-Azure Disk Backup is available in all public cloud regions, expect France South, South Africa West, and is currently not available in Sovereign cloud regions. These regions will be announced when they become available.
+Azure Disk Backup is available in all public cloud regions, except France South, South Africa West, and is currently not available in Sovereign cloud regions. These regions will be announced when they become available.
## Limitations
backup Disk Backup Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/disk-backup-troubleshoot.md
Title: Troubleshooting backup failures in Azure Disk Backup description: Learn how to troubleshoot backup failures in Azure Disk Backup Previously updated : 01/07/2021 Last updated : 06/08/2021 # Troubleshooting backup failures in Azure Disk Backup
Error Message: Unable to start the operation as maximum number of allowed concurrent backups has been reached.
Recommended Action: Wait until the previous running backup completes.
+### Error Code: UserErrorMissingSubscriptionRegistration
+
+Error Message: The subscription is not registered to use namespace 'Microsoft.Compute'.
+
+Recommended Action: The required resource provider hasn't been registered for your subscription. Register both resource provider namespaces (_Microsoft.Compute_ and _Microsoft.Storage_) using the steps in [Solution 3](../azure-resource-manager/templates/error-register-resource-provider.md#solution-3azure-portal).
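For illustration, registering both namespaces can also be scripted with the `azure-mgmt-resource` package, as in this minimal sketch; the subscription ID is a placeholder.

```python
# Minimal sketch: register the resource provider namespaces this error calls
# for, using the Azure SDK for Python. The subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

for namespace in ("Microsoft.Compute", "Microsoft.Storage"):
    provider = client.providers.register(namespace)
    print(namespace, provider.registration_state)  # e.g. "Registering"
```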
+ ## Next steps -- [Azure Disk Backup support matrix](disk-backup-support-matrix.md)
+[Azure Disk Backup support matrix](disk-backup-support-matrix.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/language-support.md
Some features of Computer Vision support multiple languages; any features not me
Computer Vision's OCR APIs support several languages. They do not require you to specify a language code. See the [Optical Character Recognition (OCR) overview](overview-ocr.md) for more information.
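As a quick illustration, the following minimal sketch submits an image to the Read API without a language code and polls for the result, using the `azure-cognitiveservices-vision-computervision` package; the endpoint, key, and image URL are placeholder assumptions.

```python
# Minimal sketch: call the Read API with no language code and poll for the
# result. Endpoint, key, and image URL are placeholder assumptions.
import time

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient("<endpoint>", CognitiveServicesCredentials("<key>"))

# Submit the image; the operation ID comes back in the Operation-Location header.
raw_response = client.read("https://example.com/sample-image.png", raw=True)
operation_id = raw_response.headers["Operation-Location"].split("/")[-1]

result = client.get_read_result(operation_id)
while result.status not in (OperationStatusCodes.succeeded, OperationStatusCodes.failed):
    time.sleep(1)
    result = client.get_read_result(operation_id)

if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)
```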
-|Language| Language code | OCR API | Read 3.0/3.1 | Read v3.2 |
+|Language| Language code | Read 3.2 | OCR API | Read 3.0/3.1 |
|:--|:-:|:--:|:--:|:--:|
-|Afrikaans|`af`| | |✔ |
-|Albanian |`sq`| | |✔ |
-|Arabic | `ar`|✔ | | |
-|Asturian |`ast`| | |✔ |
-|Basque |`eu`| | |✔ |
-|Bislama |`bi`| | |✔ |
-|Breton |`br`| | |✔ |
-|Catalan |`ca`| | |✔ |
-|Cebuano |`ceb`| | |✔ |
-|Chamorro |`ch`| | |✔ |
-|Chinese (Simplified) | `zh-Hans`|✔ | |✔ |
-|Chinese (Traditional) | `zh-Hant`|✔ | |✔ |
-|Cornish |`kw`| | |✔ |
-|Corsican |`co`| | |✔ |
-|Crimean Tatar (Latin) |`crh`| | |✔ |
-|Czech | `cs` |✔ | |✔ |
-|Danish | `da` |✔ | |✔ |
+|Afrikaans|`af`|✔ | | |
+|Albanian |`sq`|✔ | | |
+|Arabic | `ar`| ✔ | | |
+|Asturian |`ast`|✔ | | |
+|Basque |`eu`| ✔ | | |
+|Bislama |`bi`|✔ | | |
+|Breton |`br`|✔ | | |
+|Catalan |`ca`|✔ | | |
+|Cebuano |`ceb`|✔ | | |
+|Chamorro |`ch`|✔ | | |
+|Chinese (Simplified) | `zh-Hans`|✔ |✔ | |
+|Chinese (Traditional) | `zh-Hant`|✔ |✔ | |
+|Cornish |`kw`|✔ | | |
+|Corsican |`co`|✔ | | |
+|Crimean Tatar (Latin) |`crh`| ✔ | | |
+|Czech | `cs` |✔ | ✔ | |
+|Danish | `da` |✔ | ✔ | |
|Dutch | `nl` |✔ |✔ |✔ |
|English | `en` |✔ |✔ |✔ |
-|Estonian |`et`| | |✔ |
-|Fijian |`fj`| | |✔ |
-|Filipino |`fil`| | |✔ |
-|Finnish | `fi` |✔ | |✔ |
+|Estonian |`et`|✔ | | |
+|Fijian |`fj`|✔ | | |
+|Filipino |`fil`|✔ | | |
+|Finnish | `fi` |✔ |✔ | |
|French | `fr` |✔ |✔ |✔ |
-|Friulian | `fur` | | |✔ |
-|Galician | `gl` | | |✔ |
+|Friulian | `fur` |✔ | | |
+|Galician | `gl` |✔ | | |
|German | `de` |✔ |✔ |✔ |
-|Gilbertese | `gil` | | |✔ |
-|Greek | `el` |✔ | | |
-|Greenlandic | `kl` | | |✔ |
-|Haitian Creole | `ht` | | |✔ |
-|Hani | `hni` | | |✔ |
-|Hmong Daw (Latin) | `mww` | | |✔ |
-|Hungarian | `hu` |✔ | | ✔ |
-|Indonesian | `id` | | |✔ |
-|Interlingua | `ia` | | |✔ |
-|Inuktitut (Latin) | `iu` | | |✔ |
-|Irish | `ga` | | |✔ |
+|Gilbertese | `gil` |✔ | | |
+|Greek | `el` | |✔ | |
+|Greenlandic | `kl` |✔ | | |
+|Haitian Creole | `ht` |✔ | | |
+|Hani | `hni` |✔ | | |
+|Hmong Daw (Latin) | `mww` | ✔ | | |
+|Hungarian | `hu` | ✔ |✔ | |
+|Indonesian | `id` |✔ | | |
+|Interlingua | `ia` |✔ | | |
+|Inuktitut (Latin) | `iu` | ✔ | | |
+|Irish | `ga` |✔ | | |
|Italian | `it` |✔ |✔ |✔ |
-|Japanese | `ja` |✔ | |✔ |
-|Javanese | `jv` | | |✔ |
-|K'iche' | `quc` | | |✔ |
-|Kabuverdianu | `kea` | | |✔ |
-|Kachin (Latin) | `kac` | | |✔ |
-|Kara-Kalpak | `kaa` | | |✔ |
-|Kashubian | `csb` | | |✔ |
-|Khasi | `kha` | | |✔ |
-|Korean | `ko` |✔ | |✔ |
-|Kurdish (latin) | `kur` | | |✔ |
-|Luxembourgish | `lb` | | |✔ |
-|Malay (Latin) | `ms` | | |✔ |
-|Manx | `gv` | | |✔ |
-|Neapolitan | `nap` | | |✔ |
-|Norwegian | `nb` |✔ | | |
-|Norwegian | `no` | | |✔ |
-|Occitan | `oc` | | |✔ |
-|Polish | `pl` |✔ | |✔ |
+|Japanese | `ja` |✔ |✔ | |
+|Javanese | `jv` |✔ | | |
+|K'iche' | `quc` |✔ | | |
+|Kabuverdianu | `kea` |✔ | | |
+|Kachin (Latin) | `kac` |✔ | | |
+|Kara-Kalpak | `kaa` | ✔ | | |
+|Kashubian | `csb` |✔ | | |
+|Khasi | `kha` | ✔ | | |
+|Korean | `ko` |✔ |✔ | |
+|Kurdish (latin) | `kur` |✔ | | |
+|Luxembourgish | `lb` | ✔ | | |
+|Malay (Latin) | `ms` | ✔ | | |
+|Manx | `gv` | ✔ | | |
+|Neapolitan | `nap` | ✔ | | |
+|Norwegian | `nb` | | ✔ | |
+|Norwegian | `no` | ✔ | | |
+|Occitan | `oc` | ✔ | | |
+|Polish | `pl` | ✔ |✔ | |
|Portuguese | `pt` |✔ |✔ |✔ |
-|Romanian | `ro` |✔ | | |
-|Romansh | `rm` | | |✔ |
-|Russian | `ru` |✔ | | |
-|Scots | `sco` | | |✔ |
-|Scottish Gaelic | `gd` | | |✔ |
-|Serbian (Cyrillic) | `sr-Cyrl` |✔ | | |
-|Serbian (Latin) | `sr-Latn` |✔ | | |
-|Slovak | `sk` |✔ | | |
-|Slovenian | `slv` | | |✔ |
+|Romanian | `ro` | | ✔ | |
+|Romansh | `rm` | ✔ | | |
+|Russian | `ru` | |✔ | |
+|Scots | `sco` | ✔ | | |
+|Scottish Gaelic | `gd` |✔ | | |
+|Serbian (Cyrillic) | `sr-Cyrl` | |✔ | |
+|Serbian (Latin) | `sr-Latn` | |✔ | |
+|Slovak | `sk` | |✔ | |
+|Slovenian | `slv` | ✔ | | |
|Spanish | `es` |✔ |✔ |✔ |
-|Swahili (Latin) | `sw` | | |✔ |
-|Swedish | `sv` |✔ | |✔ |
-|Tatar (Latin) | `tat` | | |✔ |
-|Tetum | `tet` | | |✔ |
-|Turkish | `tr` |✔ | |✔ |
-|Upper Sorbian | `hsb` | | |✔ |
-|Uzbek (Latin) | `uz` | | |✔ |
-|Volapük | `vo` | | |✔ |
-|Walser | `wae` | | |✔ |
-|Western Frisian | `fy` | | |✔ |
-|Yucatec Maya | `yua` | | |✔ |
-|Zhuang | `za` | | |✔ |
-|Zulu | `zu` | | |✔ |
+|Swahili (Latin) | `sw` |✔ | | |
+|Swedish | `sv` |✔ |✔ | |
+|Tatar (Latin) | `tat` | ✔ | | |
+|Tetum | `tet` |✔ | | |
+|Turkish | `tr` |✔ | ✔ | |
+|Upper Sorbian | `hsb` |✔ | | |
+|Uzbek (Latin) | `uz` |✔ | | |
+|Volapük | `vo` | ✔ | | |
+|Walser | `wae` | ✔ | | |
+|Western Frisian | `fy` | ✔ | | |
+|Yucatec Maya | `yua` | ✔ | | |
+|Zhuang | `za` |✔ | | |
+|Zulu | `zu` | ✔ | | |
## Image analysis
cognitive-services Spatial Analysis Camera Placement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-camera-placement.md
Title: Spatial Analysis camera placement
description: Learn how to set up a camera for use with Spatial Analysis -+ Previously updated : 01/12/2021- Last updated : 06/08/2021+
cognitive-services Spatial Analysis Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-container.md
Title: How to install and run the Spatial Analysis container - Computer Vision
description: The Spatial Analysis container lets you detect people and distances. -+ Previously updated : 01/12/2021- Last updated : 06/08/2021+ # Install and run the Spatial Analysis container (Preview)
cognitive-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-logging.md
Title: Telemetry and logging for Spatial Analysis containers
description: Spatial Analysis provides each container with a common configuration framework for insights, logging, and security settings. -+ Previously updated : 01/12/2021- Last updated : 06/08/2021+ # Telemetry and troubleshooting
cognitive-services Spatial Analysis Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-operations.md
Title: Spatial Analysis operations
description: The Spatial Analysis operations. -+ Previously updated : 01/12/2021- Last updated : 06/08/2021+ # Spatial Analysis operations
cognitive-services Spatial Analysis Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-web-app.md
Title: Deploy a Spatial Analysis web app
description: Learn how to use Spatial Analysis in a web application. -+ Previously updated : 01/12/2021- Last updated : 06/08/2021+ # How to: Deploy a Spatial Analysis web application
cognitive-services Spatial Analysis Zone Line Placement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-zone-line-placement.md
Title: Spatial Analysis zone and line placement
description: Learn how to set up zones and lines with Spatial Analysis -+ Previously updated : 09/01/2020- Last updated : 06/08/2021+
cognitive-services Export Model Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/export-model-python.md
After you have [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this quickstart will show you how to use this model locally to classify images.

> [!NOTE]
-> This tutorial applies only to models exported from image classification projects.
+> This tutorial applies only to models exported from "General (compact)" image classification projects. If you exported other models, please visit our [sample code repository](https://github.com/Azure-Samples/customvision-export-samples).
## Prerequisites
The results of running the image tensor through the model will then need to be m
Next, learn how to wrap your model into a mobile application: * [Use your exported Tensorflow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) * [Use your exported CoreML model in a Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
-* [Use your exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
+* [Use your exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
cognitive-services Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/concepts/face-detection.md
Use the following tips to make sure that your input images give the most accurat
* Faces outside the size range of 36 x 36 to 4096 x 4096 pixels will not be detected. * Some faces might not be detected because of technical challenges. Extreme face angles (head pose) or face occlusion (objects such as sunglasses or hands that block part of the face) can affect detection. Frontal and near-frontal faces give the best results.
+Input data with orientation information:
+* Some input images with JPEG format might contain orientation information in Exchangeable image file format (Exif) metadata. If Exif orientation is available, images will be automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face will be estimated based on the rotated image.
+* To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most image visualization tools auto-rotate the image according to its Exif orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right).
+
+![Two face images with/without rotation](../Images/image-rotation.png)
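If you need to apply the rotation yourself before drawing results, the following is a minimal sketch (assuming Python with the Pillow library; file names are placeholders):

```python
from PIL import Image, ImageOps

# Apply the Exif orientation tag (if any) before drawing face rectangles,
# so pixel coordinates returned by the service line up with what you render.
image = Image.open("photo.jpg")
upright = ImageOps.exif_transpose(image)  # no-op when no orientation tag is present
upright.save("photo-upright.jpg")
```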
+
If you're detecting faces from a video feed, you may be able to improve performance by adjusting certain settings on your video camera:

* **Smoothing**: Many video cameras apply a smoothing effect. You should turn this off if you can because it creates a blur between frames and reduces clarity.
cognitive-services How To Use Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-logging.md
More about data and file storage for Android applications is available [here](ht
#### iOS
-Only directories inside the application sandbox are accessible. Files can be created in the documents, library, and temp directories. Files in the documents directory can be made available to a user. The following code snippet shows creation of a log file in the application document directory:
+Only directories inside the application sandbox are accessible. Files can be created in the documents, library, and temp directories. Files in the documents directory can be made available to a user.
+
+If you are using Objective-C on iOS, use the following code snippet to create a log file in the application document directory:
```objc
NSString *filePath = [
    [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject]
    stringByAppendingPathComponent:@"logfile.txt"];
// Reconstructed continuation of the truncated snippet above; the property ID
// for enabling file logging is assumed to be SPXSpeechLogFilename.
[speechConfig setPropertyTo:filePath byId:SPXSpeechLogFilename];
```
To access a created file, add the below properties to the `Info.plist` property
<true/> ```
+If you are using Swift on iOS, please use the following code snippet to enable logs:
+```swift
+let documentsDirectoryPathString = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first!
+let documentsDirectoryPath = NSURL(string: documentsDirectoryPathString)!
+let logFilePath = documentsDirectoryPath.appendingPathComponent("swift.log")
+self.speechConfig!.setPropertyTo(logFilePath!.absoluteString, by: SPXPropertyId.speechLogFilename)
+```
More about the iOS file system is available [here](https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html).

## Next steps
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Custom Voice is available in the neural tier (a.k.a. Custom Neural Voice). Based
| Italian (Italy) | `it-IT` | Yes | Yes |
| Japanese (Japan) | `ja-JP` | Yes | Yes |
| Korean (Korea) | `ko-KR` | Yes | Yes |
+| Norwegian (Bokmål, Norway) | `nb-NO` | Yes | No |
| Portuguese (Brazil) | `pt-BR` | Yes | Yes |
| Spanish (Mexico) | `es-MX` | Yes | Yes |
| Spanish (Spain) | `es-ES` | Yes | Yes |
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Within the `speak` element, you can specify multiple voices for text-to-speech o
|--|--|--|
| `name` | Identifies the voice used for text-to-speech output. For a complete list of supported voices, see [Language support](language-support.md#text-to-speech). | Required |
-> [!IMPORTANT]
-> Multiple voices are incompatible with the word boundary feature. The word boundary feature needs to be disabled in order to use multiple voices.
-
-### Disable word boundary
-
-Depending on the Speech SDK language, you'll set the `"SpeechServiceResponse_Synthesis_WordBoundaryEnabled"` property to `false` on an instance of the `SpeechConfig` object.
-
-# [C#](#tab/csharp)
-
-For more information, see <a href="/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.setproperty" target="_blank"> `SetProperty` </a>.
-
-```csharp
-speechConfig.SetProperty(
- "SpeechServiceResponse_Synthesis_WordBoundaryEnabled", "false");
-```
-
-# [C++](#tab/cpp)
-
-For more information, see <a href="/cpp/cognitive-services/speech/speechconfig#setproperty" target="_blank"> `SetProperty` </a>.
-
-```cpp
-speechConfig->SetProperty(
- "SpeechServiceResponse_Synthesis_WordBoundaryEnabled", "false");
-```
-
-# [Java](#tab/java)
-
-For more information, see <a href="/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setproperty#com_microsoft_cognitiveservices_speech_SpeechConfig_setProperty_String_String_" target="_blank"> `setProperty` </a>.
-
-```java
-speechConfig.setProperty(
- "SpeechServiceResponse_Synthesis_WordBoundaryEnabled", "false");
-```
-
-# [Python](#tab/python)
-
-For more information, see <a href="/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#set-property-by-name-property-name--str--value--str-" target="_blank"> `set_property_by_name` </a>.
-
-```python
-speech_config.set_property_by_name(
- "SpeechServiceResponse_Synthesis_WordBoundaryEnabled", "false");
-```
-
-# [JavaScript](#tab/javascript)
-
-For more information, see <a href="/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#setproperty-string--string-" target="_blank"> `setProperty`</a>.
-
-```javascript
-speechConfig.setProperty(
- "SpeechServiceResponse_Synthesis_WordBoundaryEnabled", "false");
-```
-
-# [Objective-C](#tab/objectivec)
-
-For more information, see <a href="/objectivec/cognitive-services/speech/spxspeechconfiguration#setpropertytobyname" target="_blank"> `setPropertyTo` </a>.
-
-```objectivec
-[speechConfig setPropertyTo:@"false" byName:@"SpeechServiceResponse_Synthesis_WordBoundaryEnabled"];
-```
-
-# [Swift](#tab/swift)
-
-For more information, see <a href="/objectivec/cognitive-services/speech/spxspeechconfiguration#setpropertytobyname" target="_blank"> `setPropertyTo` </a>.
-
-```swift
-speechConfig!.setPropertyTo(
- "false", byName: "SpeechServiceResponse_Synthesis_WordBoundaryEnabled")
-```
---

**Example**

```xml
Currently, speaking style adjustments are supported for the following neural voi
* `zh-CN-XiaoxiaoNeural`
* `zh-CN-YunyangNeural`
* `zh-CN-YunyeNeural`
-* `zh-CN-YunxiNeural`
-* `zh-CN-XiaohanNeural`
-* `zh-CN-XiaomoNeural`
-* `zh-CN-XiaoxuanNeural`
+* `zh-CN-YunxiNeural`
+* `zh-CN-XiaohanNeural`
+* `zh-CN-XiaomoNeural`
+* `zh-CN-XiaoxuanNeural`
* `zh-CN-XiaoruiNeural`

The intensity of speaking style can be further changed to better fit your use case. You can specify a stronger or softer style with `styledegree` to make the speech more expressive or subdued. Currently, speaking style adjustments are supported for Chinese (Mandarin, Simplified) neural voices.
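As a rough illustration, here is a minimal sketch of sending an SSML document that uses `styledegree` through the service (assuming the Speech SDK for Python; the key, region, and sample sentence are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# A sad style, applied at double intensity via styledegree.
ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
  <voice name="zh-CN-XiaoxiaoNeural">
    <mstts:express-as style="sad" styledegree="2">
      快走吧,路上一定要注意安全,早去早回。
    </mstts:express-as>
  </voice>
</speak>
"""
result = synthesizer.speak_ssml_async(ssml).get()
```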
Above changes are applied at the sentence level, and styles and role-plays vary
<mstts:express-as role="string" style="string"></mstts:express-as>
```

> [!NOTE]
-> At the moment, `styledegree` only supports Chinese (Mandarin, Simplified) neural voices. `role` only supports zh-CN-XiaomoNeural and zh-CN-XiaoxuanNeural.
+> At the moment, `styledegree` only supports Chinese (Mandarin, Simplified) neural voices. `role` only supports zh-CN-XiaomoNeural and zh-CN-XiaoxuanNeural.
**Attributes**
To define how multiple entities are read, you can create a custom lexicon, which
<phoneme> bɛˈniːnji</phoneme> </lexeme> <lexeme>
- <grapheme>😀</grapheme>
- <alias>test emoji</alias>
+ <grapheme>😀</grapheme>
+ <alias>test emoji</alias>
  </lexeme>
</lexicon>
```
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-container-support.md
Previously updated : 05/13/2021 Last updated : 06/07/2021 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
Azure Cognitive Services containers provide the following set of Docker containe
| [Text Analytics][ta-containers-keyphrase] | **Key Phrase Extraction** ([image](https://go.microsoft.com/fwlink/?linkid=2018757&clcid=0x409)) | Extracts key phrases to identify the main points. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". | Preview |
| [Text Analytics][ta-containers-language] | **Text Language Detection** ([image](https://go.microsoft.com/fwlink/?linkid=2018759&clcid=0x409)) | For up to 120 languages, detects which language the input text is written in and reports a single language code for every document submitted on the request. The language code is paired with a score indicating the strength of the score. | Generally available |
| [Text Analytics][ta-containers-sentiment] | **Sentiment Analysis v3** ([image](https://go.microsoft.com/fwlink/?linkid=2018654&clcid=0x409)) | Analyzes raw text for clues about positive or negative sentiment. This version of sentiment analysis returns sentiment labels (for example *positive* or *negative*) for each document and sentence within it. | Generally available |
-| [Text Analytics][ta-containers-health] | **Text Analytics for health** | Extract and label medical information from unstructured clinical text. | Gated preview. [Request access][request-access]. |
+| [Text Analytics][ta-containers-health] | **Text Analytics for health** | Extract and label medical information from unstructured clinical text. | Preview |
| [Translator][tr-containers] | **Translator** | Translate text in several languages and dialects. | Gated preview. [Request access][request-access]. | ### Speech containers
cognitive-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/label-tool.md
To complete this quickstart, you must have:
## Try it out
-To try out the Form Recognizer Sample Labeling Tool online, go to the [FOTT website](https://fott-preview.azurewebsites.net/).
+To try out the Form Recognizer Sample Labeling Tool online, go to the [FOTT website](https://fott-2.1.azurewebsites.net/).
-### [v2.1 preview](#tab/v2-1)
+### [v2.1](#tab/v2-1)
> [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://fott-preview.azurewebsites.net/)
+> [Try Prebuilt Models](https://fott-2.1.azurewebsites.net/)
### [v2.0](#tab/v2-0)
You'll use the Docker engine to run the sample labeling tool. Follow these steps
1. Get the sample labeling tool container with the `docker pull` command.
-### [v2.1 preview](#tab/v2-1)
+### [v2.1](#tab/v2-1)
```console
- docker pull mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-preview
+ docker pull mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest
```

### [v2.0](#tab/v2-0)
docker pull mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool
</br> 3. Now you're ready to run the container with `docker run`.
-### [v2.1 preview](#tab/v2-1)
+### [v2.1](#tab/v2-1)
```console
- docker run -it -p 3000:80 mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-preview eula=accept
+ docker run -it -p 3000:80 mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest eula=accept
```

### [v2.0](#tab/v2-0)
In v2.1, if your training document does not have a value filled in, you can draw
Next, you'll create tags (labels) and apply them to the text elements that you want the model to analyze.
-### [v2.1 preview](#tab/v2-1)
+### [v2.1](#tab/v2-1)
1. First, use the tags editor pane to create the tags you'd like to identify. 1. Select **+** to create a new tag.
The following value types and variations are currently supported:
* `time`
* `integer`
-* `selectionMark` ΓÇô _New in v2.1-preview.1!_
+* `selectionMark`
> [!NOTE]
> See these rules for date formatting:
After training finishes, examine the **Average Accuracy** value. If it's low, yo
## Compose trained models
-### [v2.1 preview](#tab/v2-1)
+### [v2.1](#tab/v2-1)
With Model Compose, you can compose up to 100 models to a single model ID. When you call Analyze with the composed `modelID`, Form Recognizer will first classify the form you submitted, choose the best matching model, and then return results for that model. This operation is useful when incoming forms may belong to one of several templates.
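For example, a minimal sketch of composing models programmatically (assuming the `azure-ai-formrecognizer` Python SDK; the endpoint, key, and model IDs are placeholders):

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import FormTrainingClient

client = FormTrainingClient("<endpoint>", AzureKeyCredential("<key>"))

# Compose several trained custom models into a single model ID.
poller = client.begin_create_composed_model(
    ["<model-id-1>", "<model-id-2>"], model_name="my-composed-model")
composed_model = poller.result()
print(composed_model.model_id)  # use this ID when calling Analyze
```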
Choose the **Compose button**. In the pop-up, name your new composed model and s
### [v2.0](#tab/v2-0)
-This feature is currently available in v2.1. preview.
+This feature is only available in v2.1.
cognitive-services Model Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/model-versioning.md
Use the table below to find which model versions are supported by each hosted en
| `/sentiment` | `2019-10-01`, `2020-04-01` | `2020-04-01` |
| `/languages` | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-05` | `2021-01-05` |
| `/entities/linking` | `2019-10-01`, `2020-02-01` | `2020-02-01` |
-| `/entities/recognition/general` | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2021-01-15` | `2021-01-15` |
+| `/entities/recognition/general` | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2021-01-15`,`2021-06-01` | `2021-06-01` |
| `/entities/recognition/pii` | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2020-07-01`, `2021-01-15` | `2021-01-15` |
| `/entities/health` | `2021-05-15` | `2021-05-15` |
| `/keyphrases` | `2019-10-01`, `2020-07-01`, `2021-06-01` | `2021-06-01` |
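To pin a request to a specific model version rather than the endpoint default, you can pass the version explicitly. A minimal sketch (assuming the `azure-ai-textanalytics` Python SDK; the endpoint and key are placeholders):

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient("<endpoint>", AzureKeyCredential("<key>"))

# Request the 2021-06-01 NER model explicitly instead of the default version.
result = client.recognize_entities(
    ["Contoso hired a new data engineer in Seattle."],
    model_version="2021-06-01")
for doc in result:
    for entity in doc.entities:
        print(entity.text, entity.category)
```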
cognitive-services Text Analytics For Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-for-health.md
Previously updated : 06/03/2021 Last updated : 06/07/2021
See the [entity categories](../named-entity-types.md?tabs=health) returned by Te
Text Analytics for health only supports English language documents.
-## Request access to the public preview
-
-Fill out and submit the [Cognitive Services request form](https://aka.ms/csgate) to request access to the Text Analytics for health public preview. You will not be billed for Text Analytics for health usage.
-
-The form requests information about you, your company, and the user scenario for which you'll use the container. After you submit the form, the Azure Cognitive Services team will review it and email you with a decision.
-
-> [!IMPORTANT]
-> * On the form, you must use an email address associated with an Azure subscription ID.
-> * The Azure resource you use must have been created with the approved Azure subscription ID.
-> * Check your email (both inbox and junk folders) for updates on the status of your application from Microsoft.
## Using the Docker container

To run the Text Analytics for health container in your own environment, follow these [instructions to download and install the container](../how-tos/text-analytics-how-to-install-containers.md?tabs=healthcare).
cognitive-services Text Analytics How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers.md
Previously updated : 03/29/2021 Last updated : 06/02/2021 keywords: on-premises, Docker, container, sentiment analysis, natural language processing
keywords: on-premises, Docker, container, sentiment analysis, natural language p
> [!NOTE]
> * The containers for Sentiment Analysis and language detection are now generally available. The key phrase extraction container is available as an ungated public preview.
> * Entity linking and NER are not currently available as a container.
-> * Accessing the Text Analytics for health container requires a [request form](https://aka.ms/csgate). Currently, you will not be billed for its usage.
> * The container image locations may have recently changed. Read this article to see the updated location for this container.

Containers enable you to run the Text Analytics APIs in your own environment and are great for your specific security and data governance requirements. The Text Analytics containers provide advanced natural language processing over raw text, and include three main functions: sentiment analysis, key phrase extraction, and language detection.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/whats-new.md
The Text Analytics API is updated on an ongoing basis. To stay up-to-date with r
### General API updates

* New model-version `2021-06-01` for key phrase extraction, which adds support for simplified Chinese.
+* The `2021-06-01` model version for [Named Entity Recognition](how-tos/text-analytics-how-to-entity-linking.md) v3.x, which provides
+ * Improved AI quality and expanded language support for the *Skill* entity category.
+ * Added Spanish, French, German, Italian, and Portuguese language support for the *Skill* entity category.
+* Asynchronous operation and Text Analytics for health are available in all regions.
### Text Analytics for health updates
+* You no longer need to apply for access to preview Text Analytics for health.
* A new model version `2021-05-15` for the `/health` endpoint and on-premise container, which provides:
  * 5 new entity types: `ALLERGEN`, `CONDITION_SCALE`, `COURSE`, `EXPRESSION`, and `MUTATION_TYPE`
  * 14 new relation types
  * Assertion detection expanded for new entity types
  * Linking support for the `ALLERGEN` entity type
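A minimal sketch of calling the `/health` endpoint through the client library (assuming the `azure-ai-textanalytics` Python SDK, version 5.1 or later; the endpoint and key are placeholders):

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient("<endpoint>", AzureKeyCredential("<key>"))

# Extract and label medical entities from unstructured clinical text.
poller = client.begin_analyze_healthcare_entities(
    ["The patient reported an allergy to penicillin."])
for doc in poller.result():
    for entity in doc.entities:
        print(entity.text, entity.category)
```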
+
## May 2021

* [Custom question answering](../qnamaker/custom-question-answering.md) (previously QnA Maker) can now be accessed using a Text Analytics resource.
confidential-ledger Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-ledger/quickstart-template.md
Last updated 04/15/2021
-# [![Deploy To Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-confidential-ledger-create%2Fazuredeploy.json)
# Quickstart: Create a Microsoft Azure Confidential Ledger with an ARM template
-[Microsoft Azure Confidential Ledger](overview.md) is a new and highly secure service for managing sensitive data records. This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create a new ledger.
+[Microsoft Azure Confidential Ledger](overview.md) is a new and highly secure service for managing sensitive data records. This quickstart describes how to use an Azure Resource Manager template (ARM template) to create a new ledger.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]

If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy To Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-confidential-ledger-create%2Fazuredeploy.json)
+[![Deploy To Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.confidentialledger%2Fconfidential-ledger-create%2Fazuredeploy.json)
## Prerequisites
+### Azure subscription
+ If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+### Register the resource provider
++
+### Obtain your principal ID
+
+The template requires a principal ID. You can obtain your principal ID by running the Azure CLI [az ad sp list](/cli/azure/ad/sp#az_ad_sp_list) command with the `--show-mine` flag:
+
+```azurecli-interactive
+az ad sp list --show-mine -o table
+```
+
+Your principal ID is shown in the "ObjectId" column.
+ ## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates).
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates).
+ Azure resources defined in the template:
Azure resources defined in the template:
1. Select the following image to sign in to Azure and open the template.
- [![Deploy To Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-confidential-ledger-create%2Fazuredeploy.json)
+ [![Deploy To Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.confidentialledger%2Fconfidential-ledger-create%2Fazuredeploy.json)
1. Select or enter the following values.
Azure resources defined in the template:
- **Ledger name**: Select a name for your ledger. Ledger names must be globally unique.
- **Location**: Select a location. For example, **East US**.
+ - **PrincipalId**: Provide the Principal ID you noted in the [Prerequisites](#obtain-your-principal-id) section above.
1. Select **Purchase**. After the Confidential Ledger resource has been deployed successfully, you will receive a notification.
Write-Host "Press [ENTER] to continue..."
## Next steps
-In this quickstart, you created an Confidential Ledger resource using an ARM template and validated the deployment. To learn more about the service, see [Overview of Microsoft Azure Confidential Ledger](overview.md).
--
+In this quickstart, you created a Confidential Ledger resource using an ARM template and validated the deployment. To learn more about the service, see [Overview of Microsoft Azure Confidential Ledger](overview.md).
cosmos-db Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/change-feed.md
Previously updated : 04/08/2020 Last updated : 06/07/2021
Change feed is available for each logical partition key within the container, an
* The change feed includes inserts and update operations made to items within the container. You can capture deletes by setting a "soft-delete" flag within your items (for example, documents) in place of deletes. Alternatively, you can set a finite expiration period for your items with the [TTL capability](time-to-live.md): for example, 24 hours. You can then use the value of that property to capture deletes (see the sketch after this list). With this solution, you have to process the changes within a shorter time interval than the TTL expiration period.
-* Each change to an item appears exactly once in the change feed, and the clients must manage the checkpointing logic. If you want to avoid the complexity of managing checkpoints, the change feed processor provides automatic checkpointing and "at least once" semantics. See [using change feed with change feed processor](change-feed-processor.md).
- * Only the most recent change for a given item is included in the change log. Intermediate changes may not be available.
+* Each change included in the change log appears exactly once in the change feed, and the clients must manage the checkpointing logic. If you want to avoid the complexity of managing checkpoints, the change feed processor provides automatic checkpointing and "at least once" semantics. See [using change feed with change feed processor](change-feed-processor.md).
+* The change feed is sorted by the order of modification within each logical partition key value. There is no guaranteed order across the partition key values.
* Changes can be synchronized from any point in time; that is, there is no fixed data retention period for which changes are available.
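A minimal sketch of the soft-delete pattern described above (assuming the `azure-cosmos` Python SDK and a container with TTL enabled; the names are placeholders):

```python
from azure.cosmos import CosmosClient

client = CosmosClient("<account-uri>", credential="<key>")
container = client.get_database_client("<database>").get_container_client("<container>")

# Instead of deleting the item, flag it and let TTL expire it, so the
# change feed can observe the "delete" before the item disappears.
item = container.read_item(item="<item-id>", partition_key="<partition-key>")
item["deleted"] = True   # soft-delete flag for change feed consumers to filter on
item["ttl"] = 86400      # expire after 24 hours (requires TTL enabled on the container)
container.upsert_item(item)
```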
cosmos-db Continuous Backup Restore Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-resource-model.md
description: This article explains the resource model for the Azure Cosmos DB po
Previously updated : 02/22/2021 Last updated : 06/08/2021
This resource contains a database account instance that can be restored. The dat
| restorableLocations: creationTime | The time in UTC when the regional account was created.|
| restorableLocations: deletionTime | The time in UTC when the regional account was deleted. This value is empty if the regional account is live.|
-To get a list of all restorable accounts, see [Restorable Database Accounts - list](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorabledatabaseaccounts/list) or [Restorable Database Accounts- list by location](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorabledatabaseaccounts/listbylocation) articles.
+To get a list of all restorable accounts, see [Restorable Database Accounts - list](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorabledatabaseaccounts/list) or [Restorable Database Accounts- list by location](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorabledatabaseaccounts/listbylocation) articles.
### Restorable SQL database
Each resource contains information of a mutation event such as creation and dele
| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li>Create: database creation event</li><li>Delete: database deletion event</li><li>Replace: database modification event</li><li>SystemOperation: database modification event triggered by the system. This event is not initiated by the user</li></ul> |
| database |The properties of the SQL database at the time of the event|
-To get a list of all database mutations, see [Restorable Sql Databases - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablesqldatabases/list) article.
+To get a list of all database mutations, see [Restorable Sql Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorablesqldatabases/list) article.
### Restorable SQL container
Each resource contains information of a mutation event such as creation and dele
| operationType | The operation type of this container event. Here are the possible values: <br/><ul><li>Create: container creation event</li><li>Delete: container deletion event</li><li>Replace: container modification event</li><li>SystemOperation: container modification event triggered by the system. This event is not initiated by the user</li></ul> |
| container | The properties of the SQL container at the time of the event.|
-To get a list of all container mutations under the same database, see [Restorable Sql Containers - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablesqlcontainers/list) article.
+To get a list of all container mutations under the same database, see [Restorable Sql Containers - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorablesqlcontainers/list) article.
### Restorable SQL resources
Each resource represents a single database and all the containers under that dat
| databaseName | The name of the SQL database. |
| collectionNames | The list of SQL containers under this database.|
-To get a list of SQL database and container combo that exist on the account at the given timestamp and location, see [Restorable Sql Resources - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablesqlresources/list) article.
+To get a list of SQL database and container combo that exist on the account at the given timestamp and location, see [Restorable Sql Resources - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorablesqlresources/list) article.
### Restorable MongoDB database
Each resource contains information of a mutation event such as creation and dele
| ownerResourceId | The resource ID of the MongoDB database. |
| operationType | The operation type of this database event. Here are the possible values:<br/><ul><li> Create: database creation event</li><li> Delete: database deletion event</li><li> Replace: database modification event</li><li> SystemOperation: database modification event triggered by the system. This event is not initiated by the user </li></ul> |
-To get a list of all database mutation, see [Restorable Mongodb Databases - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablemongodbdatabases/list) article.
+To get a list of all database mutation, see [Restorable Mongodb Databases - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorablemongodbdatabases/list) article.
### Restorable MongoDB collection
Each resource contains information of a mutation event such as creation and dele
| ownerResourceId | The resource ID of the MongoDB collection. |
| operationType |The operation type of this collection event. Here are the possible values:<br/><ul><li>Create: collection creation event</li><li>Delete: collection deletion event</li><li>Replace: collection modification event</li><li>SystemOperation: collection modification event triggered by the system. This event is not initiated by the user</li></ul> |
-To get a list of all container mutations under the same database, see [Restorable Mongodb Collections - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablemongodbcollections/list) article.
+To get a list of all container mutations under the same database, see [Restorable Mongodb Collections - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorablemongodbcollections/list) article.
### Restorable MongoDB resources
Each resource represents a single database and all the collections under that da
| databaseName |The name of the MongoDB database. |
| collectionNames | The list of MongoDB collections under this database. |
-To get a list of all MongoDB database and collection combinations that exist on the account at the given timestamp and location, see [Restorable Mongodb Resources - List](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/restorablemongodbresources/list) article.
+To get a list of all MongoDB database and collection combinations that exist on the account at the given timestamp and location, see [Restorable Mongodb Resources - List](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorablemongodbresources/list) article.
## Next steps
cosmos-db Cosmos Db Advanced Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cosmos-db-advanced-threat-protection.md
description: Learn how Azure Cosmos DB provides encryption of data at rest and h
Previously updated : 12/13/2019 Last updated : 06/08/2021
Use the following PowerShell cmdlets:
Use an Azure Resource Manager (ARM) template to set up Cosmos DB with Advanced Threat Protection enabled. For more information, see
-[Create a CosmosDB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/201-cosmosdb-advanced-threat-protection-create-account/).
+[Create a CosmosDB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/cosmosdb-advanced-threat-protection-create-account/).
# [Azure Policy](#tab/azure-policy)
cosmos-db How To Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-configure-private-endpoints.md
description: Learn how to set up Azure Private Link to access an Azure Cosmos ac
Previously updated : 06/01/2021 Last updated : 06/08/2021
The following situations and outcomes are possible when you use Private Link in
As described in the previous section, and unless specific firewall rules have been set, adding a private endpoint makes your Azure Cosmos account accessible through private endpoints only. This means that the Azure Cosmos account could be reached from public traffic after it is created and before a private endpoint gets added. To make sure that public network access is disabled even before the creation of private endpoints, you can set the `publicNetworkAccess` flag to `Disabled` during account creation. Note that this flag takes precedence over any IP or virtual network rule; all public and virtual network traffic is blocked when the flag is set to `Disabled`, even if the source IP or virtual network is allowed in the firewall configuration.
-See [this Azure Resource Manager template](https://azure.microsoft.com/resources/templates/101-cosmosdb-private-endpoint/) for an example showing how to use this flag.
+See [this Azure Resource Manager template](https://azure.microsoft.com/resources/templates/cosmosdb-private-endpoint/) for an example showing how to use this flag.
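A minimal sketch of the same idea from code (assuming the `azure-mgmt-cosmosdb` Python SDK; the subscription, resource group, and account names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create the account with public network access disabled from the start,
# so it's only reachable once private endpoints have been added.
poller = client.database_accounts.begin_create_or_update(
    "<resource-group>",
    "<account-name>",
    {
        "location": "westus",
        "locations": [{"location_name": "westus"}],
        "database_account_offer_type": "Standard",
        "public_network_access": "Disabled",
    },
)
account = poller.result()
```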
## Adding private endpoints to an existing Cosmos account with no downtime
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-rbac.md
description: Learn how to configure role-based access control with Azure Active
Previously updated : 06/01/2021 Last updated : 06/08/2021
When creating a custom role definition, you need to provide:
> [!NOTE] > The operations described below are available in: > - Azure PowerShell: [Az.CosmosDB version 1.2.0](https://www.powershellgallery.com/packages/Az.CosmosDB/1.2.0) or higher
-> - [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli): version 2.24.0 or higher
+> - [Azure CLI](/cli/azure/install-azure-cli): version 2.24.0 or higher
### Using Azure PowerShell
az cosmosdb sql role definition list --account-name $accountName --resource-grou
### Using Azure Resource Manager templates
-See [this page](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/sqlresources2/createupdatesqlroledefinition) for a reference and examples of using Azure Resource Manager templates to create role definitions.
+See [this page](/rest/api/cosmos-db-resource-provider/2021-04-15/sqlresources2/createupdatesqlroledefinition) for a reference and examples of using Azure Resource Manager templates to create role definitions.
## <a id="role-assignments"></a> Create role assignments
You can associate built-in or custom role definitions with your Azure AD identit
> [!NOTE] > The operations described below are available in: > - Azure PowerShell: [Az.CosmosDB version 1.2.0](https://www.powershellgallery.com/packages/Az.CosmosDB/1.2.0) or higher
-> - [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli): version 2.24.0 or higher
+> - [Azure CLI](/cli/azure/install-azure-cli): version 2.24.0 or higher
### Using Azure PowerShell
az cosmosdb sql role assignment create --account-name $accountName --resource-gr
### Using Azure Resource Manager templates
-See [this page](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/sqlresources2/createupdatesqlroleassignment) for a reference and examples of using Azure Resource Manager templates to create role assignments.
+See [this page](/rest/api/cosmos-db-resource-provider/2021-04-15/sqlresources2/createupdatesqlroleassignment) for a reference and examples of using Azure Resource Manager templates to create role assignments.
## Initialize the SDK with Azure AD
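A minimal sketch of initializing the client with an Azure AD credential instead of an account key (assuming the `azure-cosmos` and `azure-identity` Python packages; the account URI is a placeholder):

```python
from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

# The identity must hold one of the Cosmos DB SQL role assignments created above.
credential = DefaultAzureCredential()
client = CosmosClient("https://<account>.documents.azure.com:443/", credential=credential)
```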
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/managed-identity-based-authentication.md
Previously updated : 06/02/2021 Last updated : 06/08/2021
az role assignment create --assignee $principalId --role "DocumentDB Account Con
Now we have a function app that has a system-assigned managed identity with the **DocumentDB Account Contributor** role in the Azure Cosmos DB permissions. The following function app code will get the Azure Cosmos DB keys, create a CosmosClient object, get the temperature of the aquarium, and then save this to Azure Cosmos DB.
-This sample uses the [List Keys API](/rest/api/cosmos-db-resource-provider/2021-03-15/databaseaccounts/listkeys) to access your Azure Cosmos DB account keys.
+This sample uses the [List Keys API](/rest/api/cosmos-db-resource-provider/2021-04-15/databaseaccounts/listkeys) to access your Azure Cosmos DB account keys.
> [!IMPORTANT]
-> If you want to [assign the Cosmos DB Account Reader](#grant-access-to-your-azure-cosmos-account) role, you'll need to use the [List Read Only Keys API](/rest/api/cosmos-db-resource-provider/2021-03-15/databaseaccounts/listreadonlykeys). This will populate just the read-only keys.
+> If you want to [assign the Cosmos DB Account Reader](#grant-access-to-your-azure-cosmos-account) role, you'll need to use the [List Read Only Keys API](/rest/api/cosmos-db-resource-provider/2021-04-15/databaseaccounts/listreadonlykeys). This will populate just the read-only keys.
The List Keys API returns the `DatabaseAccountListKeysResult` object. This type isn't defined in the C# libraries. The following code shows the implementation of this class:
cosmos-db Sql Api Java Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-java-sdk-samples.md
Previously updated : 09/23/2020 Last updated : 06/08/2021
The [Collection CRUD Samples](https://github.com/Azure/azure-documentdb-java/blo
## Autoscale collection examples
-To learn more about autoscale before running these samples, take a look at these instructions for enabling autoscale in your [account](https://azure.microsoft.com/resources/templates/101-cosmosdb-sql-autoscale/) and in your [databases and containers](./provision-throughput-autoscale.md).
+To learn more about autoscale before running these samples, take a look at these instructions for enabling autoscale in your [account](https://azure.microsoft.com/resources/templates/cosmosdb-sql-autoscale/) and in your [databases and containers](./provision-throughput-autoscale.md).
The [autoscale Database CRUD Samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/sync/AutoscaleDatabaseCRUDQuickstart.java) file shows how to perform the following tasks.
cosmos-db Sql Api Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java-spark-v3.md
developers to work with data using a variety of standard APIs, such as SQL, Mong
## Documentation -- [Getting started](https://github.com/Azure/azure-sdk-for-jav)-- [Catalog API](https://github.com/Azure/azure-sdk-for-jav)-- [Configuration Parameter Reference](https://github.com/Azure/azure-sdk-for-jav)
+- [Getting started](https://github.com/Azure/azure-sdk-for-jav)
+- [Catalog API](https://github.com/Azure/azure-sdk-for-jav)
+- [Configuration Parameter Reference](https://github.com/Azure/azure-sdk-for-jav)
## Version compatibility
cost-management-billing Tutorial User Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/tutorial-user-access.md
You cannot remove yourself as a user.
## Delete or export personal data
-If you want to delete or export personal data from Cloudyn, you need to create a support ticket. When the support ticket is created, it acts as formal request - a Data Subject Request. Microsoft then takes prompt action to remove the account and delete any customer or personal data. To learn about how you can request to have your data deleted or exported, see [Data Subject Requests of Cloudyn Data](https://www.cloudyn.com/cloudyn-gdpr-requests).
+If you want to delete or export personal data from Cloudyn, you need to create a support ticket. When the support ticket is created, it acts as a formal request - a Data Subject Request. Microsoft then takes prompt action to remove the account and delete any customer or personal data.
## Create and manage entities
cost-management-billing Aws Integration Set Up Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
Title: Set up AWS integration with Azure Cost Management
description: This article walks you through setting up and configuring AWS Cost and Usage report integration with Azure Cost Management. Previously updated : 05/10/2021 Last updated : 06/08/2021
Watch the video [How to set up Connectors for AWS in Cost Management](https://ww
## Create a Cost and Usage report in AWS
-Using a Cost and Usage report is the AWS-recommended way to collect and process AWS costs. For more information, see the [AWS Cost and Usage Report](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports-costusage.html) documentation.
+Using a Cost and Usage report is the AWS-recommended way to collect and process AWS costs. The Cost Management cross cloud connector supports cost and usage reports configured at the management (consolidated) account level. For more information, see the [AWS Cost and Usage Report](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports-costusage.html) documentation.
Use the **Cost & Usage Reports** page of the Billing and Cost Management console in AWS to create a Cost and Usage report with the following steps:
If you don't specify a prefix, the default prefix is the name that you specified
It can take up to 24 hours for AWS to start delivering reports to your Amazon S3 bucket. After delivery starts, AWS updates the AWS Cost and Usage report files at least once a day. You can continue configuring your AWS environment without waiting for delivery to start.
+> [!NOTE]
+> Cost and usage reports configured at the member (linked) account level aren't currently supported.
+
## Create a role and policy in AWS

Azure Cost Management accesses the S3 bucket where the Cost and Usage report is located several times a day. The service needs access to credentials to check for new data. You create a role and policy in AWS to allow Cost Management to access it.
cost-management-billing Migrate Cost Management Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/migrate-cost-management-api.md
If you use any existing EA APIs, you need to update them to support MCA billing
| Purpose | Old offering | New offering |
| --- | --- | --- |
-| Cloudyn | [Cloudyn.com](https://www.cloudyn.com) | [Azure Cost Management](https://azure.microsoft.com/services/cost-management/) |
+| Cloudyn | Cloudyn | [Azure Cost Management](https://azure.microsoft.com/services/cost-management/) |
| Power BI | [Microsoft Consumption Insights](/power-bi/desktop-connect-azure-consumption-insights) content pack and connector | [Azure Consumption Insights connector](/power-bi/desktop-connect-azure-consumption-insights) |

## APIs to get balance and credits
To get reservation summaries with the Reservation Summaries API:
## Move from Cloudyn to Cost Management
-Organizations using [Cloudyn](https://cloudyn.com) should start using [Azure Cost Management](https://azure.microsoft.com/services/cost-management/) for any cost management needs. Cost Management is available in the Azure portal with no onboarding and an eight-hour latency. For more information, see the [Cost Management documentation](../index.yml).
+Organizations using Cloudyn should start using [Azure Cost Management](https://azure.microsoft.com/services/cost-management/) for any cost management needs. Cost Management is available in the Azure portal with no onboarding and an eight-hour latency. For more information, see the [Cost Management documentation](../index.yml).
With Azure Cost Management, you can:
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
Later in this article, you'll give permission to the Azure AD app to act by usin
## Assign enrollment account role permission to the SPN
-1. Read the [Role Assignments - Put](/rest/api/billing/2019-10-01-preview/roleassignments/put) REST API article. While you read the article, select **Try it** to get started by using the SPN.
+1. Read the [Role Assignments - Put](/rest/api/billing/2019-10-01-preview/role-assignments/put) REST API article. While you read the article, select **Try it** to get started by using the SPN.
:::image type="content" source="./media/assign-roles-azure-service-principals/put-try-it.png" alt-text="Screenshot showing the Try It option in the Put article." lightbox="./media/assign-roles-azure-service-principals/put-try-it.png" :::
Later in this article, you'll give permission to the Azure AD app to act by usin
- `billingRoleAssignmentName`: This parameter is a unique GUID that you need to provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command. You can also use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
- - `api-version`: Use the **2019-10-01-preview** version. Use the sample request body at [Role Assignments - Put - Examples](/rest/api/billing/2019-10-01-preview/roleassignments/put#examples).
+ - `api-version`: Use the **2019-10-01-preview** version. Use the sample request body at [Role Assignments - Put - Examples](/rest/api/billing/2019-10-01-preview/role-assignments/put#examples).
The request body has JSON code with three parameters that you need to use.
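As an illustration, here is a minimal sketch of issuing the PUT request outside the Try It experience (assuming Python with `requests`; the body's property names follow the linked REST reference, and the role definition ID, principal IDs, and access token are placeholders):

```python
import uuid
import requests

billing_account = "<billingAccountName>"
assignment_name = str(uuid.uuid4())  # billingRoleAssignmentName must be a unique GUID

url = (
    "https://management.azure.com/providers/Microsoft.Billing/"
    f"billingAccounts/{billing_account}/billingRoleAssignments/"
    f"{assignment_name}?api-version=2019-10-01-preview"
)
body = {
    "properties": {
        "principalId": "<spn-object-id>",
        "principalTenantId": "<tenant-id>",
        "roleDefinitionId": (
            f"/providers/Microsoft.Billing/billingAccounts/{billing_account}"
            "/billingRoleDefinitions/<role-definition-id>"
        ),
    }
}
response = requests.put(url, json=body, headers={"Authorization": "Bearer <access-token>"})
print(response.status_code)
```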
For the EA purchaser role, use the same steps for the enrollment reader. Specify
## Assign the department reader role to the SPN
-1. Read the [Enrollment Department Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollmentdepartmentroleassignments/put) REST API article. While you read the article, select **Try it**.
+1. Read the [Enrollment Department Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollment-department-role-assignments/put) REST API article. While you read the article, select **Try it**.
:::image type="content" source="./media/assign-roles-azure-service-principals/enrollment-department-role-assignments-put-try-it.png" alt-text="Screenshot showing the Try It option in the Enrollment Department Role Assignments Put article." lightbox="./media/assign-roles-azure-service-principals/enrollment-department-role-assignments-put-try-it.png" :::
For the EA purchaser role, use the same steps for the enrollment reader. Specify
:::image type="content" source="./media/assign-roles-azure-service-principals/department-id.png" alt-text="Screenshot showing an example department ID." lightbox="./media/assign-roles-azure-service-principals/department-id.png" :::
- - `api-version`: Use the **2019-10-01-preview** version. Use the sample at [Enrollment Department Role Assignments - Put](/billing/2019-10-01-preview/enrollmentdepartmentroleassignments/put).
+ - `api-version`: Use the **2019-10-01-preview** version. Use the sample at [Enrollment Department Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollment-department-role-assignments/put).
The request body has JSON code with three parameters that you need to use.
Now you can use the SPN to automatically access EA APIs. The SPN has the Departm
## Assign the subscription creator role to the SPN
-1. Read the [Enrollment Account Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put) article. While you read it, select **Try It** to assign the subscription creator role to the SPN.
+1. Read the [Enrollment Account Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollment-account-role-assignments/put) article. While you read it, select **Try It** to assign the subscription creator role to the SPN.
:::image type="content" source="./media/assign-roles-azure-service-principals/enrollment-department-role-assignments-put-try-it.png" alt-text="Screenshot showing the Try It option in the Enrollment Account Role Assignments Put article." lightbox="./media/assign-roles-azure-service-principals/enrollment-department-role-assignments-put-try-it.png" ::: 1. Use your account credentials to sign in to the tenant with the enrollment access that you want to assign.
-1. Provide the following parameters as part of the API request. Read the article at [Enrollment Account Role Assignments - Put - URI Parameters](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put#uri-parameters).
+1. Provide the following parameters as part of the API request. Read the article at [Enrollment Account Role Assignments - Put - URI Parameters](/rest/api/billing/2019-10-01-preview/enrollment-account-role-assignments/put#uri-parameters).
- `billingAccountName`: This parameter is the **Billing account ID**. You can find it in the Azure portal on the **Cost Management + Billing overview** page.
Now you can use the SPN to automatically access EA APIs. The SPN has the Departm
:::image type="content" source="./media/assign-roles-azure-service-principals/account-id.png" alt-text="Screenshot showing the account ID." lightbox="./media/assign-roles-azure-service-principals/account-id.png" :::
- - `api-version`: Use the **2019-10-01-preview** version. Use the sample at [Enrollment Department Role Assignments - Put - Examples](/rest/api/billing/2019-10-01-preview/enrollmentdepartmentroleassignments/put#putenrollmentdepartmentadministratorroleassignment).
+ - `api-version`: Use the **2019-10-01-preview** version. Use the sample at [Enrollment Department Role Assignments - Put - Examples](/rest/api/billing/2019-10-01-preview/enrollment-department-role-assignments/put#examples).
The request body has JSON code with three parameters that you need to use.
data-factory Concepts Data Flow Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-performance.md
Previously updated : 04/10/2021 Last updated : 06/07/2021 # Mapping data flows performance and tuning guide
However, if most of your data flows execute in parallel, it is not recommended t
> [!NOTE]
> Time to live is not available when using the auto-resolve integration runtime
-> [!NOTE]
-> Quick re-use of existing clusters is a feature in the Azure Integration Runtime that is currently in public preview
-
## Optimizing sources

For every source except Azure SQL Database, it is recommended that you keep **Use current partitioning** as the selected value. When reading from all other source systems, data flows automatically partition data evenly based upon the size of the data. A new partition is created for about every 128 MB of data. As your data size increases, the number of partitions increases.
If your data flows execute in parallel, it's recommended to not enable the Azure
If you execute your data flow activities in sequence, it is recommended that you set a TTL in the Azure IR configuration. ADF will reuse the compute resources, resulting in a faster cluster startup time. Each activity will still be isolated and will receive a new Spark context for each execution. To reduce the time between sequential activities even more, set the "quick re-use" checkbox on the Azure IR to tell ADF to re-use the existing cluster.
-> [!NOTE]
-> Quick re-use of existing clusters is a feature in the Azure Integration Runtime that is currently in public preview
- ### Overloading a single data flow If you put all of your logic inside of a single data flow, ADF will execute the entire job on a single Spark instance. While this may seem like a way to reduce costs, it mixes together different logical flows and can be difficult to monitor and debug. If one component fails, all other parts of the job will fail as well. The Azure Data Factory team recommends organizing data flows by independent flows of business logic. If your data flow becomes too large, splitting it into separate components will make monitoring and debugging easier. While there is no hard limit on the number of transformations in a data flow, having too many will make the job complex.
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb-atlas.md
Title: Copy data from MongoDB Atlas
-description: Learn how to copy data from MongoDB Atlas to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.
+ Title: Copy data from or to MongoDB Atlas
+description: Learn how to copy data from MongoDB Atlas to supported sink data stores, or from supported source data stores to MongoDB Atlas, by using a copy activity in an Azure Data Factory pipeline.
Previously updated : 09/28/2020 Last updated : 06/01/2021
-# Copy data from MongoDB Atlas using Azure Data Factory
+# Copy data from or to MongoDB Atlas using Azure Data Factory
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use the Copy Activity in Azure Data Factory to copy data from a MongoDB Atlas database. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.
+This article outlines how to use the Copy Activity in Azure Data Factory to copy data from and to a MongoDB Atlas database. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.
## Supported capabilities
-You can copy data from MongoDB Atlas database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+You can copy data from MongoDB Atlas database to any supported sink data store, or copy data from any supported source data store to MongoDB Atlas database. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this MongoDB Atlas connector supports **versions up to 4.2**.
For a full list of sections and properties that are available for defining datas
## Copy activity properties
-For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by MongoDB Atlas source.
+For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by MongoDB Atlas source and sink.
### MongoDB Atlas as source
The following properties are supported in the copy activity **source** section:
] ```
-## Export JSON documents as-is
+### MongoDB Atlas as sink
+
+The following properties are supported in the Copy Activity **sink** section:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The **type** property of the Copy Activity sink must be set to **MongoDbAtlasSink**. |Yes |
+| writeBehavior |Describes how to write data to MongoDB Atlas. Allowed values: **insert** and **upsert**.<br/><br/>The behavior of **upsert** is to replace the document if a document with the same `_id` already exists; otherwise, insert the document.<br /><br />**Note**: Data Factory automatically generates an `_id` for a document if an `_id` isn't specified either in the original document or by column mapping. This means that you must ensure that, for **upsert** to work as expected, your document has an ID. |No<br />(the default is **insert**) |
+| writeBatchSize | The **writeBatchSize** property controls the size of documents to write in each batch. You can try increasing the value for **writeBatchSize** to improve performance, or decreasing the value if your document size is large. |No<br />(the default is **10,000**) |
+| writeBatchTimeout | The wait time for the batch insert operation to finish before it times out. The allowed value is timespan. | No<br/>(the default is **00:30:00** - 30 minutes) |
+
+>[!TIP]
+>To import JSON documents as-is, refer to the [Import and export JSON documents](#import-and-export-json-documents) section; to copy from tabular-shaped data, refer to [Schema mapping](#schema-mapping).
+
+**Example**
+
+```json
+"activities":[
+ {
+ "name": "CopyToMongoDBAtlas",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<Document DB output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "<source type>"
+ },
+ "sink": {
+ "type": "MongoDbAtlasSink",
+ "writeBehavior": "upsert"
+ }
+ }
+ }
+]
+```
+
+## Import and export JSON documents
+
+You can use this MongoDB Atlas connector to easily:
+
+* Copy documents between two MongoDB Atlas collections as-is.
+* Import JSON documents from various sources to MongoDB Atlas, including from Azure Cosmos DB, Azure Blob storage, Azure Data Lake Store, and other file-based stores that Azure Data Factory supports.
+* Export JSON documents from a MongoDB Atlas collection to various file-based stores.
+
+To achieve such schema-agnostic copy, skip the "structure" (also called *schema*) section in dataset and schema mapping in copy activity.
-You can use this MongoDB Atlas connector to export JSON documents as-is from a MongoDB Atlas collection to various file-based stores or to Azure Cosmos DB. To achieve such schema-agnostic copy, skip the "structure" (also called *schema*) section in dataset and schema mapping in copy activity.
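+
+As an illustration, here is a minimal dataset sketch for such a schema-agnostic copy; the names are placeholders, and the deliberate omission of any "structure" (schema) section is what makes the copy schema-agnostic:
+
+```json
+{
+    "name": "<dataset name>",
+    "properties": {
+        "type": "MongoDbAtlasCollection",
+        "typeProperties": {
+            "collection": "<collection name>"
+        },
+        "linkedServiceName": {
+            "referenceName": "<MongoDB Atlas linked service name>",
+            "type": "LinkedServiceReference"
+        }
+    }
+}
+```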
## Schema mapping
-To copy data from MongoDB Atlas to tabular sink, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping).
+To copy data from MongoDB Atlas to a tabular sink or the reverse, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping).
## Next steps For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-mongodb.md
Title: Copy data from MongoDB
-description: Learn how to copy data from Mongo DB to supported sink data stores by using a copy activity in an Azure Data Factory pipeline.
+ Title: Copy data from or to MongoDB
+description: Learn how to copy data from MongoDB to supported sink data stores, or from supported source data stores to MongoDB, by using a copy activity in an Azure Data Factory pipeline.
Previously updated : 01/08/2021 Last updated : 06/01/2021
-# Copy data from MongoDB using Azure Data Factory
+# Copy data from or to MongoDB by using Azure Data Factory
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use the Copy Activity in Azure Data Factory to copy data from a MongoDB database. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.
+This article outlines how to use the Copy Activity in Azure Data Factory to copy data from and to a MongoDB database. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.
>[!IMPORTANT]
>ADF released this new version of the MongoDB connector, which provides better native MongoDB support. If you are using the previous MongoDB connector in your solution, which is supported as-is for backward compatibility, refer to the [MongoDB connector (legacy)](connector-mongodb-legacy.md) article.
This article outlines how to use the Copy Activity in Azure Data Factory to copy
## Supported capabilities
-You can copy data from MongoDB database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+You can copy data from MongoDB database to any supported sink data store, or copy data from any supported source data store to MongoDB database. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this MongoDB connector supports **versions up to 4.2**.
For a full list of sections and properties that are available for defining datas
## Copy activity properties
-For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by MongoDB source.
+For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by MongoDB source and sink.
### MongoDB as source
The following properties are supported in the copy activity **source** section:
] ```
+### MongoDB as sink
-## Export JSON documents as-is
+The following properties are supported in the Copy Activity **sink** section:
-You can use this MongoDB connector to export JSON documents as-is from a MongoDB collection to various file-based stores or to Azure Cosmos DB. To achieve such schema-agnostic copy, skip the "structure" (also called *schema*) section in dataset and schema mapping in copy activity.
+| Property | Description | Required |
+|: |: |: |
+| type | The **type** property of the Copy Activity sink must be set to **MongoDbV2Sink**. |Yes |
+| writeBehavior |Describes how to write data to MongoDB. Allowed values: **insert** and **upsert**.<br/><br/>The behavior of **upsert** is to replace the document if a document with the same `_id` already exists; otherwise, insert the document.<br /><br />**Note**: Data Factory automatically generates an `_id` for a document if an `_id` isn't specified either in the original document or by column mapping. This means that you must ensure that, for **upsert** to work as expected, your document has an ID. |No<br />(the default is **insert**) |
+| writeBatchSize | The **writeBatchSize** property controls the size of documents to write in each batch. You can try increasing the value for **writeBatchSize** to improve performance, or decreasing the value if your document size is large. |No<br />(the default is **10,000**) |
+| writeBatchTimeout | The wait time for the batch insert operation to finish before it times out. The allowed value is timespan. | No<br/>(the default is **00:30:00** - 30 minutes) |
+
+>[!TIP]
+>To import JSON documents as-is, refer to the [Import and export JSON documents](#import-and-export-json-documents) section; to copy from tabular-shaped data, refer to [Schema mapping](#schema-mapping).
+
+**Example**
+
+```json
+"activities":[
+ {
+ "name": "CopyToMongoDB",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<Document DB output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "<source type>"
+ },
+ "sink": {
+ "type": "MongoDbV2Sink",
+ "writeBehavior": "upsert"
+ }
+ }
+ }
+]
+```
+
+## Import and export JSON documents
+
+You can use this MongoDB connector to easily:
+
+* Copy documents between two MongoDB collections as-is.
+* Import JSON documents from various sources to MongoDB, including from Azure Cosmos DB, Azure Blob storage, Azure Data Lake Store, and other file-based stores that Azure Data Factory supports.
+* Export JSON documents from a MongoDB collection to various file-based stores.
+
+To achieve such schema-agnostic copy, skip the "structure" (also called *schema*) section in dataset and schema mapping in copy activity.
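+
+As an illustration, here is a minimal dataset sketch for such a schema-agnostic copy; the names are placeholders, and the deliberate omission of any "structure" (schema) section is what makes the copy schema-agnostic:
+
+```json
+{
+    "name": "<dataset name>",
+    "properties": {
+        "type": "MongoDbV2Collection",
+        "typeProperties": {
+            "collection": "<collection name>"
+        },
+        "linkedServiceName": {
+            "referenceName": "<MongoDB linked service name>",
+            "type": "LinkedServiceReference"
+        }
+    }
+}
+```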
## Schema mapping
-To copy data from MongoDB to tabular sink, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping).
+To copy data from MongoDB to a tabular sink or the reverse, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping).
## Next steps
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-office-365.md
For now, within a single copy activity you can only **copy data from Office 365
To copy data from Office 365 into Azure, you need to complete the following prerequisite steps:
-- Your Office 365 tenant admin must complete on-boarding actions as described [here](/graph/data-connect-get-started).
+- Your Office 365 tenant admin must complete on-boarding actions as described [here](/events/build-may-2021/microsoft-365-teams/breakouts/od483/).
- Create and configure an Azure AD web application in Azure Active Directory. For instructions, see [Create an Azure AD application](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal).
- Make note of the following values, which you will use to define the linked service for Office 365:
    - Tenant ID. For instructions, see [Get tenant ID](../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in).
To copy data from Office 365 into Azure, you need to complete the following prer
If this is the first time you are requesting data for this context (a combination of which data table is being accessed, which destination account the data is being loaded into, and which user identity is making the data access request), you will see the copy activity status as "In Progress", and only when you click the ["Details" link under Actions](copy-activity-overview.md#monitoring) will you see the status as "RequestingConsent". A member of the data access approver group needs to approve the request in the Privileged Access Management before the data extraction can proceed.
-Refer [here](/graph/data-connect-tips#approve-pam-requests-via-office-365-admin-portal) on how the approver can approve the data access request, and refer [here](/graph/data-connect-pam) for an explanation on the overall integration with Privileged Access Management, including how to set up the data access approver group.
+See [here](/graph/data-connect-faq#how-can-i-approve-pam-requests-via-microsoft-365-admin-portal) for how the approver can approve the data access request, and see [here](/graph/data-connect-pam) for an explanation of the overall integration with Privileged Access Management, including how to set up the data access approver group.
## Policy validation
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sharepoint-online-list.md
You can copy a file from SharePoint Online by using **Web activity** to authentica
- **Method**: POST
- **Headers**:
    - Content-Type: application/x-www-form-urlencoded
- - **Body**: `grant_type=client_credentials&client_id=[Client-ID]@[Tenant-ID]&client_secret=[Client-Secret]&resource=00000003-0000-0ff1-ce00-000000000000/[Tenant-Name].sharepoint.com@[Tenant-ID]`. Replace the client ID, client secret, tenant ID and tenant name.
+ - **Body**: `grant_type=client_credentials&client_id=[Client-ID]@[Tenant-ID]&client_secret=[Client-Secret]&resource=00000003-0000-0ff1-ce00-000000000000/[Tenant-Name].sharepoint.com@[Tenant-ID]`. Replace the client ID (application ID), client secret (application key), tenant ID, and tenant name (of the SharePoint tenant). A PowerShell sketch of this request follows the caution below.
> [!CAUTION]
> Set the Secure Output option to true in Web activity to prevent the token value from being logged in plain text. Any further activities that consume this value should have their Secure Input option set to true.
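For reference, here is a minimal PowerShell sketch of the same token request, useful for validating the values outside of Data Factory. All variable values are placeholders, and the ACS token endpoint shown is an assumption to verify for your tenant:

```powershell
# A sketch only; replace all placeholder values before running.
# Assumption: the tenant's ACS token endpoint is used, matching the resource ID above.
$tenantId     = "<Tenant-ID>"      # Azure AD tenant GUID
$tenantName   = "<Tenant-Name>"    # e.g. contoso for contoso.sharepoint.com
$clientId     = "<Client-ID>"      # application (client) ID of the registered app
$clientSecret = "<Client-Secret>"  # application key

# Build the x-www-form-urlencoded body described above.
$body = "grant_type=client_credentials" +
        "&client_id=$clientId@$tenantId" +
        "&client_secret=$([uri]::EscapeDataString($clientSecret))" +
        "&resource=00000003-0000-0ff1-ce00-000000000000/$tenantName.sharepoint.com@$tenantId"

$response = Invoke-RestMethod -Method Post `
    -Uri "https://accounts.accesscontrol.windows.net/$tenantId/tokens/OAuth/2" `
    -ContentType "application/x-www-form-urlencoded" `
    -Body $body

# The bearer token that subsequent activities would consume:
$response.access_token
```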
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
description: Learn how to troubleshoot connector issues in Azure Data Factory.
Previously updated : 05/18/2021 Last updated : 06/07/2021
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Message**: `Failed to connect to Dynamics: %message;`
-AADSTS50079: Due to a configuration change made by your administrator, or because you moved to a new location, you must enroll in multi-factor authentication to access '00000007-0000-0000-c000-000000000000'` if your use case meets **all** of the following three conditions:
+ - **Cause**: You see the error `ERROR REQUESTING ORGS FROM THE DISCOVERY SERVERFCB 'EnableRegionalDisco' is disabled.`
+ or the error `Unable to Login to Dynamics CRM, message:ERROR REQUESTING Token FROM THE Authentication context - USER intervention required but not permitted by prompt behavior AADSTS50079: Due to a configuration change made by your administrator, or because you moved to a new location, you must enroll in multi-factor authentication to access '00000007-0000-0000-c000-000000000000'` if your use case meets **all** of the following three conditions:
- You are connecting to Dynamics 365, Common Data Service, or Dynamics CRM.
- You are using Office365 Authentication.
- Your tenant and user are configured in Azure Active Directory for [conditional access](/azure/active-directory/conditional-access/overview) and/or Multi-Factor Authentication is required (see this [link](/powerapps/developer/data-platform/authenticate-office365-deprecation) to Dataverse doc).
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-overview.md
description: Learn about the Copy activity in Azure Data Factory. You can use it
Previously updated : 10/12/2020 Last updated : 6/1/2021
data-factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/introduction.md
This visual guide provides a high-level overview of the Data Factory architec
:::image type="content" source="media\introduction\data-factory-visual-guide-small.png" alt-text="A detailed visual guide to the complete system architecture for Azure Data Factory, presented in a single high resolution image." lightbox="media\introduction\data-factory-visual-guide.png":::
+To see more detail, click the preceding image to zoom in, or browse to the [high resolution image](/azure/data-factory/media/introduction/data-factory-visual-guide.png#lightbox).
+ ### Connect and collect Enterprises have data of various types that are located in disparate sources on-premises, in the cloud, structured, unstructured, and semi-structured, all arriving at different intervals and speeds.
data-factory Quickstart Create Data Factory Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-azure-cli.md
This quickstart uses an Azure Storage account, which includes a container with a
## Create a data factory
-To create an Azure data factory, run the [az datafactory factory create](/cli/azure/datafactory/factory#az_datafactory_factory_create) command:
+To create an Azure data factory, run the [az datafactory factory create](/cli/azure/datafactory#az_datafactory_create) command:
```azurecli az datafactory factory create --resource-group ADFQuickStartRG \
az datafactory factory create --resource-group ADFQuickStartRG \
> [!IMPORTANT] > Replace `ADFTutorialFactory` with a globally unique data factory name, for example, ADFTutorialFactorySP1127.
-You can see the data factory that you created by using the [az datafactory factory show](/cli/azure/datafactory/factory#az_datafactory_factory_show) command:
+You can see the data factory that you created by using the [az datafactory factory show](/cli/azure/datafactory#az_datafactory_factory_show) command:
```azurecli az datafactory factory show --resource-group ADFQuickStartRG \
az datafactory factory show --resource-group ADFQuickStartRG \
Next, create a linked service and two datasets.
-1. Get the connection string for your storage account by using the [az storage account show-connection-string](/cli/azure/datafactory/factory#az_datafactory_factory_show) command:
+1. Get the connection string for your storage account by using the [az storage account show-connection-string](/cli/azure/storage/account#az_storage_account_show_connection_string) command:
```azurecli az storage account show-connection-string --resource-group ADFQuickStartRG \
data-factory Wrangling Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-tutorial.md
Previously updated : 05/14/2021 Last updated : 06/08/2021 # Prepare data with data wrangling
Add a **Source dataset** for your Power Query mash-up. You can either choose an
Click **Create** to open the Power Query Online mashup editor.
-![Screenshot that shows the Create button that opens the Power Query Online mashup editor.](media/wrangling-data-flow/tutorial5.png)
+First, you will choose a dataset source for the mashup editor.
+
+![Power Query source.](media/wrangling-data-flow/pq-new-source.png)
+
+Once you have completed building your Power Query, you can save it and add the mashup as an activity to your pipeline. That is when you will set the sink dataset properties.
+
+![Power Query sink.](media/wrangling-data-flow/pq-new-sink.png)
Author your wrangling Power Query using code-free data preparation. For the list of available functions, see [transformation functions](wrangling-functions.md). ADF translates the M script into a data flow script so that you can execute your Power Query at scale using the Azure Data Factory data flow Spark environment.
databox-online Azure Stack Edge Gpu Create Virtual Machine Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-image.md
Title: Create VM images for your Azure Stack Edge Pro GPU device
-description: Describes how to create linux or Windows VM images to use with your Azure Stack Edge Pro GPU device.
+ Title: Create custom VM images for your Azure Stack Edge Pro GPU device
+description: Describes how to create custom Windows and Linux VM images for deploying virtual machines on Azure Stack Edge Pro GPU devices.
Previously updated : 05/28/2021 Last updated : 6/08/2021
-#Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
+#Customer intent: As an IT admin, I need to understand how to create Azure VM images that I can use to deploy virtual machines on my Azure Stack Edge Pro GPU device.
-# Create custom VM images for your Azure Stack Edge Pro device
+# Create custom VM images for your Azure Stack Edge Pro GPU device
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-To deploy VMs on your Azure Stack Edge Pro device, you need to be able to create custom VM images that you can use to create VMs. This article describes the steps that are required to create Linux or Windows VM custom images that you can use to deploy VMs on your Azure Stack Edge Pro device.
+To deploy VMs on your Azure Stack Edge Pro GPU device, you need to be able to create custom VM images that you can use to create VMs in Azure. This article describes the steps to create custom VM images in Azure for Windows and Linux VMs and download or copy those images to an Azure Storage account.
-## VM image workflow
+There's a required workflow for preparing a custom VM image. For the image source, you need to use a fixed VHD from a Gen1 VM of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
-The workflow requires you to create a virtual machine in Azure, customize the VM, generalize, and then download the VHD corresponding to that VM. This generalized VHD is uploaded to Azure Stack Edge Pro. A managed disk is created from that VHD. An image is created from the managed disk. And, finally, VMs are created from that image.
+## Prerequisites
-For more information, go to [Deploy a VM on your Azure Stack Edge Pro device using Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
+Complete the following prerequisite before you create your VM image:
+- [Download AzCopy](/azure/storage/common/storage-use-azcopy-v10#download-azcopy). AzCopy gives you a fast way to copy an OS disk to an Azure Storage account.
-## Create a Windows custom VM image
+
-Do the following steps to create a Windows VM image.
+## Create a custom VM image
-1. Create a Windows Virtual Machine. For more information, go to [Tutorial: Create and manage Windows VMs with Azure PowerShell](../virtual-machines/windows/tutorial-manage-vm.md)
+The steps for preparing a custom VM image vary for a Windows or Linux VM.
-2. Download an existing OS disk.
- - Follow the steps in [Download a VHD](../virtual-machines/windows/download-vhd.md).
+### [Windows](#tab/windows)
- - Use the following `sysprep` command instead of what is described in the preceding procedure.
-
- `c:\windows\system32\sysprep\sysprep.exe /oobe /generalize /shutdown /mode:vm`
-
- You can also refer to [Sysprep (system preparation) overview](/windows-hardware/manufacture/desktop/sysprep--system-preparation--overview).
+Do the following steps to create a Windows VM image:
+
+1. Create a Windows virtual machine in Azure. For portal instructions, see [Create a Windows virtual machine in the Azure portal](/azure/virtual-machines/windows/quick-create-portal). For PowerShell instructions, see [Tutorial: Create and manage Windows VMs with Azure PowerShell](../virtual-machines/windows/tutorial-manage-vm.md).
+
+ The virtual machine must be a Generation 1 VM. The OS disk that you use to create your VM image must be a fixed-size VHD of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
+
+2. Generalize the virtual machine. To generalize the VM, [connect to the virtual machine](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-a-windows-vm), open a command prompt, and run the following `sysprep` command:
+
+ ```dos
+ c:\windows\system32\sysprep\sysprep.exe /oobe /generalize /shutdown /mode:vm
+ ```
+
+ > [!IMPORTANT]
+ > After the command is complete, the VM will shut down. **Do not restart the VM.** Restarting the VM will corrupt the disk you just prepared.
-Use this VHD to now create and deploy a VM on your Azure Stack Edge Pro device.
-## Create a Linux custom VM image
+### [Linux](#tab/linux)
-Do the following steps to create a Linux VM image.
+Do the following steps to create a Linux VM image:
-1. Create a Linux Virtual Machine. For more information, go to [Tutorial: Create and manage Linux VMs with the Azure CLI](../virtual-machines/linux/tutorial-manage-vm.md).
+1. Create a Linux virtual machine in Azure. For portal instructions, see [Quickstart: Create a Linux VM in the Azure portal](../virtual-machines/linux/quick-create-portal.md). For PowerShell instructions, see [Quickstart: Create a Linux VM in Azure with PowerShell](../virtual-machines/linux/quick-create-powershell.md).
+
+ You can use any Gen1 VM with a fixed-size VHD in Azure Marketplace to create Linux custom images, with the exception of Red Hat Enterprise Linux (RHEL) images, which require extra steps. For a list of Azure Marketplace images that could work, see [Azure Marketplace items available for Azure Stack Hub](/azure-stack/operator/azure-stack-marketplace-azure-items?view=azs-1910&preserve-view=true). For guidance on RHEL images, see [Using RHEL BYOS images](#using-rhel-byos-images), below.
1. Deprovision the VM. Use the Azure VM agent to delete machine-specific files and data. Use the `waagent` command with the `-deprovision+user` parameter on your source Linux VM. For more information, see [Understanding and using Azure Linux Agent](../virtual-machines/extensions/agent-linux.md).
- 1. Connect to your Linux VM with an SSH client.
+ 1. [Connect to your Linux VM with an SSH client](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-a-linux-vm).
2. In the SSH window, enter the following command: ```bash
Do the following steps to create a Linux VM image.
4. After the command completes, enter **exit** to close the SSH client. The VM will still be running at this point.
-1. [Download existing OS disk](../virtual-machines/linux/download-vhd.md).
+### Using RHEL BYOS images
-Use this VHD to now create and deploy a VM on your Azure Stack Edge Pro device. You can use the following two Azure Marketplace images to create Linux custom images:
+If using Red Hat Enterprise Linux (RHEL) images, only the Red Hat Enterprise Linux Bring Your Own Subscription (RHEL BYOS) images, also known as the Red Hat gold images, are supported and can be used to create your VM image. The standard pay-as-you-go RHEL images on Azure Marketplace are not supported on Azure Stack Edge.
-|Item name |Description |Publisher |
-||||
-|[Ubuntu Server](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.ubuntuserver) |Ubuntu Server is the world's most popular Linux for cloud environments.|Canonical|
-|[Debian 8 "Jessie"](https://azuremarketplace.microsoft.com/marketplace/apps/credativ.debian) |Debian GNU/Linux is one of the most popular Linux distributions. |credativ|
+To create a VM image using the RHEL BYOS image, follow these steps:
-For a full list of Azure Marketplace images that could work (presently not tested), go to [Azure Marketplace items available for Azure Stack Hub](/azure-stack/operator/azure-stack-marketplace-azure-items?view=azs-1910&preserve-view=true).
+1. Log in to [Red Hat Subscription Management](https://access.redhat.com/management). Navigate to the [Cloud Access Dashboard](https://access.redhat.com/management/cloud) from the top menu bar.
+1. Enable your Azure subscription. See [detailed instructions](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/getting-started-with-ca_cloud-access). Enabling the subscription will allow you to access the Red Hat Gold Images.
-### Using RHEL BYOS image
+1. Accept the Azure terms of use (only once per Azure Subscription, per image) and provision a VM. See [instructions](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/understanding-gold-images_cloud-access).
-If using Red Hat Enterprise Linux (RHEL) images, only the Red Hat Enterprise Linux Bring Your Own Subscription(RHEL BYOS) images, also known as the Red Hat gold images are supported and can be used to create your VM image. The standard pay-as-you-go RHEL images are not supported on Azure Marketplace.
+You can now use the VM that you provisioned to [Create a VM custom image](#create-a-custom-vm-image) in Linux.
-To create a VM image using the RHEL BYOS image, follow these steps:
+
-1. Log in to the [Red Hat Subscription Management](https://access.redhat.com/management). Navigate to the [Cloud Access Dashboard](https://access.redhat.com/management/cloud) from the top menu bar.
-1. Enable your Azure subscription. See [detailed instructions](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/enabling-and-maintaining-subs_cloud-access#proc_enabling-sub-new-ccsp_cloud-access). This will allow you to access the Red Hat Gold Images.
-1. Accept the Azure terms of use (only once per Azure Subscription, per image) and provision a VM. See [instructions](https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/red_hat_cloud_access_reference_guide/cloud-access-gold-images_cloud-access#proc_using-gold-images-azure_cloud-access).
+## Download OS disk to storage account
-You can now use the VM that you provisioned to [Create a Linux VM custom image](#create-a-linux-custom-vm-image).
-
+To use your custom VM image to deploy VMs on your device, you must download the OS disk to an Azure Storage account. We recommend that you use the same storage account that you used for your device.
+
+To download the OS disk for the VM to an Azure storage account, do the following steps:
+
+1. [Stop the VM in the portal](/azure/virtual-machines/windows/download-vhd#stop-the-vm). You need to do this to deallocate the OS disk even if your Windows VM was shut down after you ran `sysprep` to generalize it.
+
+1. [Generate a download URL for the OS disk](/azure/virtual-machines/windows/download-vhd#generate-download-url), and make a note of the URL. By default, the URL expires after 3600 seconds (1 hour). You can increase that time if needed. (An Azure CLI sketch for this step appears after the list below.)
+
+1. Download the VHD to your Azure Storage account using one of these methods:
+
+ - Method 1: For a faster transfer, use AzCopy to copy the VHD to your Azure Storage account. For instructions, see [Use AzCopy to copy VM image to storage account](#copy-vhd-to-storage-account-using-azcopy), below.
+
+ - Method 2: For a simple, one-click method, you can select **Download the VHD file** when you generate a download URL (in the preceding step) to download the disk from the portal. **When you use this method, the disk copy can take quite a long time, and you'll need to [upload the VHD to your Azure storage account](azure-stack-edge-gpu-deploy-virtual-machine-templates.md#use-storage-explorer-for-upload) to be able to create VMs using the portal.**
+
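+If you prefer scripting to the portal, here is a minimal Azure CLI sketch for generating the download URL in the earlier step; the disk and resource group names are placeholders:
+
+```azurecli
+# Grants read access to the (deallocated) OS disk and returns a SAS download URL
+# that is valid for one hour.
+az disk grant-access \
+  --resource-group <resource-group-name> \
+  --name <os-disk-name> \
+  --access-level Read \
+  --duration-in-seconds 3600
+```
+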
+You can now use this VHD to create and deploy VMs on your Azure Stack Edge Pro GPU device.
+
+## Copy VHD to storage account using AzCopy
+
+The following procedures describe how to use AzCopy to copy a custom VM image to an Azure Storage account so you can use the image to deploy VMs on your Azure Stack Edge Pro GPU device. We recommend that you store your custom VM images in the same storage account that you're using for your Azure Stack Edge Pro GPU device.
++
+### Create target URI for a container
+
+AzCopy requires a *target URI* that specifies where in your storage account to copy the new image. Before you run AzCopy, you'll generate a shared-access signature (SAS) URL for the blob container you want to copy the file to. To create the target URI, you'll add the filename to the SAS URL.
+
+To create the target URI for your prepared VHD, do the following steps:
+
+1. To generate a SAS URL for a container in an Azure Storage account, do the following steps:
+
+ 1. In the Azure portal, open the storage account, and select **Containers**. Select and then right-click the blob container you want to use, and select **Generate SAS**.
+
+ ![Screenshot of the Generate SAS option for a blob container in the Azure portal](./media/azure-stack-edge-gpu-create-virtual-machine-image/blob-sas-url-01.png)
+
+ 1. On the **Generate SAS** screen, select **Read** and **Write** in **Permissions**.
+
+ ![Screenshot of the Generate SAS screen with Read and Write permissions selected](./media/azure-stack-edge-gpu-create-virtual-machine-image/blob-sas-url-02.png)
+
+ 1. Select **Generate SAS token and URL**, and then select **Copy** to copy the **Blob SAS URL**.
+
+ ![Screenshot of the Generate SAS screen, with options for generating and copying a Blob SAS URL](./media/azure-stack-edge-gpu-create-virtual-machine-image/blob-sas-url-03.png)
+
+1. To create the target URI for the `azcopy` command, add the desired filename to the SAS URL.
+
+ The Blob SAS URL has the following format.
+
+ ![Graphic of a Blob SAS URL, with container path and place to insert the new filename labeled](./media/azure-stack-edge-gpu-create-virtual-machine-image/blob-sas-url-04.png)
+
+   Insert the filename, in the format `/<filename>.vhd`, before the question mark that begins the query string. The filename extension must be VHD.
+
+ For example, the following Blob SAS URL will copy the **osdisk.vhd** file to the **virtualmachines** blob container in **mystorageaccount**.
+
+ ![Graphic of a Blob SAS URL example for a VHD named osdisk](./media/azure-stack-edge-gpu-create-virtual-machine-image/blob-sas-url-05.png)
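+
+   In text form, the same transformation looks like the following sketch (the SAS token values are placeholders):
+
+   ```
+   Blob SAS URL for the container:
+   https://mystorageaccount.blob.core.windows.net/virtualmachines?sp=rw&st=<...>&se=<...>&sig=<...>
+
+   Target URI after inserting /osdisk.vhd before the query string:
+   https://mystorageaccount.blob.core.windows.net/virtualmachines/osdisk.vhd?sp=rw&st=<...>&se=<...>&sig=<...>
+   ```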
++
+### Copy VHD to blob container
+
+To copy your VHD to a blob container using AzCopy, do the following steps:
+
+ 1. [Download AzCopy](/azure/storage/common/storage-use-azcopy-v10#download-azcopy) if you haven't done that already.
+ 1. In PowerShell, navigate to the directory where you stored azcopy.exe, and run the following command:
+
+ `.\azcopy copy <source URI> <target URI> --recursive`
+
+ where:
+ * `<source URI>` is the download URL that you generated earlier.
+ * `<target URI>` tells which blob container to copy the new image to in your Azure Storage account. For instructions, see [Create target URI for a container](#create-target-uri-for-a-container).
+
+ For example, the following command copies a VHD to the **osdisk.vhd** blob in the **virtualmachines** blob container in the **mystorageaccount** storage account:
+
+ ```azcopy
+ .\azcopy copy "https://md-h1rvdq3wwtdp.z24.blob.storage.azure.net/gxs3kpbgjhkr/abcd?sv=2018-03-28&sr=b&si=f86003fc-a231-43b0-baf2-61dd51e3a05a&sig=o5Rj%2BNZSook%2FVNMcuCcwEwsr0i7sy%2F7gIDzak6JhlKg%3D" "https://mystorageaccount.blob.core.windows.net/virtualmachines/osdisk.vhd?sp=rw&st=2021-05-21T16:52:24Z&se=2021-05-22T00:52:24Z&spr=https&sv=2020-02-10&sr=c&sig=PV3Q3zpaQ%2FOLidbQJDKlW9nK%2BJ7PkzYv2Eczxko5k%2Bg%3D" --recursive
+ ```
+
+#### Sample output
+
+For the example AzCopy command above, the following output indicates that the copy completed successfully.
+
+ ```output
+ PS C:\azcopy\azcopy_windows_amd64_10.10.0> .\azcopy copy "https://md-h1rvdq3wwtdp.z24.blob.storage.azure.net/gxs3kpbgjhkr/abcd?sv=2018-03-28&sr=b&si=f86003fc-a231-43b0-baf2-61dd51e3a05a&sig=o5Rj%2BNZSook%2FVNMcuCcwEwsr0i7sy%2F7gIDzak6JhlKg%3D" "https://mystorageaccount.blob.core.windows.net/virtualmachines/osdisk.vhd?sp=rw&st=2021-05-21T16:52:24Z&se=2021-05-22T00:52:24Z&spr=https&sv=2020-02-10&sr=c&sig=PV3Q3zpaQ%2FOLidbQJDKlW9nK%2BJ7PkzYv2Eczxko5k%2Bg%3D" --recursive
+ INFO: Scanning...
+ INFO: Failed to create one or more destination container(s). Your transfers may still succeed if the container already exists.
+ INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support
+
+ Job 783f2177-8317-3e4b-7d2f-697a8f1ab63c has started
+ Log file is located at: C:\Users\aseuser\.azcopy\783f2177-8317-3e4b-7d2f-697a8f1ab63c.log
+
+ INFO: Destination could not accommodate the tier P10. Going ahead with the default tier. In case of service to service transfer, consider setting the flag --s2s-preserve-access-tier=false.
+ 100.0 %, 0 Done, 0 Failed, 1 Pending, 0 Skipped, 1 Total,
+
+ Job 783f2177-8317-3e4b-7d2f-697a8f1ab63c summary
+ Elapsed Time (Minutes): 1.4671
+ Number of File Transfers: 1
+ Number of Folder Property Transfers: 0
+ Total Number of Transfers: 1
+ Number of Transfers Completed: 1
+ Number of Transfers Failed: 0
+ Number of Transfers Skipped: 0
+ TotalBytesTransferred: 136367309312
+ Final Job Status: Completed
+
+ PS C:\azcopy\azcopy_windows_amd64_10.10.0>
+ ```
+ ## Next steps
-[Deploy VMs on your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
+- [Deploy VMs on your device using the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md)
+- [Deploy VMs on your device via PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md)
databox-online Azure Stack Edge Gpu Create Virtual Machine Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md
+
+ Title: Use Azure Marketplace image to create VM image for Azure Stack Edge Pro GPU device
+description: Describes how to use an Azure Marketplace image to create a VM image to use on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 06/07/2021+
+#Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
++
+# Use Azure Marketplace image to create VM image for your Azure Stack Edge Pro GPU
++
+To deploy VMs on your Azure Stack Edge Pro GPU device, you need to create a VM image that you can use to create VMs. This article describes the steps that are required to create a VM image starting from an Azure Marketplace image. You can then use this VM image to deploy VMs on your Azure Stack Edge Pro GPU device.
+
+## VM image workflow
+
+The following steps describe the VM image workflow using an Azure Marketplace image:
+
+1. Connect to the Azure Cloud Shell or a client with Azure CLI installed.
+2. Search the Azure Marketplace and identify your preferred image.
+3. Create a new managed disk from the Marketplace image.
+4. Export a VHD from the managed disk to Azure Storage account.
+5. Clean up the managed disk.
++
+For more information, go to [Deploy a VM on your Azure Stack Edge Pro device using Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
+
+## Prerequisites
+
+Before you can use Azure Marketplace images for Azure Stack Edge, make sure that you are connected to Azure in either of the following ways.
+++
+## Search for Azure Marketplace images
+
+You will now identify a specific Azure Marketplace image that you wish to use. Azure Marketplace hosts thousands of VM images.
+
+To find some of the most common Marketplace images that match your search criteria, run the following command.
+
+```azurecli
+az vm image list --all [--publisher <Publisher>] [--offer <Offer>] [--sku <SKU>]
+```
+The last three flags are optional, but excluding them returns a long list.
+
+Some example queries are:
+
+```azurecli
+#Returns all images of type "Windows Server"
+az vm image list --all --publisher "MicrosoftWindowsserver" --offer "WindowsServer"
+
+#Returns all Windows Server 2019 Datacenter images from West US published by Microsoft
+az vm image list --all --location "westus" --publisher "MicrosoftWindowsserver" --offer "WindowsServer" --sku "2019-Datacenter"
+
+#Returns all VM images from a publisher
+az vm image list --all --publisher "Canonical"
+```
+
+Here is an example output when VM images of a certain publisher, offer, and SKU were queried.
+
+```output
+PS /home/user> az vm image list --all --publisher "Canonical" --offer "UbuntuServer" --sku "12.04.4-LTS"
+[
+ {
+ "offer": "UbuntuServer",
+ "publisher": "Canonical",
+ "sku": "12.04.4-LTS",
+ "urn": "Canonical:UbuntuServer:12.04.4-LTS:12.04.201402270",
+ "version": "12.04.201402270"
+ },
+ {
+ "offer": "UbuntuServer",
+ "publisher": "Canonical",
+ "sku": "12.04.4-LTS",
+ "urn": "Canonical:UbuntuServer:12.04.4-LTS:12.04.201404080",
+ "version": "12.04.201404080"
+ },
+ {
+ "offer": "UbuntuServer",
+ "publisher": "Canonical",
+ "sku": "12.04.4-LTS",
+ "urn": "Canonical:UbuntuServer:12.04.4-LTS:12.04.201404280",
+ "version": "12.04.201404280"
+ },
+ {
+ "offer": "UbuntuServer",
+ "publisher": "Canonical",
+ "sku": "12.04.4-LTS",
+ "urn": "Canonical:UbuntuServer:12.04.4-LTS:12.04.201405140",
+ "version": "12.04.201405140"
+ },
+ {
+ "offer": "UbuntuServer",
+ "publisher": "Canonical",
+ "sku": "12.04.4-LTS",
+ "urn": "Canonical:UbuntuServer:12.04.4-LTS:12.04.201406060",
+ "version": "12.04.201406060"
+ },
+ {
+ "offer": "UbuntuServer",
+ "publisher": "Canonical",
+ "sku": "12.04.4-LTS",
+ "urn": "Canonical:UbuntuServer:12.04.4-LTS:12.04.201406190",
+ "version": "12.04.201406190"
+ },
+ {
+ "offer": "UbuntuServer",
+ "publisher": "Canonical",
+ "sku": "12.04.4-LTS",
+ "urn": "Canonical:UbuntuServer:12.04.4-LTS:12.04.201407020",
+ "version": "12.04.201407020"
+ },
+ {
+ "offer": "UbuntuServer",
+ "publisher": "Canonical",
+ "sku": "12.04.4-LTS",
+ "urn": "Canonical:UbuntuServer:12.04.4-LTS:12.04.201407170",
+ "version": "12.04.201407170"
+ }
+]
+PS /home/user>
+```
+
+>[!IMPORTANT]
+> Use only the Gen 1 images. Any images specified as Gen 2 (usually the SKU has a "-g2" suffix) do not work on Azure Stack Edge.
+
+In this example, we will select Windows Server 2019 Datacenter Core, version 2019.0.20190410. We will identify this image by its Universal Resource Number ("URN").
+
+
+Below is a list of URNs for some of the most common images. If you just want the latest version of a particular OS, the version number can be replaced with "latest" in the URN. For example, "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest".
++
+| OS | SKU | Version | URN |
+|--|--|--|-|
+| Windows Server | 2019 Datacenter | 17763.1879.2104091832 | MicrosoftWindowsServer:WindowsServer:2019-Datacenter:17763.1879.2104091832 |
+| Windows Server | 2019 Datacenter (30 GB small disk) | 17763.1879.2104091832 | MicrosoftWindowsServer:WindowsServer:2019-Datacenter-smalldisk:17763.1879.2104091832 |
+| Windows Server | 2019 Datacenter Core | 17763.1879.2104091832 | MicrosoftWindowsServer:WindowsServer:2019-Datacenter-Core:17763.1879.2104091832 |
+| Windows Server | 2019 Datacenter Core (30 GB small disk) | 17763.1879.2104091832 | MicrosoftWindowsServer:WindowsServer:2019-Datacenter-Core-smalldisk:17763.1879.2104091832 |
+| Windows Desktop | Windows 10 20H2 Pro | 19042.928.2104091209 | MicrosoftWindowsDesktop:Windows-10:20h2-pro:19042.928.2104091209 |
+| Ubuntu Server | Canonical Ubuntu Server 18.04 LTS | 18.04.202002180 | Canonical:UbuntuServer:18.04-LTS:18.04.202002180 |
+| Ubuntu Server | Canonical Ubuntu Server 16.04 LTS | 16.04.202104160 | Canonical:UbuntuServer:16.04-LTS:16.04.202104160 |
+| CentOS | CentOS 8.1 | 8.1.2020062400 | OpenLogic:CentOS:8_1:8.1.2020062400 |
+| CentOS | CentOS 7.7 | 7.7.2020062400 | OpenLogic:CentOS:7.7:7.7.2020062400 |
+++
+## Create a new managed disk from the Marketplace image
+
+Create an Azure Managed Disk from your chosen Marketplace image.
+
+1. Set some parameters.
+
+ ```azurecli
+ $urn = <URN of the Marketplace image> #Example: "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest"
+ $diskName = <disk name> #Name for new disk to be created
+ $diskRG = <resource group> #Resource group that contains the new disk
+ ```
++
+1. Create the disk and generate a SAS access URL.
+
+ ```azurecli
+ az disk create -g $diskRG -n $diskName --image-reference $urn
+ $sas = az disk grant-access --duration-in-seconds 36000 --access-level Read --name $diskName --resource-group $diskRG
+ $diskAccessSAS = ($sas | ConvertFrom-Json)[0].accessSas
+ ```
+
+Here is an example output:
+
+```output
+PS /home/user> $urn = "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest"
+PS /home/user> $diskName = "newmanageddisk1"
+PS /home/user> $diskRG = "newrgmd1"
+PS /home/user> az disk create -g $diskRG -n $diskName --image-reference $urn
+{
+ "burstingEnabled": null,
+ "creationData": {
+ "createOption": "FromImage",
+ "galleryImageReference": null,
+ "imageReference": {
+ "id": "/Subscriptions/db4e2fdb-6d80-4e6e-b7cd-736098270664/Providers/Microsoft.Compute/Locations/eastus/Publishers/MicrosoftWindowsServer/ArtifactTypes/VMImage/Offers/WindowsServer/Skus/2019-Datacenter/Versions/17763.1935.2105080716",
+ "lun": null
+ },
+ "logicalSectorSize": null,
+ "sourceResourceId": null,
+ "sourceUniqueId": null,
+ "sourceUri": null,
+ "storageAccountId": null,
+ "uploadSizeBytes": null
+ },
+ "diskAccessId": null,
+ "diskIopsReadOnly": null,
+ "diskIopsReadWrite": 500,
+ "diskMBpsReadOnly": null,
+ "diskMBpsReadWrite": 100,
+ "diskSizeBytes": 136367308800,
+ "diskSizeGb": 127,
+ "diskState": "Unattached",
+ "encryption": {
+ "diskEncryptionSetId": null,
+ "type": "EncryptionAtRestWithPlatformKey"
+ },
+ "encryptionSettingsCollection": null,
+ "extendedLocation": null,
+ "hyperVGeneration": "V1",
+ "id": "/subscriptions/db4e2fdb-6d80-4e6e-b7cd-736098270664/resourceGroups/newrgmd1/providers/Microsoft.Compute/disks/NewManagedDisk1",
+ "location": "eastus",
+ "managedBy": null,
+ "managedByExtended": null,
+ "maxShares": null,
+ "name": "NewManagedDisk1",
+ "networkAccessPolicy": "AllowAll",
+ "osType": "Windows",
+ "propertyUpdatesInProgress": null,
+ "provisioningState": "Succeeded",
+ "purchasePlan": null,
+ "resourceGroup": "newrgmd1",
+ "securityProfile": null,
+ "shareInfo": null,
+ "sku": {
+ "name": "Premium_LRS",
+ "tier": "Premium"
+ },
+ "supportsHibernation": null,
+ "tags": {},
+ "tier": "P10",
+ "timeCreated": "2021-06-08T00:39:34.205982+00:00",
+ "type": "Microsoft.Compute/disks",
+ "uniqueId": "1a649ad4-3b95-471e-89ef-1d2ed1f51525",
+ "zones": null
+}
+
+PS /home/user> $sas = az disk grant-access --duration-in-seconds 36000 --access-level Read --name $diskName --resource-group $diskRG
+PS /home/user> $diskAccessSAS = ($sas | ConvertFrom-Json)[0].accessSas
+PS /home/user>
+```
+
+## Export a VHD from the managed disk to Azure Storage
+
+This step will export a VHD from the managed disk to your preferred Azure blob storage account. This VHD can then be used to create VM images on Azure Stack Edge.
+
+1. Set the destination storage account where the VHD will be copied.
+
+ ```azurecli
+ $storageAccountName = <destination storage account name>
+ $containerName = <destination container name>
+ $destBlobName = <blobname.vhd> #Blob that will be created, including .vhd extension
+ $storageAccountKey = <storage account key>
+ ```
+
+1. Copy the VHD to the destination storage account.
+
+ ```azurecli
+ $destContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
+ Start-AzureStorageBlobCopy -AbsoluteUri $diskAccessSAS -DestContainer $containerName -DestContext $destContext -DestBlob $destBlobName
+ ```
+
+ The VHD copy will take several minutes to complete. Ensure the copy has completed before proceeding by running the following command. The status field will show "Success" when complete.
+
+ ```azurecli
+ Get-AzureStorageBlobCopyState -Container $containerName -Context $destContext -Blob $destBlobName
+ ```
+
+Here is an example output:
+
+```output
+PS /home/user> $storageAccountName = "edgeazurevmeus"
+PS /home/user> $containerName = "azurevmmp"
+PS /home/user> $destBlobName = "newblobmp.vhd"
+PS /home/user> $storageAccountKey = "n9sCytWLdTBz0F4Sco9SkPGWp6BJBtf7BJBk79msf1PfxJGQdqSfu6TboZWZ10xyZdc4y+Att08cC9B79jB0YA=="
+
+PS /home/user> $destContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
+PS /home/user> Start-AzureStorageBlobCopy -AbsoluteUri $diskAccessSAS -DestContainer $containerName -DestContext $destContext -DestBlob $destBlobName
+
+ AccountName: edgeazurevmeus, ContainerName: azurevmmp
+
+Name BlobType Length ContentType LastModified AccessTier SnapshotTime IsDeleted VersionId
+- -- -- -
+newblobmp.vhd PageBlob -1 2021-06-08 00:50:10Z False
+
+PS /home/user> Get-AzureStorageBlobCopyState -Container $containerName -Context $destContext -Blob $destBlobName
+
+CopyId : 24a1e3f5-886a-490d-9dd7-562bb4acff58
+CompletionTime :
+Status : Pending
+Source : https://md-lfn221fppr2c.blob.core.windows.net/d4tb2hp5ff2q/abcd?sv=2018-03-28&sr=b&si=4f588db1-9aac-44d9-9607-35497cc08a7f
+BytesCopied : 696254464
+TotalBytes : 136367309312
+StatusDescription :
+DestinationSnapshotTime :
+
+PS /home/user> Get-AzureStorageBlobCopyState -Container $containerName -Context $destContext -Blob $destBlobName
+
+CopyId : 24a1e3f5-886a-490d-9dd7-562bb4acff58
+CompletionTime : 6/8/2021 12:57:26 AM +00:00
+Status : Success
+Source : https://md-lfn221fppr2c.blob.core.windows.net/d4tb2hp5ff2q/abcd?sv=2018-03-28&sr=b&si=4f588db1-9aac-44d9-9607-35497cc08a7f
+BytesCopied : 136367309312
+TotalBytes : 136367309312
+StatusDescription :
+DestinationSnapshotTime :
+```
+
+## Clean up the managed disk
+
+To delete the managed disk you created, follow these steps:
+
+```azurecli
+az disk revoke-access --name $diskName --resource-group $diskRG
+az disk delete --name $diskName --resource-group $diskRG --yes
+```
+The deletion takes a couple of minutes to complete.
+
+## Next steps
+
+[Deploy VMs on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
databox-online Azure Stack Edge Gpu Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-checklist.md
Previously updated : 02/24/2021 Last updated : 06/07/2021 # Deployment checklist for your Azure Stack Edge Pro GPU device
Use the following checklist to ensure you have this information after you have p
|--|-|-|
| Device management | <li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li>|<li>Enabled for Azure Stack Edge Pro/Data Box Gateway, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials</li> |
| Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
-| | <li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li> At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
+| | <li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li> At least 1 X 25/10-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/)<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).|
| First-time device connection | <li>Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. </li><!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>--> | |
| Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. |
| Network settings | Device comes with 2 x 1-GbE, 4 x 25-GbE network ports. <li>Port 1 is used to configure management settings only. One or more data ports can be connected and configured. </li><li> At least one data network interface from among Port 2 - Port 6 needs to be connected to the Internet (with connectivity to Azure).</li><li> DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. |
databox-online Azure Stack Edge Gpu Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-install.md
Previously updated : 12/21/2020 Last updated : 06/04/2021 # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro in datacenter so I can use it to transfer data to Azure.
Before you start cabling your device, you need the following things:
- Your Azure Stack Edge Pro physical device, unpacked, and rack mounted.
- Two power cables.
- At least one 1-GbE RJ-45 network cable to connect to the management interface. There are two 1-GbE network interfaces, one management and one data, on the device.
-- One 25-GbE SFP+ copper cable for each data network interface to be configured. At least one data network interface from among PORT 2, PORT 3, PORT 4, PORT 5, or PORT 6 needs to be connected to the Internet (with connectivity to Azure).
+- One 25/10-GbE SFP+ copper cable for each data network interface to be configured. At least one data network interface from among PORT 2, PORT 3, PORT 4, PORT 5, or PORT 6 needs to be connected to the Internet (with connectivity to Azure).
- Access to two power distribution units (recommended).
- At least one 1-GbE network switch to connect a 1-GbE network interface to the Internet for data. The local web UI will not be accessible if the connected switch is not at least 1 GbE. If using a 25/10-GbE interface for data, you will need a 25-GbE or 10-GbE switch.
On your Azure Stack Edge Pro device:
- **Custom Microsoft `Qlogic` Cavium 25G NDC adapter** - Port 1 through port 4.
- **Mellanox dual port 25G ConnectX-4 channel network adapter** - Port 5 and port 6.
-For a full list of supported cables, switches, and transceivers for these network cards, go to:
+For a full list of supported cables, switches, and transceivers for these network adapter cards, see:
- [`Qlogic` Cavium 25G NDC adapter interoperability matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).
-- [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).
+- 25 GbE and 10 GbE cables and modules in [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).
Take the following steps to cable your device for power and network.
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/getting-started.md
Title: 'Quickstart: Getting started' description: In this quickstart, learn how to get started with understanding the basic workflow for Defender for IoT deployment. Previously updated : 05/10/2021 Last updated : 06/03/2021 # Quickstart: Get started with Defender for IoT
Registration includes:
- Defining committed devices.
- Downloading an activation file for the on-premises management console.
-To register:
+**To register**:
1. Go to the Azure Defender for IoT portal.
After you acquire your on-premises management console appliance:
- Install the software.
- Activate and carry out initial management console setup.
-To install and set up:
+**To install and set up**:
1. Select **Getting Started** from the Defender for IoT portal.

1. Select the **On-premises management console** tab.

1. Choose a version and select **Download**.

1. Install the on-premises management console software. For more information, see [Defender for IoT installation](how-to-install-software.md).

1. Activate and set up the management console. For more information, see [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md).

## Onboard a sensor ##
To install and set up:
Onboard a sensor by registering it with Azure Defender for IoT and downloading a sensor activation file:

1. Define a sensor name and associate it with a subscription.

1. Choose a sensor connection mode:

   - **Cloud connected sensors**: Information that sensors detect is displayed in the sensor console. In addition, alert information is delivered through an IoT hub and can be shared with other Azure services, such as Azure Sentinel. You can also choose to automatically push threat intelligence packages from the Azure Defender for IoT portal to your sensors. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).

   - **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
-1. Download a sensor activation file.
+1. Select a site within an IoT hub to associate your sensor with. The IoT hub will serve as a gateway between this sensor and Azure Defender for IoT. Define the display name and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [Sites and Sensors page](how-to-manage-sensors-on-the-cloud.md#view-onboarded-sensors).
+
+1. Select **Register**.
+
+1. Select **Download activation file**.
For details about onboarding, see [Onboard and manage sensors in the Defender for IoT portal](how-to-manage-sensors-on-the-cloud.md).
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-sensors-on-the-cloud.md
Title: Manage sensors in the Defender for IoT portal description: Learn how to onboard, view, and manage sensors in the Defender for IoT portal. Previously updated : 04/29/2021 Last updated : 06/03/2021
You onboard a sensor by registering it with Azure Defender for IoT and downloadi
- **Cloud-connected sensors**: Information that the sensor detects is displayed in the sensor console. Alert information is delivered through an IoT hub and can be shared with other Azure services, such as Azure Sentinel. In addition, threat intelligence packages can be pushed from the Azure Defender for IoT portal to sensors. Conversely, when the sensor is not cloud connected, you must download threat intelligence packages and then upload them to your enterprise sensors. To allow Defender for IoT to push packages to sensors, enable the **Automatic Threat Intelligence Updates** toggle. For more information, see [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md).
- Choose an IoT hub to serve as a gateway between this sensor and the Azure Defender for IoT portal. Define a site name and zone. You can also add descriptive tags. The site name, zone, and tags are descriptive entries on the [Sites and Sensors page](#view-onboarded-sensors).
+ For cloud connected sensors, the name defined during onboarding is the name that appears in the sensor console. You can't change this name from the console directly. For locally managed sensors, the name applied during onboarding will be stored in Azure but can be updated in the sensor console.
- **Locally managed sensors**: Information that sensors detect is displayed in the sensor console. If you're working in an air-gapped network and want a unified view of all information detected by multiple locally managed sensors, work with the on-premises management console.
- For cloud connected sensors, the name defined during onboarding is the name that appears in the sensor console. You can't change this name from the console directly. For locally managed sensors, the name applied during onboarding will be stored in Azure but can be updated in the sensor console.
+1. Select a site within an IoT hub to associate your sensor with. The IoT hub will serve as a gateway between this sensor and Azure Defender for IoT. Define the display name and zone. You can also add descriptive tags. The display name, zone, and tags are descriptive entries on the [Sites and Sensors page](#view-onboarded-sensors).
+
+1. Select **Register**.
### Download the sensor activation file
-The sensor activation file contains instructions about the management mode of the sensor. You download a unique activation file for each sensor that you deploy. A user who signs in to the sensor console for the first time uploads the activation file to the sensor.
+After registering a sensor, you can download an activation file. The sensor activation file contains instructions about the management mode of the sensor. You download a unique activation file for each sensor that you deploy. A user who signs in to the sensor console for the first time uploads the activation file to the sensor.
**To download an activation file:**
-1. On the **Onboard Sensor** page, select **download activation file**.
+1. On the **Onboard Sensor** page, select **Register**.
+
+1. Select **download activation file**.
1. Make the file accessible to the user who's signing in to the sensor console for the first time.
devtest-labs Enable Managed Identities Lab Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/enable-managed-identities-lab-vms.md
To add a user assigned managed identity for lab VMs, follow these steps:
1. After creating an identity, note the resource ID of the identity. It should look like the following sample:
- `/subscriptions/0000000000-0000-0000-0000-00000000000000/resourceGroups/<RESOURCE GROUP NAME> /providers/Microsoft.ManagedIdentity/userAssignedIdentities/<NAME of USER IDENTITY>`.
-2. Run a PUT HTTPS method to add a new **ServiceRunner** resource to the lab as shown in the following example.
+ `/subscriptions/0000000000-0000-0000-0000-00000000000000/resourceGroups/{rg}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}`.
+
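   If you created the identity with the Azure CLI, one way to capture that resource ID is the following sketch (the identity and resource group names are placeholders):

   ```azurecli
   # Create a user-assigned managed identity and print its resource ID
   az identity create --name {identityName} --resource-group {rg} --query id --output tsv
   ```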
+2. Perform a PUT HTTPS method on the lab resource to add one or multiple user assigned identities to the **managementIdentities** field.
- Service runner resource is a proxy resource to manage and control managed identities in DevTest Labs. The service runner name can be any valid name, but we recommend you use the name of the managed identity resource.
```json
{
- "identity": {
- "type": "userAssigned",
- "userAssignedIdentities": {
- "[userAssignedIdentityResourceId]": {}
- }
- },
"location": "southeastasia", "properties": {
- "identityUsageType": "VirtualMachine"
- }
+ ...
+ "managementIdentities": {
+ "/subscriptions/0000000000-0000-0000-0000-00000000000000/resourceGroups/{rg}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}": {}
+ },
+ ...
+ },
+ ...
}
```
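As a sketch of how such a PUT might be issued from the Azure CLI (the api-version shown is an assumption; substitute the DevTest Labs API version you target, and supply the full lab body in `lab.json`):

```azurecli
az rest --method put \
  --url "https://management.azure.com/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Devtestlab/labs/{labName}?api-version=2018-09-15" \
  --body @lab.json
```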
devtest-labs Use Managed Identities Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/use-managed-identities-environments.md
Last updated 06/26/2020
# Use Azure managed identities to deploy environments in a lab
-As a lab owner, you can use a managed identity to deploy environments in a lab. This feature is helpful in scenarios where the environment contains or has references to Azure resources such as key vaults, shared image galleries, and networks that are external to the environmentΓÇÖs resource group. It enables creation of sandbox environments that aren't limited to the resource group of that environment.
+As a lab owner, you can use a managed identity to deploy environments in a lab. This feature is helpful in scenarios where the environment contains or has references to Azure resources such as key vaults, shared image galleries, and networks that are external to the environment's resource group. It enables creation of sandbox environments that aren't limited to the resource group of that environment.
+
+By default, when you create an environment, the lab creates a system-assigned identity to access Azure resources and services on a lab user's behalf while deploying the Azure Resource Manager template (ARM template). Learn more about [why a lab creates a system-assigned identity](configure-lab-identity.md#scenarios-for-using-labs-system-assigned-identity). For new and existing labs, a system-assigned identity is created by default the first time a lab environment is created.
+
+Note that as a lab owner, you can choose to grant the lab's system-assigned identity permissions to access Azure resources outside the lab, or you can use your own user-assigned identity for the scenario. The lab's system-assigned identity is valid only for the life of the lab. The system-assigned identity is deleted when you delete the lab. When you have environments in multiple labs that need to use an identity, consider using a user-assigned identity.
> [!NOTE]
> Currently, a single user-assigned identity is supported per lab.
To change the user-managed identity assigned to the lab, remove the identity att
1. After creating an identity, note the resource ID of this identity. It should look like the following sample:
- `/subscriptions/0000000000-0000-0000-0000-00000000000000/resourceGroups/<RESOURCE GROUP NAME> /providers/Microsoft.ManagedIdentity/userAssignedIdentities/<NAME of USER IDENTITY>`.
-1. Perform a PUT Https method to add a new `ServiceRunner` resource to the lab similar to the following example. Service runner resource is a proxy resource to manage and control managed identities in DevTest Labs. The service runner name can be any valid name but we recommend that you use the name of the managed identity resource.
-
- ```json
- PUT https://management.azure.com/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Devtestlab/labs/{yourlabname}/serviceRunners/{serviceRunnerName}
+ `/subscriptions/0000000000-0000-0000-0000-00000000000000/resourceGroups/{rg}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}`.
- {
- "location": "{location}",
- "identity":{
- "type": "userAssigned",
- "userAssignedIdentities":{
- "[userAssignedIdentityResourceId]":{}
- }
- }
- "properties":{
- "identityUsageType":"Environment"
- }
-
- }
- ```
-
- Here's an example:
+1. Perform a PUT HTTPS method on the lab resource to add a user-assigned identity or enable a system-assigned identity for the lab.
+ > [!NOTE]
+ > Regardless of whether you create a user-assigned identity, the lab automatically creates a system-assigned identity the first time a lab environment is created. However, if a user-assigned identity is already configured for the lab, the DevTest Lab service continues to use that identity to deploy lab environments.
+
```json
- PUT https://management.azure.com/subscriptions/0000000000-0000-0000-0000-000000000000000/resourceGroups/exampleRG/providers/Microsoft.Devtestlab/labs/mylab/serviceRunners/sampleuseridentity
+
+ PUT https://management.azure.com/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Devtestlab/labs/{labname}
{
- "location": "eastus",
+ "location": "{location}",
+ "properties":ΓÇ»{
+ **lab properties**
+ }
"identity":{
- "type": "userAssigned",
+ "type": "SystemAssigned,UserAssigned",
"userAssignedIdentities":{
- "/subscriptions/0000000000-0000-0000-0000-000000000000000/resourceGroups/exampleRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/sampleuseridentity":{}
+ "/subscriptions/0000000000-0000-0000-0000-00000000000000/resourceGroups/{rg}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}":{}
}
- }
- "properties":{
- "identityUsageType":"Environment"
- }
+ }
}
+
```

Once the user-assigned identity is added to the lab, the Azure DevTest Labs service will use it while deploying Azure Resource Manager environments. For example, if you need your Resource Manager template to access an external shared image gallery image, make sure that the identity you added to the lab has the minimum required permissions for the shared image gallery resource.
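For instance, granting the lab's identity read access to an external shared image gallery might look like the following Azure CLI sketch (the `Reader` role, principal ID, and gallery scope are assumptions; use whatever role and resource your template actually requires):

```azurecli
# Grant the lab's identity access to an external shared image gallery (hypothetical names)
az role assignment create \
  --assignee-object-id {identityPrincipalId} \
  --role "Reader" \
  --scope "/subscriptions/{subId}/resourceGroups/{rg}/providers/Microsoft.Compute/galleries/{galleryName}"
```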
event-grid Advanced Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/edge/advanced-filtering.md
Title: Advanced filtering - Azure Event Grid IoT Edge | Microsoft Docs description: Advanced filtering in Event Grid on IoT Edge.- - Last updated 05/10/2021
event-grid Delivery Output Batching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/edge/delivery-output-batching.md
Title: Output batching in Azure Event Grid IoT Edge | Microsoft Docs description: Output batching in Event Grid on IoT Edge.- - Last updated 05/10/2021
event-grid Twin Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/edge/twin-json.md
Title: Module Twin - Azure Event Grid IoT Edge | Microsoft Docs description: Configuration via Module Twin.- - Last updated 05/10/2021
event-hubs Event Hubs Kafka Connect Debezium https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-kafka-connect-debezium.md
psql -h my-postgres.postgres.database.azure.com -p 5432 -U testuser@my-postgres
**Create a table and insert records**

```sql
-CREATE TABLE todos (id SERIAL, description VARCHAR(50), todo_status VARCHAR(10), PRIMARY KEY(id));
+CREATE TABLE todos (id SERIAL, description VARCHAR(50), todo_status VARCHAR(12), PRIMARY KEY(id));
INSERT INTO todos (description, todo_status) VALUES ('setup postgresql on azure', 'complete');
INSERT INTO todos (description, todo_status) VALUES ('setup kafka connect', 'complete');
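-- A hedged follow-on: with the Debezium connector running, further row changes
-- such as these would produce additional change events (update and delete
-- events for the rows inserted above) on the corresponding topic:
UPDATE todos SET todo_status = 'in-progress' WHERE id = 1;
DELETE FROM todos WHERE id = 2;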
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/davinci-drug-formulary-tutorial.md
Title: Tutorial - DaVinci Drug Formulary - Azure API for FHIR
-description: This tutorial walks through setting up the Azure API for FHIR to pass the Touchstone tests against the DaVinci Drug Formulary implementation guide.
+ Title: Tutorial - Da Vinci Drug Formulary - Azure API for FHIR
+description: This tutorial walks through setting up the Azure API for FHIR to pass the Touchstone tests against the DaVinci Drug Formulary implementation guide.
Previously updated : 06/01/2021 Last updated : 06/07/2021
-# DaVinci Drug Formulary
+# Da Vinci Drug Formulary
-In this tutorial, we'll walk through setting up the Azure API for FHIR to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [DaVinci Payer Data Exchange US Drug Formulary Implementation Guide](http://hl7.org/fhir/us/Davinci-drug-formulary/).
+In this tutorial, we'll walk through setting up the Azure API for FHIR to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange US Drug Formulary Implementation Guide](http://hl7.org/fhir/us/Davinci-drug-formulary/).
## Touchstone capability statement
-The first test that we'll focus on is testing the Azure API for FHIR against the [DaVinci Drug Formulary capability
-statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test without any updates, the test will fail due to missing search parameters and missing profiles.
+The first test that we'll focus on is testing the Azure API for FHIR against the [Da Vinci Drug Formulary capability
+statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test without any updates, the test will fail due to
+missing search parameters and missing profiles.
### Define search parameters
Outside of defining search parameters, the only other update you need to make to
To assist with creation of these search parameters and profiles, we have the [Da Vinci Formulary](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary.http) sample HTTP file on the open-source site that includes all the steps outlined above in a single file. Once you've uploaded all the necessary profiles and search parameters, you can run the capability statement test in Touchstone. You should get a successful run.

## Touchstone query test
-The second test is the [query capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/01-Query&activeOnly=false&contentEntry=TEST_SCRIPTS). This test validates that you can search for specific `CoveragePlan` and `Drug` resources using various parameters. The best path would be to test against resources that you already have in your database, but we also have the [DaVinciFormulary_Sample_Resources](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary_Sample_Resources.http) HTTP file available with sample resources pulled from the examples in the IG that you can use to create the resources and test against.
+The second test is the [query capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/01-Query&activeOnly=false&contentEntry=TEST_SCRIPTS). This test validates that you can search for specific Coverage Plan and Drug resources using various parameters. The best path would be to test against resources that you already have in your database, but we also have the [DaVinciFormulary_Sample_Resources](https://github.com/microsoft/fhir-server/blob/main/docs/rest/DaVinciFormulary/DaVinciFormulary_Sample_Resources.http) HTTP file available with sample resources pulled from the examples in the IG that you can use to create the resources and test against.
## Next steps
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/davinci-pdex-tutorial.md
Title: Tutorial - DaVinci PDex - Azure API for FHIR
+ Title: Tutorial - Da Vinci PDex - Azure API for FHIR
description: This tutorial walks through setting up the Azure API for FHIR to pass tests for the Da Vinci Payer Data Exchange Implementation Guide.
Previously updated : 06/02/2021 Last updated : 06/07/2021
-# DaVinci PDex
+# Da Vinci PDex
In this tutorial, we'll walk through setting up the Azure API for FHIR to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange Implementation Guide](http://hl7.org/fhir/us/davinci-pdex/toc.html) (PDex IG).
The first set of tests that we'll focus on is testing the Azure API for FHIR aga
* The third test validates that the [$patient-everything operation](patient-everything.md) is supported. Right now, this test will fail. The operation will be available in mid-June 2021 in the Azure API for FHIR and is available now in the open-source FHIR server on Cosmos DB. However, it is missing from the capability statement, so this test will fail until we release a fix to bug [1989](https://github.com/microsoft/fhir-server/issues/1989). ## Touchstone $member-match test
The [second test](https://touchstone.aegis.net/touchstone/testdefinitions?select
In this test, you'll need to load some sample data for the test to pass. We have a rest file [here](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/membermatch.http) with the patient and coverage linked that you will need for the test. Once this data is loaded, you'll be able to successfully pass this test. If the data is not loaded, you'll receive a 422 response due to not finding an exact match.

## Touchstone patient by reference

The next tests we'll review are the [patient by reference](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/02-PatientByReference&activeOnly=false&contentEntry=TEST_SCRIPTS) tests. This set of tests validates that you can find a patient based on various search criteria. The best way to test the patient by reference is to test against your own data, but we have uploaded a [sample resource file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/PDex_Sample_Data.http) that you can load to use as well.

## Touchstone patient/$everything test
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/de-identified-export.md
The $export command can also be used to export de-identified data from the FHIR
|Query parameter | Example |Optionality| Description|
|--|--|--|--|
| _\_anonymizationConfig_ |DemoConfig.json|Required for de-identified export |Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/FHIR-Tools-for-Anonymization#configuration-file-format). This file should be kept inside a container named **anonymization** within the same Azure storage account that is configured as the export location. |
-| _\_anonymizationConfigEtag_|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. You can get the Etag using Azure storage explorer from the blob property|
+| _\_anonymizationConfigEtag_|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. You can get the Etag using Azure Storage Explorer from the blob property|
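Putting the parameters together, a de-identified export request might look like the following sketch (the server name and `myContainer` are placeholders; `_container` is assumed here to name the destination container configured for export):

```
GET https://{your-fhir-server}.azurehealthcareapis.com/$export?_container=myContainer&_anonymizationConfig=DemoConfig.json
```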
> [!IMPORTANT]
-> Both raw export as well as de-identified export writes to the same Azure storage account specified as part of export configuration. It is recommended that you use different containers corresponding to different de-identified config and manage user access at the container level.
+> Both raw export as well as de-identified export writes to the same Azure storage account specified as part of export configuration. It is recommended that you use different containers corresponding to different de-identified config and manage user access at the container level.
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/patient-everything.md
The Azure API for FHIR validates that it can find the patient matching the provi
* [Patient resource](https://www.hl7.org/fhir/patient.html)
* Resources that are directly referenced by the Patient resource (except link)
* Resources in the Patient's [compartment](https://www.hl7.org/fhir/compartmentdefinition-patient.html)
-* [Device resources](https://www.hl7.org/fhir/device.html) that reference the Patient resource
+* [Device resources](https://www.hl7.org/fhir/device.html) that reference the Patient resource. Note that this is limited to 100 devices. If the patient has more than 100 devices linked to them, only 100 will be returned.
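For reference, a minimal invocation of the operation looks like the following sketch (the server name and patient ID are placeholders):

```
GET https://{your-fhir-server}.azurehealthcareapis.com/Patient/{patient-id}/$everything
```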
> [!Note]
iot-dps Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/tls-support.md
DPS uses [Transport Layer Security (TLS)](http://wikipedia.org/wiki/Transport_La
Current TLS protocol versions supported by DPS are:

* TLS 1.2
-TLS 1.0 and 1.1 are considered legacy and are planned for deprecation. For more information, see [Deprecating TLS 1.0 and 1.1 for IoT Hub](../iot-hub/iot-hub-tls-deprecating-1-0-and-1-1.md).
-
## Restrict connections to TLS 1.2

For added security, it is advised to configure your DPS instances to *only* allow device client connections that use TLS version 1.2 and to enforce the use of [recommended ciphers](#recommended-ciphers).
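One hedged way to do this at provisioning time is to set the `minTlsVersion` property in the ARM template that creates the DPS instance (a sketch; the api-version, name, location, and SKU values are placeholder assumptions to check against current DPS guidance):

```json
{
    "type": "Microsoft.Devices/provisioningServices",
    "apiVersion": "2020-01-01",
    "name": "my-dps-instance",
    "location": "eastus",
    "sku": { "name": "S1", "capacity": 1 },
    "properties": {
        "minTlsVersion": "1.2"
    }
}
```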
iot-hub Iot Hub Device Streams Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-streams-overview.md
Two sides of each stream (on the device and service side) use the IoT Hub SDK to
* The NodeJS and C# SDK support device streams on the service side.
-## IoT Hub device stream samples
-
-There are two [quickstart samples](./index.yml) available on the IoT Hub page. These demonstrate the use of device streams by applications.
-
-* The *echo* sample demonstrates programmatic use of device streams (by calling the SDK API's directly).
-
-* The *local proxy* sample demonstrates the tunneling of off-the-shelf client/server application traffic (such as SSH, RDP, or web) through device streams.
-
-These samples are covered in greater detail below.
-
-### Echo sample
-
-The echo sample demonstrates programmatic use of device streams to send and receive bytes between service and device applications. Note that you can use service and device programs in different languages. For example, you can use the C device program with the C# service program.
-
-Here are the echo samples:
-
-* [C# service and service program](quickstart-device-streams-echo-csharp.md)
-
-* [Node.js service program](quickstart-device-streams-echo-nodejs.md)
-
-* [C device program](quickstart-device-streams-echo-c.md)
-
-### Local proxy sample (for SSH or RDP)
-
-The local proxy sample demonstrates a way to enable tunneling of an existing application's traffic that involves communication between a client and a server program. This set up works for client/server protocols like SSH and RDP, where the service-side acts as a client (running SSH or RDP client programs), and the device-side acts as the server (running SSH daemon or RDP server programs).
-
-This section describes the use of device streams to enable the user to SSH to a device over device streams (the case for RDP or other client/server application are similar by using the protocol's corresponding port).
-
-The setup leverages two *local proxy* programs shown in the figure below, namely *device-local proxy* and *service-local proxy*. The local proxy programs are responsible for performing the [device stream initiation handshake](#device-stream-creation-flow) with IoT Hub, and interacting with SSH client and SSH daemon using regular client/server sockets.
-
-!["Device stream proxy setup for SSH/RDP"](./media/iot-hub-device-streams-overview/iot-hub-device-streams-ssh.png)
-
-1. The user runs service-local proxy to initiate a device stream to the device.
-
-2. The device-local proxy accepts the stream initiation request and the tunnel is established to IoT Hub's streaming endpoint (as discussed above).
-
-3. The device-local proxy connects to the SSH daemon endpoint listening on port 22 on the device.
-
-4. The service-local proxy listens on a designated port awaiting new SSH connections from the user (port 2222 used in the sample, but this can be configured to any other available port). The user points the SSH client to the service-local proxy port on localhost.
-
-### Notes
-
-* The above steps complete an end-to-end tunnel between the SSH client (on the right) to the SSH daemon (on the left). Part of this end-to-end connectivity involves sending traffic over a device stream to IoT Hub.
-
-* The arrows in the figure indicate the direction in which connections are established between endpoints. Specifically, note that there is no inbound connections going to the device (this is often blocked by a firewall).
-
-* The choice of using port 2222 on the service-local proxy is an arbitrary choice. The proxy can be configured to use any other available port.
-
-* The choice of port 22 is protocol-dependent and specific to SSH in this case. For the case of RDP, the port 3389 must be used. This can be configured in the provided sample programs.
-
-Use the links below for instructions on how to run the local proxy programs in your language of choice. Similar to the [echo sample](#echo-sample), you can run device- and service-local proxy programs in different languages as they are fully interoperable.
-
-* [C# service and service program](quickstart-device-streams-proxy-csharp.md)
-
-* [Node.js service program](quickstart-device-streams-proxy-nodejs.md)
-
-* [C device program](quickstart-device-streams-proxy-c.md)
- ## Next steps Use the links below to learn more about device streams.
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-managed-identity.md
IoT Hub's [file upload](iot-hub-devguide-file-upload.md) feature allows devices
:::image type="content" source="./media/iot-hub-managed-identity/file-upload.png" alt-text="IoT Hub file upload with msi":::
+ > [!NOTE]
+ > In the file upload scenario, both the hub and your device need to connect to your storage account. The steps above are for connecting your IoT hub to your storage account with the desired authentication type. You still need to connect your device to storage using the SAS URI. Please follow the steps in [file upload](iot-hub-devguide-file-upload.md).
+ ### Bulk device import/export IoT Hub supports the functionality to [import/export devices](iot-hub-bulk-identity-mgmt.md)' information in bulk from/to a customer-provided storage blob. This functionality requires connectivity from IoT Hub to the storage account.
iot-hub Quickstart Device Streams Echo C https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-echo-c.md
-- Title: Quickstart - Communicate to device app in C with Azure IoT Hub device streams
-description: In this quickstart, you run a C device-side application that communicates with an IoT device via a device stream.
----- Previously updated : 08/20/2019----
-# Quickstart: Communicate to a device application in C via IoT Hub device streams (preview)
--
-Azure IoT Hub currently supports device streams as a [preview feature](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-[IoT Hub device streams](iot-hub-device-streams-overview.md) allow service and device applications to communicate in a secure and firewall-friendly manner. During public preview, the C SDK supports device streams on the device side only. As a result, this quickstart covers instructions to run only the device-side application. To run a corresponding service-side application, see these articles:
-
-* [Communicate to device apps in C# via IoT Hub device streams](./quickstart-device-streams-echo-csharp.md)
-
-* [Communicate to device apps in Node.js via IoT Hub device streams](./quickstart-device-streams-echo-nodejs.md)
-
-The device-side C application in this quickstart has the following functionality:
-
-* Establish a device stream to an IoT device.
-
-* Receive data that's sent from the service-side application and echo it back.
-
-The code demonstrates the initiation process of a device stream, as well as how to use it to send and receive data.
--
-## Prerequisites
-
-You need the following prerequisites:
-
-* Install [Visual Studio 2019](https://www.visualstudio.com/vs/) with the **Desktop development with C++** workload enabled.
-
-* Install the latest version of [Git](https://git-scm.com/download/).
---
-The preview of device streams is currently supported only for IoT hubs that are created in the following regions:
-
- * Central US
- * Central US EUAP
- * North Europe
- * Southeast Asia
-
-## Prepare the development environment
-
-For this quickstart, you use the [Azure IoT device SDK for C](iot-hub-device-sdk-c-intro.md). You prepare a development environment used to clone and build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) from GitHub. The SDK on GitHub includes the sample code that's used in this quickstart.
-
- > [!NOTE]
- > Before you begin this procedure, be sure that Visual Studio is installed with the **Desktop development with C++** workload.
-
-1. Install the [CMake build system](https://cmake.org/download/) as described on the download page.
-
-1. Open a command prompt or Git Bash shell. Run the following commands to clone the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository:
-
- ```cmd/sh
- git clone -b public-preview https://github.com/Azure/azure-iot-sdk-c.git
- cd azure-iot-sdk-c
- git submodule update --init
- ```
-
- This operation should take a few minutes.
-
-1. Create a *cmake* subdirectory in the root directory of the git repository, and navigate to that folder. Run the following commands from the *azure-iot-sdk-c* directory:
-
- ```cmd/sh
- mkdir cmake
- cd cmake
- ```
-
-1. Run the following commands from the *cmake* directory to build a version of the SDK that's specific to your development client platform.
-
- * In Linux:
-
- ```bash
- cmake ..
- make -j
- ```
-
- * In Windows, open a [Developer Command Prompt for Visual Studio](/dotnet/framework/tools/developer-command-prompt-for-vs). Run the command for your version of Visual Studio. This quickstart uses Visual Studio 2019. These commands create a Visual Studio solution for the simulated device in the *cmake* directory.
-
- ```cmd
- rem For VS2015
- cmake .. -G "Visual Studio 14 2015"
-
- rem Or for VS2017
- cmake .. -G "Visual Studio 15 2017"
-
- rem Or for VS2019
- cmake .. -G "Visual Studio 16 2019"
-
- rem Then build the project
- cmake --build . -- /m /p:Configuration=Release
- ```
-
-## Create an IoT hub
--
-## Register a device
-
-You must register a device with your IoT hub before it can connect. In this section, you use Azure Cloud Shell with the [IoT Extension](/cli/azure/iot) to register a simulated device.
-
-1. To create the device identity, run the following command in Cloud Shell:
-
- > [!NOTE]
- > * Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
- > * For the name of the device you're registering, it's recommended to use *MyDevice* as shown. If you choose a different name for your device, use that name throughout this article, and update the device name in the sample applications before you run them.
-
- ```azurecli-interactive
- az iot hub device-identity create --hub-name {YourIoTHubName} --device-id MyDevice
- ```
-
-1. To get the *device connection string* for the device that you just registered, run the following command in Cloud Shell:
-
- > [!NOTE]
- > Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
-
- ```azurecli-interactive
- az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyDevice --output table
- ```
-
- Note the returned device connection string for later use in this quickstart. It looks like the following example:
-
- `HostName={YourIoTHubName}.azure-devices.net;DeviceId=MyDevice;SharedAccessKey={YourSharedAccessKey}`
-
-## Communicate between the device and the service via device streams
-
-In this section, you run both the device-side application and the service-side application and communicate between the two.
-
-### Run the device-side application
-
-To run the device-side application, follow these steps:
-
-1. Provide your device credentials by editing the **iothub_client_c2d_streaming_sample.c** source file in the `iothub_client/samples/iothub_client_c2d_streaming_sample` folder and adding your device connection string.
-
- ```C
- /* Paste in your iothub connection string */
- static const char* connectionString = "{DeviceConnectionString}";
- ```
-
-1. Compile the code with the following commands:
-
- ```bash
- # In Linux
- # Go to the sample's folder cmake/iothub_client/samples/iothub_client_c2d_streaming_sample
- make -j
- ```
-
- ```cmd
- rem In Windows
- rem Go to the cmake folder at the root of repo
- cmake --build . -- /m /p:Configuration=Release
- ```
-
-1. Run the compiled program:
-
- ```bash
- # In Linux
- # Go to the sample's folder cmake/iothub_client/samples/iothub_client_c2d_streaming_sample
- ./iothub_client_c2d_streaming_sample
- ```
-
- ```cmd
- rem In Windows
- rem Go to the sample's release folder cmake\iothub_client\samples\iothub_client_c2d_streaming_sample\Release
- iothub_client_c2d_streaming_sample.exe
- ```
-
-### Run the service-side application
-
-As mentioned previously, the IoT Hub C SDK supports device streams on the device side only. To build and run the accompanying service-side application, follow the instructions in one of the following quickstarts:
-
-* [Communicate to a device app in C# via IoT Hub device streams](./quickstart-device-streams-echo-csharp.md)
-
-* [Communicate to a device app in Node.js via IoT Hub device streams](./quickstart-device-streams-echo-nodejs.md)
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you set up an IoT hub, registered a device, established a device stream between a C application on the device and another application on the service side, and used the stream to send data back and forth between the applications.
-
-To learn more about device streams, see:
-
-> [!div class="nextstepaction"]
-> [Device streams overview](./iot-hub-device-streams-overview.md)
iot-hub Quickstart Device Streams Echo Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-echo-csharp.md
- Title: Quickstart - Communicate to device app in C# with Azure IoT Hub device streams
-description: In this quickstart, you run two sample C# applications that communicate via a device stream established through IoT Hub.
----- Previously updated : 03/14/2019---
-# Quickstart: Communicate to a device application in C# via IoT Hub device streams (preview)
--
-Azure IoT Hub currently supports device streams as a [preview feature](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-[IoT Hub device streams](./iot-hub-device-streams-overview.md) allow service and device applications to communicate in a secure and firewall-friendly manner. This quickstart involves two C# applications that take advantage of device streams to send data back and forth (echo).
--
-## Prerequisites
-
-* The preview of device streams is currently supported only for IoT hubs that are created in the following regions:
- * Central US
- * Central US EUAP
- * North Europe
- * Southeast Asia
-
-* The two sample applications that you run in this quickstart are written in C#. You need the .NET Core SDK 2.1.0 or later on your development machine.
-
- Download the [.NET Core SDK for multiple platforms from .NET](https://dotnet.microsoft.com/download).
-
- Verify the current version of C# on your development machine by using the following command:
-
- ```
- dotnet --version
- ```
-
-* [Download the Azure IoT C# samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/master.zip) and extract the ZIP archive. You need it on both the device side and the service side.
---
-## Create an IoT hub
--
-## Register a device
-
-A device must be registered with your IoT hub before it can connect. In this section, you use Azure Cloud Shell to register a simulated device.
-
-1. To create the device identity, run the following command in Cloud Shell:
-
- > [!NOTE]
- > * Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
- > * For the name of the device you're registering, it's recommended to use *MyDevice* as shown. If you choose a different name for your device, use that name throughout this article, and update the device name in the sample applications before you run them.
-
- ```azurecli-interactive
- az iot hub device-identity create --hub-name {YourIoTHubName} --device-id MyDevice
- ```
-
-1. To get the *device connection string* for the device that you just registered, run the following command in Cloud Shell:
-
- > [!NOTE]
- > Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
-
- ```azurecli-interactive
- az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyDevice --output table
- ```
-
- Note the returned device connection string for later use in this quickstart. It looks like the following example:
-
- `HostName={YourIoTHubName}.azure-devices.net;DeviceId=MyDevice;SharedAccessKey={YourSharedAccessKey}`
-
-3. You also need the *service connection string* from your IoT hub to enable the service-side application to connect to your IoT hub and establish a device stream. The following command retrieves this value for your IoT hub:
-
- > [!NOTE]
- > Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
-
- ```azurecli-interactive
- az iot hub show-connection-string --policy-name service --name {YourIoTHubName} --output table
- ```
-
- Note the returned service connection string for later use in this quickstart. It looks like the following example:
-
- `"HostName={YourIoTHubName}.azure-devices.net;SharedAccessKeyName=service;SharedAccessKey={YourSharedAccessKey}"`
-
-## Communicate between the device and the service via device streams
-
-In this section, you run both the device-side application and the service-side application and communicate between the two.
-
-### Run the service-side application
-
-In a local terminal window, navigate to the `iot-hub/Quickstarts/device-streams-echo/service` directory in your unzipped project folder. Keep the following information handy:
-
-| Parameter name | Parameter value |
-|-|--|
-| `ServiceConnectionString` | The service connection string of your IoT hub. |
-| `MyDevice` | The identifier of the device you created earlier. |
-
-Compile and run the code with the following commands:
-
-```
-cd ./iot-hub/Quickstarts/device-streams-echo/service/
-
-# Build the application
-dotnet build
-
-# Run the application
-# In Linux or macOS
-dotnet run "{ServiceConnectionString}" "MyDevice"
-
-# In Windows
-dotnet run {ServiceConnectionString} MyDevice
-```
-The application will wait for the device application to become available.
-
-> [!NOTE]
-> A timeout occurs if the device-side application doesn't respond in time.
-
-### Run the device-side application
-
-In another local terminal window, navigate to the `iot-hub/Quickstarts/device-streams-echo/device` directory in your unzipped project folder. Keep the following information handy:
-
-| Parameter name | Parameter value |
-|-|--|
-| `DeviceConnectionString` | The device connection string of your IoT Hub. |
-
-Compile and run the code with the following commands:
-
-```
-cd ./iot-hub/Quickstarts/device-streams-echo/device/
-
-# Build the application
-dotnet build
-
-# Run the application
-# In Linux or macOS
-dotnet run "{DeviceConnectionString}"
-
-# In Windows
-dotnet run {DeviceConnectionString}
-```
-
-At the end of the last step, the service-side application initiates a stream to your device. After the stream is established, the application sends a string buffer to the service over the stream. In this sample, the service-side application simply echoes back the same data to the device, which demonstrates a successful bidirectional communication between the two applications.
-
-Console output on the device side:
-
-![Console output on the device side](./media/quickstart-device-streams-echo-csharp/device-console-output.png)
-
-Console output on the service side:
-
-![Console output on the service side](./media/quickstart-device-streams-echo-csharp/service-console-output.png)
-
-The traffic being sent over the stream is tunneled through the IoT hub rather than sent directly. The benefits provided are detailed in [Device streams benefits](./iot-hub-device-streams-overview.md#benefits).
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you set up an IoT hub, registered a device, established a device stream between C# applications on the device and service sides, and used the stream to send data back and forth between the applications.
-
-To learn more about device streams, see:
-
-> [!div class="nextstepaction"]
-> [Device streams overview](./iot-hub-device-streams-overview.md)
iot-hub Quickstart Device Streams Echo Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-echo-nodejs.md
- Title: Quickstart - Communicate to device app in Node.js with Azure IoT Hub device streams
-description: In this quickstart, you will run a Node.js service-side application that communicates with an IoT device via a device stream.
----- Previously updated : 03/14/2019---
-# Quickstart: Communicate to a device application in Node.js via IoT Hub device streams (preview)
--
-In this quickstart, you run a service-side application and set up communication between a device and service by using device streams. Azure IoT Hub device streams allow service and device applications to communicate in a secure and firewall-friendly manner. During public preview, the Node.js SDK only supports device streams on the service side. As a result, this quickstart only covers instructions to run the service-side application.
-
-## Prerequisites
-
-* Completion of [Communicate to device apps in C via IoT Hub device streams](./quickstart-device-streams-echo-c.md) or [Communicate to device apps in C# via IoT Hub device streams](./quickstart-device-streams-echo-csharp.md).
-
-* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-
-* [Node.js 10+](https://nodejs.org).
-
- You can verify the current version of Node.js on your development machine using the following command:
-
- ```cmd/sh
- node --version
- ```
-
-* [A sample Node.js project](https://github.com/Azure-Samples/azure-iot-samples-node/archive/streams-preview.zip).
---
-Microsoft Azure IoT Hub currently supports device streams as a [preview feature](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-> [!IMPORTANT]
-> The preview of device streams is currently only supported for IoT Hubs created in the following regions:
->
-> * Central US
-> * Central US EUAP
-> * North Europe
-> * Southeast Asia
-
-## Create an IoT hub
-
-If you completed the previous [Quickstart: Send telemetry from a device to an IoT hub](quickstart-send-telemetry-node.md), you can skip this step.
--
-## Register a device
-
-If you completed the previous [Quickstart: Send telemetry from a device to an IoT hub](quickstart-send-telemetry-node.md), you can skip this step.
-
-A device must be registered with your IoT hub before it can connect. In this quickstart, you use the Azure Cloud Shell to register a simulated device.
-
-1. Run the following command in Azure Cloud Shell to create the device identity.
-
- **YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub.
-
- **MyDevice**: This is the name for the device you're registering. It's recommended to use **MyDevice** as shown. If you choose a different name for your device, you also need to use that name throughout this article, and update the device name in the sample applications before you run them.
-
- ```azurecli-interactive
- az iot hub device-identity create --hub-name {YourIoTHubName} --device-id MyDevice
- ```
-
-2. You also need a *service connection string* to enable the back-end application to connect to your IoT hub and retrieve the messages. The following command retrieves the service connection string for your IoT hub:
-
- **YourIoTHubName**: Replace this placeholder below with the name you chose for your IoT hub.
-
- ```azurecli-interactive
- az iot hub connection-string show --policy-name service --name {YourIoTHubName} --output table
- ```
-
- Note the returned service connection string for later use in this quickstart. It looks like the following example:
-
- `"HostName={YourIoTHubName}.azure-devices.net;SharedAccessKeyName=service;SharedAccessKey={YourSharedAccessKey}"`
-
-## Communicate between device and service via device streams
-
-In this section, you run both the device-side application and the service-side application and communicate between the two.
-
-### Run the device-side application
-
-As mentioned earlier, IoT Hub Node.js SDK only supports device streams on the service side. For a device-side application, use one of the accompanying device programs available in these quickstarts:
-
-* [Communicate to device apps in C via IoT Hub device streams](./quickstart-device-streams-echo-c.md)
-
-* [Communicate to device apps in C# via IoT Hub device streams](./quickstart-device-streams-echo-csharp.md)
-
-Ensure the device-side application is running before proceeding to the next step.
-
-### Run the service-side application
-
-The service-side Node.js application in this quickstart has the following functionalities:
-
-* Creates a device stream to an IoT device.
-* Reads input from command line and sends it to the device application, which will echo it back.
-
-The code will demonstrate the initiation process of a device stream, as well as how to use it to send and receive data.
-
-Assuming the device-side application is running, follow the steps below in a local terminal window to run the service-side application in Node.js:
-
-* Provide your service credentials and device ID as environment variables.
-
- ```cmd/sh
- # In Linux
- export IOTHUB_CONNECTION_STRING="{ServiceConnectionString}"
- export STREAMING_TARGET_DEVICE="MyDevice"
-
- # In Windows
- SET IOTHUB_CONNECTION_STRING={ServiceConnectionString}
- SET STREAMING_TARGET_DEVICE=MyDevice
- ```
-
- Change the ServiceConnectionString placeholder to match your service connection string, and **MyDevice** to match your device ID if you gave yours a different name.
-
-* Navigate to `Quickstarts/device-streams-service` in your unzipped project folder and run the sample using node.
-
- ```cmd/sh
- cd azure-iot-samples-node-streams-preview/iot-hub/Quickstarts/device-streams-service
-
- # Install the preview service SDK, and other dependencies
- npm install azure-iothub@streams-preview
- npm install
-
- node echo.js
- ```
-
-At the end of the last step, the service-side program will initiate a stream to your device and once established will send a string buffer to the service over the stream. In this sample, the service-side program simply reads the `stdin` on the terminal and sends it to the device, which will then echo it back. This demonstrates successful bidirectional communication between the two applications.
-
-![Service-side console output](./media/quickstart-device-streams-echo-nodejs/service-console-output.png)
-
-You can then terminate the program by pressing enter again.
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you set up an IoT hub, registered a device, established a device stream between applications on the device and service side, and used the stream to send data back and forth between the applications.
-
-Use the links below to learn more about device streams:
-
-> [!div class="nextstepaction"]
-> [Device streams overview](./iot-hub-device-streams-overview.md)
iot-hub Quickstart Device Streams Proxy C https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-c.md
- Title: 'Quickstart - Azure IoT Hub device streams C quickstart for SSH and RDP'
-description: In this quickstart, you run a sample C application that acts as a proxy to enable SSH and RDP scenarios over IoT Hub device streams.
----- Previously updated : 03/14/2019---
-# Quickstart: Enable SSH and RDP over an IoT Hub device stream by using a C proxy application (preview)
--
-Azure IoT Hub currently supports device streams as a [preview feature](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-[IoT Hub device streams](./iot-hub-device-streams-overview.md) allow service and device applications to communicate in a secure and firewall-friendly manner. For an overview of the setup, see [the Local Proxy Sample page](./iot-hub-device-streams-overview.md#local-proxy-sample-for-ssh-or-rdp).
-
-This quickstart describes the setup for tunneling Secure Shell (SSH) traffic (using port 22) through device streams. The setup for Remote Desktop Protocol (RDP) traffic is similar and requires a simple configuration change. Because device streams are application- and protocol-agnostic, you can modify this quickstart to accommodate other types of application traffic.
-
-## Prerequisites
-
-* The preview of device streams is currently supported only for IoT hubs that are created in the following regions:
-
- * Central US
- * Central US EUAP
- * North Europe
- * Southeast Asia
-
-* Install [Visual Studio 2019](https://www.visualstudio.com/vs/) with the [Desktop development with C++](https://www.visualstudio.com/vs/support/selecting-workloads-visual-studio-2017/) workload enabled.
-* Install the latest version of [Git](https://git-scm.com/download/).
---
-## How it works
-
-The following figure illustrates how the device- and service-local proxy programs enable end-to-end connectivity between the SSH client and SSH daemon processes. During public preview, the C SDK supports device streams on the device side only. As a result, this quickstart covers instructions to run only the device-local proxy application. To build and run the accompanying service-side application, follow the instructions in one of the following quickstarts:
-
-* [SSH/RDP over IoT Hub device streams using C# proxy](./quickstart-device-streams-proxy-csharp.md)
-* [SSH/RDP over IoT Hub device streams using NodeJS proxy](./quickstart-device-streams-proxy-nodejs.md).
-
-![Local proxy setup](./media/quickstart-device-streams-proxy-c/device-stream-proxy-diagram.png)
-
-1. The service-local proxy connects to the IoT hub and starts a device stream to the target device.
-
-2. The device-local proxy completes the stream initiation handshake and establishes an end-to-end streaming tunnel through the IoT hub's streaming endpoint to the service side.
-
-3. The device-local proxy connects to the SSH daemon that's listening on port 22 on the device. This setting is configurable, as described in the "Run the device-local proxy application" section.
-
-4. The service-local proxy waits for new SSH connections from a user by listening on a designated port, which in this case is port 2222. This setting is configurable, as described in the "Run the service-local proxy application" section. When the user connects via SSH client, the tunnel enables SSH application traffic to be transferred between the SSH client and server programs.
-
-> [!NOTE]
-> SSH traffic that's sent over a device stream is tunneled through the IoT hub's streaming endpoint rather than sent directly between service and device. For more information, see the [benefits of using IoT Hub device streams](iot-hub-device-streams-overview.md#benefits). Furthermore, the figure illustrates the SSH daemon that's running on the same device (or machine) as the device-local proxy. In this quickstart, you can provide the SSH daemon's IP address to allow the device-local proxy and the daemon to run on different machines as well.
--
-## Prepare the development environment
-
-For this quickstart, you use the [Azure IoT device SDK for C](iot-hub-device-sdk-c-intro.md). You prepare a development environment used to clone and build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) from GitHub. The SDK on GitHub includes the sample code that's used in this quickstart.
-
-1. Download the [CMake build system](https://cmake.org/download/).
-
- It's important that the Visual Studio prerequisites (Visual Studio and the *Desktop development with C++* workload) are installed on your machine, *before* you start the CMake installation. After the prerequisites are in place and the download is verified, you can install the CMake build system.
-
-1. Open a command prompt or Git Bash shell. Run the following commands to clone the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository:
-
- ```cmd/sh
- git clone -b public-preview https://github.com/Azure/azure-iot-sdk-c.git
- cd azure-iot-sdk-c
- git submodule update --init
- ```
-
- This operation should take a few minutes.
-
-1. Create a *cmake* subdirectory in the root directory of the git repository, and navigate to that folder. Run the following commands from the *azure-iot-sdk-c* directory:
-
- ```cmd/sh
- mkdir cmake
- cd cmake
- ```
-
-1. Run the following commands from the *cmake* directory to build a version of the SDK that's specific to your development client platform.
-
- * In Linux:
-
- ```bash
- cmake ..
- make -j
- ```
-
- * In Windows, run the following commands in Developer Command Prompt for Visual Studio 2015, 2017, or 2019. A Visual Studio solution for the simulated device will be generated in the *cmake* directory.
-
- ```cmd
- rem For VS2015
- cmake .. -G "Visual Studio 14 2015"
-
- rem Or for VS2017
- cmake .. -G "Visual Studio 15 2017"
-
- rem Or for VS2019
- cmake .. -G "Visual Studio 16 2019"
-
- rem Then build the project
- cmake --build . -- /m /p:Configuration=Release
- ```
-
-## Create an IoT hub
--
-## Register a device
-
-A device must be registered with your IoT hub before it can connect. In this section, you use Azure Cloud Shell with the [IoT extension](/cli/azure/iot) to register a simulated device.
-
-1. To create the device identity, run the following command in Cloud Shell:
-
- > [!NOTE]
- > * Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
- > * For the name of the device you're registering, it's recommended to use *MyDevice* as shown. If you choose a different name for your device, use that name throughout this article, and update the device name in the sample applications before you run them.
-
- ```azurecli-interactive
- az iot hub device-identity create --hub-name {YourIoTHubName} --device-id MyDevice
- ```
-
-1. To get the *device connection string* for the device that you just registered, run the following commands in Cloud Shell:
-
- > [!NOTE]
- > Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
-
- ```azurecli-interactive
- az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyDevice --output table
- ```
-
- Note the returned device connection string for later use in this quickstart. It looks like the following example:
-
- `HostName={YourIoTHubName}.azure-devices.net;DeviceId=MyDevice;SharedAccessKey={YourSharedAccessKey}`
-
-## SSH to a device via device streams
-
-In this section, you establish an end-to-end stream to tunnel SSH traffic.
-
-### Run the device-local proxy application
-
-1. Edit the source file **iothub_client_c2d_streaming_proxy_sample.c** in the folder `iothub_client/samples/iothub_client_c2d_streaming_proxy_sample`, and provide your device connection string, target device IP/hostname, and the SSH port 22:
-
- ```C
- /* Paste in your device connection string */
- static const char* connectionString = "{DeviceConnectionString}";
- static const char* localHost = "{IP/Host of your target machine}"; // Address of the local server to connect to.
- static const size_t localPort = 22; // Port of the local server to connect to.
- ```
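-
- To tunnel RDP instead of SSH, only the target port changes. Here's a minimal sketch of the same edit for an RDP setup (an illustration that assumes the RDP server listens on its default port 3389 on the same machine):
-
- ```C
- /* Hypothetical RDP variant of the same sample settings */
- static const char* connectionString = "{DeviceConnectionString}";
- static const char* localHost = "localhost"; // RDP server runs on the same machine as this proxy.
- static const size_t localPort = 3389;       // Default RDP port instead of SSH port 22.
- ```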
-
-1. Compile the sample:
-
- ```bash
- # In Linux
- # Go to the sample's folder cmake/iothub_client/samples/iothub_client_c2d_streaming_proxy_sample
- make -j
- ```
-
- ```cmd
- rem In Windows
- rem Go to cmake at root of repository
- cmake --build . -- /m /p:Configuration=Release
- ```
-
-1. Run the compiled program on the device:
-
- ```bash
- # In Linux
- # Go to the sample's folder cmake/iothub_client/samples/iothub_client_c2d_streaming_proxy_sample
- ./iothub_client_c2d_streaming_proxy_sample
- ```
-
- ```cmd
- rem In Windows
- rem Go to the sample's release folder cmake\iothub_client\samples\iothub_client_c2d_streaming_proxy_sample\Release
- iothub_client_c2d_streaming_proxy_sample.exe
- ```
-
-### Run the service-local proxy application
-
-As discussed in the "How it works" section, establishing an end-to-end stream to tunnel SSH traffic requires a local proxy at each end (on both the service and the device sides). During public preview, the IoT Hub C SDK supports device streams on the device side only. To build and run the service-local proxy, follow the instructions in one of the following quickstarts:
-
- * [SSH/RDP over IoT Hub device streams using C# proxy apps](./quickstart-device-streams-proxy-csharp.md)
- * [SSH/RDP over IoT Hub device streams using Node.js proxy apps](./quickstart-device-streams-proxy-nodejs.md)
-
-### Establish an SSH session
-
-After both the device- and service-local proxies are running, use your SSH client program and connect to the service-local proxy on port 2222 (instead of the SSH daemon directly).
-
-```cmd/sh
-ssh {username}@localhost -p 2222
-```
-
-At this point, the SSH sign-in window prompts you to enter your credentials.
-
-The following image shows the console output on the device-local proxy, which connects to the SSH daemon at `IP_address:22`:
-
-![Device-local proxy output](./media/quickstart-device-streams-proxy-c/device-console-output.png)
-
-The following image shows the console output of the SSH client program. The SSH client communicates with the SSH daemon by connecting to port 2222, which the service-local proxy is listening on:
-
-![SSH client output](./media/quickstart-device-streams-proxy-csharp/ssh-console-output.png)
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you set up an IoT hub, registered a device, deployed a device- and a service-local proxy program to establish a device stream through IoT Hub, and used the proxies to tunnel SSH traffic.
-
-To learn more about device streams, see:
-
-> [!div class="nextstepaction"]
-> [Device streams overview](./iot-hub-device-streams-overview.md)
iot-hub Quickstart Device Streams Proxy Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-csharp.md
- Title: Quickstart - Azure IoT Hub device streams C# quickstart for SSH and RDP
-description: In this quickstart, you run two sample C# applications that enable SSH and RDP scenarios over an IoT Hub device stream.
----- Previously updated : 03/14/2019---
-# Quickstart: Enable SSH and RDP over an IoT Hub device stream by using a C# proxy application (preview)
--
-Microsoft Azure IoT Hub currently supports device streams as a [preview feature](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-[IoT Hub device streams](iot-hub-device-streams-overview.md) allow service and device applications to communicate in a secure and firewall-friendly manner. This quickstart guide involves two C# applications that enable client-server application traffic (such as Secure Shell [SSH] and Remote Desktop Protocol [RDP]) to be sent over a device stream that's established through an IoT hub. For an overview of the setup, see [Local proxy application sample for SSH or RDP](iot-hub-device-streams-overview.md#local-proxy-sample-for-ssh-or-rdp).
-
-This article first describes the setup for SSH (using port 22) and then describes how to modify the setup's port for RDP. Because device streams are application- and protocol-agnostic, the same sample can be modified to accommodate other types of application traffic. This modification usually involves only changing the communication port to the one that's used by the intended application.
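-
-For example, a hypothetical variant that tunnels a local web server instead of SSH would pass port 80 to the device-local proxy application, using the same argument order that's shown later in this article (the port value here is only an illustration):
-
-```
-dotnet run {DeviceConnectionString} localhost 80
-```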
-
-## Prerequisites
-
-* The preview of device streams is currently supported only for IoT hubs that are created in the following regions:
-
- * Central US
- * Central US EUAP
- * Southeast Asia
- * North Europe
-
-* The two sample applications that you run in this quickstart are written in C#. You need the .NET Core SDK 2.1.0 or later on your development machine.
-
- You can download the [.NET Core SDK for multiple platforms from .NET](https://dotnet.microsoft.com/download).
-
- Verify the current version of the .NET Core SDK on your development machine by using the following command:
-
- ```
- dotnet --version
- ```
-
-* [Download the Azure IoT C# samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/master.zip), and extract the ZIP archive.
-
-* A valid user account and credentials on the device (Windows or Linux) to authenticate the user.
---
-## How it works
-
-The following figure illustrates how the device-local and service-local proxy applications in this sample enable end-to-end connectivity between the SSH client and SSH daemon processes. Here, we assume that the daemon is running on the same device as the device-local proxy application.
-
-![Local proxy application setup](./media/quickstart-device-streams-proxy-csharp/device-stream-proxy-diagram.png)
-
-1. The service-local proxy application connects to the IoT hub and initiates a device stream to the target device.
-
-1. The device-local proxy application completes the stream initiation handshake and establishes an end-to-end streaming tunnel through the IoT hub's streaming endpoint to the service side.
-
-1. The device-local proxy application connects to the SSH daemon that's listening on port 22 on the device. This setting is configurable, as described in the "Run the device-local proxy application" section.
-
-1. The service-local proxy application waits for new SSH connections from a user by listening on a designated port, which in this case is port 2222. This setting is configurable, as described in the "Run the service-local proxy application" section. When the user connects via the SSH client, the tunnel enables SSH application traffic to be transferred between the SSH client and server application.
-
-> [!NOTE]
-> SSH traffic that's sent over a device stream is tunneled through the IoT hub's streaming endpoint rather than sent directly between service and device. For more information, see the [benefits of using IoT Hub device streams](iot-hub-device-streams-overview.md#benefits).
--
-## Create an IoT hub
--
-## Register a device
-
-A device must be registered with your IoT hub before it can connect. In this quickstart, you use Azure Cloud Shell to register a simulated device.
-
-1. To create the device identity, run the following command in Cloud Shell:
-
- > [!NOTE]
- > * Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
- > * For the name of the device you're registering, it's recommended to use *MyDevice* as shown. If you choose a different name for your device, use that name throughout this article, and update the device name in the sample applications before you run them.
-
- ```azurecli-interactive
- az iot hub device-identity create --hub-name {YourIoTHubName} --device-id MyDevice
- ```
-
-1. To get the *device connection string* for the device that you just registered, run the following commands in Cloud Shell:
-
- > [!NOTE]
- > Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
-
- ```azurecli-interactive
- az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id MyDevice --output table
- ```
-
- Note the returned device connection string for later use in this quickstart. It looks like the following example:
-
- `HostName={YourIoTHubName}.azure-devices.net;DeviceId=MyDevice;SharedAccessKey={YourSharedAccessKey}`
-
-1. To connect to your IoT hub and establish a device stream, you also need the *service connection string* from your IoT hub to enable the service-side application. The following command retrieves this value for your IoT hub:
-
- > [!NOTE]
- > Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
-
- ```azurecli-interactive
- az iot hub show-connection-string --policy-name service --name {YourIoTHubName} --output table
- ```
-
- Note the returned service connection string for later use in this quickstart. It looks like the following example:
-
- `"HostName={YourIoTHubName}.azure-devices.net;SharedAccessKeyName=service;SharedAccessKey={YourSharedAccessKey}"`
-
-## SSH to a device via device streams
-
-In this section, you establish an end-to-end stream to tunnel SSH traffic.
-
-### Run the device-local proxy application
-
-In a local terminal window, navigate to the `device-streams-proxy/device` directory in your unzipped project folder. Keep the following information handy:
-
-| Argument name | Argument value |
-|-|--|
-| `DeviceConnectionString` | The device connection string of the device that you created earlier. |
-| `targetServiceHostName` | The IP address where the SSH server listens. Use `localhost` if the SSH server runs on the same machine as the device-local proxy application. |
-| `targetServicePort` | The port that's used by your application protocol (for SSH, by default, this would be port 22). |
-
-Compile and run the code with the following commands:
-
-```
-cd ./iot-hub/Quickstarts/device-streams-proxy/device/
-
-# Build the application
-dotnet build
-
-# Run the application
-# In Linux or macOS
-dotnet run ${DeviceConnectionString} localhost 22
-
-# In Windows
-dotnet run {DeviceConnectionString} localhost 22
-```
-
-### Run the service-local proxy application
-
-In another local terminal window, navigate to `iot-hub/Quickstarts/device-streams-proxy/service` in your unzipped project folder. Keep the following information handy:
-
-| Parameter name | Parameter value |
-|-|--|
-| `ServiceConnectionString` | The service connection string of your IoT Hub. |
-| `MyDevice` | The identifier of the device you created earlier. |
-| `localPortNumber` | A local port that your SSH client will connect to. We use port 2222 in this sample, but you can use any other available port. |
-
-Compile and run the code with the following commands:
-
-```
-cd ./iot-hub/Quickstarts/device-streams-proxy/service/
-
-# Build the application
-dotnet build
-
-# Run the application
-# In Linux or macOS
-dotnet run ${ServiceConnectionString} MyDevice 2222
-
-# In Windows
-dotnet run {ServiceConnectionString} MyDevice 2222
-```
-
-### Run the SSH client
-
-Now use your SSH client application and connect to the service-local proxy application on port 2222 (instead of the SSH daemon directly).
-
-```
-ssh {username}@localhost -p 2222
-```
-
-At this point, the SSH sign-in window prompts you to enter your credentials.
-
-Console output on the service side (the service-local proxy application listens on port 2222):
-
-![Service-local proxy application output](./media/quickstart-device-streams-proxy-csharp/service-console-output.png)
-
-Console output on the device-local proxy application, which connects to the SSH daemon at *IP_address:22*:
-
-![Device-local proxy application output](./media/quickstart-device-streams-proxy-csharp/device-console-output.png)
-
-Console output of the SSH client application. The SSH client communicates with the SSH daemon by connecting to port 2222, which the service-local proxy application is listening on:
-
-![SSH client application output](./media/quickstart-device-streams-proxy-csharp/ssh-console-output.png)
-
-## RDP to a device via device streams
-
-The setup for RDP is similar to the setup for SSH (described above). You use the RDP destination IP and port 3389 instead and use the RDP client (instead of the SSH client).
-
-### Run the device-local proxy application (RDP)
-
-In a local terminal window, navigate to the `device-streams-proxy/device` directory in your unzipped project folder. Keep the following information handy:
-
-| Argument name | Argument value |
-|-|--|
-| `DeviceConnectionString` | The device connection string of the device that you created earlier. |
-| `targetServiceHostName` | The hostname or IP address where the RDP server runs. Use `localhost` if the RDP server runs on the same machine as the device-local proxy application. |
-| `targetServicePort` | The port used by your application protocol (for RDP, by default, this would be port 3389). |
-
-Compile and run the code with the following commands:
-
-```
-cd ./iot-hub/Quickstarts/device-streams-proxy/device
-
-# Run the application
-# In Linux or macOS
-dotnet run ${DeviceConnectionString} localhost 3389
-
-# In Windows
-dotnet run {DeviceConnectionString} localhost 3389
-```
-
-### Run the service-local proxy application (RDP)
-
-In another local terminal window, navigate to `device-streams-proxy/service` in your unzipped project folder. Keep the following information handy:
-
-| Parameter name | Parameter value |
-|-|--|
-| `ServiceConnectionString` | The service connection string of your IoT Hub. |
-| `MyDevice` | The identifier of the device you created earlier. |
-| `localPortNumber` | A local port that your RDP client will connect to. We use port 2222 in this sample, but you can use any other available port. |
-
-Compile and run the code with the following commands:
-
-```
-cd ./iot-hub/Quickstarts/device-streams-proxy/service/
-
-# Build the application
-dotnet build
-
-# Run the application
-# In Linux or macOS
-dotnet run ${ServiceConnectionString} MyDevice 2222
-
-# In Windows
-dotnet run {ServiceConnectionString} MyDevice 2222
-```
-
-### Run RDP client
-
-Now use your RDP client application and connect to the service-local proxy application on port 2222 (this was an arbitrary available port that you chose earlier).
-
-![RDP connects to the service-local proxy application](./media/quickstart-device-streams-proxy-csharp/rdp-screen-capture.png)
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you set up an IoT hub, registered a device, deployed device-local and service-local proxy applications to establish a device stream through the IoT hub, and used the proxy applications to tunnel SSH or RDP traffic. The same paradigm can accommodate other client-server protocols, where the server runs on the device (for example, the SSH daemon).
-
-To learn more about device streams, see:
-
-> [!div class="nextstepaction"]
-> [Device streams overview](./iot-hub-device-streams-overview.md)
iot-hub Quickstart Device Streams Proxy Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/quickstart-device-streams-proxy-nodejs.md
- Title: Quickstart - Azure IoT Hub device streams Node.js quickstart for SSH and RDP
-description: In this quickstart, you run a sample Node.js application that acts as a proxy to enable SSH and RDP scenarios over IoT Hub device streams.
----- Previously updated : 03/14/2019---
-# Quickstart: Enable SSH and RDP over an IoT Hub device stream by using a Node.js proxy application (preview)
--
-In this quickstart, you enable Secure Shell (SSH) and Remote Desktop Protocol (RDP) traffic to be sent to the device over a device stream. Azure IoT Hub device streams allow service and device applications to communicate in a secure and firewall-friendly manner. This quickstart describes the execution of a Node.js proxy application that's running on the service side. During public preview, the Node.js SDK supports device streams on the service side only. As a result, this quickstart covers instructions to run only the service-local proxy application.
-
-## Prerequisites
-
-* Completion of [Enable SSH and RDP over IoT Hub device streams by using a C proxy application](./quickstart-device-streams-proxy-c.md) or [Enable SSH and RDP over IoT Hub device streams by using a C# proxy application](./quickstart-device-streams-proxy-csharp.md).
-
-* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-
-* [Node.js 10+](https://nodejs.org).
-
- You can verify the current version of Node.js on your development machine by using the following command:
-
- ```cmd/sh
- node --version
- ```
-
-* [A sample Node.js project](https://github.com/Azure-Samples/azure-iot-samples-node/archive/streams-preview.zip).
--
-Microsoft Azure IoT Hub currently supports device streams as a [preview feature](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-> [!IMPORTANT]
-> The preview of device streams is currently only supported for IoT Hubs created in the following regions:
->
-> * Central US
-> * Central US EUAP
-> * North Europe
-> * Southeast Asia
-
-### Add Azure IoT Extension
-
-Add the Azure IoT Extension for Azure CLI to your Cloud Shell instance by running the following command. The IoT Extension adds IoT Hub, IoT Edge, and IoT Device Provisioning Service (DPS)-specific commands to the Azure CLI.
-
-```azurecli-interactive
-az extension add --name azure-iot
-```
--
-## Create an IoT hub
-
-If you completed the previous [Quickstart: Send telemetry from a device to an IoT hub](quickstart-send-telemetry-node.md), you can skip this step.
--
-## Register a device
-
-If you completed [Quickstart: Send telemetry from a device to an IoT hub](quickstart-send-telemetry-node.md), you can skip this step.
-
-A device must be registered with your IoT hub before it can connect. In this section, you use Azure Cloud Shell to register a simulated device.
-
-1. To create the device identity, run the following command in Cloud Shell:
-
- > [!NOTE]
- > * Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
- > * For the name of the device you're registering, it's recommended to use *MyDevice* as shown. If you choose a different name for your device, use that name throughout this article, and update the device name in the sample applications before you run them.
-
- ```azurecli-interactive
- az iot hub device-identity create --hub-name {YourIoTHubName} --device-id MyDevice
- ```
-
-1. To enable the back-end application to connect to your IoT hub and retrieve the messages, you also need a *service connection string*. The following command retrieves the string for your IoT hub:
-
- > [!NOTE]
- > Replace the *YourIoTHubName* placeholder with the name you chose for your IoT hub.
-
- ```azurecli-interactive
- az iot hub connection-string show --policy-name service --hub-name {YourIoTHubName} --output table
- ```
-
- Note the returned service connection string for later use in this quickstart. It looks like the following example:
-
- `"HostName={YourIoTHubName}.azure-devices.net;SharedAccessKeyName=service;SharedAccessKey={YourSharedAccessKey}"`
-
-## SSH to a device via device streams
-
-In this section, you establish an end-to-end stream to tunnel SSH traffic.
-
-### Run the device-local proxy application
-
-As mentioned earlier, the IoT Hub Node.js SDK supports device streams on the service side only. For the device-local application, use a device proxy application that's available in one of the following quickstarts:
-
- * [Enable SSH and RDP over IoT Hub device streams by using a C proxy application](./quickstart-device-streams-proxy-c.md)
- * [Enable SSH and RDP over IoT Hub device streams by using a C# proxy application](./quickstart-device-streams-proxy-csharp.md)
-
-Before you proceed to the next step, ensure that the device-local proxy application is running. For an overview of the setup, see [Local Proxy Sample](./iot-hub-device-streams-overview.md#local-proxy-sample-for-ssh-or-rdp).
-
-### Run the service-local proxy application
-
-This article describes the setup for SSH (by using port 22) and then describes how to modify the setup for RDP (which uses port 3389). Because device streams are application- and protocol-agnostic, you can modify the same sample to accommodate other types of client-server application traffic, usually by modifying the communication port.
-
-With the device-local proxy application running, run the service-local proxy application that's written in Node.js by doing the following in a local terminal window:
-
-1. For environment variables, provide your service credentials, the target device ID where the SSH daemon runs, and the port number that the service-local proxy application listens on.
-
- ```
- # In Linux
- export IOTHUB_CONNECTION_STRING="{ServiceConnectionString}"
- export STREAMING_TARGET_DEVICE="MyDevice"
- export PROXY_PORT=2222
-
- # In Windows
- SET IOTHUB_CONNECTION_STRING={ServiceConnectionString}
- SET STREAMING_TARGET_DEVICE=MyDevice
- SET PROXY_PORT=2222
- ```
-
- Replace the `ServiceConnectionString` placeholder with your service connection string, and replace **MyDevice** with your device ID if you chose a different name.
-
-1. Navigate to the `Quickstarts/device-streams-service` directory in your unzipped project folder. Use the following code to run the service-local proxy application:
-
- ```
- cd azure-iot-samples-node-streams-preview/iot-hub/Quickstarts/device-streams-service
-
- # Install the preview service SDK, and other dependencies
- npm install azure-iothub@streams-preview
- npm install
-
- # Run the service-local proxy application
- node proxy.js
- ```
-
-### SSH to your device via device streams
-
-In Linux, run SSH by using `ssh $USER@localhost -p 2222` on a terminal. In Windows, use your favorite SSH client (for example, PuTTY).
-
-Console output on the service-local proxy application after the SSH session is established (the service-local proxy application listens on port 2222):
-
-![SSH terminal output](./media/quickstart-device-streams-proxy-nodejs/service-console-output.png)
-
-Console output of the SSH client application (the SSH client communicates with the SSH daemon by connecting to port 2222, where the service-local proxy application is listening):
-
-![SSH client output](./media/quickstart-device-streams-proxy-nodejs/ssh-console-output.png)
-
-### RDP to your device via device streams
-
-Now use your RDP client application and connect to the service proxy on port 2222, an arbitrary port that you chose earlier.
-
-> [!NOTE]
-> Ensure that your device proxy is configured correctly for RDP and configured with RDP port 3389.
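-
-For example, if you use the C# device-local proxy application from the companion quickstart, pointing it at the RDP port looks like the following sketch (same argument order as in that quickstart):
-
-```
-dotnet run {DeviceConnectionString} localhost 3389
-```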
-
-![The RDP client connects to the service-local proxy application](./media/quickstart-device-streams-proxy-nodejs/rdp-screen-capture.png)
-
-## Clean up resources
--
-## Next steps
-
-In this quickstart, you set up an IoT hub, registered a device, and deployed a service proxy application to enable RDP and SSH on an IoT device. The RDP and SSH traffic will be tunneled via a device stream through the IoT hub. This process eliminates the need for direct connectivity to the device.
-
-To learn more about device streams, see:
-
-> [!div class="nextstepaction"]
-> [Device streams overview](./iot-hub-device-streams-overview.md)
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/logging.md
The following table lists the **operationName** values and corresponding REST AP
| **CertificateEnroll** |Enroll a certificate |
| **CertificateRenew** |Renew a certificate |
| **CertificatePendingGet** |Retrieve pending certificate |
-| **CertificatePendingMerge** |Pending a certificate merge |
-| **CertificatePendingUpdate** |Pending a certificate update |
+| **CertificatePendingMerge** | The certificate merge is pending |
+| **CertificatePendingUpdate** | The certificate update is pending |
| **CertificatePendingDelete** |Delete pending certificate |
| **CertificateNearExpiryEventGridNotification** |Certificate near expiry event published |
| **CertificateExpiredEventGridNotification** |Certificate expired event published |
For more information, including how to set this up, see [Azure Key Vault in Azur
- [Azure Monitor](../../azure-monitor/index.yml)
- For a tutorial that uses Azure Key Vault in a .NET web application, see [Use Azure Key Vault from a web application](tutorial-net-create-vault-azure-web-app.md).
- For programming references, see [the Azure Key Vault developer's guide](developers-guide.md).
-- For a list of Azure PowerShell 1.0 cmdlets for Azure Key Vault, see [Azure Key Vault cmdlets](/powershell/module/az.keyvault/#key_vault).
+- For a list of Azure PowerShell 1.0 cmdlets for Azure Key Vault, see [Azure Key Vault cmdlets](/powershell/module/az.keyvault/#key_vault).
kinect-dk Capture Device Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/kinect-dk/capture-device-synchronization.md
Title: Capture Azure Kinect device synchronization description: Learn how to synchronize Azure Kinect capture devices using the Azure Kinect Sensor SDK.--++ ms.prod: kinect-dk Last updated 06/26/2019
kinect-dk Record File Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/kinect-dk/record-file-format.md
Title: Use Azure Kinect Sensor SDK to record file format description: Understand how to use the Azure Kinect Sensor SDK recorded file format.--++ ms.prod: kinect-dk Last updated 06/26/2019
kinect-dk Record Playback Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/kinect-dk/record-playback-api.md
Title: Azure Kinect playback API description: Learn how to use the Azure Kinect Sensor SDK to open a recording file using the playback API.--++ ms.prod: kinect-dk Last updated 06/26/2019
logic-apps Azure Arc Enabled Logic Apps Create Deploy Workflows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/azure-arc-enabled-logic-apps-create-deploy-workflows.md
ms.suite: integration Previously updated : 05/25/2021 Last updated : 06/03/2021 ## Customer intent: As a developer, I want to learn how to create and deploy automated Logic Apps workflows that can run anywhere that Kubernetes can run.
Based on whether you want to use Azure CLI, Visual Studio Code, or the Azure por
Before you start, you need to have the following items:

-- The [Azure CLI installed](/cli/azure/install-azure-cli) on your local computer.
-- An [Azure resource group](#create-resource-group) where to create your logic app.
+- The latest Azure CLI extension installed on your local computer.
-Check your environment before you begin:
+ - If you don't have this extension, review the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
+
+ - If you're not sure that you have the latest version, follow the [steps to check your environment and CLI version](#check-environment-cli-version).
+
+- The *preview* Azure Logic Apps (Standard) extension for Azure CLI.
+
+ Although single-tenant Azure Logic Apps is generally available, the Azure Logic Apps extension is still in preview.
+
+- An [Azure resource group](#create-resource-group) in which to create your logic app.
+
+ If you don't have this resource group, follow the [steps to create the resource group](#create-resource-group).
+
+- An Azure storage account to use with your logic app for data and run history retention.
+
+ If you don't have this storage account, you can create this account when you create your logic app, or you can follow the [steps to create a storage account](/cli/azure/storage/account#az_storage_account_create).
+
+<a name="check-environment-cli-version"></a>
+
+#### Check environment and CLI version
1. Sign in to the Azure portal. Check that your subscription is active by running the following command:
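   One way to confirm the active subscription (a minimal sketch, assuming the standard Azure CLI command):

   ```azurecli-interactive
   az account show --output table
   ```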
Check your environment before you begin:
1. If you don't have the latest version, update your installation by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
-#### Install Logic Apps extension
+<a name="install-logic-apps-cli-extension"></a>
-Install the preview version of the Logic Apps extension for the Azure CLI:
+##### Install Azure Logic Apps (Standard) extension for Azure CLI
-```azurecli
+Install the *preview* single-tenant Azure Logic Apps (Standard) extension for Azure CLI by running the following command:
+
+```azurecli-interactive
az extension add --yes --source "https://aka.ms/logicapp-latest-py2.py3-none-any.whl" ```
-#### Create resource group
+<a name="create-resource-group"></a>
-If you don't already have a resource group for your logic app, create the group with the command `az group create`. Make sure to use the `--subscription` parameter with your subscription name or identifier. For example, the following command creates a resource group named `MyResourceGroupName` in the location `eastus`:
+#### Create resource group
-```azurecli
-az group create --name MyResourceGroupName --location eastus --subscription MySubscription
-```
+If you don't already have a resource group for your logic app, create the group by running the command, `az group create`. Unless you already set a default subscription for your Azure account, make sure to use the `--subscription` parameter with your subscription name or identifier.
> [!TIP]
-> You don't have to use the `--subscription` parameter if you've set a default subscription for your Azure account.
> To set a default subscription, run the following command, and replace `MySubscription` with your subscription name or identifier.
+>
> `az account set --subscription MySubscription`
-The output shows the `provisioningState` as `Succeeded` when your resource group is successfully created:
+For example, the following command creates a resource group named `MyResourceGroupName` using the Azure subscription named `MySubscription` in the location `eastus`:
+
+```azurecli
+az group create --name MyResourceGroupName
+ --subscription MySubscription
+ --location eastus
+```
+
+If your resource group is successfully created, the output shows the `provisioningState` as `Succeeded`:
```output <...>
The output shows the `provisioningState` as `Succeeded` when your resource group
#### Create logic app
-To create an Azure Arc enabled logic app using the Azure CLI, run the command `az logicapp create` as follows:
-
-```azurecli
-az logicapp create --resource-group MyResourceGroupName --name MyLogicAppName
- --storage-account MyStorageAccount --custom-location MyCustomLocation
- --subscription MySubscription
-```
-
-> [!IMPORTANT]
-> The resource locations for your logic app, custom location, and Kubernetes environment must all be the same.
-
-Make sure to provide the following required parameters in your command:
+To create an Azure Arc enabled logic app, run the command, `az logicapp create`, with the following required parameters. The resource locations for your logic app, custom location, and Kubernetes environment must all be the same.
| Parameters | Description | ||-| | `--name -n` | A unique name for your logic app | | `--resource-group -g` | The name of the [resource group](../azure-resource-manager/management/manage-resource-groups-cli.md) where you want to create your logic app. If you don't have one to use, [create a resource group](#create-resource-group). |
-| `--storage-account -s` | The [storage account](/cli/azure/storage/account) that you want to use with your logic app. For storage accounts in the same resource group, use a string value. For storage accounts in a different resource group, use a resource ID. |
+| `--storage-account -s` | The [storage account](/cli/azure/storage/account) to use with your logic app. For storage accounts in the same resource group, use a string value. For storage accounts in a different resource group, use a resource ID. |
|||
-To create a logic app in Azure Arc using a private Azure Container Registry image, run `az logicapp create` as follows:
+```azurecli
+az logicapp create --name MyLogicAppName
+ --resource-group MyResourceGroupName --subscription MySubscription
+ --storage-account MyStorageAccount --custom-location MyCustomLocation
+```
+
+To create an Azure Arc enabled logic app using a private Azure Container Registry image, run the command, `az logicapp create`, with the following required parameters:
```azurecli
-az logicapp create --resource-group MyResourceGroupName --name MyLogicAppName
- --storage-account MyStorageAccount --subscription MySubscription
- --custom-location MyCustomLocation
+az logicapp create --name MyLogicAppName
+ --resource-group MyResourceGroupName --subscription MySubscription
+ --storage-account MyStorageAccount --custom-location MyCustomLocation
--deployment-container-image-name myacr.azurecr.io/myimage:tag
- --docker-registry-server-password passw0rd
- --docker-registry-server-user MyUser
+ --docker-registry-server-password MyPassword
+ --docker-registry-server-user MyUsername
``` #### Show logic app details
-To show details about your Azure Arc enabled logic app, run the command `az logicapp show` as follows:
+To show details about your Azure Arc enabled logic app, run the command, `az logicapp show`, with the following required parameters:
```azurecli az logicapp show --name MyLogicAppName
az logicapp show --name MyLogicAppName
#### Deploy logic app
-To deploy your logic app using Kudu's zip deployment, run the command `az logicapp deployment source config-zip`, for example:
+To deploy your Azure Arc enabled logic app using [Azure App Service's Kudu zip deployment](../app-service/resources-kudu.md), run the command, `az logicapp deployment source config-zip`, with the following required parameters:
+
+> [!IMPORTANT]
+> Make sure that your zip file contains your project's artifacts at the root level. These artifacts include all workflow folders,
+> configuration files such as host.json, connections.json, and any other related files. Don't add any extra folders or put any artifacts
+> into folders that don't already exist in your project structure. For example, here's an example MyBuildArtifacts.zip file structure:
+>
+> ```output
+> MyStatefulWorkflow1-Folder
+> MyStatefulWorkflow2-Folder
+> connections.json
+> host.json
+> ```
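+
+For example, a minimal sketch that produces such a file from the root of your project folder on Linux or macOS (assuming the `zip` utility and the example names from the preceding note):
+
+```bash
+# Run from the project root so the artifacts land at the root of the archive.
+zip -r MyBuildArtifacts.zip MyStatefulWorkflow1-Folder MyStatefulWorkflow2-Folder connections.json host.json
+```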
```azurecli az logicapp deployment source config-zip --name MyLogicAppName
- --resource-group MyResourceGroupName
- --src C:\uploads\v22.zip
- --subscription MySubscription
+ --resource-group MyResourceGroupName --subscription MySubscription
+ --src MyBuildArtifact.zip
``` #### Start logic app
-To start your Azure Arc enabled logic app, run the command `az logicapp start` with the following required parameters:
+To start your Azure Arc enabled logic app, run the command, `az logicapp start`, with the following required parameters:
```azurecli az logicapp start --name MyLogicAppName
az logicapp start --name MyLogicAppName
#### Stop logic app
-To stop your Azure Arc enabled logic app, run the command `az logicapp stop` with the following required parameters:
+To stop your Azure Arc enabled logic app, run the command, `az logicapp stop`, with the following required parameters:
```azurecli az logicapp stop --name MyLogicAppName
az logicapp stop --name MyLogicAppName
#### Restart logic app
-To restart your Azure Arc enabled logic app, run the command `az logicapp restart` with the following required parameters:
+To restart your Azure Arc enabled logic app, run the command, `az logicapp restart`, with the following required parameters:
```azurecli az logicapp restart --name MyLogicAppName
az logicapp restart --name MyLogicAppName
#### Delete logic app
-To delete your Azure Arc enabled logic app, run the command `az logicapp delete` with the following required parameters:
+To delete your Azure Arc enabled logic app, run the command, `az logicapp delete`, with the following required parameters:
```azurecli
-az logicapp delete --name MyLogicAppName --resource-group MyResourceGroupName --subscription MySubscription
+az logicapp delete --name MyLogicAppName
+ --resource-group MyResourceGroupName --subscription MySubscription
``` ### [Visual Studio Code](#tab/visual-studio-code)
For this task, use your previously saved client ID as the *application ID*.
1. Under **API Connections**, select a connection, which is `office365` in this example. 1. On the connection's menu, under **Settings**, select **Access policies** > **Add**.
-
+ 1. In the **Add access policy** pane, in the search box, find and select your previously saved client ID. 1. When you're done, select **Add**.
To change this maximum, use the Azure CLI (logic app create only) and Azure port
#### Azure CLI
-For a new logic app, run the Azure CLI command, `az logicapp create`, for example:
+To create a new logic app, run the command, `az logicapp create`, with the following parameters:
```azurecli
-az logicapp create --resource-group MyResourceGroupName
- --name MyLogicAppName --storage-account MyStorageAccount
- --custom-location --subscription MySubscription MyCustomLocation
+az logicapp create --name MyLogicAppName
+ --resource-group MyResourceGroupName --subscription MySubscription
+ --storage-account MyStorageAccount --custom-location MyCustomLocation
[--plan MyHostingPlan] [--min-worker-count 1] [--max-worker-count 4] ```
To configure your maximum instance count, use the `--settings` parameter:
```azurecli az logicapp config appsettings set --name MyLogicAppName
- --resource-group MyResourceGroupName
- --settings "K8SE_APP_MAX_INSTANCE_COUNT=10"
- --subscription MySubscription
+ --resource-group MyResourceGroupName --subscription MySubscription
+ --settings "K8SE_APP_MAX_INSTANCE_COUNT=10"
``` #### Azure portal
az logicapp config appsettings set --name MyLogicAppName
In your single-tenant based logic app's settings, add or edit the `K8SE_APP_MAX_INSTANCE_COUNT` setting value by following these steps: 1. In the Azure portal, find and open your single-tenant based logic app.+ 1. On the logic app menu, under **Settings**, select **Configuration**.+ 1. In the **Configuration** pane, under **Application settings**, either add a new application setting or edit the existing value, if already added. 1. Select **New application setting**, and add the `K8SE_APP_MAX_INSTANCE_COUNT` setting with the maximum value you want.
To change this minimum, use the Azure CLI or the Azure portal.
#### Azure CLI
-For a existing logic app resource, run the Azure CLI command, `az logicapp scale`, for example:
+For an existing logic app resource, run the command, `az logicapp scale`, with the following parameters:
```azurecli
-az logicapp scale --name MyLogicAppName --resource-group MyResourceGroupName
- --instance-count 5 --subscription MySubscription
+az logicapp scale --name MyLogicAppName
+ --resource-group MyResourceGroupName --subscription MySubscription
+ --instance-count 5
```
-For a new logic app, run the Azure CLI command, `az logicapp create`, for example:
+To create a new logic app, run the command, `az logicapp create`, with the following parameters:
```azurecli
-az logicapp create --resource-group MyResourceGroupName --name MyLogicAppName
- --storage-account MyStorageAccount --custom-location
- --subscription MySubscription MyCustomLocation
+az logicapp create --name MyLogicAppName
+ --resource-group MyResourceGroupName --subscription MySubscription
+ --storage-account MyStorageAccount --custom-location MyCustomLocation
[--plan MyHostingPlan] [--min-worker-count 2] [--max-worker-count 4] ```
az logicapp create --resource-group MyResourceGroupName --name MyLogicAppName
In your single-tenant based logic app's settings, change the **Scale out** property value by following these steps: 1. In the Azure portal, find and open your single-tenant based logic app.+ 1. On the logic app menu, under **Settings**, select **Scale out**.+ 1. On the **Scale out** pane, drag the minimum instances slider to the value that you want.+ 1. When you're done, save your changes. ## Troubleshoot problems
To get more information about your deployed logic apps, try the following option
### Access app settings and configuration
-To access your app settings, run the following Azure CLI command:
+To access your app settings, run the command, `az logicapp config appsettings`, with the following parameters:
```azurecli az logicapp config appsettings list --name MyLogicAppName --resource-group MyResourceGroupName --subscription MySubscription ```
-To configure an app setting, run the command `az logicapp config appsettings set` as follows. Make sure to use the `--settings` parameter with your setting's name and value.
+To configure an app setting, run the command, `az logicapp config appsettings set`, with the following parameters. Make sure to use the `--settings` parameter with your setting's name and value.
```azurecli az logicapp config appsettings set --name MyLogicAppName
- --resource-group MyResourceGroupName
- --settings "MySetting=1"
- --subscription MySubscription
+ --resource-group MyResourceGroupName --subscription MySubscription
+ --settings "MySetting=1"
```
-To delete an app setting, run the command `az logicapp config appsettings delete` as follows. Make sure to using the `--setting-names` parameter with the name of the setting you want to delete.
+To delete an app setting, run the command, `az logicapp config appsettings delete`, with the following parameters. Make sure to use the `--setting-names` parameter with the name of the setting you want to delete.
```azurecli az logicapp config appsettings delete --name MyLogicAppName
- --resource-group MyResourceGroupName
- --setting-names MySetting
- --subscription MySubscription
+ --resource-group MyResourceGroupName --subscription MySubscription
+ --setting-names MySetting
``` ### View logic app properties
-To view your app's information and properties, run the following Azure CLI command:
+To view your app's information and properties, run the command, `az logicapp show`, with the following parameters:
```azurecli
-az logicapp show --name MyLogicAppName --resource-group MyResourceGroupName --subscription MySubscription
+az logicapp show --name MyLogicAppName
+ --resource-group MyResourceGroupName --subscription MySubscription
``` ### Monitor workflow activity
To get logged data about your logic app, enable Application Insights on your
## Next steps
-* Learn more about [Azure Arc enabled Logic Apps](azure-arc-enabled-logic-apps-overview.md)
+- [About Azure Arc enabled Logic Apps](azure-arc-enabled-logic-apps-overview.md)
logic-apps Create Automation Tasks Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-automation-tasks-azure-resources.md
Title: Create automation tasks to manage and monitor Azure resources
description: Set up automated tasks that help you manage Azure resources and monitor costs by creating workflows that run on Azure Logic Apps. ms.suite: integration-- Previously updated : 04/05/2021++ Last updated : 06/09/2021 # Manage Azure resources and monitor costs by creating automation tasks (preview) > [!IMPORTANT]
-> This capability is in public preview, is provided without a service level agreement,
-> and is not recommended for production workloads. Certain features might not be supported
-> or might have constrained capabilities. For more information, see
+> This capability is in preview, is not recommended for production workloads, and is excluded from service level agreements.
+> Certain features might not be supported or might have constrained capabilities. For more information, see
> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). To help you manage [Azure resources](../azure-resource-manager/management/overview.md#terminology) more easily, you can create automated management tasks for a specific resource or resource group by using automation task templates, which vary in availability based on the resource type. For example, for an [Azure storage account](../storage/common/storage-account-overview.md), you can set up an automation task that sends you the monthly cost for that storage account. For an [Azure virtual machine](https://azure.microsoft.com/services/virtual-machines/), you can create an automation task that turns on or turns off that virtual machine on a predefined schedule.
-Behind the scenes, an automation task is actually a workflow that runs on the [Azure Logic Apps](../logic-apps/logic-apps-overview.md) service and is billed using the same [pricing rates](https://azure.microsoft.com/pricing/details/logic-apps/) and [pricing model](../logic-apps/logic-apps-pricing.md). After you create the task, you can view and edit the underlying workflow by opening the task in the Logic App Designer. After a task finishes at least one run, you can review the status, history, inputs, and outputs for each run.
- Here are the currently available task templates in this preview: | Resource type | Automation task templates |
This article shows you how to complete the following tasks:
## How do automation tasks differ from Azure Automation?
-Currently, you can create an automation task only at the resource level, view the task's runs history, and edit the task's underlying logic app workflow, which is powered by the [Azure Logic Apps](../logic-apps/logic-apps-overview.md) service. Automation tasks are more basic and lightweight than [Azure Automation](../automation/automation-intro.md).
+Automation tasks are more basic and lightweight than [Azure Automation](../automation/automation-intro.md). Currently, you can create an automation task only at the Azure resource level. Behind the scenes, an automation task is actually a logic app resource that runs a workflow and is powered by the [*multi-tenant* Azure Logic Apps service](../logic-apps/logic-apps-overview.md). After you create the automation task, you can view and edit the underlying workflow by opening the task in the workflow designer. After a task finishes at least one run, you can review the task's status, workflow run history, inputs, and outputs for each run.
By comparison, Azure Automation is a cloud-based automation and configuration service that supports consistent management across your Azure and non-Azure environments. The service comprises [process automation for orchestrating processes](../automation/automation-intro.md#process-automation) by using [runbooks](../automation/automation-runbook-execution.md), configuration management with [change tracking and inventory](../automation/change-tracking/overview.md), update management, shared capabilities, and heterogeneous features. Automation gives you complete control during deployment, operations, and decommissioning of workloads and resources.
+<a name="pricing"></a>
+
+## Pricing
+
+Just creating an automation task doesn't automatically incur charges. Underneath, an automation task is a multi-tenant based logic app, so the [Consumption pricing model](logic-apps-pricing.md) also applies to automation tasks. Metering and billing are based on the trigger and action executions in the underlying logic app workflow.
+
+Executions are metered and billed, regardless of whether the workflow runs successfully or whether the workflow is even instantiated. For example, suppose your automation task uses a polling trigger that regularly makes an outgoing call to an endpoint. This outbound request is metered and billed as an execution, regardless of whether the trigger fires or is skipped, which affects whether a workflow instance is created.
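+
+As an illustration (the interval here is an assumption for the arithmetic, not a quoted rate): a task whose polling trigger fires every minute makes 60 x 24 x 30 = 43,200 trigger executions over a 30-day month, each metered at the applicable Consumption rate, whether or not any workflow run is created.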
+
+Triggers and actions follow [Consumption plan rates](https://azure.microsoft.com/pricing/details/logic-apps/), which differ based on whether these operations are ["built-in"](../connectors/built-in.md) or ["managed" (Standard or Enterprise)](../connectors/managed.md). Triggers and actions also make storage transactions, which use the [Consumption plan data rate](https://azure.microsoft.com/pricing/details/logic-apps/).
+
+> [!TIP]
+> As a monthly bonus, the Consumption plan includes *several thousand* built-in executions free of charge.
+> For specific information, review the [Consumption plan rates](https://azure.microsoft.com/pricing/details/logic-apps/).
+ ## Prerequisites * An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
logic-apps Parameterize Workflow App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/parameterize-workflow-app.md
ms.suite: integration Previously updated : 05/25/2021 Last updated : 06/08/2021 # Create parameters for values that change in workflows across environments for single-tenant Azure Logic Apps
To replace parameter files dynamically using the Azure CLI, run the following co
az functionapp deploy --resource-group MyResourceGroup --name MyLogicApp --src-path C:\parameters.json --type static --target-path parameters.json ```
+If you have a NuGet-based Logic App project, you have to update your project file (**&lt;logic-app-name&gt;.csproj**) to include the parameters file in the build output, for example:
+
+```csproj
+<ItemGroup>
+ <None Update="parameters.json">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+</ItemGroup>
+```
+ > [!NOTE] > Currently, the capability to dynamically replace parameter files is not yet available in the Azure portal or the workflow designer.
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
ms.suite: integration Previously updated : 05/25/2021 Last updated : 06/01/2021 # As a developer, I want to automate deployment for workflows hosted in single-tenant Azure Logic Apps by using DevOps tools and processes.
This article shows how to deploy a single-tenant based logic app project from Vi
- A single-tenant based logic app project created with [Visual Studio Code and the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
- If you don't already have a logic app project or infrastructure set up, you can use the included sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use. For more information about these sample projects and the resources included to run the example logic app, review [Deploy your infrastructure](#deploy-infrastructure).
+ If you haven't already set up your logic app project or infrastructure, you can use the included sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use. For more information about these sample projects and the resources included to run the example logic app, review [Deploy your infrastructure](#deploy-infrastructure).
- If you want to deploy to Azure, you need an existing **Logic App (Standard)** resource created in Azure. To quickly create an empty logic app resource, review [Create single-tenant based logic app workflows - Portal](create-single-tenant-workflows-azure-portal.md).
After you push your logic app project to your source repository, you can set up
### Build your project
-To set up a build pipeline based on your logic app project type, follow the corresponding actions:
+To set up a build pipeline based on your logic app project type, complete the corresponding actions listed in the following table:
-* Nuget-based: The NuGet-based project structure is based on the .NET Framework. To build these projects, make sure to follow the build steps for .NET Standard. For more information, review the [Create a NuGet package using MSBuild](/nuget/create-packages/creating-a-package-msbuild) documentation.
-
-* Bundle-based: The extension bundle-based project isn't language specific and doesn't require any language-specific build steps. You can use any method to zip your project files.
-
- > [!IMPORTANT]
- > Make sure that the .zip file includes all workflow folders, configuration files such as host.json, connections.json, and any other related files.
+| Project type | Description and steps |
+|--|--|
+| NuGet-based | The NuGet-based project structure is based on the .NET Framework. To build these projects, make sure to follow the build steps for .NET Standard. For more information, review the [Create a NuGet package using MSBuild](/nuget/create-packages/creating-a-package-msbuild) documentation. |
+| Bundle-based | The extension bundle-based project isn't language-specific and doesn't require any language-specific build steps. You can use any method to zip your project files, for example, the archive step sketched after this table. <p><p>**Important**: Make sure that your .zip file contains the actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files. |
+|||
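As one illustration of the bundle-based option, the following minimal sketch zips a project with the standard Azure Pipelines `ArchiveFiles@2` task; the paths and artifact name are assumptions, not values from this article:

```yaml
# Hypothetical archive step: zip the project contents (workflow folders,
# host.json, connections.json, and related files) into one build artifact.
- task: ArchiveFiles@2
  displayName: 'Zip logic app project'
  inputs:
    rootFolderOrFile: '$(System.DefaultWorkingDirectory)'  # assumed project root
    includeRootFolder: false  # zip the folder contents, not the folder itself
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/MyBuildArtifact.zip'
```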
### Release to Azure
To set up a release pipeline that deploys to Azure, choose the associated option:
For GitHub deployments, you can deploy your logic app by using [GitHub Actions](https://docs.github.com/actions), for example, the GitHub Action for Azure Functions. This action requires that you pass the following information:
-* Your build artifact
-* The logic app name to use for deployment
-* Your publish profile
+- The logic app name to use for deployment
+- The zip file that contains your actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files. One way to produce this file is sketched after this list.
+- Your [publish profile](../azure-functions/functions-how-to-github-actions.md#generate-deployment-credentials), which is used for authentication
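The deploy step in the next snippet assumes that the zip file already exists in the workflow's workspace. As a minimal sketch, and with a hypothetical artifact name, the packaging steps that run before it might look like this:

```yaml
# Hypothetical earlier steps in the same job: check out the project and zip it.
steps:
  - uses: actions/checkout@v2  # get the logic app project source
  - name: Zip logic app project
    run: zip -r MyBuildArtifact.zip . -x '*.git*'  # include workflow folders, host.json, connections.json
```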
```yaml
- name: 'Run Azure Functions Action'
  uses: Azure/functions-action@v1
  id: fa
  with:
- app-name: {your-logic-app-name}
- package: '{your-build-artifact}.zip'
- publish-profile: {your-logic-app-publish-profile}
+ app-name: 'MyLogicAppName'
+ package: 'MyBuildArtifact.zip'
+ publish-profile: 'MyLogicAppPublishProfile'
```

For more information, review the [Continuous delivery by using GitHub Actions](../azure-functions/functions-how-to-github-actions.md) documentation.

#### [Azure DevOps](#tab/azure-devops)
-For Azure DevOps deployments, you can deploy your logic app by using the [Azure Function App Deploy task](/azure/devops/pipelines/tasks/deploy/azure-function-app?view=azure-devops?view=azure-devops?view=azure-devops) in Azure Pipelines. This action requires that you pass through the following information:
+For Azure DevOps deployments, you can deploy your logic app by using the [Azure Function App Deploy task](/azure/devops/pipelines/tasks/deploy/azure-function-app?view=azure-devops&preserve-view=true) in Azure Pipelines. This task requires that you pass the following information:
-* Your build artifact
-* The logic app name to use for deployment
-* Your publish profile
+- The logic app name to use for deployment
+- The zip file that contains your actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files.
+- Your [publish profile](../azure-functions/functions-how-to-github-actions.md#generate-deployment-credentials), which is used for authentication
```yaml
- task: AzureFunctionApp@1
  displayName: 'Deploy logic app workflows'
  inputs:
- azureSubscription: '{your-service-connection}'
+ azureSubscription: 'MyServiceConnection'
appType: 'workflowapp'
- appName: '{your-logic-app-name}'
- package: '{your-build-artifact}.zip'
+ appName: 'MyLogicAppName'
+ package: 'MyBuildArtifact.zip'
deploymentMethod: 'zipDeploy'
```
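In this task, `azureSubscription` takes the name of an Azure Resource Manager service connection defined in your Azure DevOps project, not a subscription ID.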
For more information, review the [Deploy an Azure Function using Azure Pipelines
#### [Azure CLI](#tab/azure-cli)
-If you use other deployment tools, you can deploy your logic app by using the Azure CLI commands for single-tenant Azure Logic Apps. For example, to deploy your zipped artifact to an Azure resource group, run the following CLI command:
+If you use other deployment tools, you can deploy your single-tenant based logic app by using the Azure CLI commands for single-tenant Azure Logic Apps.