Updates from: 01/15/2022 02:11:49
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/billing.md
Previously updated : 11/16/2021 Last updated : 01/14/2022
# Billing model for Azure Active Directory B2C
-Azure Active Directory B2C (Azure AD B2C) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This billing model applies to both Azure AD B2C tenants and [Azure AD guest user collaboration (B2B)](../active-directory/external-identities/external-identities-pricing.md). MAU billing helps you reduce costs by offering a free tier and flexible, predictable pricing. In this article, learn about MAU billing, linking your Azure AD B2C tenants to a subscription, and changing your pricing tier.
+Azure Active Directory B2C (Azure AD B2C) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This billing model applies to both Azure AD B2C tenants and [Azure AD guest user collaboration (B2B)](../active-directory/external-identities/external-identities-pricing.md). MAU billing helps you reduce costs by offering a free tier and flexible, predictable pricing. In this article, learn about MAU billing, linking Azure AD B2C tenants to a subscription, and changing the pricing tier.
## MAU overview A monthly active user (MAU) is a unique user that performs an authentication within a given month. A user that authenticates multiple times within a given month is counted as one MAU. Customers are not charged for a MAU's subsequent authentications during the month, nor for inactive users. Authentications may include: -- Active, interactive sign-in by the user, for example through [sign-up or sign-in](add-sign-up-and-sign-in-policy.md), [self-service password reset](add-password-reset-policy.md), [profile editing](add-profile-editing-policy.md), or any type of [user flow](user-flow-overview.md) or [custom policy](custom-policy-overview.md).-- Passive, non-interactive sign-in such as [single sign-on (SSO)](session-behavior.md), or any type of token acquisition, such as authorization code flow, token refresh, or [resource owner password credentials (ROPC)](add-ropc-policy.md).
+- Active, interactive sign-in by the user. For example, [sign-up or sign-in](add-sign-up-and-sign-in-policy.md), [self-service password reset](add-password-reset-policy.md), or any type of [user flow](user-flow-overview.md) or [custom policy](custom-policy-overview.md).
+- Passive, non-interactive sign-in such as [single sign-on (SSO)](session-behavior.md), or any type of token acquisition. For example, authorization code flow, token refresh, or [resource owner password credentials flow](add-ropc-policy.md).
If you choose to provide higher levels of assurance using Multi-factor Authentication (MFA) for Voice and SMS, you will continue to be charged a worldwide flat fee for each MFA attempt that month, whether the sign-in is successful or unsuccessful.
To take advantage of MAU billing, your Azure AD B2C tenant must be linked to an
## About the monthly active users (MAU) billing model
-MAU billing went into effect for Azure AD B2C tenants on **November 1, 2019**. Any Azure AD B2C tenants that you created and linked to a subscription on or after that date have been billed on a per-MAU basis. If you have an Azure AD B2C tenant that hasn't been linked to a subscription, you'll need to do so now. If you have an existing Azure AD B2C tenant that was linked to a subscription before November 1, 2019, we recommend you upgrade to the monthly active users (MAU) billing model, or you can stay on the per-authentication billing model.
+MAU billing went into effect for Azure AD B2C tenants on **November 1, 2019**. Any Azure AD B2C tenants that you created and linked to a subscription on or after that date have been billed on a per-MAU basis.
+
+- If you have an Azure AD B2C tenant that hasn't been linked to a subscription, link it now.
+- If you have an existing Azure AD B2C tenant that was linked to a subscription before November 1, 2019, upgrade to the monthly active users (MAU) billing model. You can also choose to stay on the per-authentication billing model.
Your Azure AD B2C tenant must also be linked to the appropriate Azure pricing tier based on the features you want to use. Premium features require Azure AD B2C [Premium P1 or P2 pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/). You might need to upgrade your pricing tier as you use new features. For example, for risk-based Conditional Access policies, you’ll need to select the Azure AD B2C Premium P2 pricing tier for your tenant. > [!NOTE] > Your first 50,000 MAUs per month are free for both Premium P1 and Premium P2 features, but the **free tier doesn’t apply to free trial, credit-based, or sponsorship subscriptions**. Once the free trial period or credits expire for these types of subscriptions, you'll begin to be charged for Azure AD B2C MAUs. To determine the total number of MAUs, we combine MAUs from all your tenants (both Azure AD and Azure AD B2C) that are linked to the same subscription.+ ## Link an Azure AD B2C tenant to a subscription
-Usage charges for Azure Active Directory B2C (Azure AD B2C) are billed to an Azure subscription. You need to explicitly link an Azure AD B2C tenant to an Azure subscription by creating an Azure AD B2C *resource* within the target Azure subscription. Several Azure AD B2C resources can be created in a single Azure subscription, along with other Azure resources like virtual machines, Storage accounts, and Logic Apps. You can see all of the resources within a subscription by going to the Azure Active Directory (Azure AD) tenant that the subscription is associated with.
+Usage charges for Azure Active Directory B2C (Azure AD B2C) are billed to an Azure subscription. You need to explicitly link an Azure AD B2C tenant to an Azure subscription by creating an Azure AD B2C *resource* within the target Azure subscription. Several Azure AD B2C resources can be created in a single Azure subscription, along with other Azure resources like virtual machines and storage accounts. You can see all of the resources within a subscription by going to the Azure Active Directory (Azure AD) tenant that the subscription is associated with.
A subscription linked to an Azure AD B2C tenant can be used for the billing of Azure AD B2C usage or other Azure resources, including additional Azure AD B2C resources. It can't be used to add other Azure license-based services or Office 365 licenses within the Azure AD B2C tenant.
After you complete these steps for an Azure AD B2C tenant, your Azure subscripti
## Change your Azure AD pricing tier
-A tenant must be linked to the appropriate Azure pricing tier based on the features you want to use with your Azure AD B2C tenant. Premium features require Azure AD B2C Premium P1 or P2, as described in the [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/). In some cases, you'll need to upgrade your pricing tier as you use new features. For example, if you want to use Identity Protection, risk-based Conditional Access policies, and any future Premium P2 capabilities with Azure AD B2C, you’ll need to select the Azure AD B2C Premium P2 pricing tier for your tenant.
+A tenant must be linked to the appropriate Azure pricing tier based on the features you want to use with your Azure AD B2C tenant. Premium features require Azure AD B2C Premium P1 or P2, as described in the [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
+
+In some cases, you'll need to upgrade your pricing tier as you use new features. For example, if you want to use [Identity Protection](conditional-access-identity-protection-overview.md), risk-based Conditional Access policies, or any future Premium P2 capabilities with Azure AD B2C, you'll need to select the Azure AD B2C Premium P2 pricing tier for your tenant.
-To change your pricing tier, follow these steps.
+To change your pricing tier, follow these steps:
1. Sign in to the Azure portal.
To change your pricing tier, follow these steps.
1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**. 1. In the search box at the top of the portal, enter the name of your Azure AD B2C tenant. Then select the tenant in the search results under **Resources**.
+
+ ![Screenshot that shows how to select an Azure AD B2C tenant in Azure portal.](media/billing/select-azure-ad-b2c-tenant.png)
1. On the resource **Overview** page, under **Pricing tier**, select **change**.
- ![Change pricing tier](media/billing/change-pricing-tier.png)
+ ![Screenshot that shows how to change the pricing tier.](media/billing/change-pricing-tier.png)
1. Select the pricing tier that includes the features you want to enable.
- ![Select the pricing tier](media/billing/select-tier.png)
+ ![Screenshot that shows how to select the pricing tier.](media/billing/select-tier.png)
## Switch to MAU billing (pre-November 2019 Azure AD B2C tenants)
Here's how to make the switch to MAU billing for an existing Azure AD B2C resour
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**. 1. On the **Overview** page of the Azure AD B2C tenant, select the link under **Resource name**. You're directed to the Azure AD B2C resource in your Azure AD tenant.<br/>
- ![Azure AD B2C resource link highlighted in Azure portal](./media/billing/portal-mau-02-b2c-resource-link.png)
+ ![Screenshot that shows how to select the Azure AD B2C resource in Azure portal.](./media/billing/portal-mau-02-b2c-resource-link.png)
1. On the **Overview** page of the Azure AD B2C resource, under **Billable Units**, select the **Per Authentication (Change to MAU)** link.<br/>
- ![Change to MAU link highlighted in Azure portal](./media/billing/portal-mau-03-change-to-mau-link.png)
+ ![Screenshot that shows the Change to MAU link highlighted in Azure portal.](./media/billing/portal-mau-03-change-to-mau-link.png)
1. Select **Confirm** to complete the upgrade to MAU billing.<br/>
- ![MAU-based billing confirmation dialog in Azure portal](./media/billing/portal-mau-04-confirm-change-to-mau.png)
+ ![Screenshot that shows the MAU-based billing confirmation dialog in Azure portal.](./media/billing/portal-mau-04-confirm-change-to-mau.png)
### What to expect when you transition to MAU billing from per-authentication billing
During the billing period of the transition, the subscription owner will likely
* An entry for the usage until the date/time of change that reflects per-authentication.
* An entry for the usage after the change that reflects monthly active users (MAU).
-For the latest information about usage billing and pricing for Azure AD B2C, see [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
+For the latest information about usage billing and pricing, see [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
## Manage your Azure AD B2C tenant resources
active-directory-b2c Date Transformations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/date-transformations.md
Previously updated : 02/16/2020 Last updated : 01/14/2022
# Date claims transformations - This article provides examples for using the date claims transformations of the Identity Experience Framework schema in Azure Active Directory B2C (Azure AD B2C). For more information, see [ClaimsTransformations](claimstransformations.md). ## AssertDateTimeIsGreaterThan
-Checks that one date and time claim (string data type) is later than a second date and time claim (string data type), and throws an exception.
+Asserts that one date and time is later than a second date and time. If the `rightOperand` is greater than the `leftOperand`, the transformation throws an exception.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
| InputClaim | leftOperand | string | First claim's type, which should be later than the second claim. |
| InputClaim | rightOperand | string | Second claim's type, which should be earlier than the first claim. |
-| InputParameter | AssertIfEqualTo | boolean | Specifies whether this assertion should throw an error if the left operand is equal to the right operand. An error will be thrown if the left operand is equal to the right operand and the value is set to `true`. Possible values: `true` (default), or `false`. |
+| InputParameter | AssertIfEqualTo | boolean | Specifies whether this assertion should throw an error if the left operand is equal to the right operand. Possible values: `true` (default), or `false`. |
| InputParameter | AssertIfRightOperandIsNotPresent | boolean | Specifies whether this assertion should pass if the right operand is missing. |
| InputParameter | TreatAsEqualIfWithinMillseconds | int | Specifies the number of milliseconds to allow between the two date times to consider the times equal (for example, to account for clock skew). |
The **AssertDateTimeIsGreaterThan** claims transformation is always executed fro
![AssertDateTimeIsGreaterThan execution](./media/date-transformations/assert-execution.png)
+### AssertDateTimeIsGreaterThan example
+ The following example compares the `currentDateTime` claim with the `approvedDateTime` claim. An error is thrown if `currentDateTime` is later than `approvedDateTime`. The transformation treats values as equal if they're within 5 minutes (300000 milliseconds) of each other. It won't throw an error if the values are equal because `AssertIfEqualTo` is set to `false`. ```xml
The following example compares the `currentDateTime` claim with the `approvedDat
> In the example above, if you remove the `AssertIfEqualTo` input parameter, and the `currentDateTime` is equal to `approvedDateTime`, an error will be thrown. The `AssertIfEqualTo` default value is `true`. >
-The `login-NonInteractive` validation technical profile calls the `AssertApprovedDateTimeLaterThanCurrentDateTime` claims transformation.
+- Input claims:
+ - **leftOperand**: 2022-01-01T15:00:00
+ - **rightOperand**: 2022-01-22T15:00:00
+- Input parameters:
+ - **AssertIfEqualTo**: false
+ - **AssertIfRightOperandIsNotPresent**: true
+ - **TreatAsEqualIfWithinMillseconds**: 300000 (5 minutes)
+- Result: Error thrown
+
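The `ClaimsTransformation` definition for this example is truncated above. As a minimal sketch, assuming the `currentDateTime` and `approvedDateTime` claim types are declared in your policy, the definition might look like this:

```xml
<ClaimsTransformation Id="AssertApprovedDateTimeLaterThanCurrentDateTime" TransformationMethod="AssertDateTimeIsGreaterThan">
  <InputClaims>
    <!-- The left operand is asserted to be the later of the two dates. -->
    <InputClaim ClaimTypeReferenceId="approvedDateTime" TransformationClaimType="leftOperand" />
    <InputClaim ClaimTypeReferenceId="currentDateTime" TransformationClaimType="rightOperand" />
  </InputClaims>
  <InputParameters>
    <!-- Don't throw when the two operands are equal. -->
    <InputParameter Id="AssertIfEqualTo" DataType="boolean" Value="false" />
    <InputParameter Id="AssertIfRightOperandIsNotPresent" DataType="boolean" Value="true" />
    <!-- Treat dates within 5 minutes (300000 ms) of each other as equal. -->
    <InputParameter Id="TreatAsEqualIfWithinMillseconds" DataType="int" Value="300000" />
  </InputParameters>
</ClaimsTransformation>
```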
+### Call the claims transformation
+
+The following `Example-AssertDates` validation technical profile calls the `AssertApprovedDateTimeLaterThanCurrentDateTime` claims transformation.
+ ```xml
-<TechnicalProfile Id="login-NonInteractive">
- ...
+<TechnicalProfile Id="Example-AssertDates">
+ <DisplayName>Unit test</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.ClaimsTransformationProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="ComparisonResult" DefaultValue="false" />
+ </OutputClaims>
<OutputClaimsTransformations>
- <OutputClaimsTransformation ReferenceId="AssertApprovedDateTimeLaterThanCurrentDateTime" />
+    <OutputClaimsTransformation ReferenceId="AssertApprovedDateTimeLaterThanCurrentDateTime" />
</OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
</TechnicalProfile> ```
-The self-asserted technical profile calls the validation **login-NonInteractive** technical profile.
+The self-asserted technical profile calls the validation `Example-AssertDates` technical profile.
```xml
-<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
+<TechnicalProfile Id="SelfAsserted-AssertDateTimeIsGreaterThan">
+ <DisplayName>User ID signup</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
<Metadata>
- <Item Key="DateTimeGreaterThan">Custom error message if the provided left operand is greater than the right operand.</Item>
+ <Item Key="ContentDefinitionReferenceId">api.selfasserted</Item>
+ <Item Key="DateTimeGreaterThan">Custom error message if the provided right operand is greater than the left operand.</Item>
</Metadata>
+ ...
<ValidationTechnicalProfiles>
- <ValidationTechnicalProfile ReferenceId="login-NonInteractive" />
+ <ValidationTechnicalProfile ReferenceId="Example-AssertDates" />
</ValidationTechnicalProfiles> </TechnicalProfile> ```
-### Example
--- Input claims:
- - **leftOperand**: 2020-03-01T15:00:00.0000000Z
- - **rightOperand**: 2020-03-01T14:00:00.0000000Z
-- Result: Error thrown- ## ConvertDateToDateTimeClaim
-Converts a **Date** ClaimType to a **DateTime** ClaimType. The claims transformation converts the time format and adds 12:00:00 AM to the date.
+Converts a `Date` claim type to a `DateTime` claim type. The claims transformation converts the time format and adds 12:00:00 AM to the date.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
-| InputClaim | inputClaim | date | The ClaimType to be converted. |
-| OutputClaim | outputClaim | dateTime | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| InputClaim | inputClaim | date | The claim type to be converted. |
+| OutputClaim | outputClaim | dateTime | The claim type that is produced after this claims transformation has been invoked. |
+
+### ConvertDateToDateTimeClaim example
The following example demonstrates the conversion of the claim `dateOfBirth` (date data type) to another claim `dateOfBirthWithTime` (dateTime data type).
The following example demonstrates the conversion of the claim `dateOfBirth` (da
</ClaimsTransformation> ```
-### Example
- - Input claims:
- - **inputClaim**: 2020-15-03
+ - **inputClaim**: 2022-01-03
- Output claims:
- - **outputClaim**: 2020-15-03T00:00:00.0000000Z
+ - **outputClaim**: 2022-01-03T00:00:00.0000000Z
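The XML for this example is truncated above. As a minimal sketch, assuming the `dateOfBirth` and `dateOfBirthWithTime` claim types are declared in your policy (the `ConvertToDateTime` Id is illustrative):

```xml
<ClaimsTransformation Id="ConvertToDateTime" TransformationMethod="ConvertDateToDateTimeClaim">
  <InputClaims>
    <!-- A date claim, for example 2022-01-03. -->
    <InputClaim ClaimTypeReferenceId="dateOfBirth" TransformationClaimType="inputClaim" />
  </InputClaims>
  <OutputClaims>
    <!-- A dateTime claim, for example 2022-01-03T00:00:00.0000000Z. -->
    <OutputClaim ClaimTypeReferenceId="dateOfBirthWithTime" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```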
## ConvertDateTimeToDateClaim
-Converts a **DateTime** ClaimType to a **Date** ClaimType. The claims transformation removes the time format from the date.
+Converts a `DateTime` claim type to a `Date` claim type. The claims transformation removes the time format from the date.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
-| InputClaim | inputClaim | dateTime | The ClaimType to be converted. |
-| OutputClaim | outputClaim | date | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| InputClaim | inputClaim | dateTime | The claim type to be converted. |
+| OutputClaim | outputClaim | date | The claim type that is produced after this claims transformation has been invoked. |
+
+### ConvertDateTimeToDateClaim example
The following example demonstrates the conversion of the claim `systemDateTime` (dateTime data type) to another claim `systemDate` (date data type).
The following example demonstrates the conversion of the claim `systemDateTime`
</ClaimsTransformation> ```
-### Example
- - Input claims:
- - **inputClaim**: 2020-15-03T11:34:22.0000000Z
+ - **inputClaim**: 2022-01-03T11:34:22.0000000Z
- Output claims:
- - **outputClaim**: 2020-15-03
-
-## GetCurrentDateTime
-
-Get the current UTC date and time and add the value to a ClaimType.
-
-| Item | TransformationClaimType | Data Type | Notes |
-| - | -- | | -- |
-| OutputClaim | currentDateTime | dateTime | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
-
-```xml
-<ClaimsTransformation Id="GetSystemDateTime" TransformationMethod="GetCurrentDateTime">
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="systemDateTime" TransformationClaimType="currentDateTime" />
- </OutputClaims>
-</ClaimsTransformation>
-```
-
-### Example
-
-* Output claims:
- * **currentDateTime**: 2020-15-03T11:40:35.0000000Z
+ - **outputClaim**: 2022-01-03
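As with the previous transformation, the full XML is truncated above. A minimal sketch, assuming the `systemDateTime` and `systemDate` claim types from the example (the `ConvertToDate` Id is illustrative):

```xml
<ClaimsTransformation Id="ConvertToDate" TransformationMethod="ConvertDateTimeToDateClaim">
  <InputClaims>
    <!-- A dateTime claim, for example 2022-01-03T11:34:22.0000000Z. -->
    <InputClaim ClaimTypeReferenceId="systemDateTime" TransformationClaimType="inputClaim" />
  </InputClaims>
  <OutputClaims>
    <!-- A date claim, for example 2022-01-03. -->
    <OutputClaim ClaimTypeReferenceId="systemDate" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```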
## DateTimeComparison
-Determine whether one dateTime is later, earlier, or equal to another. The result is a new boolean ClaimType boolean with a value of `true` or `false`.
+Compares two dates and determines whether the first date is later than, earlier than, or equal to the second. The result is a new Boolean claim with a value of `true` or `false`.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
-| InputClaim | firstDateTime | dateTime | The first dateTime to compare whether it is earlier or later than the second dateTime. Null value throws an exception. |
-| InputClaim | secondDateTime | dateTime | The second dateTime to compare whether it is earlier or later than the first dateTime. Null value is treated as the current datetTime. |
+| InputClaim | firstDateTime | dateTime | The first date to compare whether it's later than, earlier than, or equal to the second date. Null value throws an exception. |
+| InputClaim | secondDateTime | dateTime | The second date to compare. Null value is treated as the current date and time. |
+| InputParameter | timeSpanInSeconds | int | Timespan to add to the first date, in seconds. Possible values: -2,147,483,648 through 2,147,483,647. |
| InputParameter | operator | string | One of the following values: same, later than, or earlier than. |
-| InputParameter | timeSpanInSeconds | int | Add the timespan to the first datetime. |
-| OutputClaim | result | boolean | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| OutputClaim | result | boolean | The claim that is produced after this claims transformation has been invoked. |
+
+Use this claims transformation to determine whether the first date plus the timespan parameter is later than, earlier than, or equal to the second date. For example, you may store the last time a user accepted your terms of service (TOS). After three months, you can ask the user to accept the TOS again.
+To run the claims transformation, you first need to get the current date and the last time the user accepted the TOS.
+
+### DateTimeComparison example
-Use this claims transformation to determine if two ClaimTypes are equal, later, or earlier than each other. For example, you may store the last time a user accepted your terms of services (TOS). After 3 months, you can ask the user to access the TOS again.
-To run the claim transformation, you first need to get the current dateTime and also the last time user accepts the TOS.
+The following example shows that the first date (2022-01-01T00:00:00) plus 90 days is later than the second date (2022-03-16T00:00:00).
```xml <ClaimsTransformation Id="CompareLastTOSAcceptedWithCurrentDateTime" TransformationMethod="DateTimeComparison"> <InputClaims>
- <InputClaim ClaimTypeReferenceId="currentDateTime" TransformationClaimType="firstDateTime" />
<InputClaim ClaimTypeReferenceId="extension_LastTOSAccepted" TransformationClaimType="secondDateTime" />
+ <InputClaim ClaimTypeReferenceId="currentDateTime" TransformationClaimType="firstDateTime" />
</InputClaims> <InputParameters> <InputParameter Id="operator" DataType="string" Value="later than" />
To run the claim transformation, you first need to get the current dateTime and
</ClaimsTransformation> ```
-### Example
- - Input claims:
- - **firstDateTime**: 2020-01-01T00:00:00.100000Z
- - **secondDateTime**: 2020-04-01T00:00:00.100000Z
+ - **firstDateTime**: 2022-01-01T00:00:00.100000Z
+ - **secondDateTime**: 2022-03-16T00:00:00.100000Z
- Input parameters:
  - **operator**: later than
  - **timeSpanInSeconds**: 7776000 (90 days)
- Output claims:
  - **result**: true
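The snippet above is cut off before the output claim binding. A complete sketch under the same assumptions follows; the `comparisonResult` output claim name is illustrative, not part of the original example:

```xml
<ClaimsTransformation Id="CompareLastTOSAcceptedWithCurrentDateTime" TransformationMethod="DateTimeComparison">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="extension_LastTOSAccepted" TransformationClaimType="secondDateTime" />
    <InputClaim ClaimTypeReferenceId="currentDateTime" TransformationClaimType="firstDateTime" />
  </InputClaims>
  <InputParameters>
    <InputParameter Id="operator" DataType="string" Value="later than" />
    <!-- 7776000 seconds = 90 days, added to firstDateTime before the comparison. -->
    <InputParameter Id="timeSpanInSeconds" DataType="int" Value="7776000" />
  </InputParameters>
  <OutputClaims>
    <!-- Illustrative boolean claim that receives the comparison result. -->
    <OutputClaim ClaimTypeReferenceId="comparisonResult" TransformationClaimType="result" />
  </OutputClaims>
</ClaimsTransformation>
```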
+
+## GetCurrentDateTime
+
+Get the current UTC date and time and add the value to a claim type.
+
+| Item | TransformationClaimType | Data Type | Notes |
+| ---- | ----------------------- | --------- | ----- |
+| OutputClaim | currentDateTime | dateTime | The claim type that is produced after this claims transformation has been invoked. |
+
+### GetCurrentDateTime example
+
+The following example shows how to get the current date and time:
+
+```xml
+<ClaimsTransformation Id="GetSystemDateTime" TransformationMethod="GetCurrentDateTime">
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="systemDateTime" TransformationClaimType="currentDateTime" />
+ </OutputClaims>
+</ClaimsTransformation>
+```
+
+* Output claims:
+ * **currentDateTime**: 2022-01-14T11:40:35.0000000Z
+
+## Next steps
+
+- Find more [claims transformation samples](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-transformation) on the Azure AD B2C community GitHub repo
active-directory-b2c General Transformations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/general-transformations.md
Previously updated : 02/03/2020 Last updated : 01/14/2022
[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
-This article provides examples for using general claims transformations of the Identity Experience Framework schema in Azure Active Directory B2C (Azure AD B2C). For more information, see [ClaimsTransformations](claimstransformations.md).
+This article provides examples for using general claims transformations of the Identity Experience Framework schema in Azure Active Directory B2C (Azure AD B2C). For more information, see [claims transformations](claimstransformations.md).
## CopyClaim
Copies the value of one claim into another. Both claims must be of the same type.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
-| InputClaim | inputClaim | string, int | The claim type which is to be copied. |
-| OutputClaim | outputClaim | string, int | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| InputClaim | inputClaim | string, int | The claim type to be copied. |
+| OutputClaim | outputClaim | string, int | The claim that is produced after this claims transformation has been invoked. |
Use this claims transformation to copy a value from a string or numeric claim to another claim. The following example copies the `externalEmail` claim value to the `email` claim.
Use this claims transformation to copy a value from a string or numeric claim, t
</ClaimsTransformation> ```
-### Example
+### CopyClaim example
- Input claims:
  - **inputClaim**: bob@contoso.com
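The transformation XML is truncated above. A minimal sketch of the `CopyClaim` definition the example describes, assuming `externalEmail` and `email` are string claim types declared in your policy (the `CopyEmailAddress` Id is illustrative):

```xml
<ClaimsTransformation Id="CopyEmailAddress" TransformationMethod="CopyClaim">
  <InputClaims>
    <!-- Source claim whose value is copied. -->
    <InputClaim ClaimTypeReferenceId="externalEmail" TransformationClaimType="inputClaim" />
  </InputClaims>
  <OutputClaims>
    <!-- Destination claim that receives the copied value. -->
    <OutputClaim ClaimTypeReferenceId="email" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```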
Checks if the **inputClaim** exists or not and sets **outputClaim** to true or f
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
| InputClaim | inputClaim | Any | The input claim whose existence needs to be verified. |
-| OutputClaim | outputClaim | boolean | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| OutputClaim | outputClaim | boolean | The claim that is produced after this claims transformation has been invoked. |
Use this claims transformation to check whether a claim exists or contains any value. The return value is a boolean that indicates whether the claim exists. The following example checks whether the email address exists.
Use this claims transformation to check if a claim exists or contains any value.
</ClaimsTransformation> ```
-### Example
+### DoesClaimExist example
- Input claims:
  - **inputClaim**: someone@contoso.com
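A minimal sketch of a `DoesClaimExist` definition that matches the example; the `email` and `isEmailPresent` claim names are illustrative:

```xml
<ClaimsTransformation Id="CheckIfEmailPresent" TransformationMethod="DoesClaimExist">
  <InputClaims>
    <!-- The claim whose existence is checked. -->
    <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="inputClaim" />
  </InputClaims>
  <OutputClaims>
    <!-- Boolean claim set to true if the input claim exists, false otherwise. -->
    <OutputClaim ClaimTypeReferenceId="isEmailPresent" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```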
Hash the provided plain text using the salt and a secret. The hashing algorithm
| InputClaim | plaintext | string | The input claim to be hashed. |
| InputClaim | salt | string | The salt parameter. You can create a random value by using the `CreateRandomString` claims transformation. |
| InputParameter | randomizerSecret | string | Points to an existing Azure AD B2C **policy key**. To create a new policy key: In your Azure AD B2C tenant, under **Manage**, select **Identity Experience Framework**. Select **Policy keys** to view the keys that are available in your tenant. Select **Add**. For **Options**, select **Manual**. Provide a name (the prefix *B2C_1A_* might be added automatically). In the **Secret** text box, enter any secret you want to use, such as 1234567890. For **Key usage**, select **Signature**. Select **Create**. |
-| OutputClaim | hash | string | The ClaimType that is produced after this claims transformation has been invoked. The claim configured in the `plaintext` inputClaim. |
+| OutputClaim | hash | string | The claim that is produced after this claims transformation has been invoked. The claim configured in the `plaintext` inputClaim. |
```xml <ClaimsTransformation Id="HashPasswordWithEmail" TransformationMethod="Hash">
Hash the provided plain text using the salt and a secret. The hashing algorithm
</ClaimsTransformation> ```
-### Example
+### Hash example
- Input claims:
  - **plaintext**: MyPass@word1
Hash the provided plain text using the salt and a secret. The hashing algorithm
- **randomizerSecret**: B2C_1A_AccountTransformSecret
- Output claims:
  - **outputClaim**: CdMNb/KTEfsWzh9MR1kQGRZCKjuxGMWhA5YQNihzV6U=
+
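The `HashPasswordWithEmail` XML is truncated above. A minimal sketch under the same assumptions; the `password`, `email`, and `hashedPassword` claim names are illustrative:

```xml
<ClaimsTransformation Id="HashPasswordWithEmail" TransformationMethod="Hash">
  <InputClaims>
    <!-- The plain text to hash. -->
    <InputClaim ClaimTypeReferenceId="password" TransformationClaimType="plaintext" />
    <!-- The salt; can be generated with the CreateRandomString claims transformation. -->
    <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="salt" />
  </InputClaims>
  <InputParameters>
    <!-- References the B2C_1A_AccountTransformSecret policy key described above. -->
    <InputParameter Id="randomizerSecret" DataType="string" Value="B2C_1A_AccountTransformSecret" />
  </InputParameters>
  <OutputClaims>
    <!-- Receives the base64-encoded hash. -->
    <OutputClaim ClaimTypeReferenceId="hashedPassword" TransformationClaimType="hash" />
  </OutputClaims>
</ClaimsTransformation>
```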
+## Next steps
+
+- Find more [claims transformation samples](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-transformation) on the Azure AD B2C community GitHub repo
active-directory-b2c Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/multi-factor-authentication.md
Previously updated : 12/09/2021 Last updated : 01/14/2022
A customer account is created in your tenant before the multifactor authenticati
::: zone pivot="b2c-custom-policy"
-To enable multifactor authentication, get the custom policy starter packs from GitHub as follows:
+To enable multifactor authentication, get the custom policy starter pack from GitHub as follows:
-- [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository from `https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack`, and then update the XML files in the **SocialAndLocalAccountsWithMFA** starter pack with your Azure AD B2C tenant name. The **SocialAndLocalAccountsWithMFA** enables social, local, and multifactor authentication options, except the Authenticator app - TOTP MFA option.
+- [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository from `https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack`, and then update the XML files in the **SocialAndLocalAccountsWithMFA** starter pack with your Azure AD B2C tenant name. The **SocialAndLocalAccountsWithMFA** starter pack enables social and local sign-in options, and multifactor authentication options, except for the Authenticator app - TOTP option.
- To support the **Authenticator app - TOTP** MFA option, download the custom policy files from `https://github.com/azure-ad-b2c/samples/tree/master/policies/totp`, and then update the XML files with your Azure AD B2C tenant name. Make sure to include `TrustFrameworkExtensions.xml`, `TrustFrameworkLocalization.xml`, and `TrustFrameworkBase.xml` XML files from the **SocialAndLocalAccounts** starter pack. - Update your [page layout] to version `2.1.9`. For more information, see [Select a page layout](contentdefinitions.md#select-a-page-layout).
When an Azure AD B2C application enables MFA using the TOTP option, end users ne
1. Select **+ Add account**. 1. Select **Other account (Google, Facebook, etc.)**, and then scan the QR code shown in the application (for example, *Contoso webapp*) to enroll your account. If you're unable to scan the QR code, you can add the account manually: 1. In the Microsoft Authenticator app on your phone, select **OR ENTER CODE MANUALLY**.
- 1. In the application (for example, *Contoso webapp*), select **Still having trouble?** to show **Account Name** and **Secret**.
+ 1. In the application (for example, *Contoso webapp*), select **Still having trouble?**. This displays **Account Name** and **Secret**.
1. Enter the **Account Name** and **Secret** in your Microsoft Authenticator app, and then select **FINISH**. 1. In the application (for example, *Contoso webapp*), select **Continue**. 1. In **Enter your code**, enter the code that appears in your Microsoft Authenticator app.
Learn about [OATH software tokens](../active-directory/authentication/concept-au
## Delete a user's TOTP authenticator enrollment (for system admins)
-In Azure AD B2C, you can delete a user's TOTP authenticator app enrollment. Then the user would be required to re-enroll their account to use TOTP authentication again. To delete a user's TOTP enrollment, you can use either the Azure portal or the Microsoft Graph API.
+In Azure AD B2C, you can delete a user's TOTP authenticator app enrollment. Then the user would be required to re-enroll their account to use TOTP authentication again. To delete a user's TOTP enrollment, you can use either the [Azure portal](https://portal.azure.com) or the [Microsoft Graph API](/graph/api/softwareoathauthenticationmethod-delete).
> [!NOTE] > - Deleting a user's TOTP authenticator app enrollment from Azure AD B2C doesn't remove the user's account in the TOTP authenticator app. The system admin needs to direct the user to manually delete their account from the TOTP authenticator app before trying to enroll again.
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/self-asserted-technical-profile.md
Previously updated : 03/10/2021 Last updated : 01/14/2022
You can also call a REST API technical profile with your business logic, overwri
| setting.enableRememberMe <sup>2</sup>| No| Displays the [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) checkbox. Possible values: `true`, or `false` (default). |
| setting.inputVerificationDelayTimeInMilliseconds <sup>3</sup>| No| Improves user experience, by waiting for the user to stop typing, and then validate the value. Default value 2000 milliseconds. |
| IncludeClaimResolvingInClaimsHandling | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
-|forgotPasswordLinkOverride <sup>4</sup>| No | A password reset claims exchange to be executed. For more information, see [Self-service password reset](add-password-reset-policy.md). |
+|setting.forgotPasswordLinkOverride <sup>4</sup>| No | A password reset claims exchange to be executed. For more information, see [Self-service password reset](add-password-reset-policy.md). |
Notes: 1. Available for content definition [DataUri](contentdefinitions.md#datauri) type of `unifiedssp`, or `unifiedssd`.
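As a hedged sketch of how the `setting.forgotPasswordLinkOverride` metadata item is wired up, the following self-asserted technical profile points the password-reset link at a claims exchange. The `ForgotPasswordExchange` Id is an assumption and must match a claims exchange defined in your user journey:

```xml
<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
  <Metadata>
    <!-- ForgotPasswordExchange is an assumed Id; it must match a ClaimsExchange in your user journey. -->
    <Item Key="setting.forgotPasswordLinkOverride">ForgotPasswordExchange</Item>
  </Metadata>
</TechnicalProfile>
```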
active-directory-b2c Solution Articles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/solution-articles.md
Azure Active Directory B2C (Azure AD B2C) enables organizations to implement bus
| Title | Medium | Description | | -- | |-- |
-| [Customer Identity Management with Azure AD B2C](https://channel9.msdn.com/Shows/On-NET/Customer-Identity-Management-with-Azure-AD-B2C) | Video (20 minutes) | In this overview of the service, Parakh Jain ([@jainparakh](https://twitter.com/jainparakh)) from the Azure AD B2C team provides us an overview of how the service works, and also show how we can quickly connect B2C to an ASP.NET Core application. |
+| [Customer Identity Management with Azure AD B2C](/Shows/On-NET/Customer-Identity-Management-with-Azure-AD-B2C) | Video (20 minutes) | In this overview of the service, Parakh Jain ([@jainparakh](https://twitter.com/jainparakh)) from the Azure AD B2C team provides an overview of how the service works and shows how to quickly connect B2C to an ASP.NET Core application. |
| [Benefits of using Azure AD B2C](https://aka.ms/b2coverview) | PDF | Understand the benefits and common scenarios of Azure AD B2C, and how your application(s) can leverage this CIAM service. | | [Gaining Expertise in Azure AD B2C: A Course for Developers](https://aka.ms/learnAADB2C) | PDF | This end-to-end course takes developers through a complete journey on developing applications with Azure AD B2C as the authentication mechanism. Ten in-depth modules with labs cover everything from setting up an Azure subscription to creating custom policies that define the journeys that engage your customers. | | [Enabling partners, Suppliers, and Customers to Access Applications with Azure Active Directory](https://aka.ms/aadexternalidentities) | PDF | Every organization's success, regardless of its size, industry, or compliance and security posture, relies on organizational ability to collaborate with other organizations and connect with customers.<br><br>Bringing together Azure AD, Azure AD B2C, and Azure AD B2B Collaboration, this guide details the business value and the mechanics of building an application or web experience that provides a consolidated authentication experience tailored to the contexts of your employees, business partners and suppliers, and customers. |
active-directory Console App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/console-app-quickstart.md
+
+ Title: "Quickstart: Call Microsoft Graph from a console application | Azure"
+
+description: In this quickstart, you learn how a console application can get an access token and call an API protected by the Microsoft identity platform, using the app's own identity
++++++++ Last updated : 12/06/2021++
+zone_pivot_groups: console-app-quickstart
+#Customer intent: As an app developer, I want to learn how my console app can get an access token and call an API that's protected by the Microsoft identity platform by using the client credentials flow.
++
+# Quickstart: Acquire a token and call the Microsoft Graph API by using a console app's identity
++++
active-directory Desktop App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/desktop-app-quickstart.md
+
+ Title: "Quickstart: Sign in users and call Microsoft Graph in a desktop app | Azure"
+
+description: In this quickstart, learn how a desktop application can get an access token and call an API protected by the Microsoft identity platform.
++++++++ Last updated : 01/14/2022++
+zone_pivot_groups: desktop-app-quickstart
+#Customer intent: As an application developer, I want to learn how my desktop application can get an access token and call an API that's protected by the Microsoft identity platform.
++
+# Quickstart: Acquire a token and call Microsoft Graph API from a desktop application
+++
active-directory Mobile App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/mobile-app-quickstart.md
+
+ Title: "Quickstart: Add sign in with Microsoft to a mobile app | Azure"
+
+description: In this quickstart, learn how a mobile app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
++++++++ Last updated : 01/14/2022++
+zone_pivot_groups: mobile-app-quickstart
+#Customer intent: As an application developer, I want to learn how to sign in users and call Microsoft Graph from my mobile application.
++
+# Quickstart: Sign in users and call the Microsoft Graph API from a mobile application
+++
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-register-app.md
Previously updated : 10/27/2021 Last updated : 01/13/2022 #Customer intent: As developer, I want to know how to register my application with the Microsoft identity platform so that the security token service can issue ID and/or access tokens to client applications that request them.
# Quickstart: Register an application with the Microsoft identity platform
-In this quickstart, you register an app in the Azure portal so the Microsoft identity platform can provide authentication and authorization services for your application and its users.
+Get started with the Microsoft identity platform by registering an application in the Azure portal.
The Microsoft identity platform performs identity and access management (IAM) only for registered applications. Whether it's a client application like a web or mobile app, or it's a web API that backs a client app, registering it establishes a trust relationship between your application and the identity provider, the Microsoft identity platform.
You add and modify redirect URIs for your registered applications by configuring
Settings for each application type, including redirect URIs, are configured in **Platform configurations** in the Azure portal. Some platforms, like **Web** and **Single-page applications**, require you to manually specify a redirect URI. For other platforms, like mobile and desktop, you can select from redirect URIs generated for you when you configure their other settings.
-To configure application settings based on the platform or device you're targeting:
+To configure application settings based on the platform or device you're targeting, follow these steps:
1. In the Azure portal, in **App registrations**, select your application. 1. Under **Manage**, select **Authentication**.
Sometimes called a _public key_, a certificate is the recommended credential typ
Sometimes called an _application password_, a client secret is a string value your app can use in place of a certificate to identity itself.
-Client secrets are considered less secure than certificate credentials. Application developers sometimes use client secrets during local app development because of their ease of use. However, you should use certificate credentials for any application you have running in production.
+Client secrets are considered less secure than certificate credentials. Application developers sometimes use client secrets during local app development because of their ease of use. However, you should use certificate credentials for any of your applications that are running in production.
1. In the Azure portal, in **App registrations**, select your application. 1. Select **Certificates & secrets** > **Client secrets** > **New client secret**.
active-directory Quickstart V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-android.md
-+ Previously updated : 10/15/2019 Last updated : 01/14/2022 #Customer intent: As an application developer, I want to learn how Android native apps can call protected APIs that require login and access tokens using the Microsoft identity platform.
Applications must be represented by an app object in Azure Active Directory so t
* Android Studio * Android 16+
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Step 1: Configure your application in the Azure portal
-> For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
->
-> ### Step 2: Download the project
-> [!div class="sxs-lookup" renderon="portal"]
-> Run the project using Android Studio.
-> [!div class="sxs-lookup" renderon="portal" id="autoupdate" class="nextstepaction"]
+### Step 1: Configure your application in the Azure portal
+For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
+
+### Step 2: Download the project
+
+Run the project using Android Studio.
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip)
->
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Step 3: Your app is configured and ready to run
-> We have configured your project with values of your app's properties and it's ready to run.
-> The sample app starts on the **Single Account Mode** screen. A default scope, **user.read**, is provided by default, which is used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is provided by default. You can change both of these if you wish.
->
-> ![MSAL sample app showing single and multiple account usage](./media/quickstart-v2-android/quickstart-sample-app.png)
->
-> Use the app menu to change between single and multiple account modes.
->
-> In single account mode, sign in using a work or home account:
->
-> 1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
-> 2. Once signed in, select **Get graph data silently** to make a call to the Microsoft Graph API without prompting the user for credentials again. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
->
-> In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
-
-> [!div class="sxs-lookup" renderon="portal"]
++
+### Step 3: Your app is configured and ready to run
+
+We've configured your project with the values of your app's properties, and it's ready to run.
+The sample app starts on the **Single Account Mode** screen. A default scope, **user.read**, is provided, which is used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is provided by default. You can change both of these values if you wish.
+
+![MSAL sample app showing single and multiple account usage](./media/quickstart-v2-android/quickstart-sample-app.png)
+
+Use the app menu to change between single and multiple account modes.
+
+In single account mode, sign in using a work or home account:
+
+1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API at the bottom of the screen.
+2. Once signed in, select **Get graph data silently** to make a call to the Microsoft Graph API without prompting the user for credentials again. You'll see the output from the call to the Microsoft Graph API at the bottom of the screen.
+
+In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
+
+> [!div class="sxs-lookup"]
> > [!NOTE] > > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> ## Step 1: Get the sample app
->
-> [Download the code](https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip).
->
-> ## Step 2: Run the sample app
->
-> Select your emulator, or physical device, from Android Studio's **available devices** dropdown and run the app.
->
-> The sample app starts on the **Single Account Mode** screen. A default scope, **user.read**, is provided by default, which is used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is provided by default. You can change both of these if you wish.
->
-> ![MSAL sample app showing single and multiple account usage](./media/quickstart-v2-android/quickstart-sample-app.png)
->
-> Use the app menu to change between single and multiple account modes.
->
-> In single account mode, sign in using a work or home account:
->
-> 1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
-> 2. Once signed in, select **Get graph data silently** to make a call to the Microsoft Graph API without prompting the user for credentials again. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
->
-> In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
- ## How the sample works ![Screenshot of the sample app](media/quickstart-v2-android/android-intro.svg)
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
-+ Previously updated : 09/22/2020 Last updated : 01/11/2022 #Customer intent: As an application developer, I want to know how to write an ASP.NET Core web API that uses the Microsoft identity platform to authorize API requests from clients.
In this quickstart, you download an ASP.NET Core web API code sample and review the way it restricts resource access to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
-> [!div renderon="docs"]
-> ## Prerequisites
->
-> - Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-> - [Azure Active Directory tenant](quickstart-create-new-tenant.md)
-> - [.NET Core SDK 3.1+](https://dotnet.microsoft.com/)
-> - [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
->
-> ## Step 1: Register the application
->
-> First, register the web API in your Azure AD tenant and add a scope by following these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. For **Name**, enter a name for your application. For example, enter **AspNetCoreWebApi-Quickstart**. Users of your app will see this name, and you can change it later.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Expose an API** > **Add a scope**. For **Application ID URI**, accept the default by selecting **Save and continue**, and then enter the following details:
-> - **Scope name**: `access_as_user`
-> - **Who can consent?**: **Admins and users**
-> - **Admin consent display name**: `Access AspNetCoreWebApi-Quickstart`
-> - **Admin consent description**: `Allows the app to access AspNetCoreWebApi-Quickstart as the signed-in user.`
-> - **User consent display name**: `Access AspNetCoreWebApi-Quickstart`
-> - **User consent description**: `Allow the application to access AspNetCoreWebApi-Quickstart on your behalf.`
-> - **State**: **Enabled**
-> 1. Select **Add scope** to complete the scope addition.
+
+## Prerequisites
+
+- Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Azure Active Directory tenant](quickstart-create-new-tenant.md)
+- [.NET Core SDK 3.1+](https://dotnet.microsoft.com/)
+- [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
+
+## Step 1: Register the application
+
+First, register the web API in your Azure AD tenant and add a scope by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. For **Name**, enter a name for your application. For example, enter **AspNetCoreWebApi-Quickstart**. Users of your app will see this name, and you can change it later.
+1. Select **Register**.
+1. Under **Manage**, select **Expose an API** > **Add a scope**. For **Application ID URI**, accept the default by selecting **Save and continue**, and then enter the following details:
+ - **Scope name**: `access_as_user`
+ - **Who can consent?**: **Admins and users**
+ - **Admin consent display name**: `Access AspNetCoreWebApi-Quickstart`
+ - **Admin consent description**: `Allows the app to access AspNetCoreWebApi-Quickstart as the signed-in user.`
+ - **User consent display name**: `Access AspNetCoreWebApi-Quickstart`
+ - **User consent description**: `Allow the application to access AspNetCoreWebApi-Quickstart on your behalf.`
+ - **State**: **Enabled**
+1. Select **Add scope** to complete the scope addition.
## Step 2: Download the ASP.NET Core project
-> [!div renderon="docs"]
-> [Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/archive/aspnetcore3-1.zip) from GitHub.
+[Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/archive/aspnetcore3-1.zip) from GitHub.
[!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-> [!div renderon="docs"]
-> ## Step 3: Configure the ASP.NET Core project
->
-> In this step, configure the sample code to work with the app registration that you created earlier.
->
-> 1. Extract the .zip archive into a folder near the root of your drive. For example, extract into *C:\Azure-Samples*.
->
-> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
->
-> 1. Open the solution in the *webapi* folder in your code editor.
-> 1. Open the *appsettings.json* file and modify the following code:
->
-> ```json
-> "ClientId": "Enter_the_Application_Id_here",
-> "TenantId": "Enter_the_Tenant_Info_Here"
-> ```
->
-> - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the application (client) ID on the app's **Overview** page.
-> - Replace `Enter_the_Tenant_Info_Here` with one of the following:
-> - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). You can find the directory (tenant) ID on the app's **Overview** page.
-> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
-> - If your application supports **All Microsoft account users**, leave this value as `common`.
->
-> For this quickstart, don't change any other values in the *appsettings.json* file.
+
+## Step 3: Configure the ASP.NET Core project
+
+In this step, configure the sample code to work with the app registration that you created earlier.
+
+1. Extract the .zip archive into a folder near the root of your drive. For example, extract into *C:\Azure-Samples*.
+
+ We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
+
+1. Open the solution in the *webapi* folder in your code editor.
+1. Open the *appsettings.json* file and modify the following code:
+
+ ```json
+ "ClientId": "Enter_the_Application_Id_here",
+ "TenantId": "Enter_the_Tenant_Info_Here"
+ ```
+
+ - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the application (client) ID on the app's **Overview** page.
+ - Replace `Enter_the_Tenant_Info_Here` with one of the following:
+ - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). You can find the directory (tenant) ID on the app's **Overview** page.
+ - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+ - If your application supports **All Microsoft account users**, leave this value as `common`.
+
+For this quickstart, don't change any other values in the *appsettings.json* file.
## How the sample works
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
-+ Previously updated : 10/05/2020 Last updated : 01/11/2022 #Customer intent: As an application developer, I want to know how to set up OpenId Connect authentication in a web application that's built by using Node.js with Express.
You can obtain the sample in either of two ways:
Register your web API in **App registrations** in the Azure portal.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
1. Find and select **Azure Active Directory**.
1. Under **Manage**, select **App registrations** > **New registration**.
active-directory Quickstart V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-ios.md
-+ Previously updated : 09/24/2019 Last updated : 01/14/2022
The quickstart applies to both iOS and macOS apps. Some steps are needed only for iOS apps.
![Shows how the sample app generated by this quickstart works](media/quickstart-v2-ios/ios-intro.svg)
-> [!div renderon="docs"]
-> ## Register and download your quickstart app
-> You have two options to start your quickstart application:
-> * [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-the-code-sample)
-> * [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
->
-> ### Option 1: Register and auto configure your app and then download the code sample
-> #### Step 1: Register your application
-> To register your app,
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/IosQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application with just one click.
->
-> ### Option 2: Register and manually configure your application and code sample
->
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Authentication** > **Add Platform** > **iOS**.
-> 1. Enter the **Bundle Identifier** for your application. The bundle identifier is a unique string that uniquely identifies your application, for example `com.<yourname>.identitysample.MSALMacOS`. Make a note of the value you use. Note that the iOS configuration is also applicable to macOS applications.
-> 1. Select **Configure** and save the **MSAL Configuration** details for later in this quickstart.
-> 1. Select **Done**.
-
-> [!div renderon="portal" class="sxs-lookup"]
->
-> #### Step 1: Configure your application
-> For the code sample for this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes
->
-> #### Step 2: Download the sample project
-> > [!div id="autoupdate_ios" class="nextstepaction"]
-> > [Download the code sample for iOS]()
->
-> > [!div id="autoupdate_macos" class="nextstepaction"]
-> > [Download the code sample for macOS]()
-> [!div renderon="docs"]
-> #### Step 2: Download the sample project
->
-> - [Download the code sample for iOS](https://github.com/Azure-Samples/active-directory-ios-swift-native-v2/archive/master.zip)
-> - [Download the code sample for macOS](https://github.com/Azure-Samples/active-directory-macOS-swift-native-v2/archive/master.zip)
+#### Step 1: Configure your application
+For the code sample for this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+> [!div class="nextstepaction"]
+> [Make this change for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes
+
+#### Step 2: Download the sample project
+> [!div class="nextstepaction"]
+> [Download the code sample for iOS]()
+
+> [!div class="nextstepaction"]
+> [Download the code sample for macOS]()
#### Step 3: Install dependencies

1. Extract the zip file.
2. In a terminal window, navigate to the folder with the downloaded code sample and run `pod install` to install the latest MSAL library.
-> [!div renderon="portal" class="sxs-lookup"]
-> #### Step 4: Your app is configured and ready to run
-> We have configured your project with values of your app's properties and it's ready to run.
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
->
-> [!div renderon="docs"]
-> #### Step 4: Configure your project
-> If you selected Option 1 above, you can skip these steps.
-> 1. Open the project in XCode.
-> 1. Edit **ViewController.swift** and replace the line starting with 'let kClientID' with the following code snippet. Remember to update the value for `kClientID` with the clientID that you saved when you registered your app in the portal earlier in this quickstart:
->
-> ```swift
-> let kClientID = "Enter_the_Application_Id_Here"
-> ```
-
-> 1. If you're building an app for [Azure AD national clouds](/graph/deployments#app-registration-and-token-service-root-endpoints), replace the line starting with 'let kGraphEndpoint' and 'let kAuthority' with correct endpoints. For global access, use default values:
->
-> ```swift
-> let kGraphEndpoint = "https://graph.microsoft.com/"
-> let kAuthority = "https://login.microsoftonline.com/common"
-> ```
-
-> 1. Other endpoints are documented [here](/graph/deployments#app-registration-and-token-service-root-endpoints). For example, to run the quickstart with Azure AD Germany, use following:
->
-> ```swift
-> let kGraphEndpoint = "https://graph.microsoft.de/"
-> let kAuthority = "https://login.microsoftonline.de/common"
-> ```
-
-> 3. Open the project settings. In the **Identity** section, enter the **Bundle Identifier** that you entered into the portal.
-> 4. Right-click **Info.plist** and select **Open As** > **Source Code**.
-> 5. Under the dict root node, replace `Enter_the_bundle_Id_Here` with the ***Bundle Id*** that you used in the portal. Notice the `msauth.` prefix in the string.
->
-> ```xml
-> <key>CFBundleURLTypes</key>
-> <array>
-> <dict>
-> <key>CFBundleURLSchemes</key>
-> <array>
-> <string>msauth.Enter_the_Bundle_Id_Here</string>
-> </array>
-> </dict>
-> </array>
-> ```
-
-> 6. Build and run the app!
+#### Step 4: Your app is configured and ready to run
+Your project is configured with your app's property values and is ready to run.
+> [!NOTE]
+> `Enter_the_Supported_Account_Info_Here`
+
+1. If you're building an app for [Azure AD national clouds](/graph/deployments#app-registration-and-token-service-root-endpoints), replace the lines starting with `let kGraphEndpoint` and `let kAuthority` with the correct endpoints. For global access, use the default values:
+
+ ```swift
+ let kGraphEndpoint = "https://graph.microsoft.com/"
+ let kAuthority = "https://login.microsoftonline.com/common"
+ ```
+
+1. Other endpoints are documented [here](/graph/deployments#app-registration-and-token-service-root-endpoints). For example, to run the quickstart with Azure AD Germany, use the following:
+
+ ```swift
+ let kGraphEndpoint = "https://graph.microsoft.de/"
+ let kAuthority = "https://login.microsoftonline.de/common"
+ ```
+
+1. Open the project settings. In the **Identity** section, enter the **Bundle Identifier** that you entered into the portal.
+1. Right-click **Info.plist** and select **Open As** > **Source Code**.
+1. Under the dict root node, replace `Enter_the_Bundle_Id_Here` with the ***Bundle Id*** that you used in the portal. Notice the `msauth.` prefix in the string.
+
+ ```xml
+ <key>CFBundleURLTypes</key>
+ <array>
+ <dict>
+ <key>CFBundleURLSchemes</key>
+ <array>
+ <string>msauth.Enter_the_Bundle_Id_Here</string>
+ </array>
+ </dict>
+ </array>
+ ```
+
+1. Build and run the app!
## More Information
active-directory Quickstart V2 Java Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-daemon.md
-+ Previously updated : 01/22/2021 Last updated : 01/10/2022 #Customer intent: As an application developer, I want to learn how my Java app can get an access token and call an API that's protected by Microsoft identity platform endpoint using client credentials flow.
In this quickstart, you download and run a code sample that demonstrates how a Java application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
-> [!div renderon="docs"]
-> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-java-daemon/java-console-daemon.svg)
-
## Prerequisites
To run this sample, you need:
- [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or greater
- [Maven](https://maven.apache.org/)
-> [!div renderon="docs"]
-> ## Register and download your quickstart app
-
-> [!div renderon="docs" class="sxs-lookup"]
->
-> You have two options to start your quickstart application: Express (Option 1 below), and Manual (Option 2)
->
-> ### Option 1: Register and auto configure your app and then download your code sample
->
-> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application with just one click.
->
-> ### Option 2: Register and manually configure your application and code sample
-
-> [!div renderon="docs"]
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `Daemon-console`. Users of your app might see this name, and you can change it later.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Certificates & secrets**.
-> 1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
-> 1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
-> 1. Select **Application permissions**.
-> 1. Under **User** node, select **User.Read.All**, then select **Add permissions**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Download and configure the quickstart app
->
-> #### Step 1: Configure the application in Azure portal
-> For the code sample for this quickstart to work, you need to create a client secret, and add Graph API's **User.Read.All** application permission.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+> [!div class="sxs-lookup"]
+### Download and configure the quickstart app
-#### Step 2: Download the Java project
+#### Step 1: Configure the application in Azure portal
+For the code sample for this quickstart to work, you need to create a client secret and add the Graph API's **User.Read.All** application permission.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
-> [!div renderon="docs"]
-> [Download the Java daemon project](https://github.com/Azure-Samples/ms-identity-java-daemon/archive/master.zip)
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+
+#### Step 2: Download the Java project
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+> [!div class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/ms-identity-java-daemon/archive/master.zip)
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure the Java project
->
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, *C:\Azure-Samples*.
-> 1. Navigate to the sub folder **msal-client-credential-secret**.
-> 1. Edit *src\main\resources\application.properties* and replace the values of the fields `AUTHORITY`, `CLIENT_ID`, and `SECRET` with the following snippet:
->
-> ```
-> AUTHORITY=https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/
-> CLIENT_ID=Enter_the_Application_Id_Here
-> SECRET=Enter_the_Client_Secret_Here
-> ```
-> Where:
-> - `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com).
-> - `Enter_the_Client_Secret_Here` - replace this value with the client secret created on step 1.
->
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, go to the app's **Overview** page in the Azure portal. To generate a new key, go to **Certificates & secrets** page.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Admin consent
-
-> [!div renderon="docs"]
-> #### Step 4: Admin consent
+#### Step 3: Admin consent
If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires admin consent: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:

##### Global tenant administrator
-> [!div renderon="docs"]
-> If you are a global tenant administrator, go to **API Permissions** page in **App registrations** in the Azure portal and select **Grant admin consent for {Tenant Name}** (Where {Tenant Name} is the name of your directory).
-
-> [!div renderon="portal" class="sxs-lookup"]
-> If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**.
-> > [!div id="apipermissionspage"]
-> > [Go to the API Permissions page]()
+If you are a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> [!div id="apipermissionspage"]
+> [Go to the API Permissions page]()
##### Standard user
If you're a standard user of your tenant, then you need to ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:

```url
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
```
-
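To see how that URL is assembled, here's a minimal Python sketch using hypothetical GUIDs (illustration only; substitute your own tenant and application IDs):

```python
# Illustration only: builds the admin consent URL from hypothetical IDs.
tenant_id = "11111111-2222-3333-4444-555555555555"   # hypothetical tenant ID
client_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # hypothetical application (client) ID

consent_url = (
    f"https://login.microsoftonline.com/{tenant_id}"
    f"/adminconsent?client_id={client_id}"
)
print(consent_url)
```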
-> [!div renderon="docs"]
-> > Where:
-> > * `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
-> > * `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 4: Run the application
-
-> [!div renderon="docs"]
-> #### Step 5: Run the application
+#### Step 4: Run the application
You can test the sample directly by running the main method of ClientCredentialGrant.java from your IDE.
ConfidentialClientApplication cca =
> | Where: |Description |
> |||
-> | `CLIENT_SECRET` | Is the client secret created for the application in Azure Portal. |
+> | `CLIENT_SECRET` | Is the client secret created for the application in Azure portal. |
> | `CLIENT_ID` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
> | `AUTHORITY` | The STS endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant ID.|
IAuthenticationResult result;
> |Where:| Description |
> |||
-> | `SCOPE` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure Portal.|
+> | `SCOPE` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
[!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
-+ Previously updated : 10/05/2020 Last updated : 01/10/2022
In this quickstart, you download and run a code sample that demonstrates how a .NET Core console application can get an access token to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample also demonstrates how a job or a Windows service can run with an application identity, instead of a user's identity. The sample console application in this quickstart is also a daemon application, so it's a confidential client application.
-> [!div renderon="docs"]
-> The following diagram shows how the sample app works:
->
-> ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
->
-
## Prerequisites

This quickstart requires [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) but will also work with .NET 5.0 SDK.
-> [!div renderon="docs"]
-> ## Register and download the app
-
-> [!div renderon="docs" class="sxs-lookup"]
->
-> You have two options to start building your application: automatic or manual configuration.
->
-> ### Automatic configuration
->
-> If you want to register and automatically configure your app and then download the code sample, follow these steps:
->
-> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/DotNetCoreDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal page for app registration</a>.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application in one click.
->
-> ### Manual configuration
->
-> If you want to manually configure your application and code sample, use the following procedures.
->
-> [!div renderon="docs"]
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. For **Name**, enter a name for your application. For example, enter **Daemon-console**. Users of your app will see this name, and you can change it later.
-> 1. Select **Register** to create the application.
-> 1. Under **Manage**, select **Certificates & secrets**.
-> 1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
-> 1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
-> 1. Select **Application permissions**.
-> 1. Under the **User** node, select **User.Read.All**, and then select **Add permissions**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Download and configure your quickstart app
->
-> #### Step 1: Configure your application in the Azure portal
-> For the code sample in this quickstart to work, create a client secret and add the Graph API's **User.Read.All** application permission.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+> [!div class="sxs-lookup"]
+### Download and configure your quickstart app
-#### Step 2: Download your Visual Studio project
+#### Step 1: Configure your application in the Azure portal
+For the code sample in this quickstart to work, create a client secret and add the Graph API's **User.Read.All** application permission.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
-> [!div renderon="docs"]
-> [Download the Visual Studio project](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/archive/master.zip)
->
-> You can run the provided project in either Visual Studio or Visual Studio for Mac.
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+#### Step 2: Download your Visual Studio project
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> Run the project by using Visual Studio 2019.
-> [!div class="sxs-lookup" renderon="portal" id="autoupdate" class="nextstepaction"]
+> [!div id="autoupdate" class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/archive/master.zip)

[!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure your Visual Studio project
->
-> 1. Extract the .zip file to a local folder that's close to the root of the disk. For example, extract to *C:\Azure-Samples*.
->
-> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
->
-> 1. Open the solution in Visual Studio: *1-Call-MSGraph\daemon-console.sln* (optional).
-> 1. In *appsettings.json*, replace the values of `Tenant`, `ClientId`, and `ClientSecret`:
->
-> ```json
-> "Tenant": "Enter_the_Tenant_Id_Here",
-> "ClientId": "Enter_the_Application_Id_Here",
-> "ClientSecret": "Enter_the_Client_Secret_Here"
-> ```
-> In that code:
-> - `Enter_the_Application_Id_Here` is the application (client) ID for the application that you registered.
- To find the values for the application (client) ID and the directory (tenant) ID, go to the app's **Overview** page in the Azure portal.
-> - Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`).
-> - Replace `Enter_the_Client_Secret_Here` with the client secret that you created in step 1.
- To generate a new key, go to the **Certificates & secrets** page.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Admin consent
-
-> [!div renderon="docs"]
-> #### Step 4: Admin consent
+#### Step 3: Admin consent
If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: "Insufficient privileges to complete the operation." This error happens because any app-only permission requires a global administrator of your directory to give consent to your application. Select one of the following options, depending on your role.

##### Global tenant administrator
-> [!div renderon="docs"]
-> If you're a global tenant administrator, go to **Enterprise applications** in the Azure portal. Select your app registration, and select **Permissions** from the **Security** section of the left pane. Then select the large button labeled **Grant admin consent for {Tenant Name}** (where **{Tenant Name}** is the name of your directory).
-
-> [!div renderon="portal" class="sxs-lookup"]
-> If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
-> > [!div id="apipermissionspage"]
-> > [Go to the API Permissions page]()
+If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> [!div id="apipermissionspage"]
+> [Go to the API Permissions page]()
##### Standard user
If you're a standard user of your tenant, ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:

```url
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
```
-> [!div renderon="docs"]
-> In that URL:
-> * Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`).
-> * `Enter_the_Application_Id_Here` is the application (client) ID for the application that you registered.
- You might see the error "AADSTS50011: No reply address is registered for the application" after you grant consent to the app by using the preceding URL. This error happens because this application and the URL don't have a redirect URI. You can ignore it.
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 4: Run the application
-
-> [!div renderon="docs"]
-> #### Step 5: Run the application
+#### Step 4: Run the application
If you're using Visual Studio or Visual Studio for Mac, press **F5** to run the application. Otherwise, run the application via a command prompt, console, or terminal:
This quickstart application uses a client secret to identify itself as a confidential client.

## More information

This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing .NET Core console application.
-> [!div class="sxs-lookup" renderon="portal"]
-> ### How the sample works
->
-> ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
+> [!div class="sxs-lookup"]
+### How the sample works
+
+![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
### MSAL.NET
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-console.md
- Previously updated : 02/17/2021+ Last updated : 01/10/2022 #Customer intent: As an application developer, I want to learn how my Node.js app can get an access token and call an API that is protected by a Microsoft identity platform endpoint using client credentials flow.
This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Nod
* [Node.js](https://nodejs.org/en/download/)
* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-> [!div renderon="docs"]
-> ## Register and download the sample application
->
-> Follow the steps below to get started.
->
-> [!div renderon="docs"]
-> #### Step 1: Register the application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `msal-node-cli`. Users of your app might see this name, and you can change it later.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Certificates & secrets**.
-> 1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
-> 1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
-> 1. Select **Application permissions**.
-> 1. Under **User** node, select **User.Read.All**, then select **Add permissions**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Download and configure the sample app
->
-> #### Step 1: Configure the application in Azure portal
-> For the code sample for this quickstart to work, you need to create a client secret, and add Graph API's **User.Read.All** application permission.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
-#### Step 2: Download the Node.js sample project
+### Download and configure the sample app
-> [!div renderon="docs"]
-> [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-console/archive/main.zip)
+#### Step 1: Configure the application in Azure portal
+For the code sample for this quickstart to work, you need to create a client secret and add the Graph API's **User.Read.All** application permission.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+
+#### Step 2: Download the Node.js sample project
+
+> [!div class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-console/archive/main.zip)
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure the Node.js sample project
->
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, *C:/Azure-Samples*.
-> 1. Edit *.env* and replace the values of the fields `TENANT_ID`, `CLIENT_ID`, and `CLIENT_SECRET` with the following snippet:
->
-> ```
-> "TENANT_ID": "Enter_the_Tenant_Id_Here",
-> "CLIENT_ID": "Enter_the_Application_Id_Here",
-> "CLIENT_SECRET": "Enter_the_Client_Secret_Here"
-> ```
-> Where:
-> - `Enter_the_Application_Id_Here` - is the **Application (client) ID** of the application you registered earlier. Find this ID on the app registration's **Overview** pane in the Azure portal.
-> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant ID** or **Tenant name** (for example, contoso.microsoft.com). Find these values on the app registration's **Overview** pane in the Azure portal.
-> - `Enter_the_Client_Secret_Here` - replace this value with the client secret you created earlier. To generate a new key, use **Certificates & secrets** in the app registration settings in the Azure portal.
->
-> > [!WARNING]
-> > Any plaintext secret in source code poses an increased security risk. This article uses a plaintext client secret for simplicity only. Use [certificate credentials](active-directory-certificate-credentials.md) instead of client secrets in your confidential client applications, especially those apps you intend to deploy to production.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Admin consent
-
-> [!div renderon="docs"]
-> #### Step 4: Admin consent
+#### Step 3: Admin consent
If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires **admin consent**: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:

##### Global tenant administrator
-> [!div renderon="docs"]
-> If you are a global tenant administrator, go to **API Permissions** page in the Azure portal's Application Registration and select **Grant admin consent for {Tenant Name}** (where {Tenant Name} is the name of your directory).
-
-> [!div renderon="portal" class="sxs-lookup"]
-> If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**
+If you are a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
> > [!div id="apipermissionspage"]
> > [Go to the API Permissions page]()
If you're a standard user of your tenant, then you need to ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:

```url
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
```
-> [!div renderon="docs"]
->> Where:
->> * `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
->> * `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 4: Run the application
-
-> [!div renderon="docs"]
-> #### Step 5: Run the application
+#### Step 4: Run the application
Locate the sample's root folder (where `package.json` resides) in a command prompt or console. You'll need to install the dependencies of this sample once:
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
Previously updated : 02/17/2021 Last updated : 01/14/2022 #Customer intent: As an application developer, I want to learn how my Node.js Electron desktop application can get an access token and call an API that's protected by a Microsoft identity platform endpoint.
This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Nod
* [Node.js](https://nodejs.org/en/download/)
* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-> [!div renderon="docs"]
-> ## Register and download the sample application
->
-> Follow the steps below to get started.
->
-> #### Step 1: Register the application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `msal-node-desktop`. Users of your app might see this name, and you can change it later.
-> 1. Select **Register** to create the application.
-> 1. Under **Manage**, select **Authentication**.
-> 1. Select **Add a platform** > **Mobile and desktop applications**.
-> 1. In the **Redirect URIs** section, enter `msal://redirect`.
-> 1. Select **Configure**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 1: Configure the application in Azure portal
-> For the code sample for this quickstart to work, you need to add a reply URL as **msal://redirect**.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
+#### Step 1: Configure the application in Azure portal
+For the code sample for this quickstart to work, you need to add a reply URL of **msal://redirect**.
+> [!div class="nextstepaction"]
+> [Make this change for me]()
-#### Step 2: Download the Electron sample project
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
-> [!div renderon="docs"]
-> [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-desktop/archive/main.zip)
+#### Step 2: Download the Electron sample project
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-desktop/archive/main.zip)
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure the Electron sample project
->
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, *C:/Azure-Samples*.
-> 1. Edit *.env* and replace the values of the fields `TENANT_ID` and `CLIENT_ID` with the following snippet:
->
-> ```
-> "TENANT_ID": "Enter_the_Tenant_Id_Here",
-> "CLIENT_ID": "Enter_the_Application_Id_Here"
-> ```
-> Where:
-> - `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
->
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, go to the app's **Overview** page in the Azure portal.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 4: Run the application
-
-> [!div renderon="docs"]
-> #### Step 4: Run the application
+#### Step 4: Run the application
You'll need to install the dependencies of this sample once:
active-directory Quickstart V2 Python Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-daemon.md
-+ Previously updated : 10/22/2019 Last updated : 01/10/2022 #Customer intent: As an application developer, I want to learn how my Python app can get an access token and call an API that's protected by the Microsoft identity platform using client credentials flow.
In this quickstart, you download and run a code sample that demonstrates how a Python application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
-> [!div renderon="docs"]
-> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-python-daemon/python-console-daemon.svg)
-
## Prerequisites
To run this sample, you need:
- [Python 2.7+](https://www.python.org/downloads/release/python-2713) or [Python 3+](https://www.python.org/downloads/release/python-364/)
- [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
-> [!div renderon="docs"]
-> ## Register and download your quickstart app
-
-> [!div renderon="docs" class="sxs-lookup"]
->
-> You have two options to start your quickstart application: Express (Option 1 below), and Manual (Option 2)
->
-> ### Option 1: Register and auto configure your app and then download your code sample
->
-> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application with just one click.
->
-> ### Option 2: Register and manually configure your application and code sample
-
-> [!div renderon="docs"]
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `Daemon-console`. Users of your app might see this name, and you can change it later.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Certificates & secrets**.
-> 1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
-> 1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
-> 1. Select **Application permissions**.
-> 1. Under **User** node, select **User.Read.All**, then select **Add permissions**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Download and configure the quickstart app
->
-> #### Step 1: Configure your application in Azure portal
-> For the code sample in this quickstart to work, create a client secret and add Graph API's **User.Read.All** application permission.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+> [!div class="sxs-lookup"]
+### Download and configure the quickstart app
-#### Step 2: Download the Python project
+#### Step 1: Configure your application in Azure portal
+For the code sample in this quickstart to work, create a client secret and add Graph API's **User.Read.All** application permission.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
-> [!div renderon="docs"]
-> [Download the Python daemon project](https://github.com/Azure-Samples/ms-identity-python-daemon/archive/master.zip)
+#### Step 2: Download the Python project
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+> [!div class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/ms-identity-python-daemon/archive/master.zip)
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-
-> [!div renderon="docs"]
-> #### Step 3: Configure the Python project
->
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, **C:\Azure-Samples**.
-> 1. Navigate to the sub folder **1-Call-MsGraph-WithSecret**.
-> 1. Edit **parameters.json** and replace the values of the fields `authority`, `client_id`, and `secret` with the following snippet:
->
-> ```json
-> "authority": "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
-> "client_id": "Enter_the_Application_Id_Here",
-> "secret": "Enter_the_Client_Secret_Here"
-> ```
-> Where:
-> - `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
-> - `Enter_the_Client_Secret_Here` - replace this value with the client secret created on step 1.
->
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, go to the app's **Overview** page in the Azure portal. To generate a new key, go to **Certificates & secrets** page.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Admin consent
-
-> [!div renderon="docs"]
-> #### Step 4: Admin consent
+#### Step 3: Admin consent
If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires admin consent: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:

##### Global tenant administrator
-> [!div renderon="docs"]
-> If you are a global tenant administrator, go to **API Permissions** page in **App registrations** in the Azure portal and select **Grant admin consent for {Tenant Name}** (Where {Tenant Name} is the name of your directory).
-
-> [!div renderon="portal" class="sxs-lookup"]
-> If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**.
-> > [!div id="apipermissionspage"]
-> > [Go to the API Permissions page]()
+If you are a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> [!div id="apipermissionspage"]
+> [Go to the API Permissions page]()
##### Standard user
If you're a standard user of your tenant, ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:

```url
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
```
-> [!div renderon="docs"]
->> Where:
->> * `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
->> * `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 4: Run the application
-> [!div renderon="docs"]
-> #### Step 5: Run the application
+#### Step 4: Run the application
You'll need to install the dependencies of this sample once.
app = msal.ConfidentialClientApplication(
> | Where: |Description |
> |||
-> | `config["secret"]` | Is the client secret created for the application in Azure Portal. |
+> | `config["secret"]` | Is the client secret created for the application in Azure portal. |
> | `config["client_id"]` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. | > | `config["authority"]` | The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant Id.|
if not result:
> |Where:| Description |
> |||
-> | `config["scope"]` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure Portal.|
+> | `config["scope"]` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
For more information, please see the [reference documentation for `AcquireTokenForClient`](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client).
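Putting the fragments above together, here's a minimal end-to-end sketch of the client credentials flow with MSAL Python. The placeholder values match the sample's *parameters.json*; the `requests` dependency and the trimmed error handling are assumptions for this sketch, not part of the sample itself:

```python
import json

import msal
import requests  # assumption for this sketch; install with `pip install requests`

# Placeholder values, as in the sample's parameters.json; replace with your own.
config = {
    "authority": "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
    "client_id": "Enter_the_Application_Id_Here",
    "secret": "Enter_the_Client_Secret_Here",
    "scope": ["https://graph.microsoft.com/.default"],
}

app = msal.ConfidentialClientApplication(
    config["client_id"],
    authority=config["authority"],
    client_credential=config["secret"],
)

# Check the token cache first; request a new token only if none is cached.
result = app.acquire_token_silent(config["scope"], account=None)
if not result:
    result = app.acquire_token_for_client(scopes=config["scope"])

if "access_token" in result:
    # Call Microsoft Graph with the app-only token to list users.
    graph_data = requests.get(
        "https://graph.microsoft.com/v1.0/users",
        headers={"Authorization": "Bearer " + result["access_token"]},
    ).json()
    print(json.dumps(graph_data, indent=2))
else:
    print(result.get("error"), result.get("error_description"))
```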
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-uwp.md
Previously updated : 10/07/2020 Last updated : 01/14/2022+ #Customer intent: As an application developer, I want to learn how my Universal Windows Platform (XAML) application can get an access token and call an API that's protected by the Microsoft identity platform.
In this quickstart, you download and run a code sample that demonstrates how a U
See [How the sample works](#how-the-sample-works) for an illustration.
-> [!div renderon="docs"]
-> ## Prerequisites
->
-> * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
->
-> ## Register and download your quickstart app
-> [!div renderon="docs" class="sxs-lookup"]
-> You have two options to start your quickstart application:
-> * [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-your-code-sample)
-> * [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
->
-> ### Option 1: Register and auto configure your app and then download your code sample
->
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/UwpQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application for you in one click.
->
-> ### Option 2: Register and manually configure your application and code sample
-> [!div renderon="docs"]
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution, follow these steps:
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `UWP-App-calling-MsGraph`. Users of your app might see this name, and you can change it later.
-> 1. In the **Supported account types** section, select **Accounts in any organizational directory and personal Microsoft accounts (for example, Skype, Xbox, Outlook.com)**.
-> 1. Select **Register** to create the application, and then record the **Application (client) ID** for use in a later step.
-> 1. Under **Manage**, select **Authentication**.
-> 1. Select **Add a platform** > **Mobile and desktop applications**.
-> 1. Under **Redirect URIs**, select `https://login.microsoftonline.com/common/oauth2/nativeclient`.
-> 1. Select **Configure**.
-
-> [!div renderon="portal" class="sxs-lookup"]
-> #### Step 1: Configure the application
-> For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient`.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-uwp/green-check.png) Your application is configured with these attributes.
-#### Step 2: Download the Visual Studio project
+## Prerequisites
-> [!div renderon="docs"]
-> [Download the Visual Studio project](https://github.com/Azure-Samples/active-directory-dotnet-native-uwp-v2/archive/msal3x.zip)
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
+
+#### Step 1: Configure the application
+For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient`.
+> [!div class="nextstepaction"]
+> [Make this change for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-uwp/green-check.png) Your application is configured with these attributes.
+
+#### Step 2: Download the Visual Studio project
-> [!div class="sxs-lookup" renderon="portal"]
-> Run the project using Visual Studio 2019.
-> [!div class="sxs-lookup" renderon="portal" id="autoupdate" class="nextstepaction"]
+Run the project using Visual Studio 2019.
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnet-native-uwp-v2/archive/msal3x.zip)

[!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Your app is configured and ready to run
-> We have configured your project with values of your app's properties and it's ready to run.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-> [!div renderon="docs"]
-> #### Step 3: Configure the Visual Studio project
->
-> 1. Extract the .zip archive to a local folder close to the root of your drive. For example, into **C:\Azure-Samples**.
-> 1. Open the project in Visual Studio. Install the **Universal Windows Platform development** workload and any individual SDK components if prompted.
-> 1. In *MainPage.Xaml.cs*, change the value of the `ClientId` variable to the **Application (Client) ID** of the application you registered earlier.
->
-> ```csharp
-> private const string ClientId = "Enter_the_Application_Id_here";
-> ```
->
-> You can find the **Application (client) ID** on the app's **Overview** pane in the Azure portal (**Azure Active Directory** > **App registrations** > *{Your app registration}*).
-> 1. Create and then select a new self-signed test certificate for the package:
-> 1. In the **Solution Explorer**, double-click the *Package.appxmanifest* file.
-> 1. Select **Packaging** > **Choose Certificate...** > **Create...**.
-> 1. Enter a password and then select **OK**. A certificate called *Native_UWP_V2_TemporaryKey.pfx* is created.
-> 1. Select **OK** to dismiss the **Choose a certificate** dialog, and then verify that you see *Native_UWP_V2_TemporaryKey.pfx* in Solution Explorer.
-> 1. In the **Solution Explorer**, right-click the **Native_UWP_V2** project and select **Properties**.
-> 1. Select **Signing**, and then select the .pfx you created in the **Choose a strong name key file** drop-down.
+#### Step 3: Your app is configured and ready to run
+We have configured your project with values of your app's properties and it's ready to run.
#### Step 4: Run the application

To run the sample application on your local machine:
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-windows-desktop.md
Previously updated : 12/12/2019 Last updated : 01/14/2022 #Customer intent: As an application developer, I want to learn how my Windows desktop .NET application can get an access token and call an API that's protected by the Microsoft identity platform.
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-breaking-changes.md
You can review the current text of the 50105 error and more on the error lookup
**Change**
-For single tenant applications, a request to add/update AppId URI (identifierUris) will validate that domain in the value of URI is part of the verified domain list in the customer tenant or the value uses the default scheme (`api://{appId}`) provided by AAD.
-This could prevent applications from adding an AppId URI if the domain isn't in the verified domain list or value does not use the default scheme.
+For single tenant applications, adding or updating the AppId URI validates that the domain in the HTTPS scheme URI is listed in the verified domain list in the customer tenant or that the value uses the default scheme (`api://{appId}`) provided by Azure AD. This could prevent applications from adding an AppId URI if the domain isn't in the verified domain list or the value does not use the default scheme.
To find more information on verified domains, refer to the [custom domains documentation](../../active-directory/fundamentals/add-custom-domain.md). The change does not affect existing applications using unverified domains in their AppID URI. It validates only new applications, or an existing application that updates an identifier URI or adds a new one to the identifierUris collection. The new restrictions apply only to URIs added to an app's identifierUris collection after 10/15/2021. AppId URIs already in an application's identifierUris collection when the restriction takes effect on 10/15/2021 will continue to function even if you add new URIs to that collection.
Azure AD will no longer double-encode this parameter, allowing apps to correctly
**Protocol impacted**: All flows
-On 1 June 2018, the official Azure Active Directory (AAD) Authority for Azure Government changed from `https://login-us.microsoftonline.com` to `https://login.microsoftonline.us`. This change also applied to Microsoft 365 GCC High and DoD, which Azure Government AAD also services. If you own an application within a US Government tenant, you must update your application to sign users in on the `.us` endpoint.
+On 1 June 2018, the official Azure Active Directory (Azure AD) Authority for Azure Government changed from `https://login-us.microsoftonline.com` to `https://login.microsoftonline.us`. This change also applied to Microsoft 365 GCC High and DoD, which are also serviced by Azure Government Azure AD. If you own an application within a US Government tenant, you must update your application to sign users in on the `.us` endpoint.
Starting May 5th, Azure AD will begin enforcing the endpoint change, blocking government users from signing into apps hosted in US Government tenants using the public endpoint (`microsoftonline.com`). Impacted apps will begin seeing an error `AADSTS900439` - `USGClientNotSupportedOnPublicEndpoint`. This error indicates that the app is attempting to sign in a US Government user on the public cloud endpoint. If your app is in a public cloud tenant and intended to support US Government users, you will need to [update your app to support them explicitly](./authentication-national-cloud.md). This may require creating a new app registration in the US Government cloud.
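As an illustration only, an authorization request against the `.us` authority might look like the following sketch; the `{tenant}` and `client_id` values are placeholders:

```http
// Hypothetical sketch: {tenant} and client_id are placeholder values.
// Line breaks are for legibility only.
GET https://login.microsoftonline.us/{tenant}/oauth2/v2.0/authorize
    ?client_id=00001111-aaaa-2222-bbbb-3333cccc4444
    &response_type=code
    &redirect_uri=https%3A%2F%2Flocalhost
    &scope=openid
```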
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
Depending on the architecture or usage of your application, you may consider dif
> [!NOTE]
> Previously the Microsoft account system (personal accounts) did not support the "Known client application" field, nor could it show combined consent. This has been added and all apps in the Microsoft identity platform can use the known client application approach for getting consent for OBO calls.
-### /.default and combined consent
+### .default and combined consent
-The middle tier application adds the client to the known client applications list in its manifest, and then the client can trigger a combined consent flow for both itself and the middle tier application. On the Microsoft identity platform, this is done using the [`/.default` scope](v2-permissions-and-consent.md#the-default-scope). When triggering a consent screen using known client applications and `/.default`, the consent screen will show permissions for **both** the client to the middle tier API, and also request whatever permissions are required by the middle-tier API. The user provides consent for both applications, and then the OBO flow works.
+The middle tier application adds the client to the known client applications list in its manifest. If a consent prompt is triggered by the client, the consent flow will be both for itself and the middle tier application. On the Microsoft identity platform, this is done using the [`.default` scope](v2-permissions-and-consent.md#the-default-scope). When triggering a consent screen using known client applications and `.default`, the consent screen will show permissions for **both** the client to the middle tier API, and also request whatever permissions are required by the middle-tier API. The user provides consent for both applications, and then the OBO flow works.
+
+The resource service (API) identified in the request should be the API for which the client application is requesting an access token as a result of the user's sign-in. For example, `scope=openid https://middle-tier-api.example.com/.default` (to request an access token for the middle tier API), or `scope=openid offline_access .default` (when a resource is not identified, it defaults to Microsoft Graph).
+
+Regardless of which API is identified in the authorization request, the consent prompt will be a combined consent prompt including all required permissions configured for the client app, as well as all required permissions configured for each middle tier API listed in the client's required permissions list, and which have identified the client as a known client application.
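As a rough sketch of this pattern, a client might trigger the combined consent prompt with an authorization request such as the following; the client ID and the middle-tier API URI (`https://middle-tier-api.example.com`) are hypothetical values:

```http
// Hypothetical sketch; client_id and the middle-tier API URI are placeholders.
// Line breaks are for legibility only.
GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize
    ?client_id=11112222-bbbb-3333-cccc-4444dddd5555
    &response_type=code
    &redirect_uri=https%3A%2F%2Flocalhost
    &scope=openid%20https://middle-tier-api.example.com/.default
```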
### Pre-authorized applications
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
Last updated 07/06/2021
The `scope` parameter is a space-separated list of delegated permissions that th
After the user enters their credentials, the Microsoft identity platform checks for a matching record of *user consent*. If the user hasn't consented to any of the requested permissions in the past, and if the administrator hasn't consented to these permissions on behalf of the entire organization, the Microsoft identity platform asks the user to grant the requested permissions.
-At this time, the `offline_access` ("Maintain access to data you have given it access to") permission and `user.read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality. The `offline_access` permission gives the app access to refresh tokens that are critical for native apps and web apps. The `user.read` permission gives access to the `sub` claim. It allows the client or app to correctly identify the user over time and access rudimentary user information.
+At this time, the `offline_access` ("Maintain access to data you have given it access to") permission and `User.Read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality. The `offline_access` permission gives the app access to refresh tokens that are critical for native apps and web apps. The `User.Read` permission gives access to the `sub` claim. It allows the client or app to correctly identify the user over time and access rudimentary user information.
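For instance, assuming the behavior described above, a first-time sign-in request like the following sketch (placeholder client ID) could yield a consent prompt that lists `offline_access` and `User.Read` in addition to the requested Microsoft Graph permission:

```http
// Hypothetical sketch; client_id is a placeholder value.
GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize
    ?client_id=00001111-aaaa-2222-bbbb-3333cccc4444
    &response_type=code
    &redirect_uri=https%3A%2F%2Flocalhost
    &scope=https://graph.microsoft.com/Mail.Read
```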
![Example screenshot that shows work account consent.](./media/v2-permissions-and-consent/work_account_consent.png)
To see a code sample that implements the steps, see the [admin-restricted scopes
### Request the permissions in the app registration portal
-In the app registration portal, applications can list the permissions they require, including both delegated permissions and application permissions. This setup allows the use of the `/.default` scope and the Azure portal's **Grant admin consent** option.
+In the app registration portal, applications can list the permissions they require, including both delegated permissions and application permissions. This setup allows the use of the `.default` scope and the Azure portal's **Grant admin consent** option.
In general, the permissions should be statically defined for a given application. They should be a superset of the permissions that the app will request dynamically or incrementally.

> [!NOTE]
->Application permissions can be requested only through the use of [`/.default`](#the-default-scope). So if your app needs application permissions, make sure they're listed in the app registration portal.
+>Application permissions can be requested only through the use of [`.default`](#the-default-scope). So if your app needs application permissions, make sure they're listed in the app registration portal.
To configure the list of statically requested permissions for an application:
https://graph.microsoft.com/mail.send
| `client_id` | Required | The application (client) ID that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
| `redirect_uri` | Required |The redirect URI where you want the response to be sent for your app to handle. It must exactly match one of the redirect URIs that you registered in the app registration portal. |
| `state` | Recommended | A value included in the request that will also be returned in the token response. It can be a string of any content you want. Use the state to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-|`scope` | Required | Defines the set of permissions being requested by the application. Scopes can be either static (using [`/.default`](#the-default-scope)) or dynamic. This set can include the OpenID Connect scopes (`openid`, `profile`, `email`). If you need application permissions, you must use `/.default` to request the statically configured list of permissions. |
+|`scope` | Required | Defines the set of permissions being requested by the application. Scopes can be either static (using [`.default`](#the-default-scope)) or dynamic. This set can include the OpenID Connect scopes (`openid`, `profile`, `email`). If you need application permissions, you must use `.default` to request the statically configured list of permissions. |
-At this point, Azure AD requires a tenant administrator to sign in to complete the request. The administrator is asked to approve all the permissions that you requested in the `scope` parameter. If you used a static (`/.default`) value, it will function like the v1.0 admin consent endpoint and request consent for all scopes found in the required permissions for the app.
+At this point, Azure AD requires a tenant administrator to sign in to complete the request. The administrator is asked to approve all the permissions that you requested in the `scope` parameter. If you used a static (`.default`) value, it will function like the v1.0 admin consent endpoint and request consent for all scopes found in the required permissions for the app.
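Putting the parameters in the preceding table together, a request to the v2.0 admin consent endpoint might look like the following sketch; the client ID, state, and redirect URI are placeholder values:

```http
// Hypothetical sketch of a v2.0 admin consent request; values are placeholders.
// Line breaks are for legibility only.
GET https://login.microsoftonline.com/{tenant}/v2.0/adminconsent
    ?client_id=00001111-aaaa-2222-bbbb-3333cccc4444
    &state=12345
    &redirect_uri=https%3A%2F%2Flocalhost%2Fmyapp%2Fpermissions
    &scope=https://graph.microsoft.com/.default
```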
#### Successful response
Content-Type: application/json
{ "grant_type": "authorization_code", "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
- "scope": "https://outlook.office.com/mail.read https://outlook.office.com/mail.send",
+ "scope": "https://outlook.office.com/Mail.Read https://outlook.office.com/mail.send",
"code": "AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...", "redirect_uri": "https://localhost/myapp", "client_secret": "zc53fwe80980293klaj9823" // NOTE: Only required for web apps
You can use the resulting access token in HTTP requests to the resource. It reli
For more information about the OAuth 2.0 protocol and how to get access tokens, see the [Microsoft identity platform endpoint protocol reference](active-directory-v2-protocols.md).
-## The /.default scope
+## The .default scope
-You can use the `/.default` scope to help migrate your apps from the v1.0 endpoint to the Microsoft identity platform endpoint. The `/.default` scope is built in for every application that refers to the static list of permissions configured on the application registration.
+The `.default` scope is used to refer generically to a resource service (API) in a request, without identifying specific permissions. If consent is necessary, using `.default` signals that consent should be prompted for all required permissions listed in the application registration (for all APIs in the list).
-A `scope` value of `https://graph.microsoft.com/.default` is functionally the same as `resource=https://graph.microsoft.com` on the v1.0 endpoint. By specifying the `https://graph.microsoft.com/.default` scope in its request, your application is requesting an access token that includes scopes for every Microsoft Graph permission you've selected for the app in the app registration portal. The scope is constructed by using the resource URI and `/.default`. So if the resource URI is `https://contosoApp.com`, the scope requested is `https://contosoApp.com/.default`. For cases where you must include a second slash to correctly request the token, see the [section about trailing slashes](#trailing-slash-and-default).
+The scope parameter value is constructed by using the identifier URI for the resource and `.default`, separated by a forward slash (`/`). For example, if the resource's identifier URI is `https://contoso.com`, the scope to request is `https://contoso.com/.default`. For cases where you must include a second slash to correctly request the token, see the [section about trailing slashes](#trailing-slash-and-default).
-The `/.default` scope can be used in any OAuth 2.0 flow. But it's necessary in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md). You also need it when you use the v2 admin consent endpoint to request application permissions.
+Using `scope={resource-identifier}/.default` is functionally the same as `resource={resource-identifier}` on the v1.0 endpoint (where `{resource-identifier}` is the identifier URI for the API, for example `https://graph.microsoft.com` for Microsoft Graph).
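To make the equivalence concrete, here's a side-by-side sketch under the same assumptions (placeholder client ID; line breaks for legibility):

```http
// v1.0 endpoint: the resource parameter identifies the API.
GET https://login.microsoftonline.com/{tenant}/oauth2/authorize
    ?client_id=00001111-aaaa-2222-bbbb-3333cccc4444
    &response_type=code
    &redirect_uri=https%3A%2F%2Flocalhost
    &resource=https://graph.microsoft.com

// v2.0 endpoint: the .default scope plays the same role.
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
    ?client_id=00001111-aaaa-2222-bbbb-3333cccc4444
    &response_type=code
    &redirect_uri=https%3A%2F%2Flocalhost
    &scope=https://graph.microsoft.com/.default
```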
-Clients can't combine static (`/.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default+mail.read` results in an error because it combines scope types.
+The `.default` scope can be used in any OAuth 2.0 flow and to initiate [admin consent](v2-admin-consent.md). Its use is required in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md).
-### /.default and consent
+Clients can't combine static (`.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default Mail.Read` results in an error because it combines scope types.
-The `/.default` scope triggers the v1.0 endpoint behavior for `prompt=consent` as well. It requests consent for all permissions that the application registered, regardless of the resource. If it's included as part of the request, the `/.default` scope returns a token that contains the scopes for the resource requested.
+### .default when the user has already given consent
-### /.default when the user has already given consent
+The `.default` scope is functionally identical to the behavior of the `resource`-centric v1.0 endpoint. It carries the consent behavior of the v1.0 endpoint as well. That is, `.default` triggers a consent prompt only if consent has not been granted for any delegated permission between the client and the resource, on behalf of the signed-in user.
-The `/.default` scope is functionally identical to the behavior of the `resource`-centric v1.0 endpoint. It carries the consent behavior of the v1.0 endpoint as well. That is, `/.default` triggers a consent prompt only if the user has granted no permission between the client and the resource.
+If consent exists, the returned token contains all scopes granted for that resource for the signed-in user. However, if no permission has been granted for the requested resource (or if the `prompt=consent` parameter has been provided), a consent prompt is shown for all required permissions configured on the client application registration, for all APIs in the list.
-If any such consent exists, the returned token contains all scopes the user granted for that resource. However, if no permission has been granted or if the `prompt=consent` parameter has been provided, a consent prompt is shown for all scopes that the client application registered.
+For example, if the scope `https://graph.microsoft.com/.default` is requested, your application is requesting an access token for the Microsoft Graph API. If at least one delegated permission has been granted for Microsoft Graph on behalf of the signed-in user, the sign-in will continue and all Microsoft Graph delegated permissions which have been granted for that user will be included in the access token. If no permissions have been granted for the requested resource (Microsoft Graph, in this example), then a consent prompt will be presented for all required permissions configured on the application, for all APIs in the list.
#### Example 1: The user, or tenant admin, has granted permissions
-In this example, the user or a tenant administrator has granted the `mail.read` and `user.read` Microsoft Graph permissions to the client.
+In this example, the user or a tenant administrator has granted the `Mail.Read` and `User.Read` Microsoft Graph permissions to the client.
-If the client requests `scope=https://graph.microsoft.com/.default`, no consent prompt is shown, regardless of the contents of the client application's registered permissions for Microsoft Graph. The returned token contains the scopes `mail.read` and `user.read`.
+If the client requests `scope=https://graph.microsoft.com/.default`, no consent prompt is shown, regardless of the contents of the client application's registered permissions for Microsoft Graph. The returned token contains the scopes `Mail.Read` and `User.Read`.
#### Example 2: The user hasn't granted permissions between the client and the resource
-In this example, the user hasn't granted consent between the client and Microsoft Graph. The client has registered for the permissions `user.read` and `contacts.read`. It has also registered for the Azure Key Vault scope `https://vault.azure.net/user_impersonation`.
+In this example, the user hasn't granted consent between the client and Microsoft Graph, nor has an administrator. The client has registered for the permissions `User.Read` and `Contacts.Read`. It has also registered for the Azure Key Vault scope `https://vault.azure.net/user_impersonation`.
-When the client requests a token for `scope=https://graph.microsoft.com/.default`, the user sees a consent page for the `user.read` scope, the `contacts.read` scope, and the Key Vault `user_impersonation` scopes. The returned token contains only the `user.read` and `contacts.read` scopes. It can be used only against Microsoft Graph.
+When the client requests a token for `scope=https://graph.microsoft.com/.default`, the user sees a consent page for the Microsoft Graph `User.Read` and `Contacts.Read` scopes, and for the Azure Key Vault `user_impersonation` scope. The returned token contains only the `User.Read` and `Contacts.Read` scopes, and it can be used only against Microsoft Graph.
#### Example 3: The user has consented, and the client requests more scopes
-In this example, the user has already consented to `mail.read` for the client. The client has registered for the `contacts.read` scope.
+In this example, the user has already consented to `Mail.Read` for the client. The client has registered for the `Contacts.Read` scope.
-When the client requests a token by using `scope=https://graph.microsoft.com/.default` and requests consent through `prompt=consent`, the user sees a consent page for all (and only) the permissions that the application registered. The `contacts.read` scope is on the consent page but `mail.read` isn't. The token returned is for Microsoft Graph. It contains `mail.read` and `contacts.read`.
+The client first performs a sign-in with `scope=https://graph.microsoft.com/.default`. Based on the `scopes` parameter of the response, the application's code detects that only `Mail.Read` has been granted. The client then initiates a second sign-in using `scope=https://graph.microsoft.com/.default`, and this time forces consent using `prompt=consent`. If the user is allowed to consent for all the permissions that the application registered, they will be shown the consent prompt. (If not, they will be shown an error message or the [admin consent request](../manage-apps/configure-admin-consent-workflow.md) form.) Both `Contacts.Read` and `Mail.Read` will be in the consent prompt. If consent is granted and the sign-in continues, the token returned is for Microsoft Graph, and contains `Mail.Read` and `Contacts.Read`.
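The second, consent-forcing request in this example might look like the following sketch (placeholder client ID):

```http
// Hypothetical sketch of the second request, forcing the consent prompt.
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
    ?client_id=00001111-aaaa-2222-bbbb-3333cccc4444
    &response_type=code
    &redirect_uri=https%3A%2F%2Flocalhost
    &scope=https://graph.microsoft.com/.default
    &prompt=consent
```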
-### Using the /.default scope with the client
+### Using the .default scope with the client
-In some cases, a client can request its own `/.default` scope. The following example demonstrates this scenario.
+In some cases, a client can request its own `.default` scope. The following example demonstrates this scenario.
-```HTTP
+```http
// Line breaks are for legibility only.
-GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
-response_type=token //Code or a hybrid flow is also possible here
-&client_id=9ada6f8a-6d83-41bc-b169-a306c21527a5
-&scope=9ada6f8a-6d83-41bc-b169-a306c21527a5/.default
-&redirect_uri=https%3A%2F%2Flocalhost
-&state=1234
+GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
+ ?response_type=token //Code or a hybrid flow is also possible here
+ &client_id=9ada6f8a-6d83-41bc-b169-a306c21527a5
+ &scope=9ada6f8a-6d83-41bc-b169-a306c21527a5/.default
+ &redirect_uri=https%3A%2F%2Flocalhost
+ &state=1234
```
-This code example produces a consent page for all registered permissions if the preceding descriptions of consent and `/.default` apply to the scenario. Then the code returns an `id_token`, rather than an access token.
+This code example produces a consent page for all registered permissions if the preceding descriptions of consent and `.default` apply to the scenario. Then the code returns an `id_token`, rather than an access token.
This behavior accommodates some legacy clients that are moving from Azure AD Authentication Library (ADAL) to the Microsoft Authentication Library (MSAL). This setup *shouldn't* be used by new clients that target the Microsoft identity platform.
-### Client credentials grant flow and /.default
+### Client credentials grant flow and .default
-Another use of `/.default` is to request application permissions (or *roles*) in a noninteractive application like a daemon app that uses the [client credentials](v2-oauth2-client-creds-grant-flow.md) grant flow to call a web API.
+Another use of `.default` is to request app roles (also known as application permissions) in a non-interactive application like a daemon app that uses the [client credentials](v2-oauth2-client-creds-grant-flow.md) grant flow to call a web API.
-To create application permissions (roles) for a web API, see [Add app roles in your application](howto-add-app-roles-in-azure-ad-apps.md).
+To define app roles (application permissions) for a web API, see [Add app roles in your application](howto-add-app-roles-in-azure-ad-apps.md).
-Client credentials requests in your client app *must* include `scope={resource}/.default`. Here, `{resource}` is the web API that your app intends to call. Issuing a client credentials request by using individual application permissions (roles) is *not* supported. All the application permissions (roles) that have been granted for that web API are included in the returned access token.
+Client credentials requests in your client service *must* include `scope={resource}/.default`. Here, `{resource}` is the web API that your app intends to call, and wishes to obtain an access token for. Issuing a client credentials request by using individual application permissions (roles) is *not* supported. All the app roles (application permissions) that have been granted for that web API are included in the returned access token.
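As a minimal sketch (placeholder client ID and secret), a client credentials token request for Microsoft Graph would look roughly like:

```http
// Hypothetical sketch; client_id and client_secret are placeholders.
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id=00001111-aaaa-2222-bbbb-3333cccc4444
&client_secret=placeholder-secret
&scope=https://graph.microsoft.com/.default
```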
-To grant access to the application permissions you define, including granting admin consent for the application, see [Configure a client application to access a web API](quickstart-configure-app-access-web-apis.md).
+To grant access to the app roles you define, including granting admin consent for the application, see [Configure a client application to access a web API](quickstart-configure-app-access-web-apis.md).
-### Trailing slash and /.default
+### Trailing slash and .default
-Some resource URIs have a trailing forward slash, for example, `https://contoso.com/` as opposed to `https://contoso.com`. The trailing slash can cause problems with token validation. Problems occur primarily when a token is requested for Azure Resource Manager (`https://management.azure.com/`). In this case, a trailing slash on the resource URI means the slash must be present when the token is requested. So when you request a token for `https://management.azure.com/` and use `/.default`, you must request `https://management.azure.com//.default` (notice the double slash!). In general, if you verify that the token is being issued, and if the token is being rejected by the API that should accept it, consider adding a second forward slash and trying again.
+Some resource URIs have a trailing forward slash, for example, `https://contoso.com/` as opposed to `https://contoso.com`. The trailing slash can cause problems with token validation. Problems occur primarily when a token is requested for Azure Resource Manager (`https://management.azure.com/`). In this case, a trailing slash on the resource URI means the slash must be present when the token is requested. So when you request a token for `https://management.azure.com/` and use `.default`, you must request `https://management.azure.com//.default` (notice the double slash!). In general, if you verify that the token is being issued, and if the token is being rejected by the API that should accept it, consider adding a second forward slash and trying again.
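For example, a token request for Azure Resource Manager would use the double slash, as in this sketch (placeholder client ID):

```http
// Note the double slash before .default, matching the resource URI's trailing slash.
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
    ?client_id=00001111-aaaa-2222-bbbb-3333cccc4444
    &response_type=token
    &redirect_uri=https%3A%2F%2Flocalhost
    &scope=https://management.azure.com//.default
```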
## Troubleshooting permissions and consent
active-directory V2 Saml Bearer Assertion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-saml-bearer-assertion.md
Title: Microsoft identity platform & SAML bearer assertion flow | Azure
-description: Learn how to fetch data from Microsoft Graph without prompting the user for credentials using the SAML bearer assertion flow.
+ Title: Exchange a SAML token issued by Active Directory Federation Services (AD FS) for a Microsoft Graph access token
+
+description: Learn how to fetch data from Microsoft Graph without prompting an AD FS-federated user for credentials by using the SAML bearer assertion flow.
Previously updated : 10/21/2021 Last updated : 01/11/2022
-# Microsoft identity platform and OAuth 2.0 SAML bearer assertion flow
-The OAuth 2.0 SAML bearer assertion flow allows you to request an OAuth access token using a SAML assertion when a client needs to use an existing trust relationship. The signature applied to the SAML assertion provides authentication of the authorized app. A SAML assertion is an XML security token issued by an identity provider and consumed by a service provider. The service provider relies on its content to identify the assertionΓÇÖs subject for security-related purposes.
+# Exchange a SAML token issued by AD FS for a Microsoft Graph access token
-The SAML assertion is posted to the OAuth token endpoint. The endpoint processes the assertion and issues an access token based on prior approval of the app. The client isnΓÇÖt required to have or store a refresh token, nor is the client secret required to be passed to the token endpoint.
+To enable single sign-on (SSO) in applications that use SAML tokens issued by Active Directory Federation Services (AD FS) and also require access to Microsoft Graph, follow the steps in this article.
-SAML Bearer Assertion flow is useful when fetching data from Microsoft Graph APIs (which only support delegated permissions) without prompting the user for credentials. In this scenario the client credentials grant, which is preferred for background processes, doesn't work.
+You'll enable the SAML bearer assertion flow to exchange a SAMLv1 token issued by the federated AD FS instance for an OAuth 2.0 access token for Microsoft Graph. When the user's browser is redirected to Azure Active Directory (Azure AD) to authenticate them, the browser picks up the session from the SAML sign-in instead of asking the user to enter their credentials.
-For applications that do interactive browser-based sign-in to get a SAML assertion and add access to an OAuth protected API (such as Microsoft Graph), you can make an OAuth request to get an access token for the API. When the browser is redirected to Azure Active Directory (Azure AD) to authenticate the user, the browser will pick up the session from the SAML sign-in and the user doesn't need to enter their credentials.
+> [!IMPORTANT]
+> This scenario works **only** when AD FS is the federated identity provider that issued the original SAMLv1 token. You **cannot** exchange a SAMLv2 token issued by Azure AD for a Microsoft Graph access token.
-The OAuth SAML Bearer Assertion flow is also supported for users authenticating with identity providers such as Active Directory Federation Services (ADFS) federated to Azure AD. The SAML assertion obtained from ADFS can be used in an OAuth flow to authenticate the user.
+## Prerequisites
-![OAuth flow](./media/v2-saml-bearer-assertion/1.png)
+- AD FS federated as an identity provider for single sign-on; see [Setting up AD FS and Enabling Single Sign-On to Office 365](/archive/blogs/canitpro/step-by-step-setting-up-ad-fs-and-enabling-single-sign-on-to-office-365) for an example.
+- [Postman](https://www.getpostman.com/) for testing requests.
+
+## Scenario overview
+
+The OAuth 2.0 SAML bearer assertion flow allows you to request an OAuth access token using a SAML assertion when a client needs to use an existing trust relationship. The signature applied to the SAML assertion provides authentication of the authorized app. A SAML assertion is an XML security token issued by an identity provider and consumed by a service provider. The service provider relies on its content to identify the assertion's subject for security-related purposes.
-## Call Graph using SAML bearer assertion
-Now let us understand on how we can actually fetch SAML assertion programatically. The programmatic approach is tested with ADFS. However, the approach works with any identity provider that supports the return of SAML assertion programatically. The basic process is: get a SAML assertion, get an access token, and access Microsoft Graph.
+The SAML assertion is posted to the OAuth token endpoint. The endpoint processes the assertion and issues an access token based on prior approval of the app. The client isn't required to have or store a refresh token, nor is the client secret required to be passed to the token endpoint.
+
+![OAuth flow](./media/v2-saml-bearer-assertion/1.png)
-### Prerequisites
+## Register the application with Azure AD
-Establish a trust relationship between the authorization server/environment (Microsoft 365) and the identity provider, or issuer of the SAML 2.0 bearer assertion. To configure ADFS for single sign-on and as an identity provider, see [Setting up AD FS and Enabling Single Sign-On to Office 365](/archive/blogs/canitpro/step-by-step-setting-up-ad-fs-and-enabling-single-sign-on-to-office-365).
+Start by registering the application in the [portal](https://ms.portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade):
-Register the application in the [portal](https://ms.portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade):
-1. Sign in to the [app registration page of the portal](https://ms.portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) (Please note that we are using the v2.0 endpoints for Graph API and hence need to register the application in Azure portal. Otherwise we could have used the registrations in Azure AD).
+1. Sign in to the [app registration page of the portal](https://ms.portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade). (Note that because we're using the v2.0 endpoints for the Graph API, the application must be registered in the Azure portal; otherwise, we could have used the registrations in Azure AD.)
1. Select **New registration**.
-1. When the **Register an application** page appears, enter your application's registration information:
+1. When the **Register an application** page appears, enter your application's registration information:
    1. **Name** - Enter a meaningful application name that will be displayed to users of the app.
    1. **Supported account types** - Select which accounts you would like your application to support.
    1. **Redirect URI (optional)** - Select the type of app you're building, Web, or Public client (mobile & desktop), and then enter the redirect URI (or reply URL) for your application.
1. When finished, select **Register**.
1. Make a note of the application (client) ID.
1. In the left pane, select **Certificates & secrets**. Click **New client secret** in the **Client secrets** section. Copy the new client secret; you won't be able to retrieve it after you leave the page.
-1. In the left pane, select **API permissions** and then **Add a permission**. Select **Microsoft Graph**, then **delegated permissions**, and then select **Tasks.read** since we intend to use the Outlook Graph API.
+1. In the left pane, select **API permissions** and then **Add a permission**. Select **Microsoft Graph**, then **delegated permissions**, and then select **Tasks.Read** since we intend to use the Outlook Graph API.
-Install [Postman](https://www.getpostman.com/), a tool required to test the sample requests. Later, you can convert the requests to code.
+## Get the SAML assertion from AD FS
-### Get the SAML assertion from ADFS
-Create a POST request to the ADFS endpoint using SOAP envelope to fetch the SAML assertion:
+Create a POST request to the AD FS endpoint using a SOAP envelope to fetch the SAML assertion:
![Get SAML assertion](./media/v2-saml-bearer-assertion/2.png)
Header values:
![Header values](./media/v2-saml-bearer-assertion/3.png)
-ADFS request body:
+AD FS request body:
-![ADFS request body](./media/v2-saml-bearer-assertion/4.png)
+![AD FS request body](./media/v2-saml-bearer-assertion/4.png)
-Once the request is posted successfully, you should receive a SAML assertion from ADFS. Only the **SAML:Assertion** tag data is required, convert it to base64 encoding to use in further requests.
+Once the request is posted successfully, you should receive a SAML assertion from AD FS. Only the **SAML:Assertion** tag data is required; convert it to base64 encoding to use in further requests.
-### Get the OAuth2 token using the SAML assertion
+## Get the OAuth 2.0 token using the SAML assertion
-Fetch an OAuth2 token using the ADFS assertion response.
+Fetch an OAuth 2.0 token using the AD FS assertion response.
1. Create a POST request as shown below with the header values:
Fetch an OAuth2 token using the ADFS assertion response.
![Request body](./media/v2-saml-bearer-assertion/6.png)

1. Upon successful request, you'll receive an access token from Azure Active Directory.
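The token request shown in the screenshots can be sketched as follows. This is an approximation that assumes the client secret and the base64-encoded assertion from the previous step are supplied; all values are placeholders, left unencoded for legibility:

```http
// Hypothetical sketch; client_id, client_secret, and assertion are placeholders.
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:saml1_1-bearer
&client_id=00001111-aaaa-2222-bbbb-3333cccc4444
&client_secret=placeholder-secret
&assertion=PHNhbWw6QXNzZXJ0aW9uIC4uLg
&scope=https://graph.microsoft.com/Tasks.Read
```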
-### Get the data with the OAuth2 token
+## Get the data with the OAuth 2.0 token
-After receiving the access token, call the Graph APIs (Outlook tasks in this example).
+After receiving the access token, call the Graph APIs (Outlook tasks in this example).
1. Create a GET request with the access token fetched in the previous step:
active-directory Web Api Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/web-api-quickstart.md
+
+ Title: "Quickstart: Protect a web API with the Microsoft identity platform | Azure"
+
+description: In this quickstart, you download and modify a code sample that demonstrates how to protect a web API by using the Microsoft identity platform for authorization.
+ Last updated : 01/11/2022
+zone_pivot_groups: web-api-quickstart
+#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my web app can sign in users of personal accounts, work accounts, and school accounts.
++
+# Quickstart: Protect a web API with the Microsoft identity platform
++
active-directory Direct Federation Adfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/direct-federation-adfs.md
An AD FS server must already be set up and functioning before you begin this pro
### Add the claim description

1. On your AD FS server, select **Tools** > **AD FS management**.
-2. In the navigation pane, select **Service** > **Claim Descriptions**.
-3. Under **Actions**, select **Add Claim Description**.
-4. In the **Add a Claim Description** window, specify the following values:
+1. In the navigation pane, select **Service** > **Claim Descriptions**.
+1. Under **Actions**, select **Add Claim Description**.
+1. In the **Add a Claim Description** window, specify the following values:
    - **Display Name**: Persistent Identifier
    - **Claim identifier**: `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`
    - Select the check box for **Publish this claim description in federation metadata as a claim type that this federation service can accept**.
    - Select the check box for **Publish this claim description in federation metadata as a claim type that this federation service can send**.
-5. Click **Ok**.
+1. Click **Ok**.
-### Add the relying party trust and claim rules
+### Add the relying party trust
1. On the AD FS server, go to **Tools** > **AD FS management**.
-2. In the navigation pane, select **Trust Relationships** > **Relying Party Trusts**.
-3. Under **Actions**, select **Add Relying Party Trust**.
-4. In the add relying party trust wizard for **Select Data Source**, use the option **Import data about the relying party published online or on a local network**. Specify this federation metadata URL- https://nexus.microsoftonline-p.com/federationmetadata/saml20/federationmetadata.xml. Leave other default selections. Select **Close**.
-5. The **Edit Claim Rules** wizard opens.
-6. In the **Edit Claim Rules** wizard, select **Add Rule**. In **Choose Rule Type**, select **Send LDAP Attributes as Claims**. Select **Next**.
-7. In **Configure Claim Rule**, specify the following values:
+1. In the navigation pane, select **Relying Party Trusts**.
+1. Under **Actions**, select **Add Relying Party Trust**.
+1. In the **Add Relying Party Trust** wizard, select **Claims aware**, and then select **Start**.
+1. In the **Select Data Source** section, select the check box for **Import data about the relying party published online or on a local network**. Enter this federation metadata URL: `https://nexus.microsoftonline-p.com/federationmetadata/saml20/federationmetadata.xml`. Select **Next**.
+1. Leave the other settings in their default options. Continue to select **Next**, and finally select **Close** to close the wizard.
+
+### Create claims rules
+
+1. Right-click the relying party trust you created, and then select **Edit Claim Issuance Policy**.
+1. In the **Edit Claim Rules** wizard, select **Add Rule**.
+1. In **Claim rule template**, select **Send LDAP Attributes as Claims**.
+1. In **Configure Claim Rule**, specify the following values:
    - **Claim rule name**: Email claim rule
    - **Attribute store**: Active Directory
    - **LDAP Attribute**: E-Mail-Addresses
    - **Outgoing Claim Type**: E-Mail Address
-8. Select **Finish**.
-9. The **Edit Claim Rules** window will show the new rule. Click **Apply**.
-10. Click **Ok**.
-
-### Create an email transform rule
-1. Go to **Edit Claim Rules** and click **Add Rule**. In **Choose Rule Type**, select **Transform an Incoming Claim** and click **Next**.
-2. In **Configure Claim Rule**, specify the following values:
+1. Select **Finish**.
+1. Select **Add Rule**.
+1. In **Claim rule template**, select **Transform an Incoming Claim**, and then select **Next**.
+1. In **Configure Claim Rule**, specify the following values:
    - **Claim rule name**: Email transform rule
    - **Incoming claim type**: E-mail Address
An AD FS server must already be set up and functioning before you begin this pro
    - **Outgoing name ID format**: Persistent Identifier
    - Select **Pass through all claim values**.
-3. Click **Finish**.
-4. The **Edit Claim Rules** window will show the new rules. Click **Apply**.
-5. Click **OK**. The AD FS server is now configured for federation using the SAML 2.0 protocol.
+1. Select **Finish**.
+1. The **Edit Claim Rules** pane shows the new rules. Select **Apply**.
+1. Select **OK**. The AD FS server is now configured for federation using the SAML 2.0 protocol.
+
+## Configure AD FS for WS-Fed federation
-## Configure AD FS for WS-Fed federation
Azure AD B2B can be configured to federate with IdPs that use the WS-Fed protocol with the specific requirements listed below. Currently, the two WS-Fed providers that have been tested for compatibility with Azure AD are AD FS and Shibboleth. Here, we'll use Active Directory Federation Services (AD FS) as an example of the WS-Fed IdP. For more information about establishing a relying party trust between a WS-Fed compliant provider and Azure AD, download the Azure AD Identity Provider Compatibility Docs.

To set up federation, the following attributes must be received in the WS-Fed message from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually. Step 12 in [Create a test AD FS instance](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed) describes how to find the AD FS endpoints or how to generate your metadata URL, for example `https://fs.iga.azure-test.net/federationmetadata/2007-06/federationmetadata.xml`.
active-directory Active Directory Get Started Premium https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-get-started-premium.md
Before you sign up for Active Directory Premium 1 or Premium 2, you must first d
Signing up using your Azure subscription with previously purchased and activated Azure AD licenses automatically activates the licenses in the same directory. If that's not the case, you must still activate your license plan and your Azure AD access. For more information about activating your license plan, see [Activate your new license plan](#activate-your-new-license-plan). For more information about activating your Azure AD access, see [Activate your Azure AD access](#activate-your-azure-ad-access).

## Sign up using your existing Azure or Microsoft 365 subscription
-As an Azure or Microsoft 365 subscriber, you can purchase the Azure Active Directory Premium editions online. For detailed steps, see [How to Purchase Azure Active Directory Premium - New Customers](https://channel9.msdn.com/Series/Azure-Active-Directory-Videos-Demos/How-to-Purchase-Azure-Active-Directory-Premium-New-Customers).
+As an Azure or Microsoft 365 subscriber, you can purchase the Azure Active Directory Premium editions online. For detailed steps, see How to Purchase Azure Active Directory Premium - New Customers.
## Sign up using your Enterprise Mobility + Security licensing plan

Enterprise Mobility + Security is a suite composed of Azure AD Premium, Azure Information Protection, and Microsoft Intune. If you already have an EMS license, you can get started with Azure AD using one of these licensing options:
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
na Previously updated : 07/2/2021 Last updated : 12/15/2021
Follow these steps to view the list of other access packages that have indicated
1. Click on **Incompatible With**.
+## Identifying users who already have incompatible access to another access package
+
+If you are configuring incompatible access settings on an access package that already has users assigned to it, then any of those users who also have an assignment to the incompatible access package or incompatible groups will not be able to re-request access.
+
+**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
+
+Follow these steps to view the list of users who have assignments to two access packages.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Click **Azure Active Directory**, and then click **Identity Governance**.
+
+1. In the left menu, click **Access packages** and then open the access package where you will be configuring incompatible assignments.
+
+1. In the left menu, click **Assignments**.
+
+1. In the **Status** field, ensure that **Delivered** status is selected.
+
+1. Click the **Download** button and save the resulting CSV file as the first file with a list of assignments.
+
+1. In the navigation bar, click **Identity Governance**.
+
+1. In the left menu, click **Access packages** and then open the access package which you plan to indicate as incompatible.
+
+1. In the left menu, click **Assignments**.
+
+1. In the **Status** field, ensure that the **Delivered** status is selected.
+
+1. Click the **Download** button and save the resulting CSV file as the second file with a list of assignments.
+
+1. Use a spreadsheet program such as Excel to open the two files.
+
+1. Users who are listed in both files will have already-existing incompatible assignments.
+
+### Identifying users who already have incompatible access programmatically
+
+You can also query the users who have assignments to an access package with the `Get-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later.
+
+For example, if you have two access packages, one with ID `29be137f-b006-426c-b46a-0df3d4e25ccd` and the other with ID `cce10272-68d8-4482-8ba3-a5965c86cfe5`, then you could retrieve the users who have assignments to the first access package, and then compare them to the users who have assignments to the second access package. You can also report the users who have assignments delivered to both, using a PowerShell script similar to the following:
+
+```powershell
+# Connect with permission to read entitlement management assignments
+$c = Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
+# Use the beta Microsoft Graph API profile
+Select-MgProfile -Name "beta"
+# IDs of the two access packages to compare
+$ap_w_id = "29be137f-b006-426c-b46a-0df3d4e25ccd"
+$ap_e_id = "cce10272-68d8-4482-8ba3-a5965c86cfe5"
+# Filter for delivered assignments of each access package
+$apa_w_filter = "accessPackage/id eq '" + $ap_w_id + "' and assignmentState eq 'Delivered'"
+$apa_e_filter = "accessPackage/id eq '" + $ap_e_id + "' and assignmentState eq 'Delivered'"
+$apa_w = Get-MgEntitlementManagementAccessPackageAssignment -Filter $apa_w_filter -ExpandProperty target -All
+$apa_e = Get-MgEntitlementManagementAccessPackageAssignment -Filter $apa_e_filter -ExpandProperty target -All
+# Index the second access package's assignments by target user ID
+$htt = @{}; foreach ($e in $apa_e) { if ($null -ne $e.Target -and $null -ne $e.Target.Id) {$htt[$e.Target.Id] = $e} }
+# Write the email address of each user who has assignments to both access packages
+foreach ($w in $apa_w) { if ($null -ne $w.Target -and $null -ne $w.Target.Id -and $htt.ContainsKey($w.Target.Id)) { write-output $w.Target.Email } }
+```
+
+## Configuring multiple access packages for override scenarios
+
+If an access package has been configured as incompatible, then a user who has an assignment to that incompatible access package cannot request the access package, nor can an administrator make a new assignment that would be incompatible.
+
+For example, if the **Production environment** access package has marked the **Development environment** package as incompatible, and a user has an assignment to the **Development environment** access package, then the access package manager for **Production environment** cannot create an assignment for that user to the **Production environment**. In order to proceed with that assignment, the user's existing assignment to the **Development environment** access package must first be removed.
+
+If there is an exceptional situation where separation of duties rules might need to be overridden, then configuring an additional access package to capture the users who have overlapping access rights will make it clear to the approvers, reviewers, and auditors the exceptional nature of those assignments.
+
+For example, if there were a scenario in which some users would need to have access to both the production and development environments at the same time, you could create a new access package **Production and development environments**. That access package could have as its resource roles some of the resource roles of the **Production environment** access package and some of the resource roles of the **Development environment** access package.
+
+If the motivation for the incompatible access setting is that one resource's roles are particularly problematic, then that resource could be omitted from the combined access package, requiring explicit administrator assignment of a user to the role. If that is a third-party application or your own application, then you can ensure oversight by monitoring those role assignments using the *Application role assignment activity* workbook described in the next section.
+
+Depending on your governance processes, that combined access package could have as its policy either:
+
+ - a **direct assignments policy**, so that only an access package manager would be interacting with the access package, or
+ - a **users can request access policy**, so that a user can request access, potentially with an additional approval stage
+
+This policy could have as its lifecycle settings a much shorter expiration (number of days) than a policy on other access packages, or could require more frequent access reviews, with regular oversight so that users do not retain access longer than necessary.
+
## Monitor and report on access assignments

You can use Azure Monitor workbooks to get insights on how users have been receiving their access.
active-directory Plan Connect Topologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/plan-connect-topologies.md
na Previously updated : 11/27/2018 Last updated : 01/14/2022
We recommend having a single tenant in Azure AD for an organization. Before you
### (Public preview) Sync AD objects to multiple Azure AD tenants
-![Diagram that shows a topology of multiple Azure A D tenants.](./media/plan-connect-topologies/multi-tenant-1.png)
+![Diagram that shows a topology of multiple Azure A D tenants.](./media/plan-connect-topologies/multi-tenant-2.png)
> [!NOTE]
> This topology is currently in Public Preview. As the supported scenarios might still change, we recommend not deploying this topology in a production environment.
active-directory Reference Connect Dirsync Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-dirsync-deprecated.md
If you are running DirSync, there are two ways you can upgrade: In-place upgrade
| [Upgrade from DirSync](how-to-dirsync-upgrade-get-started.md) |<li>If you have an existing DirSync server already running.</li> |
| [Upgrade from Azure AD Sync](how-to-upgrade-previous-version.md) |<li>If you are moving from Azure AD Sync.</li> |
-If you want to see how to do an in-place upgrade from DirSync to Azure AD Connect, then see this Channel 9 video:
-
-> [!VIDEO https://channel9.msdn.com/Series/Azure-Active-Directory-Videos-Demos/Azure-Active-Directory-Connect-in-place-upgrade-from-legacy-tools/player]
->
->
## FAQ

**Q: I have received an email notification from the Azure Team and/or a message from the Microsoft 365 message center, but I am using Connect.**
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
Steps 1-4 in the diagram illustrate the front-end pre-authentication exchange be
Whether a direct employee, affiliate, or consumer, most users are already acquainted with the Office 365 login experience, so accessing BIG-IP services via SHA remains largely familiar.
-Users now find their BIG-IP published services consolidated in the [MyApps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) or [O365 launchpads](https://o365pp.blob.core.windows.net/media/Resources/Microsoft%20365%20Business/Launchpad%20Overview_for%20Partners_10292019.pdf) along with self-service capabilities to a broader set of services, no matter the type of device or location. Users can even continue accessing published services directly via the BIG-IPs proprietary Webtop portal, if preferred. When logging off, SHA ensures a usersΓÇÖ session is terminated at both ends, the BIG-IP and Azure AD, ensuring services remain fully protected from unauthorized access.
+Users now find their BIG-IP published services consolidated in the [MyApps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) or [O365 launchpads](https://airhead.io/airbase/launchpads/R3kW-RkDFEedipcU1AFlnA) along with self-service capabilities to a broader set of services, no matter the type of device or location. Users can even continue accessing published services directly via the BIG-IP's proprietary Webtop portal, if preferred. When logging off, SHA ensures a user's session is terminated at both ends, the BIG-IP and Azure AD, ensuring services remain fully protected from unauthorized access.
The screenshots provided are from the Azure AD app portal that users access securely to find their BIG-IP published services and for managing their account properties.
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
The following providers of software-defined perimeter (SDP) solutions connect wi
| **SDP vendor** | **Link** | | | |
-| Datawiza Access Broker | [https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/datawiza-with-azure-ad](./datawiza-with-azure-ad.md) |
+| Datawiza Access Broker | [https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad](./datawiza-with-azure-ad.md) |
| Perimeter 81 | [https://docs.microsoft.com/azure/active-directory/saas-apps/perimeter-81-tutorial](../saas-apps/perimeter-81-tutorial.md) |
-| Silverfort Authentication Platform | [https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/silverfort-azure-ad-integration](./silverfort-azure-ad-integration.md) |
+| Silverfort Authentication Platform | [https://docs.microsoft.com/azure/active-directory/manage-apps/silverfort-azure-ad-integration](./silverfort-azure-ad-integration.md) |
| Strata Maverics Identity Orchestrator | [https://docs.microsoft.com/azure/active-directory/saas-apps/maverics-identity-orchestrator-saml-connector-tutorial](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md) | | Zscaler Private Access | [https://docs.microsoft.com/azure/active-directory/saas-apps/zscalerprivateaccess-tutorial](../saas-apps/zscalerprivateaccess-tutorial.md) |
active-directory Troubleshoot Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
If you experience any of these problems, do the following things:
- Try the manual capture process again. Make sure that the red markers are over the correct fields. - If the manual capture process seems to stop responding or the sign-in page doesn't respond, try the manual capture process again. But this time, after completing the process, press the F12 key to open your browser's developer console. Select the **console** tab. Type **window.location="*&lt;the sign-in URL that you specified when configuring the app&gt;*"**, and then press Enter. This forces a page redirect that ends the capture process and stores the fields that were captured.
-### I can't add another user to my Password-based SSO app
+### I can't add another user to my password-based SSO app
-Password-based SSO app has a limit of 48 users. Thus, it has a limit of 48 keys for username/password pairs per app.
-If you want to add additional users you can either:
+A user can't have more than 48 credentials configured across all password-based SSO apps to which they're directly assigned.
+
+If you want to add more apps with password-based SSO to a user, consider assigning the app to a group the user is a direct member of, and configuring the credential for the group. The credentials configured for the group are available to all members of the group.
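If you script the group assignment, one option is the Microsoft Graph appRoleAssignments endpoint. The following is a minimal sketch with placeholder object IDs; the shared credential itself is still configured separately (for example, in the portal):

```azurecli-interactive
# Assign a group to the app's service principal (placeholder IDs).
# The all-zero appRoleId is the default access role for apps that don't define roles.
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/groups/{group-id}/appRoleAssignments" \
  --body '{"principalId": "{group-id}", "resourceId": "{service-principal-id}", "appRoleId": "00000000-0000-0000-0000-000000000000"}'
```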
+
+### I can't add another group to my password-based SSO app
+
+Each password-based SSO app has a limit of 48 groups that are assigned and have credentials configured. If you want to add more groups, you can either:
- Add an additional instance of the app
-- Remove users who are no longer using the app first
+- Remove groups that no longer use the app
## Request support
active-directory Ways Users Get Assigned To Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
This article helps you understand how users get assigned to an application in your tenant.
-## How do users get assigned to an application in Azure AD?
+## How do users get assigned an application in Azure AD?
-For a user to access an application, they must first be assigned to it in some way. Assignment can be performed by an administrator, a business delegate, or sometimes, the user themselves. Below describes the ways users can get assigned to applications:
+There are several ways a user can be assigned an application. Assignment can be performed by an administrator, a business delegate, or sometimes the user themselves. The following list describes the ways users can get assigned to applications:
* An administrator [assigns a user](./assign-user-or-group-access-portal.md) to the application directly * An administrator [assigns a group](./assign-user-or-group-access-portal.md) that the user is a member of to the application, including:
For a user to access an application, they must first be assigned to it in some w
* An administrator enables [Self-service Application Access](./manage-self-service-access.md) to allow a user to add an application using [My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) **Add App** feature, but only **with prior approval from a selected set of business approvers** * An administrator enables [Self-service Group Management](../enterprise-users/groups-self-service-management.md) to allow a user to join a group that an application is assigned to **without business approval** * An administrator enables [Self-service Group Management](../enterprise-users/groups-self-service-management.md) to allow a user to join a group that an application is assigned to, but only **with prior approval from a selected set of business approvers**
-* An administrator assigns a license to a user directly for a first party application, like [Microsoft 365](https://products.office.com/)
-* An administrator assigns a license to a group that the user is a member of to a first party application, like [Microsoft 365](https://products.office.com/)
-* An [administrator consents to an application](../develop/howto-convert-app-to-be-multi-tenant.md) to be used by all users and then a user signs in to the application
-* A user [consents to an application](../develop/howto-convert-app-to-be-multi-tenant.md) themselves by signing in to the application
+* An administrator assigns a license to a user directly, for a Microsoft service such as [Microsoft 365](https://products.office.com/)
+* An administrator assigns a license to a group that the user is a member of, for a Microsoft service such as [Microsoft 365](https://products.office.com/) (see the sketch after this list)
+* A user [consents to an application](consent-and-permissions-overview.md#user-consent) on behalf of themselves.
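As a sketch of the group-based license assignment mentioned above, via Microsoft Graph (the group ID and SKU ID are placeholders):

```azurecli-interactive
# Assign a license to a group so its members inherit it (placeholder IDs).
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/groups/{group-id}/assignLicense" \
  --body '{"addLicenses": [{"skuId": "{sku-id}"}], "removeLicenses": []}'
```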
## Next steps
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/overview.md
A common challenge for developers is the management of secrets and credentials u
Take a look at how you can use managed identities:<br/>
-> [!VIDEO https://channel9.msdn.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny]
active-directory Admin Units Members Add https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-members-add.md
Previously updated : 12/17/2021 Last updated : 01/14/2022
# Add users or groups to an administrative unit
-In Azure Active Directory (Azure AD), you can add users or groups to an administrative unit to restrict the scope of role permissions.
+In Azure Active Directory (Azure AD), you can add users or groups to an administrative unit to restrict the scope of role permissions. For more information about what scoped administrators can do, see [Administrative units in Azure Active Directory](administrative-units.md).
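As a minimal sketch, you can also add a user to an administrative unit by calling Microsoft Graph directly; both IDs below are placeholders:

```azurecli-interactive
# Add a user as a member of an administrative unit (placeholder IDs).
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/\$ref" \
  --body '{"@odata.id": "https://graph.microsoft.com/v1.0/users/{user-id}"}'
```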
## Prerequisites
Example
## Next steps
+- [Administrative units in Azure Active Directory](administrative-units.md)
- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md) - [Remove users or groups from an administrative unit](admin-units-members-remove.md)
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/administrative-units.md
Previously updated : 11/04/2020 Last updated : 01/14/2022
A central administrator could:
- Create a role with administrative permissions over only Azure AD users in the School of Business administrative unit. - Add the business school IT team to the role, along with its scope.
+Administrative units apply scope only to management permissions. They don't prevent members or administrators from using their [default user permissions](../fundamentals/users-default-permissions.md) to browse other users, groups, or resources outside the administrative unit. In the Microsoft 365 admin center, users outside a scoped admin's administrative units are filtered out. But you can browse other users in the Azure portal, PowerShell, and other Microsoft services.
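For example, a scoped administrator signed in to the Azure CLI can still enumerate users across the whole directory, because read access comes from default user permissions rather than from the administrative unit scope:

```azurecli-interactive
# Listing users isn't restricted by administrative unit scope;
# default user permissions allow directory-wide reads.
az ad user list --query "[].displayName" --output tsv
```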
+ ## License requirements Using administrative units requires an Azure AD Premium P1 license for each administrative unit administrator, and Azure AD Free licenses for administrative unit members. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
The following sections describe current support for administrative unit scenario
| Permissions | Graph/PowerShell | Azure portal | Microsoft 365 admin center | | | | | |
-| Administrative unit-scoped management of group properties and members | Supported | Supported | Not supported |
+| Administrative unit-scoped management of group properties and membership | Supported | Supported | Not supported |
| Administrative unit-scoped management of group licensing | Supported | Supported | Not supported |
-Administrative units apply scope only to management permissions. They don't prevent members or administrators from using their [default user permissions](../fundamentals/users-default-permissions.md) to browse other users, groups, or resources outside the administrative unit. In the Microsoft 365 admin center, users outside a scoped admin's administrative units are filtered out. But you can browse other users in the Azure portal, PowerShell, and other Microsoft services.
+> [!NOTE]
+> Adding a group to an administrative unit doesn't give scoped group administrators the ability to manage properties of the group's individual members. For example, a scoped group administrator can manage group membership, but not the authentication methods of users who are members of that group. To manage those users' authentication methods, the individual group members must be added directly as users of the administrative unit, and the group administrator must also be assigned a role that can manage user authentication methods.
+
+## Constraints
+
+Here are some of the constraints for administrative units.
+
+- Administrative units can't be nested.
+- Administrative unit-scoped user account administrators can't create or delete users.
+- A scoped role assignment doesn't apply to members of groups added to an administrative unit, unless the group members are directly added to the administrative unit. For more information, see [Add members to an administrative unit](admin-units-members-add.md).
+- Administrative units are currently not available in [Azure AD Identity Governance](../governance/identity-governance-overview.md).
## Next steps - [Create or delete administrative units](admin-units-manage.md) - [Add users or groups to an administrative unit](admin-units-members-add.md) - [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)-
+- [Administrative unit limits](../enterprise-users/directory-service-limits-restrictions.md?context=%2fazure%2factive-directory%2froles%2fcontext%2fugr-context)
aks Command Invoke https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/command-invoke.md
Title: Use `command invoke` to access a private Azure Kubernetes Service (AKS) c
description: Learn how to use `command invoke` to access a private Azure Kubernetes Service (AKS) cluster Previously updated : 11/30/2021 Last updated : 1/14/2022 # Use `command invoke` to access a private Azure Kubernetes Service (AKS) cluster
-Accessing a private AKS cluster requires that you connect to that cluster either from the cluster virtual network or from a peered network. These approaches require configuring a VPN, Express Route, or deploying a *jumpbox* within the cluster virtual network. Alternatively, you can use `command invoke` to access private clusters without having to configure a VPN or Express Route. Using `command invoke` allows you to remotely invoke commands like `kubectl` and `helm` on your private cluster through the Azure API without directly connecting to the cluster. Permissions for using `command invoke` are controlled through the `Microsoft.ContainerService/managedClusters/runcommand/action` and `Microsoft.ContainerService/managedclusters/commandResults/read` roles.
+Accessing a private AKS cluster requires that you connect to the cluster from its virtual network, from a peered network, or through a configured private endpoint. These approaches require you to configure a VPN or Express Route, deploy a *jumpbox* within the cluster virtual network, or create a private endpoint inside another virtual network. Alternatively, you can use `command invoke` to access private clusters without configuring a VPN or Express Route. Using `command invoke` allows you to remotely invoke commands like `kubectl` and `helm` on your private cluster through the Azure API without directly connecting to the cluster. Permissions for using `command invoke` are controlled through the `Microsoft.ContainerService/managedClusters/runcommand/action` and `Microsoft.ContainerService/managedclusters/commandResults/read` permissions.
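For example, the following minimal sketch runs a `kubectl` command on a private cluster through the Azure API; the resource group and cluster names are placeholders:

```azurecli-interactive
# Remotely run a kubectl command on a private cluster (placeholder names).
az aks command invoke \
  --resource-group myResourceGroup \
  --name myPrivateAKSCluster \
  --command "kubectl get pods -n kube-system"
```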
## Prerequisites
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-basic.md
Alternatively, you can also:
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure that you are using the latest release of Helm and have access to the *ingress-nginx* Helm repository. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes.
+### [Azure CLI](#tab/azure-cli)
+ This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
-## Import the images used by the Helm chart into your ACR
+### [Azure PowerShell](#tab/azure-powershell)
+
+This article also requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+++
+## Basic configuration
+
+To create a basic NGINX ingress controller without customizing the defaults, you'll use Helm.
+
+### [Azure CLI](#tab/azure-cli)
+
+```console
+NAMESPACE=ingress-basic
+
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo update
+
+helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace $NAMESPACE
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+```powershell-interactive
+$Namespace = 'ingress-basic'
+
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo update
+
+helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace $Namespace
+```
+++
+The above commands use the out-of-the-box configuration for simplicity. If needed, you can add parameters to customize the deployment, for example, `--set controller.replicaCount=3`. The next section shows a highly customized example of the ingress controller.
+
+## Customized configuration
+As an alternative to the basic configuration presented in the previous section, the following steps show how to deploy a customized ingress controller.
-This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+### Import the images used by the Helm chart into your ACR
+
+### [Azure CLI](#tab/azure-cli)
+
+To control image versions, you'll want to import them into your own Azure Container Registry. The [NGINX ingress controller Helm chart][ingress-nginx-helm-chart] relies on three container images. Use `az acr import` to import those images into your ACR.
```azurecli REGISTRY_NAME=<REGISTRY_NAME>
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATC
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+To control image versions, you'll want to import them into your own Azure Container Registry. The [NGINX ingress controller Helm chart][ingress-nginx-helm-chart] relies on three container images. Use `Import-AzContainerRegistryImage` to import those images into your ACR.
++
+```azurepowershell-interactive
+$RegistryName = "<REGISTRY_NAME>"
+$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName} ).ResourceGroupName
+$SourceRegistry = "k8s.gcr.io"
+$ControllerImage = "ingress-nginx/controller"
+$ControllerTag = "v1.0.4"
+$PatchImage = "ingress-nginx/kube-webhook-certgen"
+$PatchTag = "v1.1.1"
+$DefaultBackendImage = "defaultbackend-amd64"
+$DefaultBackendTag = "1.5"
+
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${ControllerImage}:${ControllerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${PatchImage}:${PatchTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${DefaultBackendImage}:${DefaultBackendTag}"
+```
+++ > [!NOTE] > In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
The ingress controller also needs to be scheduled on a Linux node. Windows Serve
> > If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.
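If you're enabling the setting on an already-deployed release rather than at install time, a minimal sketch (assuming the `nginx-ingress` release name and `ingress-basic` namespace used in this article) might look like this:

```console
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  --reuse-values \
  --set controller.service.externalTrafficPolicy=Local
```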
+### [Azure CLI](#tab/azure-cli)
+ ```console # Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set defaultBackend.image.digest="" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+# Add the ingress-nginx repository
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+
+# Set variable for ACR location to use for pulling images
+$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
+
+# Use Helm to deploy an NGINX ingress controller
+helm install nginx-ingress ingress-nginx/ingress-nginx `
+ --namespace ingress-basic --create-namespace `
+ --set controller.replicaCount=2 `
+ --set controller.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.image.registry=$AcrUrl `
+ --set controller.image.image=$ControllerImage `
+ --set controller.image.tag=$ControllerTag `
+ --set controller.image.digest="" `
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
+ --set controller.admissionWebhooks.patch.image.image=$PatchImage `
+ --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
+ --set controller.admissionWebhooks.patch.image.digest="" `
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
+ --set defaultBackend.image.registry=$AcrUrl `
+ --set defaultBackend.image.image=$DefaultBackendImage `
+ --set defaultBackend.image.tag=$DefaultBackendTag `
+ --set defaultBackend.image.digest=""
+```
+++
+## Check the load balancer service
+ When the Kubernetes load balancer service is created for the NGINX ingress controller, a dynamic public IP address is assigned, as shown in the following example output: ```
You can also:
[client-source-ip]: concepts-network.md#ingress-controllers [aks-supported versions]: supported-kubernetes-versions.md [aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
-[acr-helm]: ../container-registry/container-registry-helm-repos.md
+[aks-integrated-acr-ps]: cluster-container-registry-integration.md?tabs=azure-powershell#create-a-new-aks-cluster-with-acr-integration
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
+[azure-powershell-install]: /powershell/azure/install-az-ps
aks Ingress Internal Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-internal-ip.md
You can also:
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure that you are using the latest release of Helm and have access to the *ingress-nginx* Helm repository. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes. For more information on configuring and using Helm, see [Install applications with Helm in Azure Kubernetes Service (AKS)][use-helm].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+
+### [Azure CLI](#tab/azure-cli)
+
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+ This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-In addition, this article assumes you have an existing AKS cluster with an [integrated ACR][aks-integrated-acr].
+### [Azure PowerShell](#tab/azure-powershell)
+
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+
+This article also requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
++ ## Import the images used by the Helm chart into your ACR Often when using an AKS cluster with a private network, it is a requirement to manage the provenance of the container images used within the cluster. See [Best practices for container image management and security in Azure Kubernetes Service (AKS)][aks-container-best-practices] for more information. To support this requirement, and for completeness, the examples in this article rely on importing the three container images used by the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart] into your ACR.
+### [Azure CLI](#tab/azure-cli)
+ Use `az acr import` to import these images into your ACR. ```azurecli
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATC
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use `Import-AzContainerRegistryImage` to import those images into your ACR.
+
+```azurepowershell-interactive
+$RegistryName = "<REGISTRY_NAME>"
+$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName} ).ResourceGroupName
+$SourceRegistry = "k8s.gcr.io"
+$ControllerImage = "ingress-nginx/controller"
+$ControllerTag = "v1.0.4"
+$PatchImage = "ingress-nginx/kube-webhook-certgen"
+$PatchTag = "v1.1.1"
+$DefaultBackendImage = "defaultbackend-amd64"
+$DefaultBackendTag = "1.5"
+
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${ControllerImage}:${ControllerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${PatchImage}:${PatchTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${DefaultBackendImage}:${DefaultBackendTag}"
+```
+++ > [!NOTE] > In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
The ingress controller also needs to be scheduled on a Linux node. Windows Serve
> [!TIP] > If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, TLS pass-through will not work.
+### [Azure CLI](#tab/azure-cli)
+ ```console # Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set defaultBackend.image.digest="" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+# Add the ingress-nginx repository
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+
+# Set variable for ACR location to use for pulling images
+$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
+
+# Use Helm to deploy an NGINX ingress controller
+helm install nginx-ingress ingress-nginx/ingress-nginx `
+ --namespace ingress-basic --create-namespace `
+ --set controller.replicaCount=2 `
+ --set controller.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.image.registry=$AcrUrl `
+ --set controller.image.image=$ControllerImage `
+ --set controller.image.tag=$ControllerTag `
+ --set controller.image.digest="" `
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
+ --set controller.admissionWebhooks.patch.image.image=$PatchImage `
+ --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
+ --set controller.admissionWebhooks.patch.image.digest="" `
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
+ --set defaultBackend.image.registry=$AcrUrl `
+ --set defaultBackend.image.image=$DefaultBackendImage `
+ --set defaultBackend.image.tag=$DefaultBackendTag `
+ --set defaultBackend.image.digest=""
+
+```
+++ When the Kubernetes load balancer service is created for the NGINX ingress controller, your internal IP address is assigned. To get the public IP address, use the `kubectl get service` command. ```console
You can also:
[aks-http-app-routing]: http-application-routing.md [aks-ingress-own-tls]: ingress-own-tls.md [client-source-ip]: concepts-network.md#ingress-controllers
+[aks-quickstart-cli]: kubernetes-walkthrough.md
+[aks-quickstart-powershell]: kubernetes-walkthrough-powershell.md
+[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
[aks-configure-kubenet-networking]: configure-kubenet.md [aks-configure-advanced-networking]: configure-azure-cni.md [aks-supported versions]: supported-kubernetes-versions.md [ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx [aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[aks-integrated-acr-ps]: cluster-container-registry-integration.md?tabs=azure-powershell#create-a-new-aks-cluster-with-acr-integration
+[azure-powershell-install]: /powershell/azure/install-az-ps
[acr-helm]: ../container-registry/container-registry-helm-repos.md [aks-container-best-practices]: operator-best-practices-container-image-management.md
aks Ingress Own Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-own-tls.md
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [s
For more information on configuring and using Helm, see [Install applications with Helm in Azure Kubernetes Service (AKS)][use-helm].
-This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+### [Azure CLI](#tab/azure-cli)
In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+
+This article also requires that you're running PowerShell 7.2 or newer and Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+++ ## Import the images used by the Helm chart into your ACR
-This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use `az acr import` to import those images into your ACR.
```azurecli REGISTRY_NAME=<REGISTRY_NAME>
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATC
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use `Import-AzContainerRegistryImage` to import those images into your ACR.
+
+```azurepowershell-interactive
+$RegistryName = "<REGISTRY_NAME>"
+$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName} ).ResourceGroupName
+$SourceRegistry = "k8s.gcr.io"
+$ControllerImage = "ingress-nginx/controller"
+$ControllerTag = "v1.0.4"
+$PatchImage = "ingress-nginx/kube-webhook-certgen"
+$PatchTag = "v1.1.1"
+$DefaultBackendImage = "defaultbackend-amd64"
+$DefaultBackendTag = "1.5"
+
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${ControllerImage}:${ControllerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${PatchImage}:${PatchTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${DefaultBackendImage}:${DefaultBackendTag}"
+```
+++ > [!NOTE] > In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
The ingress controller also needs to be scheduled on a Linux node. Windows Serve
> [!TIP] > If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, TLS pass-through will not work.
+### [Azure CLI](#tab/azure-cli)
+ ```console # Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set defaultBackend.image.digest="" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+# Create a namespace for your ingress resources
+kubectl create namespace ingress-basic
+
+# Add the ingress-nginx repository
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+
+# Set variable for ACR location to use for pulling images
+$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
+
+# Use Helm to deploy an NGINX ingress controller
+helm install nginx-ingress ingress-nginx/ingress-nginx `
+ --namespace ingress-basic `
+ --set controller.replicaCount=2 `
+ --set controller.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.image.registry=$AcrUrl `
+ --set controller.image.image=$ControllerImage `
+ --set controller.image.tag=$ControllerTag `
+ --set controller.image.digest="" `
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
+ --set controller.admissionWebhooks.patch.image.image=$PatchImage `
+ --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
+ --set controller.admissionWebhooks.patch.image.digest="" `
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
+ --set defaultBackend.image.registry=$AcrUrl `
+ --set defaultBackend.image.image=$DefaultBackendImage `
+ --set defaultBackend.image.tag=$DefaultBackendTag `
+ --set defaultBackend.image.digest=""
+```
+++ During the installation, an Azure public IP address is created for the ingress controller. This public IP address is static for the life-span of the ingress controller. If you delete the ingress controller, the public IP address assignment is lost. If you then create an additional ingress controller, a new public IP address is assigned. If you wish to retain the use of the public IP address, you can instead [create an ingress controller with a static public IP address][aks-ingress-static-tls]. To get the public IP address, use the `kubectl get service` command.
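For example, assuming the `nginx-ingress` Helm release name and `ingress-basic` namespace used in this article:

```console
kubectl get service nginx-ingress-ingress-nginx-controller --namespace ingress-basic
```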
No ingress rules have been created yet. If you browse to the public IP address,
## Generate TLS certificates
+### [Azure CLI](#tab/azure-cli)
+ For this article, let's generate a self-signed certificate with `openssl`. For production use, you should request a trusted, signed certificate through a provider or your own certificate authority (CA). In the next step, you generate a Kubernetes *Secret* using the TLS certificate and private key generated by OpenSSL. The following example generates a 2048-bit RSA X509 certificate valid for 365 days named *aks-ingress-tls.crt*. The private key file is named *aks-ingress-tls.key*. A Kubernetes TLS secret requires both of these files.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-subj "/CN=demo.azure.com/O=aks-ingress-tls" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+For this article, let's generate a self-signed certificate with `New-SelfSignedCertificate`. For production use, you should request a trusted, signed certificate through a provider or your own certificate authority (CA). In the next step, you generate a Kubernetes *Secret* using the TLS certificate and private key you generated.
+
+The following example generates a 2048-bit RSA X509 certificate valid for 365 days named *aks-ingress-tls.crt*. The private key file is named *aks-ingress-tls.key*. A Kubernetes TLS secret requires both of these files.
+
+This article uses the *demo.azure.com* subject common name, which doesn't need to be changed. For production use, specify your own organizational values for the `-Subject` parameter:
+
+```powershell-interactive
+$Certificate = New-SelfSignedCertificate -KeyAlgorithm RSA -KeyLength 2048 -Subject "CN=demo.azure.com,O=aks-ingress-tls" -KeyExportPolicy Exportable -CertStoreLocation Cert:\CurrentUser\My\
+$certificatePem = [System.Security.Cryptography.PemEncoding]::Write("CERTIFICATE", $Certificate.RawData)
+$certificatePem -join '' | Out-File -FilePath aks-ingress-tls.crt
+
+$privKeyBytes = $Certificate.PrivateKey.ExportPkcs8PrivateKey()
+$privKeyPem = [System.Security.Cryptography.PemEncoding]::Write("PRIVATE KEY", $privKeyBytes)
+$privKeyPem -join '' | Out-File -FilePath aks-ingress-tls.key
+
+```
+++ ## Create Kubernetes secret for the TLS certificate To allow Kubernetes to use the TLS certificate and private key for the ingress controller, you create and use a Secret. The secret is defined once, and uses the certificate and key file created in the previous step. You then reference this secret when you define ingress routes. The following example creates a Secret name *aks-ingress-tls*:
+### [Azure CLI](#tab/azure-cli)
+ ```console kubectl create secret tls aks-ingress-tls \ --namespace ingress-basic \
kubectl create secret tls aks-ingress-tls \
--cert aks-ingress-tls.crt ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```powershell-interactive
+kubectl create secret tls aks-ingress-tls `
+ --namespace ingress-basic `
+ --key aks-ingress-tls.key `
+ --cert aks-ingress-tls.crt
+```
+++ ## Run demo applications An ingress controller and a Secret with your certificate have been configured. To see the ingress controller in action, run two demo applications in your AKS cluster. In this example, you use `kubectl apply` to deploy two instances of a simple *Hello world* application.
You can also:
[aks-supported versions]: supported-kubernetes-versions.md [client-source-ip]: concepts-network.md#ingress-controllers [aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[aks-integrated-acr-ps]: cluster-container-registry-integration.md?tabs=azure-powershell#create-a-new-aks-cluster-with-acr-integration
+[azure-powershell-install]: /powershell/azure/install-az-ps
[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Ingress Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-static-ip.md
You can also:
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure that you are using the latest release of Helm and have access to the *ingress-nginx* and *jetstack* Helm repositories. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes. For more information on configuring and using Helm, see [Install applications with Helm in Azure Kubernetes Service (AKS)][use-helm]. For upgrade instructions, see the [Helm install docs][helm-install].
-This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+### [Azure CLI](#tab/azure-cli)
In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+
+This article also requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+++ ## Import the images used by the Helm chart into your ACR
-This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use `az acr import` to import those images into your ACR.
```azurecli REGISTRY_NAME=<REGISTRY_NAME>
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGE
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use `Import-AzContainerRegistryImage` to import those images into your ACR.
+
+```azurepowershell-interactive
+$RegistryName = "<REGISTRY_NAME>"
+$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName} ).ResourceGroupName
+$ControllerRegistry = "k8s.gcr.io"
+$ControllerImage = "ingress-nginx/controller"
+$ControllerTag = "v1.0.4"
+$PatchRegistry = "docker.io"
+$PatchImage = "jettech/kube-webhook-certgen"
+$PatchTag = "v1.5.1"
+$DefaultBackendRegistry = "k8s.gcr.io"
+$DefaultBackendImage = "defaultbackend-amd64"
+$DefaultBackendTag = "1.5"
+$CertManagerRegistry = "quay.io"
+$CertManagerTag = "v1.3.1"
+$CertManagerImageController = "jetstack/cert-manager-controller"
+$CertManagerImageWebhook = "jetstack/cert-manager-webhook"
+$CertManagerImageCaInjector = "jetstack/cert-manager-cainjector"
+
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $ControllerRegistry -SourceImage "${ControllerImage}:${ControllerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $PatchRegistry -SourceImage "${PatchImage}:${PatchTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $DefaultBackendRegistry -SourceImage "${DefaultBackendImage}:${DefaultBackendTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageController}:${CertManagerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageWebhook}:${CertManagerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageCaInjector}:${CertManagerTag}"
+
+```
+++ > [!NOTE] > In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGE
By default, an NGINX ingress controller is created with a new public IP address assignment. This public IP address is only static for the life-span of the ingress controller, and is lost if the controller is deleted and re-created. A common configuration requirement is to provide the NGINX ingress controller an existing static public IP address. The static public IP address remains if the ingress controller is deleted. This approach allows you to use existing DNS records and network configurations in a consistent manner throughout the lifecycle of your applications.
+### [Azure CLI](#tab/azure-cli)
+ If you need to create a static public IP address, first get the resource group name of the AKS cluster with the [az aks show][az-aks-show] command: ```azurecli-interactive
Next, create a public IP address with the *static* allocation method using the [
az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+If you need to create a static public IP address, first get the resource group name of the AKS cluster with the [Get-AzAksCluster][get-az-aks-cluster] command:
+
+```azurepowershell-interactive
+(Get-AzAksCluster -ResourceGroupName $ResourceGroup -Name myAKSCluster).NodeResourceGroup
+```
+
+Next, create a public IP address with the *static* allocation method using the [New-AzPublicIpAddress][new-az-public-ip-address] command. The following example creates a public IP address named *myAKSPublicIP* in the AKS cluster resource group obtained in the previous step:
+
+```azurepowershell-interactive
+(New-AzPublicIpAddress -ResourceGroupName MC_myResourceGroup_myAKSCluster_eastus -Name myAKSPublicIP -Sku Standard -AllocationMethod Static -Location eastus).IpAddress
+```
+++ > [!NOTE] > The above commands create an IP address that will be deleted if you delete your AKS cluster. Alternatively, you can create an IP address in a different resource group which can be managed separately from your AKS cluster. If you create an IP address in a different resource group, ensure the cluster identity used by the AKS cluster has delegated permissions to the other resource group, such as *Network Contributor*. For more information, see [Use a static public IP address and DNS label with the AKS load balancer][aks-static-ip].
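As a sketch, delegating *Network Contributor* on the other resource group to the cluster identity might look like the following; all values are placeholders:

```azurecli-interactive
# Grant the cluster identity Network Contributor on the resource group
# that holds the public IP address (placeholder values).
az role assignment create \
  --assignee <CLUSTER_IDENTITY_OBJECT_ID> \
  --role "Network Contributor" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<IP_RESOURCE_GROUP>"
```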
Update the following script with the **IP address** of your ingress controller a
> [!IMPORTANT] > You must replace `<STATIC_IP>` and `<DNS_LABEL>` with your own IP address and unique name when running the command. The DNS_LABEL must be unique within the Azure region.
+### [Azure CLI](#tab/azure-cli)
+ ```console # Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```powershell-interactive
+# Add the ingress-nginx repository
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+
+# Set variable for ACR location to use for pulling images
+$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
+$StaticIp = "<STATIC_IP>"
+$DnsLabel = "<DNS_LABEL>"
+
+helm install nginx-ingress ingress-nginx/ingress-nginx `
+ --namespace ingress-basic --create-namespace `
+ --set controller.replicaCount=2 `
+ --set controller.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.image.registry=$AcrUrl `
+ --set controller.image.image=$ControllerImage `
+ --set controller.image.tag=$ControllerTag `
+ --set controller.image.digest="" `
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
+ --set controller.admissionWebhooks.patch.image.image=$PatchImage `
+ --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
+ --set controller.admissionWebhooks.patch.image.digest="" `
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
+ --set defaultBackend.image.registry=$AcrUrl `
+ --set defaultBackend.image.image=$DefaultBackendImage `
+ --set defaultBackend.image.tag=$DefaultBackendTag `
+ --set defaultBackend.image.digest="" `
+ --set controller.service.loadBalancerIP=$StaticIp `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel
+```
++++ When the Kubernetes load balancer service is created for the NGINX ingress controller, your static IP address is assigned, as shown in the following example output: ```
No ingress rules have been created yet, so the NGINX ingress controller's defaul
You can verify that the DNS name label has been applied by querying the FQDN on the public IP address as follows:
+### [Azure CLI](#tab/azure-cli)
+ ```azurecli-interactive az network public-ip list --resource-group MC_myResourceGroup_myAKSCluster_eastus --query "[?name=='myAKSPublicIP'].[dnsSettings.fqdn]" -o tsv ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+(Get-AzPublicIpAddress -ResourceGroupName MC_myResourceGroup_myAKSCluster_eastus -Name myAKSPublicIP).DnsSettings.Fqdn
+```
+++ The ingress controller is now accessible through the IP address or the FQDN. ## Install cert-manager
The NGINX ingress controller supports TLS termination. There are several ways to
To install the cert-manager controller in a Kubernetes RBAC-enabled cluster, use the following `helm install` command:
+### [Azure CLI](#tab/azure-cli)
+ ```console # Label the cert-manager namespace to disable resource validation kubectl label namespace ingress-basic cert-manager.io/disable-validation=true
helm install cert-manager jetstack/cert-manager \
--set webhook.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_WEBHOOK \ --set webhook.image.tag=$CERT_MANAGER_TAG \ --set cainjector.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CAINJECTOR \
- --set cainjector.image.tag=$CERT_MANAGER_TAG
+ --set cainjector.image.tag=$CERT_MANAGER_TAG
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```powershell-interactive
+# Label the cert-manager namespace to disable resource validation
+kubectl label namespace ingress-basic cert-manager.io/disable-validation=true
+
+# Add the Jetstack Helm repository
+helm repo add jetstack https://charts.jetstack.io
+
+# Update your local Helm chart repository cache
+helm repo update
+
+# Install the cert-manager Helm chart
+helm install cert-manager jetstack/cert-manager `
+ --namespace ingress-basic `
+ --version $CertManagerTag `
+ --set installCRDs=true `
+ --set nodeSelector."kubernetes\.io/os"=linux `
+ --set image.repository=$AcrUrl/$CertManagerImageController `
+ --set image.tag=$CertManagerTag `
+ --set webhook.image.repository=$AcrUrl/$CertManagerImageWebhook `
+ --set webhook.image.tag=$CertManagerTag `
+ --set cainjector.image.repository=$AcrUrl/$CertManagerImageCaInjector `
+ --set cainjector.image.tag=$CertManagerTag
+```
+++ For more information on cert-manager configuration, see the [cert-manager project][cert-manager]. ## Create a CA cluster issuer
An ingress controller and a certificate management solution have been configured
To see the ingress controller in action, run two demo applications in your AKS cluster. In this example, you use `kubectl apply` to deploy two instances of a simple *Hello world* application.
-Create a *aks-helloworld.yaml* file and copy in the following example YAML:
+Create an *aks-helloworld-one.yaml* file and copy in the following example YAML:
```yml apiVersion: apps/v1 kind: Deployment metadata:
- name: aks-helloworld
+ name: aks-helloworld-one
spec: replicas: 1 selector: matchLabels:
- app: aks-helloworld
+ app: aks-helloworld-one
template: metadata: labels:
- app: aks-helloworld
+ app: aks-helloworld-one
spec: containers:
- - name: aks-helloworld
+ - name: aks-helloworld-one
image: mcr.microsoft.com/azuredocs/aks-helloworld:v1 ports: - containerPort: 80
spec:
apiVersion: v1 kind: Service metadata:
- name: aks-helloworld
+ name: aks-helloworld-one
spec: type: ClusterIP ports: - port: 80 selector:
- app: aks-helloworld
+ app: aks-helloworld-one
```
-Create a *ingress-demo.yaml* file and copy in the following example YAML:
+Create an *aks-helloworld-two.yaml* file and copy in the following example YAML:
```yml apiVersion: apps/v1 kind: Deployment metadata:
- name: ingress-demo
+ name: aks-helloworld-two
spec: replicas: 1 selector: matchLabels:
- app: ingress-demo
+ app: aks-helloworld-two
template: metadata: labels:
- app: ingress-demo
+ app: aks-helloworld-two
spec: containers:
- - name: ingress-demo
+ - name: aks-helloworld-two
image: mcr.microsoft.com/azuredocs/aks-helloworld:v1 ports: - containerPort: 80
spec:
apiVersion: v1 kind: Service metadata:
- name: ingress-demo
+ name: aks-helloworld-two
spec: type: ClusterIP ports: - port: 80 selector:
- app: ingress-demo
+ app: aks-helloworld-two
``` Run the two demo applications using `kubectl apply`: ```console
-kubectl apply -f aks-helloworld.yaml --namespace ingress-basic
-kubectl apply -f ingress-demo.yaml --namespace ingress-basic
+kubectl apply -f aks-helloworld-one.yaml --namespace ingress-basic
+kubectl apply -f aks-helloworld-two.yaml --namespace ingress-basic
``` ## Create an ingress route
spec:
pathType: Prefix backend: service:
- name: aks-helloworld
+ name: aks-helloworld-one
port: number: 80 - path: /hello-world-two(/|$)(.*) pathType: Prefix backend: service:
- name: ingress-demo
+ name: aks-helloworld-two
port: number: 80 - path: /(.*) pathType: Prefix backend: service:
- name: aks-helloworld
+ name: aks-helloworld-one
port: number: 80 ```
release "cert-manager" deleted
Next, remove the two sample applications: ```console
-kubectl delete -f aks-helloworld.yaml --namespace ingress-basic
-kubectl delete -f ingress-demo.yaml --namespace ingress-basic
+kubectl delete -f aks-helloworld-one.yaml --namespace ingress-basic
+kubectl delete -f aks-helloworld-two.yaml --namespace ingress-basic
``` Delete the namespace itself. Use the `kubectl delete` command and specify your namespace name:
kubectl delete namespace ingress-basic
Finally, remove the static public IP address created for the ingress controller. Provide your *MC_* cluster resource group name obtained in the first step of this article, such as *MC_myResourceGroup_myAKSCluster_eastus*:
+### [Azure CLI](#tab/azure-cli)
+ ```azurecli-interactive az network public-ip delete --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Remove-AzPublicIpAddress -ResourceGroupName MC_myResourceGroup_myAKSCluster_eastus -Name myAKSPublicIP
+```
+++ ## Next steps This article included some external components to AKS. To learn more about these components, see the following project pages:
You can also:
[use-helm]: kubernetes-helm.md
[azure-cli-install]: /cli/azure/install-azure-cli
[az-aks-show]: /cli/azure/aks#az_aks_show
+[get-az-aks-cluster]: /powershell/module/az.aks/get-azakscluster
[az-network-public-ip-create]: /cli/azure/network/public-ip#az_network_public_ip_create
+[new-az-public-ip-address]: /powershell/module/az.network/new-azpublicipaddress
[aks-ingress-internal]: ingress-internal-ip.md
[aks-ingress-basic]: ingress-basic.md
[aks-ingress-tls]: ingress-tls.md
[aks-http-app-routing]: http-application-routing.md
[aks-ingress-own-tls]: ingress-own-tls.md
[aks-quickstart-cli]: kubernetes-walkthrough.md
+[aks-quickstart-powershell]: kubernetes-walkthrough-powershell.md
[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
[client-source-ip]: concepts-network.md#ingress-controllers
-[install-azure-cli]: /cli/azure/install-azure-cli
[aks-static-ip]: static-ip.md
[aks-supported versions]: supported-kubernetes-versions.md
[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[aks-integrated-acr-ps]: cluster-container-registry-integration.md?tabs=azure-powershell#create-a-new-aks-cluster-with-acr-integration
+[azure-powershell-install]: /powershell/azure/install-az-ps
[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-tls.md
You can also:
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
This article also assumes you have [a custom domain][custom-domain] with a [DNS Zone][dns-zone] in the same resource group as your AKS cluster.
In addition, this article assumes you have an existing AKS cluster with an integ
This article also requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+
## Import the images used by the Helm chart into your ACR

This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images.
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGE
### [Azure PowerShell](#tab/azure-powershell)
+Use `Import-AzContainerRegistryImage` to import those images into your ACR.
+
+```azurepowershell
+$RegistryName = "<REGISTRY_NAME>"
+$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName} ).ResourceGroupName
kubectl create namespace ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Set variable for ACR location to use for pulling images
-$AcrUrl = "$RegistryName.azurecr.io"
-
-# Get the SHA256 digest of the controller and patch images
-$ControllerDigest = (Get-AzContainerRegistryTag -RegistryName $RegistryName -RepositoryName $ControllerImage -Name $ControllerTag).Attributes.digest
-$PatchDigest = (Get-AzContainerRegistryTag -RegistryName $RegistryName -RepositoryName $PatchImage -Name $PatchTag).Attributes.digest
+$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx `
helm install nginx-ingress ingress-nginx/ingress-nginx `
    --set controller.image.registry=$AcrUrl `
    --set controller.image.image=$ControllerImage `
    --set controller.image.tag=$ControllerTag `
- --set controller.image.digest=$ControllerDigest `
+ --set controller.image.digest="" `
    --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
    --set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
    --set controller.admissionWebhooks.patch.image.image=$PatchImage `
    --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
- --set controller.admissionWebhooks.patch.image.digest=$PatchDigest `
+ --set controller.admissionWebhooks.patch.image.digest="" `
    --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
    --set defaultBackend.image.registry=$AcrUrl `
    --set defaultBackend.image.image=$DefaultBackendImage `
- --set defaultBackend.image.tag=$DefaultBackendTag
+ --set defaultBackend.image.tag=$DefaultBackendTag `
+ --set defaultBackend.image.digest=""
```
New-AzDnsRecordSet -Name "*" `
    -RecordType A `
    -ResourceGroupName <Name of Resource Group for the DNS Zone> `
    -ZoneName <Custom Domain Name> `
- -TTL 3600
+ -TTL 3600 `
    -DnsRecords $Records
```
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service cluster
description: Learn how to create a private Azure Kubernetes Service (AKS) cluster Previously updated : 11/30/2021 Last updated : 01/12/2022
The API server endpoint has no public IP address. To manage the API server, you'
* Use a VM in a separate network and set up [Virtual network peering][virtual-network-peering]. See the section below for more information on this option. * Use an [Express Route or VPN][express-route-or-VPN] connection. * Use the [AKS `command invoke` feature][command-invoke].
+* Use a [private endpoint][private-endpoint-service] connection.
-Creating a VM in the same VNET as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges.
+Creating a VM in the same VNET as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges.
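As an illustration of the `command invoke` option, the following sketch runs a `kubectl` command against the private cluster through the Azure platform, with no network line of sight required (the resource group and cluster names are placeholders):

```azurecli
az aks command invoke \
  --resource-group myResourceGroup \
  --name myPrivateAKSCluster \
  --command "kubectl get pods --namespace kube-system"
```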
## Virtual network peering
As mentioned, virtual network peering is one way to access your private cluster.
> [!NOTE]
> If you're using [Bring Your Own Route Table with kubenet](./configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet) and Bring Your Own DNS with a private cluster, the cluster creation will fail. After the failed creation, you'll need to associate the [RouteTable](./configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet) in the node resource group with the subnet, then retry the creation.
+## Using a private endpoint connection
+
+A private endpoint can be set up so that an Azure Virtual Network doesn't need to be peered to communicate with the private cluster. To use a private endpoint, create a new private endpoint in your virtual network, then create a link between your virtual network and a new private DNS zone.
+
+> [!IMPORTANT]
+> If the virtual network is configured with custom DNS servers, private DNS will need to be set up appropriately for the environment. See the [virtual networks name resolution documentation][virtual-networks-name-resolution] for more details.
+
+1. On the Azure portal menu or from the Home page, select **Create a resource**.
+2. Search for **Private Endpoint** and select **Create > Private Endpoint**.
+3. Select **Create**.
+4. On the **Basics** tab, set up the following options:
+ * **Project details**:
+ * Select an Azure **Subscription**.
+ * Select the Azure **Resource group** where your virtual network is located.
+ * **Instance details**:
+ * Enter a **Name** for the private endpoint, such as *myPrivateEndpoint*.
+ * Select a **Region** for the private endpoint.
+
+ > [!IMPORTANT]
+ > Check that the region you select matches the region of the virtual network you want to connect from; otherwise, your virtual network won't appear in the **Configuration** tab.
+
+5. Select **Next: Resource** when complete.
+6. On the **Resource** tab, set up the following options:
+ * **Connection method**: *Connect to an Azure resource in my directory*
+ * **Subscription**: Select your Azure Subscription where the private cluster is located
+ * **Resource type**: *Microsoft.ContainerService/managedClusters*
+ * **Resource**: *myPrivateAKSCluster*
+ * **Target sub-resource**: *management*
+7. Select **Next: Configuration** when complete.
+8. On the **Configuration** tab, set up the following options:
+ * **Networking**:
+ * **Virtual network**: *myVirtualNetwork*
+ * **Subnet**: *mySubnet*
+9. Select **Next: Tags** when complete.
+10. (Optional) On the **Tags** tab, set up key-values as needed.
+11. Select **Next: Review + create**, and then select **Create** when validation completes.
+
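If you prefer scripting this step, the following Azure CLI sketch approximates the portal flow above; it assumes the placeholder names from these steps, and you must substitute your private cluster's full resource ID:

```azurecli
az network private-endpoint create \
  --resource-group myResourceGroup \
  --name myPrivateEndpoint \
  --vnet-name myVirtualNetwork \
  --subnet mySubnet \
  --private-connection-resource-id <private-cluster-resource-id> \
  --group-id management \
  --connection-name myConnection
```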
+Record the private IP address of the private endpoint. This private IP address is used in a later step.
+
+After the private endpoint has been created, create a new private DNS zone with the same name as the private DNS zone that was created by the private cluster.
+
+1. Go to the node resource group in the Azure portal.
+2. Select the private DNS zone and record:
+ * the name of the private DNS zone, which follows the pattern `*.privatelink.<region>.azmk8s.io`
+ * the name of the A record (excluding the private DNS name)
+ * the time-to-live (TTL)
+3. On the Azure portal menu or from the Home page, select **Create a resource**.
+4. Search for **Private DNS zone** and select **Create > Private DNS Zone**.
+5. On the **Basics** tab, set up the following options:
+ * **Project details**:
+ * Select an Azure **Subscription**
+ * Select the Azure **Resource group** where the private endpoint was created
+ * **Instance details**:
+ * Enter the **Name** of the DNS zone retrieved from previous steps
+ * **Region** defaults to the Azure Resource group location
+6. Select **Review + create** when complete and select **Create** when validation completes.
+
+After the private DNS zone is created, create an A record. This record associates the private endpoint to the private cluster.
+
+1. Go to the private DNS zone created in previous steps.
+2. On the **Overview** page, select **+ Record set**.
+3. On the **Add record set** tab, set up the following options:
+ * **Name**: Input the name retrieved from the A record in the private cluster's DNS zone
+ * **Type**: *A - Alias record to IPv4 address*
+ * **TTL**: Input the number to match the record from the A record private cluster's DNS zone
+ * **TTL Unit**: Change the dropdown value to match the A record from the private cluster's DNS zone
+ * **IP address**: Input the IP address of the private endpoint that was created previously
+
+> [!IMPORTANT]
+> When creating the A record, use only the name, and not the fully qualified domain name (FQDN).
+
+Once the A record is created, link the private DNS zone to the virtual network that will access the private cluster.
+
+1. Go to the private DNS zone created in previous steps.
+2. In the left pane, select **Virtual network links**.
+3. Create a new link to add the virtual network to the private DNS zone. It takes a few minutes for the DNS zone link to become available.
+
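The zone, record, and link steps can also be scripted. A hedged Azure CLI sketch, assuming the zone name, A record name, and private endpoint IP you recorded earlier:

```azurecli
# Create the private DNS zone using the name recorded from the node resource group
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name <name>.privatelink.<region>.azmk8s.io

# Add the A record that points at the private endpoint IP
az network private-dns record-set a add-record \
  --resource-group myResourceGroup \
  --zone-name <name>.privatelink.<region>.azmk8s.io \
  --record-set-name <a-record-name> \
  --ipv4-address <private-endpoint-ip>

# Link the zone to the virtual network that will access the private cluster
az network private-dns link vnet create \
  --resource-group myResourceGroup \
  --zone-name <name>.privatelink.<region>.azmk8s.io \
  --name myDnsLink \
  --virtual-network myVirtualNetwork \
  --registration-enabled false
```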
+> [!WARNING]
+> If the private cluster is stopped and restarted, the private cluster's original private link service is removed and re-created, which breaks the connection between your private endpoint and the private cluster. To resolve this issue, delete and re-create any user created private endpoints linked to the private cluster. DNS records will also need to be updated if the re-created private endpoints have new IP addresses.
+ ## Limitations
-* IP authorized ranges can't be applied to the private api server endpoint, they only apply to the public API server
+* IP authorized ranges can't be applied to the private API server endpoint; they only apply to the public API server
* [Azure Private Link service limitations][private-link-service] apply to private clusters.
-* No support for Azure DevOps Microsoft-hosted Agents with private clusters. Consider to use [Self-hosted Agents](/azure/devops/pipelines/agents/agents?tabs=browser).
-* For customers that need to enable Azure Container Registry to work with private AKS, the Container Registry virtual network must be peered with the agent cluster virtual network.
+* No support for Azure DevOps Microsoft-hosted Agents with private clusters. Consider using [Self-hosted Agents](/azure/devops/pipelines/agents/agents?tabs=browser).
+* If you need to enable Azure Container Registry to work with a private AKS cluster, [set up a private link for the container registry in the cluster virtual network][container-registry-private-link] or set up peering between the Container Registry virtual network and the private cluster's virtual network.
* No support for converting existing AKS clusters into private clusters
* Deleting or modifying the private endpoint in the customer subnet will cause the cluster to stop functioning.
As mentioned, virtual network peering is one way to access your private cluster.
[az-extension-add]: /cli/azure/extension#az_extension_add
[az-extension-update]: /cli/azure/extension#az_extension_update
[private-link-service]: ../private-link/private-link-service-overview.md#limitations
+[private-endpoint-service]: ../private-link/private-endpoint-overview.md
[virtual-network-peering]: ../virtual-network/virtual-network-peering-overview.md
[azure-bastion]: ../bastion/tutorial-create-host-portal.md
[express-route-or-vpn]: ../expressroute/expressroute-about-virtual-network-gateways.md
[devops-agents]: /azure/devops/pipelines/agents/agents
[availability-zones]: availability-zones.md
[command-invoke]: command-invoke.md
+[container-registry-private-link]: ../container-registry/container-registry-private-link.md
+[virtual-networks-name-resolution]: ../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-overview.md
Azure Analysis Services is a fully managed platform as a service (PaaS) that pro
In Azure portal, you can [create a server](analysis-services-create-server.md) within minutes. And with Azure Resource Manager [templates](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md) and PowerShell, you can create servers using a declarative template. With a single template, you can deploy server resources along with other Azure components such as storage accounts and Azure Functions.
-**Video:** Check out [Automating deployment](https://channel9.msdn.com/series/Azure-Analysis-Services/AzureAnalysisServicesAutomation) to learn more about how you can use Azure Automation to speed server creation.
+**Video:** Check out Automating deployment to learn more about how you can use Azure Automation to speed server creation.
Azure Analysis Services integrates with many Azure services enabling you to build sophisticated analytics solutions. Integration with [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) provides secure, role-based access to your critical data. Integrate with [Azure Data Factory](../data-factory/introduction.md) pipelines by including an activity that loads data into the model. [Azure Automation](../automation/automation-intro.md) and [Azure Functions](../azure-functions/functions-overview.md) can be used for lightweight orchestration of models using custom code.
api-management Api Management Caching Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-caching-policies.md
The `cache-store` policy caches responses according to the specified cache setti
```

#### Example using policy expressions
-This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backed service's `Cache-Control` directive. For a demonstration of configuring and using this policy, see [Cloud Cover Episode 177: More API Management Features with Vlad Vinogradsky](https://channel9.msdn.com/Shows/Cloud+Cover/Episode-177-More-API-Management-Features-with-Vlad-Vinogradsky) and fast-forward to 25:25.
+This example shows how to configure an API Management response caching duration that matches the response caching of the backend service, as specified by the backend service's `Cache-Control` directive. For a demonstration of configuring and using this policy, see Cloud Cover Episode 177: More API Management Features with Vlad Vinogradsky and fast-forward to 25:25.
```xml
<!-- The following cache policy snippets demonstrate how to control API Management response cache duration with Cache-Control headers sent by the backend service. -->
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-role-based-access-control.md
New-AzRoleAssignment -ObjectId <object ID of the user account> -RoleDefinitionNa
The [Azure Resource Manager resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftapimanagement) article contains the list of permissions that can be granted on the API Management level.
-## Video
--
-> [!VIDEO https://channel9.msdn.com/Blogs/AzureApiMgmt/Role-Based-Access-Control-in-API-Management/player]
->
->
- ## Next steps To learn more about Role-Based Access Control in Azure, see the following articles:
api-management Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/quickstart-arm-template.md
Title: Quickstart - Create Azure API Management instance by using ARM template
description: Learn how to create an Azure API Management instance in the Developer tier by using an Azure Resource Manager template (ARM template). -+
+tags: azure-resource-manager
app-service App Service Asp Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-asp-net-migration.md
Azure Migrate recently announced at-scale, agentless discovery, and assessment o
| **Best practices** |
| [Assessment best practices in Azure Migrate Discovery and assessment tool](../migrate/best-practices-assessment.md) |
| **Video** |
-| [At scale discovery and assessment for ASP.NET app migration with Azure Migrate](https://channel9.msdn.com/Shows/Inside-Azure-for-IT/At-scale-discovery-and-assessment-for-ASPNET-app-migration-with-Azure-Migrate) |
+| [At scale discovery and assessment for ASP.NET app migration with Azure Migrate](/Shows/Inside-Azure-for-IT/At-scale-discovery-and-assessment-for-ASPNET-app-migration-with-Azure-Migrate) |
## Migrate from an IIS server
<!-- Intent: discover how to assess and migrate from a single IIS server -->
-You can migrate ASP.NET web apps from single IIS server discovered through Azure Migrate's at-scale discovery experience using [PowerShell scripts](https://github.com/Azure/App-Service-Migration-Assistant/wiki/PowerShell-Scripts) [(download)](https://appmigration.microsoft.com/api/download/psscriptpreview/AppServiceMigrationScripts.zip). Watch the video for [updates on migrating to Azure App Service](https://channel9.msdn.com/Shows/The-Launch-Space/Updates-on-Migrating-to-Azure-App-Service).
+You can migrate ASP.NET web apps from a single IIS server discovered through Azure Migrate's at-scale discovery experience using [PowerShell scripts](https://github.com/Azure/App-Service-Migration-Assistant/wiki/PowerShell-Scripts) [(download)](https://appmigration.microsoft.com/api/download/psscriptpreview/AppServiceMigrationScripts.zip). Watch the video for [updates on migrating to Azure App Service](/Shows/The-Launch-Space/Updates-on-Migrating-to-Azure-App-Service).
## ASP.NET web app migration
<!-- Intent: migrate a single web app -->
app-service App Service Java Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-java-migration.md
Azure App Service provides tools to discover web apps deployed to on-premise web
## Standalone Tomcat Web App Migration (Windows OS)
-Download this [preview tool](https://azure.microsoft.com/services/app-service/migration-assistant/) to migrate a Java web app on Apache Tomcat to App Service on Windows. For more information, see the [video](https://channel9.msdn.com/Shows/The-Launch-Space/Updates-on-Migrating-to-Azure-App-Service) and [how-to](https://github.com/Azure/App-Service-Migration-Assistant/wiki/TOMCAT-Java-Information).
+Download this [preview tool](https://azure.microsoft.com/services/app-service/migration-assistant/) to migrate a Java web app on Apache Tomcat to App Service on Windows. For more information, see the [video](/Shows/The-Launch-Space/Updates-on-Migrating-to-Azure-App-Service) and [how-to](https://github.com/Azure/App-Service-Migration-Assistant/wiki/TOMCAT-Java-Information).
## Containerize standalone Tomcat Web App
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/how-to-migrate.md
+
+ Title: How to migrate App Service Environment v2 to App Service Environment v3
+description: Learn how to migrate your App Service Environment v2 to App Service Environment v3
++ Last updated : 1/17/2022++
+# How to migrate App Service Environment v2 to App Service Environment v3
+
+> [!IMPORTANT]
+> This article describes a feature that is currently in preview. You should use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
+>
+
+An App Service Environment (ASE) v2 can be migrated to an [App Service Environment v3](overview.md). To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
+
+## Prerequisites
+
+Ensure you understand how migrating to an App Service Environment v3 will affect your applications. Review the [migration process](migrate.md#overview-of-the-migration-process) to understand the process timeline and where and when you'll need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which may answer some questions you currently have.
+
+For the initial preview of the migration feature, you should follow the steps below in order, as written, since you'll be making Azure REST API calls. The recommended way for making these calls is by using the [Azure CLI](/cli/azure/). For information about other methods, see [Getting Started with Azure REST](/rest/api/azure/).
+
+For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use the [Azure Cloud Shell](https://shell.azure.com/).
+
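For example, before making the REST calls below, sign in and select the subscription that contains your App Service Environment (the subscription ID is a placeholder):

```azurecli
az login
az account set --subscription "<subscription-id>"
```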
+## 1. Get your App Service Environment ID
+
+Run these commands to get your App Service Environment ID and store it as an environment variable. Replace the placeholders for name and resource group with your values for the App Service Environment you want to migrate.
+
+```azurecli
+ASE_NAME=<Your-App-Service-Environment-name>
+ASE_RG=<Your-Resource-Group>
+ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --query id --output tsv)
+```
+
+## 2. Delegate your App Service Environment subnet
+
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and update the delegation if needed before migrating. You can update the delegation either by running the following command or by navigating to the subnet in the [Azure portal](https://portal.azure.com).
+
+```azurecli
+az network vnet subnet update -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
+```
+
+![subnet delegation sample](./media/migration/subnet-delegation.jpg)
+
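To confirm the delegation took effect, you can query the subnet; a quick check using the same placeholder names as above:

```azurecli
az network vnet subnet show -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name> --query "delegations[].serviceName" --output tsv
```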
+## 3. Validate migration is supported
+
+The following command will check whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. For an estimate of when you can migrate, see the [timeline](migrate.md#preview-limitations). If your environment [won't be supported for migration](migrate.md#migration-feature-limitations) or you want to migrate to ASEv3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=validation"
+```
+
+If there are no errors, your migration is supported and you can continue to the next step.
+
+## 4. Generate IP addresses for your new App Service Environment v3
+
+Run the following command to create the new IPs. This step will take about 5 minutes to complete. Don't scale or make changes to your existing App Service Environment during this time.
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=premigration" --verbose
+```
+
+Run the following command to check the status of this step.
+
+```azurecli
+az rest --method get --uri "${ASE_ID}?api-version=2018-11-01" --query properties.status
+```
+
+If it's in progress, you'll get a status of "Migrating". Once you get a status of "Ready", run the following command to get your new IPs. If you don't see the new IPs immediately, wait a few minutes and try again.
+
+```azurecli
+az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2018-11-01"
+```
+
+## 5. Update dependent resources with new IPs
+
+Don't move on to full migration immediately after completing the previous step. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates.
+
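What this step involves depends entirely on your environment. As one hedged illustration, if an Azure DNS A record points at your environment's old inbound IP, you might swap in the new one (all names here are placeholders; your dependent resources will differ):

```azurecli
# Replace the old inbound IP with the new one on an existing A record
az network dns record-set a remove-record -g <dns-rg> -z <zone-name> -n <record-name> -a <old-inbound-ip>
az network dns record-set a add-record -g <dns-rg> -z <zone-name> -n <record-name> -a <new-inbound-ip>
```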
+## 6. Full migration
+
+Only start this step once you've completed all pre-migration actions listed above and understand the [implications of full migration](migrate.md#full-migration) including what will happen during this time. There will be about one hour of downtime. Don't scale or make changes to your existing App Service Environment during this step.
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=fullmigration" --verbose
+```
+
+Run the following command to check the status of your migration. The status will show as "Migrating" while in progress.
+
+```azurecli
+az rest --method get --uri "${ASE_ID}?api-version=2018-11-01" --query properties.status
+```
+
+Once you get a status of "Ready", migration is done and you have an App Service Environment v3.
+
+Get the details of your new environment by running the following command or by navigating to the [Azure portal](https://portal.azure.com).
+
+```azurecli
+az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Using an App Service Environment v3](using.md)
+
+> [!div class="nextstepaction"]
+> [App Service Environment v3 Networking](networking.md)
app-service Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/intro.md
In ASEv1, you need to manage all of the resources manually. That includes the fr
ASEv1 uses a different pricing model from ASEv2. In ASEv1, you pay for each vCPU allocated. That includes vCPUs used for front ends or workers that aren't hosting any workloads. In ASEv1, the default maximum-scale size of an ASE is 55 total hosts. That includes workers and front ends. One advantage to ASEv1 is that it can be deployed in a classic virtual network and a Resource Manager virtual network. To learn more about ASEv1, see [App Service Environment v1 introduction][ASEv1Intro]. <!--Links-->
-[App Service Environments v2]: https://channel9.msdn.com/Blogs/Azure/Azure-Application-Service-Environments-v2-Private-PaaS-Environments-in-the-Cloud?term=app%20service%20environment
-[Isolated offering]: https://channel9.msdn.com/Shows/Azure-Friday/Security-and-Horsepower-with-App-Service-The-New-Isolated-Offering?term=app%20service%20environment
[Intro]: ./intro.md [MakeExternalASE]: ./create-external-ase.md [MakeASEfromTemplate]: ./create-from-template.md
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migrate.md
+
+ Title: Migration to App Service Environment v3
+description: Overview of the migration process to App Service Environment v3
++ Last updated : 1/17/2022+++
+# Migration to App Service Environment v3
+
+> [!IMPORTANT]
+> This article describes a feature that is currently in preview. You should use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
+>
+
+App Service can now migrate your App Service Environment (ASE) v2 to an [App Service Environment v3](overview.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+
+## Supported scenarios
+
+At this time, App Service Environment migrations to v3 support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
+
+- West Central US
+- Canada Central
+- Canada East
+- UK South
+- Germany West Central
+- East Asia
+- Australia East
+- Australia Southeast
+
+You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.
+
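You can also check the version from the command line. A small sketch using placeholder names; the `kind` value distinguishes the App Service Environment versions:

```azurecli
az appservice ase show --name <ase-name> --resource-group <resource-group> --query kind --output tsv
```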
+### Preview limitations
+
+For this version of the preview, your new App Service Environment will be placed in the existing subnet that was used for your old environment. An internet facing App Service Environment can't be migrated to an ILB App Service Environment v3, and vice versa.
+
+Note that App Service Environment v3 doesn't currently support the following features that you may be using with your current App Service Environment. If you require any of these features, don't migrate until they're supported.
+
+- Sending SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25.
+- Deploying your apps with FTP
+- Using remote debug with your apps
+- Monitoring your traffic with Network Watcher or NSG Flow
+- Configuring an IP-based TLS/SSL binding with your apps
+
+The following scenarios aren't supported in this version of the preview.
+
+- App Service Environment v2 -> Zone Redundant App Service Environment v3
+- App Service Environment v1
+- App Service Environment v1 -> Zone Redundant App Service Environment v3
+- ILB App Service Environment v2 with a custom domain suffix
+- ILB App Service Environment v1 with a custom domain suffix
+- Internet facing App Service Environment v2 with IP SSL addresses
+- Internet facing App Service Environment v1 with IP SSL addresses
+- [Zone pinned](zone-redundancy.md) App Service Environment v2
+- App Service Environment in a region not listed above
+
+The App Service platform will review your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you won't be able to migrate at this time.
+
+## Overview of the migration process
+
+Migration consists of a series of steps that must be followed in order. Key points are given below for a subset of the steps. It's important to understand what will happen during these steps and how your environment and apps will be impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-migrate.md).
+
+> [!NOTE]
+> For this version of the preview, migration must be carried out using Azure REST API calls.
+>
+
+### Delegate your App Service Environment subnet
+
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. If the App Service Environment's subnet isn't delegated or it's delegated to a different resource, migration will fail.
+
+### Generate IP addresses for your new App Service Environment v3
+
+The platform will create the [new inbound IP (if you're migrating an internet facing App Service Environment) and the new outbound IP](networking.md#addresses). While these IPs are getting created, activity with your existing App Service Environment won't be interrupted; however, you won't be able to scale or make changes to your existing environment. This process will take about 5 minutes to complete.
+
+When completed, you'll be given the new IPs that will be used by your future App Service Environment v3. These new IPs have no effect on your existing environment. The IPs used by your existing environment will continue to be used up until your existing environment is shut down during the full migration step.
+
+### Update dependent resources with new IPs
+
+Once the new IPs are created, you'll have the new default outbound to the internet public addresses so you can adjust any external firewalls, DNS routing, network security groups, and so on, in preparation for the migration. For public internet facing App Service Environment, you'll also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
+
+### Full migration
+
+After updating all dependent resources with your new IPs, you should continue with full migration as soon as possible. It's recommended that you move on within one week.
+
+During full migration, the following events will occur:
+
+- The existing App Service Environment is shut down and replaced by the new App Service Environment v3
+- All App Service plans in the App Service Environment are converted from Isolated to Isolated v2
+- All of the apps that are on your App Service Environment are temporarily down. You should expect about one hour of downtime.
+ - If you can't support downtime, see [migration alternatives](migration-alternatives.md#guidance-for-manual-migration)
+- The public addresses that are used by the App Service Environment will change to the IPs identified previously
+
+As in the IP generation step, you won't be able to scale or modify your App Service Environment or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment will be running on the new App Service Environment v3.
+
+> [!NOTE]
+> Due to the conversion of App Service plans from Isolated to Isolated v2, your apps may be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You'll have the opportunity to [scale your environment](../manage-scale-up.md) as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
+>
+
+## Pricing
+
+There's no cost to migrate your App Service Environment. You'll stop being charged for your previous App Service Environment as soon as it shuts down during the full migration process, and you'll begin getting charged for your new App Service Environment v3 as soon as it's deployed. For more information about App Service Environment v3 pricing, see the [pricing details](overview.md#pricing).
+
+## Migration feature limitations
+
+There are no plans for the migration feature to support App Service Environment v1 within a classic VNet. See [migration alternatives](migration-alternatives.md) if your App Service Environment falls into this category. Also, you won't be able to migrate if your App Service Environment is in an unhealthy or suspended state.
+
+## Frequently asked questions
+
+- **What if migrating my App Service Environment is not currently supported?**
  You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see [migration alternatives](migration-alternatives.md).
+- **Will I experience downtime during the migration?**
+ Yes, you should expect about one hour of downtime during the full migration step so plan accordingly. If downtime isn't an option for you, see [migration alternatives](migration-alternatives.md).
+- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?**
+ No, all of your apps running on the old environment will be automatically migrated to the new environment and run like before. No user input is needed.
+- **What if my App Service Environment has a custom domain suffix?**
  You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see [migration alternatives](migration-alternatives.md).
+- **What if my App Service Environment is zone pinned?**
+ Zone pinned App Service Environment is currently not a supported scenario for migration. When supported, zone pinned App Service Environments will be migrated to zone redundant App Service Environment v3.
+- **What properties of my App Service Environment will change?**
+ You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+- **What happens if migration fails or there is an unexpected issue during the migration?**
+ If there's an unexpected issue, support teams will be on hand. It's recommended to migrate dev environments before touching any production environments.
+- **What happens to my old App Service Environment?**
+ If you decide to migrate an App Service Environment, the old environment gets shut down and deleted and all of your apps are migrated to a new environment. Your old environment will no longer be accessible.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Migrate App Service Environment v2 to App Service Environment v3](how-to-migrate.md)
+
+> [!div class="nextstepaction"]
+> [Migration Alternatives](migration-alternatives.md)
+
+> [!div class="nextstepaction"]
+> [App Service Environment v3 Networking](networking.md)
+
+> [!div class="nextstepaction"]
+> [Using an App Service Environment v3](using.md)
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migration-alternatives.md
+
+ Title: Alternative methods for migrating to App Service Environment v3
+description: Migrate to App Service Environment v3 Without Using the Migration Feature
++ Last updated : 1/17/2022++
+# Migrate to App Service Environment v3 without using the migration feature
+
+> [!NOTE]
+> The App Service Environment v3 [migration feature](migrate.md) is now available in preview for a set of supported environment configurations. Consider using that feature, which provides an automated migration path to [App Service Environment v3](overview.md).
+>
+
+If you're currently using App Service Environment (ASE) v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#preview-limitations). Otherwise, you can choose to use one of the alternative migration options given below.
+
+If your App Service Environment [won't be supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the alternative methods to migrate to App Service Environment v3.
+
+## Prerequisites
+
+Scenario: You have an existing app running on App Service Environment v1 or App Service Environment v2, and you need that app to run on an App Service Environment v3.
+
+For any migration method that doesn't use the [migration feature](migrate.md), you'll need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that will involve new (and for internet-facing environments, additional) IP addresses. You'll need to update any infrastructure that relies on these IPs.
+
+Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) on the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment (15 minutes), create the new App Service Environment v3 (30 minutes), configure any infrastructure and connected resources to work with the new environment (your responsibility), and deploy your apps onto the new environment (application deployment, type, and quantity dependent).
+
+### Checklist before migrating apps
+
+- [Create an App Service Environment v3](creation.md)
+- After creating the new environment, update any networking dependencies with the IP addresses associated with the new environment
+- Plan for downtime (if applicable)
+- Decide on a process for recreating your apps in your new environment
+
+## Isolated v2 App Service plans
+
+App Service Environment v3 uses Isolated v2 App Service plans that are priced and sized differently than those from Isolated plans. Review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/) to understand how your new environment will need to be sized and scaled to ensure appropriate capacity. There's no difference in how you create App Service plans for App Service Environment v3 compared to previous versions.
+
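For instance, a hedged CLI sketch that creates an Isolated v2 plan in a new App Service Environment v3 (names are placeholders):

```azurecli
az appservice plan create \
  --resource-group <resource-group> \
  --name <plan-name> \
  --app-service-environment <ase-v3-name> \
  --sku I1v2
```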
+## Back up and restore
+
+The [back up](../manage-backup.md) and [restore](../web-sites-restore.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [requirements and restrictions](../manage-backup.md#requirements-and-restrictions) of this feature.
+
+The step-by-step instructions in the current documentation for [back up](../manage-backup.md) and [restore](../web-sites-restore.md) should be sufficient to allow you to use this feature. When restoring, the **Storage** option lets you select any backup ZIP file from any existing Azure Storage account container in your subscription. A sample of a restore configuration is given below.
+
+![back up and restore sample](./media/migration/back-up-restore-sample.png)
+
+|Benefits |Limitations |
+|||
+|Quick - should only take 5-10 minutes per app |Support is limited to [certain database types](../manage-backup.md#what-gets-backed-up) |
+|Multiple apps can be restored at the same time (restoration needs to be configured for each app individually) |Old and new environments as well as supporting resources (for example, apps, databases, storage accounts, and containers) must all be in the same subscription |
+|In-app MySQL databases are automatically backed up without any configuration |Backups can be up to 10 GB of app and database content, up to 4 GB of which can be the database backup. If the backup size exceeds this limit, you get an error. |
+|Can restore the app to a snapshot of a previous state |Using a [firewall enabled storage account](../../storage/common/storage-network-security.md) as the destination for your backups isn't supported |
+|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Using a [private endpoint enabled storage account](../../storage/common/storage-private-endpoints.md) for backup and restore isn't supported |
+|Can create empty web apps to restore to in your new environment before you start restoring to speed up the process | |
+
+## Clone your app to an App Service Environment v3
+
+[Cloning your apps](../app-service-web-app-cloning.md) is another feature that can be used to get your **Windows** apps onto your App Service Environment v3. There are limitations with cloning apps. These limitations are the same as those for the App Service Backup feature, see [Back up an app in Azure App Service](../manage-backup.md#requirements-and-restrictions).
+
+> [!NOTE]
+> Cloning apps is supported on Windows App Service only.
+>
+
+This solution is recommended for users that are using Windows App Service and can't migrate using the [migration feature](migrate.md). You'll need to set up your new App Service Environment v3 before cloning any apps. Cloning an app can take up to 30 minutes to complete. Cloning can be done using PowerShell as described in the [documentation](../app-service-web-app-cloning.md#cloning-an-existing-app-to-an-app-service-environment) or using the Azure portal as described below.
+
+To clone an app using the [Azure portal](https://portal.azure.com), navigate to your existing App Service and select **Clone App** under **Development Tools**. Fill in the required fields using the details for your new App Service Environment v3.
+
+1. Select an existing or create a new **Resource Group**
+1. Give your app a **Name**. This name can be the same as the old app, but note the site's default URL using the new environment will be different. You'll need to update any custom DNS or connected resources to point to the new URL.
+1. Use your App Service Environment v3 name for **Region**
+1. Choose whether or not to clone your deployment source
+1. You can use an existing Windows **App Service plan** from your new environment if you created one already, or create a new one. The available Windows App Service plans in your new App Service Environment v3, if any, will be listed in the dropdown.
+1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 pricing](overview.md#pricing).
+
+![clone sample](./media/migration/portal-clone-sample.png)
+
+|Benefits |Limitations |
+|||
+|Can be automated using PowerShell |Only supported on Windows App Service |
+|Multiple apps can be cloned at the same time (cloning needs to be configured for each app individually or using a script) |Support is limited to [certain database types](../manage-backup.md#what-gets-backed-up) |
+|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Old and new environments as well as supporting resources (for example apps, databases, storage accounts and containers) must all be in the same subscription |
+
+## Manually create your apps on an App Service Environment v3
+
+If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. At this time, all deployment methods except FTP are supported on App Service Environment v3. You don't need to make updates when you deploy your apps to your new environment unless you want to make changes or take advantage of App Service Environment v3's dedicated features.
+
+You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in your new environment. To export a template for just your app, head over to your App Service and go to **Export template** under **Automation**.
+
+![export from toc](./media/migration/export-toc.png)
+
+You can also export templates for multiple resources directly from your resource group by going to your resource group, selecting the resources you want a template for, and then selecting **Export template**.
+
+![export template sample](./media/migration/export-template-sample.png)
+
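If you'd rather script the export, `az group export` offers a rough equivalent; a sketch under the assumption that `--resource-ids` narrows the export to the listed resources:

```azurecli
az group export --name <resource-group> --resource-ids <app-resource-id> <plan-resource-id> > template.json
```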
+The following initial changes to your Azure Resource Manager templates are required to get your apps onto your App Service Environment v3:
+
+- Update SKU parameters for App Service plan to an Isolated v2 plan as shown below if creating a new plan
+
+ ```json
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2021-02-01",
+ "name": "[parameters('serverfarm_name')]",
+ "location": "East US",
+ "sku": {
+ "name": "I1v2",
+ "tier": "IsolatedV2",
+ "size": "I1v2",
+ "family": "Iv2",
+ "capacity": 1
+ },
+ ```
+
+- Update the App Service plan (serverfarm) parameter of the app to point to the plan associated with the App Service Environment v3
+- Update hosting environment profile (hostingEnvironmentProfile) parameter to the new App Service Environment v3 resource ID
+- An Azure Resource Manager template export includes all properties exposed by the resource providers for the given resources. Remove all non-required properties, such as those that point to the domain of the old app. For example, your `sites` resource could be simplified as shown below:
+
+ ```json
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2021-02-01",
+ "name": "[parameters('site_name')]",
+ "location": "East US",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', parameters('serverfarm_name'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('serverfarm_name'))]",
+ "siteConfig": {
+ "linuxFxVersion": "NODE|14-lts"
+ },
+ "hostingEnvironmentProfile": {
+ "id": "[parameters('hostingEnvironments_externalid')]"
+ }
+ }
+ ```
+
+Other changes may be required depending on how your app is configured.
+
+Azure Resource Manager templates can be [deployed](../deploy-complex-application-predictably.md) using multiple methods, including the Azure portal, Azure CLI, or PowerShell.
+
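For example, a minimal CLI deployment sketch using the parameter names from the template snippets above (file name and values are placeholders):

```azurecli
az deployment group create \
  --resource-group <resource-group> \
  --template-file template.json \
  --parameters site_name=<app-name> serverfarm_name=<plan-name> hostingEnvironments_externalid=<ase-v3-resource-id>
```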
+## Guidance for manual migration
+
+The [migration feature](migrate.md) automates the migration to App Service Environment v3 and at the same time transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If you're in a position where you can't have any downtime, the recommendation is to use one of the manual options to recreate your apps in an App Service Environment v3.
+
+You can distribute traffic between your old and new environment using an [Application Gateway](../networking/app-gateway-with-service-endpoints.md). If you're using an Internal Load Balancer (ILB) App Service Environment, see the [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-ilb-ase) and [create an Azure Application Gateway](integrate-with-application-gateway.md) with an extra backend pool to distribute traffic between your environments. For internet facing App Service Environments, see these [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-external-ase). You can also use services like [Azure Front Door](../../frontdoor/quickstart-create-front-door.md), [Azure Content Delivery Network (CDN)](../../cdn/cdn-add-to-web-app.md), and [Azure Traffic Manager](../../cdn/cdn-traffic-manager.md) to distribute traffic between environments. Using these services allows for testing of your new environment in a controlled manner and allows you to move to your new environment at your own pace.
+
+Once your migration and any testing with your new environment is complete, delete your old App Service Environment, the apps that are on it, and any supporting resources that you no longer need. You'll continue to be charged for any resources that haven't been deleted.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [App Service Environment v3 Networking](networking.md)
+
+> [!div class="nextstepaction"]
+> [Using an App Service Environment v3](using.md)
+
+> [!div class="nextstepaction"]
+> [Integrate your ILB App Service Environment with the Azure Application Gateway](integrate-with-application-gateway.md)
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-domain-ssl-certificates.md
This problem occurs for one of the following reasons:
**Do I have to configure my custom domain for my website once I buy it?**
-When you purchase a domain from the Azure portal, the App Service application is automatically configured to use that custom domain. You donΓÇÖt have to take any additional steps. For more information, watch [Azure App Service Self Help: Add a Custom Domain Name](https://channel9.msdn.com/blogs/Azure-App-Service-Self-Help/Add-a-Custom-Domain-Name) on Channel9.
+When you purchase a domain from the Azure portal, the App Service application is automatically configured to use that custom domain. You donΓÇÖt have to take any additional steps. For more information, watch Azure App Service Self Help: Add a Custom Domain Name on Channel9.
**Can I use a domain purchased in the Azure portal to point to an Azure VM instead?**
app-service Troubleshoot Performance Degradation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-performance-degradation.md
Uptime is monitored using HTTP response codes, and response time is measured in
To set it up, see [Monitor apps in Azure App Service](web-sites-monitor.md).
-Also, see [Keeping Azure Web Sites up plus Endpoint Monitoring - with Stefan Schackow](https://channel9.msdn.com/Shows/Azure-Friday/Keeping-Azure-Web-Sites-up-plus-Endpoint-Monitoring-with-Stefan-Schackow) for a video on endpoint monitoring.
+Also, see Keeping Azure Web Sites up plus Endpoint Monitoring - with Stefan Schackow for a video on endpoint monitoring.
#### Application performance monitoring using Extensions

You can also monitor your application performance by using a *site extension*.
application-gateway Url Route Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/url-route-overview.md
description: This article provides an overview of the Azure Application Gateway
Previously updated : 01/12/2022 Last updated : 01/14/2022
In the following example, Application Gateway is serving traffic for contoso.com
Requests for http\://contoso.com/video/* are routed to VideoServerPool, and http\://contoso.com/images/* are routed to ImageServerPool. DefaultServerPool is selected if none of the path patterns match.

> [!IMPORTANT]
-> For both the v1 and v2 SKUs, rules are processed in the order they are listed in the portal. The best practice when you create path rules is to have the least specific path (the ones with wildcards) at the end. If wildcards are on the top, then they take priority even if there is more specific match in subsequent path rules.
+> For both the v1 and v2 SKUs, rules are processed in the order they are listed in the portal. The best practice when you create path rules is to have the least specific path (the ones with wildcards) at the end. If wildcards are on the top, then they take priority even if there's a more specific match in subsequent path rules.
>
-> If a basic listener is listed first and matches an incoming request, it gets processed by that listener. However, it is highly recommended to configure multi-site listeners first prior to configuring a basic listener. This ensures that traffic gets routed to the right back end.
+> If a basic listener is listed first and matches an incoming request, it gets processed by that listener. However, it's highly recommended to configure multi-site listeners first prior to configuring a basic listener. This ensures that traffic gets routed to the right back end.
## UrlPathMap configuration element
The urlPathMap element is used to specify Path patterns to back-end server pool
### PathPattern
-PathPattern is a list of path patterns to match. Each path must start with / and may use \* as a wildcard character. The string fed to the path matcher does not include any text after the first ? or #, and those chars are not allowed here. Otherwise, any characters allowed in a URL are allowed in PathPattern.
+PathPattern is a list of path patterns to match. Each path must start with / and may use \* as a wildcard character. The string fed to the path matcher doesn't include any text after the first `?` or `#`, and those chars aren't allowed here. Otherwise, any characters allowed in a URL are allowed in PathPattern.
Path rules are case insensitive.
|`/Repos/*/Comments/*` |no|
|`/CurrentUser/Comments/*` |yes|
+#### Examples
+Path-based rule processing when wildcard (*) is used:
-You can check out a [Resource Manager template using URL-based routing](https://azure.microsoft.com/resources/templates/application-gateway-url-path-based-routing) for more information.
+**Example 1:**
+
+`/master-dev* to contoso.com`
+
+`/master-dev/api-core/ to fabrikam.com`
+
+`/master-dev/* to microsoft.com`
+
+Because the wildcard path `/master-dev*` is present above more granular paths, all client requests containing `/master-dev` are routed to contoso.com, including the specific `/master-dev/api-core/`. To ensure that the client requests are routed to the appropriate paths, it's critical to have the granular paths above wildcard paths.
+
+**Example 2:**
+
+`/ (default) to contoso.com`
+
+`/master-dev/api-core/ to fabrikam.com`
+
+`/master-dev/api to bing.com`
+
+`/master-dev/* to microsoft.com`
+
+All client requests with the path pattern `/master-dev/*` are processed in the order listed. If there's no match within the path rules, the request is routed to the default target.
+
+For more information, see [Resource Manager template using URL-based routing](https://azure.microsoft.com/resources/templates/application-gateway-url-path-based-routing).
## PathBasedRouting rule
applied-ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/compose-custom-models.md
Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected s
[Get started with Train with labels](label-tool.md)
-> [!VIDEO https://channel9.msdn.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
## Create a composed model
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/label-tool.md
keywords: document processing
In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom document processing model with manually labeled data.
-> [!VIDEO https://channel9.msdn.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
## Prerequisites
azure-arc Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-overview.md
To learn more about these capabilities, watch these introductory videos.
### Azure Arc-enabled SQL Managed Instance - indirect connected mode
-> [!VIDEO https://channel9.msdn.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny]
### Azure Arc-enabled SQL Managed Instance - direct connected mode
-> [!VIDEO https://channel9.msdn.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny]
## Next steps
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/overview.md
Currently, the following Azure Arc-enabled data services are available:
For an introduction to how Azure Arc-enabled data services supports your hybrid work environment, see this introductory video:
-> [!VIDEO https://channel9.msdn.com/Shows//Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows//Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
## Always current
azure-arc What Is Azure Arc Enabled Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/what-is-azure-arc-enabled-postgres-hyperscale.md
With the Direct connectivity mode offered by Azure Arc-enabled data services you
- The scale-out and scale-in operations are not automatic. They are controlled by the users. Users may script these operations and automate the execution of those scripts. Not all workloads can benefit from scaling out. Read further details on this topic as suggested in the "Next steps" section.

**To learn more about these capabilities, you can also refer to this Data Exposed episode:**
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/What-is-Azure-Arc-Enabled-PostgreSQL-Hyperscale--Data-Exposed/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Data-Exposed/What-is-Azure-Arc-Enabled-PostgreSQL-Hyperscale--Data-Exposed/player?format=ny]
## Roles and responsibilities: Azure managed services (Platform as a service (PaaS)) _vs._ Azure Arc-enabled data services
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
Azure Arc-enabled servers depend on the following Azure resource providers in yo
* **Microsoft.HybridCompute**
* **Microsoft.GuestConfiguration**
+* **Microsoft.HybridConnectivity**
If they are not registered, you can register them using the following commands:
Login-AzAccount
Set-AzContext -SubscriptionId [subscription you want to onboard]
Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration
+Register-AzResourceProvider -ProviderNamespace Microsoft.HybridConnectivity
```

Azure CLI:
Azure CLI:
az account set --subscription "{Your Subscription Name}"
az provider register --namespace 'Microsoft.HybridCompute'
az provider register --namespace 'Microsoft.GuestConfiguration'
+az provider register --namespace 'Microsoft.HybridConnectivity'
```

You can also register the resource providers in the Azure portal by following the steps under [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal).
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
To configure geo-replication between two caches, the following prerequisites mus
- Both caches are in the same Azure subscription.
- The secondary linked cache is either the same cache size or a larger cache size than the primary linked cache.
- Both caches are created and in a running state.
+- Neither cache can have more than one replica.
> [!NOTE]
> Data transfer between Azure regions will be charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
After geo-replication is configured, the following restrictions apply to your li
The primary linked cache remains available for use during the linking process. The secondary linked cache isn't available until the linking process completes.
+> [!NOTE]
+> Geo-replication can be enabled for this cache if you scale it to the 'Premium' pricing tier and disable data persistence. This feature is not available at this time when using extra replicas.
+
## Remove a geo-replication link

1. To remove the link between two caches and stop geo-replication, click **Unlink caches** from **Geo-replication** on the left.
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
The following list contains answers to commonly asked questions about Azure Cach
- [Can I enable persistence on a previously created cache?](#can-i-enable-persistence-on-a-previously-created-cache)
- [Can I enable AOF and RDB persistence at the same time?](#can-i-enable-aof-and-rdb-persistence-at-the-same-time)
+- [How does persistence work with geo-replication?](#how-does-persistence-work-with-geo-replication)
- [Which persistence model should I choose?](#which-persistence-model-should-i-choose)
- [What happens if I've scaled to a different size and a backup is restored that was made before the scaling operation?](#what-happens-if-ive-scaled-to-a-different-size-and-a-backup-is-restored-that-was-made-before-the-scaling-operation)
- [Can I use the same storage account for persistence across two different caches?](#can-i-use-the-same-storage-account-for-persistence-across-two-different-caches)
Yes, Redis persistence can be configured both at cache creation and on existing
No, you can enable RDB or AOF, but not both at the same time.
+### How does persistence work with geo-replication?
+
+If you enable data persistence, geo-replication cannot be enabled for your premium cache.
+
### Which persistence model should I choose?

AOF persistence saves every write to a log, which has a significant effect on throughput. By comparison, RDB persistence saves backups based on the configured backup interval, with minimal effect on performance. Choose AOF persistence if your primary goal is to minimize data loss and you can handle a lower throughput for your cache. Choose RDB persistence if you wish to maintain optimal throughput on your cache, but still want a mechanism for data recovery.
After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the b
Learn more about Azure Cache for Redis features. -- [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
+- [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-cache-for-redis Cache Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-connectivity.md
In this article, we provide troubleshooting help for connecting your client appl
- [Kubernetes hosted applications](#kubernetes-hosted-applications)
- [Linux-based client application](#linux-based-client-application)
- [Continuous connectivity issues](#continuous-connectivity)
- - [Azure Cache for Redis CLI](#azure-cache-for-redis-cli)
- - [PSPING](#psping)
+ - [Test connectivity using _redis-cli_](#test-connectivity-using-redis-cli)
+ - [Test connectivity using PSPING](#test-connectivity-using-psping)
- [Virtual network configuration](#virtual-network-configuration)
- [Private endpoint configuration](#private-endpoint-configuration)
- [Firewall rules](#third-party-firewall-or-external-proxy)
Using optimistic TCP settings in Linux might cause client applications to experi
## Continuous connectivity
-If your application can't maintain a continuous connection to your Azure Cache for Redis, it's possible some configuration on the cache isn't set up correctly. The following sections offer suggestions on how to make sure your cache is configured correctly.
+If your application can't connect to your Azure Cache for Redis, it's possible some configuration on the cache isn't set up correctly. The following sections offer suggestions on how to make sure your cache is configured correctly.
-### Azure Cache for Redis CLI
+### Test connectivity using _redis-cli_
-Test connectivity using Azure Cache for Redis CLI. For more information on CLI, [Use the Redis command-line tool with Azure Cache for Redis](cache-how-to-redis-cli-tool.md).
+Test connectivity using _redis-cli_. For more information on CLI, see [Use the Redis command-line tool with Azure Cache for Redis](cache-how-to-redis-cli-tool.md).
-### PSPING
+### Test connectivity using PSPING
-If Azure Cache for Redis CLI is unable to connect, you can test connectivity using `PSPING` in PowerShell.
+If _redis-cli_ is unable to connect, you can test connectivity using `PSPING` in PowerShell.
```azurepowershell-interactive
psping -q <cache DNS endpoint>:<Port Number>
```
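For example, for a hypothetical cache named *contoso*, you can test the TLS endpoint (6380 is the TLS port; 6379 is the non-TLS port, if enabled):

```azurepowershell-interactive
psping -q contoso.redis.cache.windows.net:6380
```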
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions is developed in collaboration with Microsoft Research. As a re
The following video highlights the benefits of Durable Functions:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Durable-Functions-in-Azure-Functions/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Durable-Functions-in-Azure-Functions/player]
For a more in-depth discussion of Durable Functions and the underlying technology, see the following video (it's focused on .NET, but the concepts also apply to other supported languages):
-> [!VIDEO https://channel9.msdn.com/Events/dotnetConf/2018/S204/player]
+> [!VIDEO https://docs.microsoft.com/Events/dotnetConf/2018/S204/player]
Because Durable Functions is an advanced extension for [Azure Functions](../functions-overview.md), it isn't appropriate for all applications. For a comparison with other Azure orchestration technologies, see [Compare Azure Functions and Azure Logic Apps](../functions-compare-logic-apps-ms-flow-webjobs.md#compare-azure-functions-and-azure-logic-apps).
azure-functions Durable Functions Perf And Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-perf-and-scale.md
Activity functions have all the same behaviors as regular queue-triggered functi
Entity functions are also executed on a single thread and operations are processed one-at-a-time. However, entity functions do not have any restrictions on the type of code that can be executed.
+## Function timeouts
+
+Activity, orchestrator, and entity functions are subject to the same [function timeouts](../functions-scale.md#timeout) as all Azure Functions. As a general rule, Durable Functions treats function timeouts the same way as unhandled exceptions thrown by the application code. For example, if an activity times out, the function execution is recorded as a failure, and the orchestrator is notified and handles the timeout just like any other exception: retries take place if specified by the call, or an exception handler may be executed.
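+
+For example, a minimal host.json sketch that caps all function executions at 10 minutes (the value is illustrative; the plan-specific limits described in the function timeouts link still apply):
+
+```json
+{
+  "functionTimeout": "00:10:00"
+}
+```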
+
## Concurrency throttles

Azure Functions supports executing multiple functions concurrently within a single app instance. This concurrent execution helps increase parallelism and minimizes the number of "cold starts" that a typical app will experience over time. However, high concurrency can exhaust per-VM system resources such as network connections or available memory. Depending on the needs of the function app, it may be necessary to throttle the per-instance concurrency to avoid the possibility of running out of memory in high-load situations.

Activity, orchestrator, and entity function concurrency limits can be configured in the **host.json** file. The relevant settings are `durableTask/maxConcurrentActivityFunctions` for activity functions and `durableTask/maxConcurrentOrchestratorFunctions` for both orchestrator and entity functions. These settings control the maximum number of orchestrator, entity, or activity functions that can be loaded into memory concurrently.
+> [!NOTE]
+> The concurrency throttles only apply locally, to limit what is currently being processed on one individual machine. Thus, these throttles do not limit the total throughput of the system. Quite to the contrary, they can actually support proper scale out, as they prevent individual machines from taking on too much work at once. If this leads to unprocessed work accumulating in the queues, the autoscaler adds more machines. The total throughput of the system thus scales out as needed.
+
+> [!NOTE]
+> The `durableTask/maxConcurrentOrchestratorFunctions` limit applies only to the act of processing new events or operations. Orchestrations or entities that are idle waiting for events or operations do not count towards the limit.
+
### Functions 2.0

The following example shows illustrative throttle values:

```json
{
  "extensions": {
    "durableTask": {
      "maxConcurrentActivityFunctions": 10,
      "maxConcurrentOrchestratorFunctions": 10
    }
  }
}
```
In all other situations, there is typically no observable performance improvemen
> [!NOTE]
> These settings should only be used after an orchestrator function has been fully developed and tested. The default aggressive replay behavior can be useful for detecting [orchestrator function code constraints](durable-functions-code-constraints.md) violations at development time, and is therefore disabled by default.
-### Entity function unloading
+## Entity operation batching
+
+To improve performance and reduce cost, entity operations are executed in batches. Each batch is billed as a single function execution.
-Entity functions process up to 20 operations in a single batch. As soon as an entity finishes processing a batch of operations, it persists its state and unloads from memory. You can delay the unloading of entities from memory using the extended sessions setting. Entities continue to persist their state changes as before, but remain in memory for the configured period of time to reduce the number of loads from storage. This reduction of loads from storage can improve the overall throughput of frequently accessed entities.
+By default, the maximum batch size is 50 (for consumption plans) and 5000 (for all other plans). The maximum batch size can also be configured in the [host.json](durable-functions-bindings.md#host-json) file. If the maximum batch size is 1, batching is effectively disabled.
+
+> [!NOTE]
+> If individual entity operations take a long time to execute, it may be beneficial to limit the maximum batch size to reduce the risk of [function timeouts](#function-timeouts), in particular on consumption plans.
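+
+As a sketch, the batch size might be limited in host.json like this, assuming the `maxEntityOperationBatchSize` setting name from the host.json reference linked above (the value is illustrative):
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "maxEntityOperationBatchSize": 50
+    }
+  }
+}
+```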
## Performance targets
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
For secured virtual networks, you will want to allow network security groups (NS
|US Gov Virginia|13.72.49.126 </br> 13.72.55.55 </br> 13.72.184.124 </br> 13.72.190.110| 443|
|US Gov Arizona|52.127.3.176 </br> 52.127.3.178| 443|
-For a demo on how to build data-centric solutions on Azure Government using HDInsight, see [Cognitive Services, HDInsight, and Power BI on Azure Government](https://channel9.msdn.com/Blogs/Azure/Cognitive-Services-HDInsight-and-Power-BI-on-Azure-Government).
+For a demo on how to build data-centric solutions on Azure Government using HDInsight, see Cognitive Services, HDInsight, and Power BI on Azure Government.
### [Power BI](/power-bi/service-govus-overview)
-For usage guidance, feature variations, and limitations, see [Power BI for US government customers](/power-bi/admin/service-govus-overview). For a demo on how to build data-centric solutions on Azure Government using Power BI, see [Cognitive Services, HDInsight, and Power BI on Azure Government](https://channel9.msdn.com/Blogs/Azure/Cognitive-Services-HDInsight-and-Power-BI-on-Azure-Government).
+For usage guidance, feature variations, and limitations, see [Power BI for US government customers](/power-bi/admin/service-govus-overview). For a demo on how to build data-centric solutions on Azure Government using Power BI, see Cognitive Services, HDInsight, and Power BI on Azure Government.
### [Power BI Embedded](/azure/power-bi-embedded/)
azure-government Documentation Government Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-developer-guide.md
The [Azure Government video library](https://aka.ms/AzureGovVideos) contains man
## Compliance
-For more information on Azure Government Compliance, refer to the [compliance documentation](./documentation-government-plan-compliance.md) and watch this [video](https://channel9.msdn.com/blogs/Azure-Government/Compliance-on-Azure-Government).
+For more information on Azure Government Compliance, refer to the [compliance documentation](./documentation-government-plan-compliance.md).
### Azure Blueprints
azure-government Documentation Government Overview Jps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-jps.md
When they are properly planned and secured, cloud services can deliver powerful
From devices to the cloud, Microsoft puts privacy and information security first, while increasing productivity for officers in the field and throughout the department. By combining highly secure mobile devices with "anytime-anywhere" access to the cloud, JPS agencies can contribute to ongoing investigations, analyze data, manage evidence, and help protect citizens from threats.
-Other cloud providers treat Criminal Justice Information Systems (CJIS) compliance as a check box, rather than a commitment. At Microsoft, we're committed to providing solutions that meet the applicable CJIS controls, today and in the future. In addition, we extend our commitment to justice and public safety through our <a href="https://news.microsoft.com/presskits/dcu/#sm.0000eqdq0pxj4ex3u272bevclb0uc#KwSv0iLdMkJerFly.97">Digital Crimes Unit</a>, <a href="https://channel9.msdn.com/Blogs/Taste-of-Premier/Satya-Nadella-on-Cybersecurity">Cyber Defense Operations Center</a>, and <a href="https://enterprise.microsoft.com/en-us/industries/government/public-safety/">Worldwide Justice and Public Safety organization</a>.
+Other cloud providers treat Criminal Justice Information Systems (CJIS) compliance as a check box, rather than a commitment. At Microsoft, we're committed to providing solutions that meet the applicable CJIS controls, today and in the future. In addition, we extend our commitment to justice and public safety through our <a href="https://news.microsoft.com/presskits/dcu/#sm.0000eqdq0pxj4ex3u272bevclb0uc#KwSv0iLdMkJerFly.97">Digital Crimes Unit</a>, Cyber Defense Operations Center, and <a href="https://enterprise.microsoft.com/en-us/industries/government/public-safety/">Worldwide Justice and Public Safety organization</a>.
## Next steps

* <a href="https://www.microsoft.com/en-us/TrustCenter/Compliance/CJIS">Microsoft Trust Center - Criminal Justice Information Services webpage</a>
azure-government Documentation Government Welcome https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-welcome.md
The following video provides a good introduction to Azure Government:
</br>
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Enable-government-missions-in-the-cloud-with-Azure-Government/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Enable-government-missions-in-the-cloud-with-Azure-Government/player]
## Compare Azure Government and global Azure
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/about-azure-maps.md
The following video explains Azure Maps in depth:
</br>
-> [!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Azure-Maps/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps/player?format=ny]
## Map controls
azure-maps Add Heat Map Layer Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/add-heat-map-layer-ios.md
You can use heat maps in many different scenarios, including:
> [!TIP]
> Heat map layers by default render the coordinates of all geometries in a data source. To limit the layer so that it only renders point geometry features, set the `filter` option of the layer to `NSPredicate(format: "%@ == \"Point\"", NSExpression.geometryTypeAZMVariable)`. If you want to include MultiPoint features as well, use `NSCompoundPredicate`.
-[Internet of Things Show - Heat Maps and Image Overlays in Azure Maps](https://channel9.msdn.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny)
+[Internet of Things Show - Heat Maps and Image Overlays in Azure Maps](/shows/internet-of-things-show/heat-maps-and-image-overlays-in-azure-maps/player?format=ny)
## Prerequisites
azure-maps Clustering Point Data Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/clustering-point-data-android-sdk.md
When visualizing many data points on the map, data points may overlap over each
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
## Prerequisites
azure-maps Clustering Point Data Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/clustering-point-data-ios-sdk.md
When visualizing many data points on the map, data points may overlap each other. The overlap may cause the map to become unreadable and difficult to use. Clustering point data is the process of combining point data that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. When you work with a large number of data points, use the clustering process to improve your user experience.
-[Internet of Things Show - Clustering point data in Azure Maps](https://channel9.msdn.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny)
+[Internet of Things Show - Clustering point data in Azure Maps](/shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny)
## Prerequisites
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/clustering-point-data-web-sdk.md
When visualizing many data points on the map, data points may overlap over each
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
## Enabling clustering on a data source
azure-maps Data Driven Style Expressions Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/data-driven-style-expressions-android-sdk.md
This video provides an overview of data-driven styling in Azure Maps.
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
## Data expressions
azure-maps Data Driven Style Expressions Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/data-driven-style-expressions-ios-sdk.md
Using this approach can make it easy to reuse style expressions between mobile a
This video provides an overview of data-driven styling in Azure Maps.
->[Internet of Things Show - Data-Driven Styling with Azure Maps](https://channel9.msdn.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny)
+>[Internet of Things Show - Data-Driven Styling with Azure Maps](/shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny)
### Constant values
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/data-driven-style-expressions-web-sdk.md
This video provides an overview of data-driven styling in the Azure Maps Web SDK
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
Expressions are represented as JSON arrays. The first element of an expression in the array is a string that specifies the name of the expression operator. For example, "+" or "case". The next elements (if any) are the arguments to the expression. Each argument is either a literal value (a string, number, boolean, or `null`), or another expression array. The following pseudocode defines the basic structure of an expression.
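```
[
   expression_operator,
   argument_0,
   argument_1,
   ...
]
```

For instance, an expression that adds 10 to the value of a hypothetical `myProperty` property would be written as `["+", 10, ["get", "myProperty"]]`.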
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-weather-data.md
This video provides examples for making REST calls to Azure Maps Weather service
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Azure-Maps-Weather-services-for-developers/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps-Weather-services-for-developers/player?format=ny]
## Prerequisites
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-spatial-io-module.md
This video provides an overview of Spatial IO module in the Azure Maps Web SDK.
</br>
-> [!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Easily-integrate-spatial-data-into-the-Azure-Maps/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Easily-integrate-spatial-data-into-the-Azure-Maps/player?format=ny]
> [!WARNING]
> Only use data and services that are from a source you trust, especially if referencing them from another domain. The spatial IO module does take steps to minimize risk; however, the safest approach is to not allow any dangerous data into your application to begin with.
azure-maps Map Add Heat Map Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-heat-map-layer-android.md
You can use heat maps in many different scenarios, including:
</br>
-> [!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
## Prerequisites
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-heat-map-layer.md
You can use heat maps in many different scenarios, including:
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
## Add a heat map layer
azure-monitor Cloudservices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/cloudservices.md
If you have a client mobile app, use [App Center](../app/mobile-center-quickstar
## Exception "method not found" on running in Azure cloud services Did you build for .NET 4.6? .NET 4.6 is not automatically supported in Azure cloud services roles. [Install .NET 4.6 on each role](../../cloud-services/cloud-services-dotnet-install-dotnet.md) before running your app.
-## Video
-
-> [!VIDEO https://channel9.msdn.com/events/Connect/2016/100/player]
## Next steps

* [Configure sending Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md)
azure-monitor Devops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/devops.md
When an alert is raised, Application Insights can automatically create a work it
* [Pricing](./pricing.md) - You can get started for free, and that continues while you're in low volume.
-## Video
-
-> [!VIDEO https://channel9.msdn.com/events/Connect/2016/112/player]
-
## Next steps

Getting started with Application Insights is easy. The main options are:
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
}
```
+## HTTP headers
+
+Starting from 3.2.5-BETA, you can capture request and response headers on your server (request) telemetry:
+
+```json
+{
+ "preview": {
+ "captureHttpServerHeaders": {
+ "requestHeaders": [
+ "My-Header-A"
+ ],
+ "responseHeaders": [
+ "My-Header-B"
+ ]
+ }
+ }
+}
+```
+
+The header names are case-insensitive.
+
+The examples above will be captured under property names `http.request.header.my_header_a` and
+`http.response.header.my_header_b`.
+
+Similarly, you can capture request and response headers on your client (dependency) telemetry:
+
+```json
+{
+ "preview": {
+ "captureHttpClientHeaders": {
+ "requestHeaders": [
+ "My-Header-C"
+ ],
+ "responseHeaders": [
+ "My-Header-D"
+ ]
+ }
+ }
+}
+```
+
+Again, the header names are case-insensitive, and the examples above will be captured under property names
+`http.request.header.my_header_c` and `http.response.header.my_header_d`.
+
+## HTTP server 4xx response codes
+
+By default, HTTP server requests that result in 4xx response codes are captured as errors.
+
+Starting from version 3.2.5-BETA, you can change this behavior to capture them as success if you prefer:
+
+```json
+{
+ "preview": {
+ "captureHttpServer4xxAsError": false
+ }
+}
+```
+
## Suppressing specific auto-collected telemetry

Starting from version 3.0.3, specific auto-collected telemetry can be suppressed using these configuration options:
you can configure Application Insights Java 3.x to use an HTTP proxy:
Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties if those are set (and `http.nonProxyHosts` if needed).
+Starting from 3.2.5-BETA, authenticated proxies are supported. You can add `"user"` and `"password"` under `"proxy"` in the JSON above. Alternatively, if you're using the system properties above, you can add the `https.proxyUser` and `https.proxyPassword` system properties.
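+
+As a sketch, an authenticated proxy configuration might look like this (host, port, and credentials are illustrative):
+
+```json
+{
+  "proxy": {
+    "host": "myproxy.example.com",
+    "port": 8080,
+    "user": "myuser",
+    "password": "mypassword"
+  }
+}
+```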
+
## Self-diagnostics

"Self-diagnostics" refers to internal logging from Application Insights Java 3.x.
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/proactive-diagnostics.md
Configuring email notifications for a specific smart detection rule can be done
Alternatively, you can change the configuration using Azure Resource Manager templates. For more information, see [Manage Application Insights smart detection rules using Azure Resource Manager templates](./proactive-arm-config.md).
-## Video
-
-> [!VIDEO https://channel9.msdn.com/events/Connect/2016/112/player]
--
## Next steps

These diagnostic tools help you inspect the telemetry from your app:
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
This article describes how to enable [SQL insights](sql-insights-overview.md) to
> To enable SQL insights by creating the monitoring profile and virtual machine using a resource manager template, see [Resource Manager template samples for SQL insights](resource-manager-sql-insights.md).

To learn how to enable SQL Insights, you can also refer to this Data Exposed episode.
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/How-to-Set-up-Azure-Monitor-for-SQL-Insights/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Data-Exposed/How-to-Set-up-Azure-Monitor-for-SQL-Insights/player?format=ny]
## Create Log Analytics workspace

SQL insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-and-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-ldap-extended-groups.md
na Previously updated : 01/07/2022 Last updated : 01/14/2022 # Configure ADDS LDAP with extended groups for NFS volume access
This article explains the considerations and steps for enabling LDAP with extend
## Steps
-1. The LDAP with extended groups feature is currently in preview. Before using this feature for the first time, you need to register the feature:
-
- 1. Register the feature:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapExtendedGroups
- ```
-
- 2. Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to`Registered`. Wait until the status is `Registered` before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapExtendedGroups
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
-
-2. LDAP volumes require an Active Directory configuration for LDAP server settings. Follow instructions in [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections) and [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection) to configure Active Directory connections on the Azure portal.
+1. LDAP volumes require an Active Directory configuration for LDAP server settings. Follow instructions in [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections) and [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection) to configure Active Directory connections on the Azure portal.
> [!NOTE]
> Ensure that you have configured the Active Directory connection settings. A machine account will be created in the organizational unit (OU) that is specified in the Active Directory connection settings. The settings are used by the LDAP client to authenticate with your Active Directory.
-3. Ensure that the Active Directory LDAP server is up and running on the Active Directory.
+2. Ensure that the Active Directory LDAP server is up and running on the Active Directory.
-4. LDAP NFS users need to have certain POSIX attributes on the LDAP server. Set the attributes for LDAP users and LDAP groups as follows:
+3. LDAP NFS users need to have certain POSIX attributes on the LDAP server. Set the attributes for LDAP users and LDAP groups as follows:
* Required attributes for LDAP users: `uid: Alice`,
This article explains the considerations and steps for enabling LDAP with extend
![Active Directory Attribute Editor](../media/azure-netapp-files/active-directory-attribute-editor.png)
-5. If you want to configure an LDAP-integrated NFSv4.1 Linux client, see [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md).
+4. If you want to configure an LDAP-integrated NFSv4.1 Linux client, see [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md).
-6. If your LDAP-enabled volumes use NFSv4.1, follow instructions in [Configure NFSv4.1 domain](azure-netapp-files-configure-nfsv41-domain.md#configure-nfsv41-domain) to configure the `/etc/idmapd.conf` file.
+5. If your LDAP-enabled volumes use NFSv4.1, follow instructions in [Configure NFSv4.1 domain](azure-netapp-files-configure-nfsv41-domain.md#configure-nfsv41-domain) to configure the `/etc/idmapd.conf` file.
You need to set `Domain` in `/etc/idmapd.conf` to the domain that is configured in the Active Directory Connection on your NetApp account. For instance, if `contoso.com` is the configured domain in the NetApp account, then set `Domain = contoso.com`. Then you need to restart the `rpcbind` service on your host or reboot the host.
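A minimal sketch of the relevant `/etc/idmapd.conf` entry, assuming `contoso.com` is the configured domain (leave the rest of the file as is):

```
[General]
Domain = contoso.com
```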
-7. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create an NFS volume. During the volume creation process, under the **Protocol** tab, enable the **LDAP** option.
+6. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create an NFS volume. During the volume creation process, under the **Protocol** tab, enable the **LDAP** option.
![Screenshot that shows Create a Volume page with LDAP option.](../media/azure-netapp-files/create-nfs-ldap.png)
-8. Optional - You can enable local NFS client users not present on the Windows LDAP server to access an NFS volume that has LDAP with extended groups enabled. To do so, enable the **Allow local NFS users with LDAP** option as follows:
+7. Optional - You can enable local NFS client users not present on the Windows LDAP server to access an NFS volume that has LDAP with extended groups enabled. To do so, enable the **Allow local NFS users with LDAP** option as follows:
    1. Click **Active Directory connections**. On an existing Active Directory connection, click the context menu (the three dots `…`), and select **Edit**.
    2. On the **Edit Active Directory settings** window that appears, select the **Allow local NFS users with LDAP** option.

    ![Screenshot that shows the Allow local NFS users with LDAP option](../media/azure-netapp-files/allow-local-nfs-users-with-ldap.png)
+8. <a name="ldap-search-scope"></a>Optional - If you have large topologies, and you use the Unix security style with a dual-protocol volume or LDAP with extended groups, you can use the **LDAP Search Scope** option to avoid "access denied" errors on Linux clients for Azure NetApp Files.
+
+ The **LDAP Search Scope** option is configured through the **[Active Directory Connections](create-active-directory-connections.md#create-an-active-directory-connection)** page.
+
+ To resolve the users and group from an LDAP server for large topologies, set the values of the **User DN**, **Group DN**, and **Group Membership Filter** options on the Active Directory Connections page as follows:
+
+ * Specify nested **User DN** and **Group DN** in the format of `OU=subdirectory,OU=directory,DC=domain,DC=com`.
+ * Specify **Group Membership Filter** in the format of `(gidNumber=*)`.
+
+ ![Screenshot that shows options related to LDAP Search Scope](../media/azure-netapp-files/ldap-search-scope.png)
+
## Next steps

* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 01/07/2022 Last updated : 01/14/2022 # Create and manage Active Directory connections for Azure NetApp Files
This setting is configured in the **Active Directory Connections** under **NetAp
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
+ * **LDAP over TLS**
+ See [Configure ADDS LDAP over TLS](configure-ldap-over-tls.md) for information about this option.
+
+ * **LDAP Search Scope**, **User DN**, **Group DN**, and **Group Membership Filter**
+ See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
* **Security privilege users** <!-- SMB CA share feature -->

  You can grant security privilege (`SeSecurityPrivilege`) to AD users or groups that require elevated privilege to access the Azure NetApp Files volumes. The specified AD users or groups will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
* [Create a dual-protocol volume](create-volumes-dual-protocol.md)
* [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
* [Install a new Active Directory forest using Azure CLI](/windows-server/identity/ad-ds/deploy/virtual-dc/adds-on-azure-vm)
+* [Configure ADDS LDAP over TLS](configure-ldap-over-tls.md)
+* [ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
na Previously updated : 01/07/2022 Last updated : 01/14/2022 # Create a dual-protocol volume for Azure NetApp Files
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
The following table describes the name mappings and security styles:
- | Protocol | Security style | Name mapping direction | Permissions applied |
+ | Protocol | Security style | Name-mapping direction | Permissions applied |
|-|-|-|-|
| SMB | `Unix` | Windows to UNIX | UNIX (mode bits or NFSv4.x ACLs) |
| SMB | `Ntfs` | Windows to UNIX | NTFS ACLs (based on Windows SID accessing share) |
- | NFSv3 | `Unix` | None | UNIX (mode bits or NFSv4.x ACLs) <br><br> Note that NFSv4.x ACLs can be applied using an NFSv4.x administrative client and honored by NFSv3 clients. |
+ | NFSv3 | `Unix` | None | UNIX (mode bits or NFSv4.x ACLs) <br><br> NFSv4.x ACLs can be applied using an NFSv4.x administrative client and honored by NFSv3 clients. |
| NFS | `Ntfs` | UNIX to Windows | NTFS ACLs (based on mapped Windows user SID) |
-* If you have large topologies, and you use the `Unix` security style with a dual-protocol volume or LDAP with extended groups, Azure NetApp Files might not be able to access all servers in your topologies. If this situation occurs, contact your account team for assistance. <!-- NFSAAS-15123 -->
+* The LDAP with extended groups feature supports the dual protocol of both [NFSv3 and SMB] and [NFSv4.1 and SMB] with the Unix security style. See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for more information.
+
+* If you have large topologies, and you use the Unix security style with a dual-protocol volume or LDAP with extended groups, you should use the **LDAP Search Scope** option on the Active Directory Connections page to avoid "access denied" errors on Linux clients for Azure NetApp Files. See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for more information.
+
* You don't need a server root CA certificate for creating a dual-protocol volume. It is required only if LDAP over TLS is enabled.

## Create a dual-protocol volume
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* **Virtual network** Specify the Azure virtual network (VNet) from which you want to access the volume.
- The Vnet you specify must have a subnet delegated to Azure NetApp Files. The Azure NetApp Files service can be accessed only from the same Vnet or from a Vnet that is in the same region as the volume through Vnet peering. You can also access the volume from your on-premises network through Express Route.
+ The VNet you specify must have a subnet delegated to Azure NetApp Files. Azure NetApp Files can be accessed only from the same VNet or from a VNet that is in the same region as the volume through VNet peering. You can also access the volume from your on-premises network through Express Route.
* **Subnet**
  Specify the subnet that you want to use for the volume. The subnet you specify must be delegated to Azure NetApp Files.
- If you have not delegated a subnet, you can click **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each Vnet, only one subnet can be delegated to Azure NetApp Files.
+ If you have not delegated a subnet, you can click **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.
![Create a volume](../media/azure-netapp-files/azure-netapp-files-new-volume.png)
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* If you want to enable SMB3 protocol encryption for the dual-protocol volume, select **Enable SMB3 Protocol Encryption**.
- This feature enables encryption for only in-flight SMB3 data. It does not encrypt NFSv3 in-flight data. SMB clients not using SMB3 encryption will not be able to access this volume. Data at rest is encrypted regardless of this setting. See [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption) for additional information.
+ This feature enables encryption for only in-flight SMB3 data. It does not encrypt NFSv3 in-flight data. SMB clients not using SMB3 encryption will not be able to access this volume. Data at rest is encrypted regardless of this setting. See [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption) for more information.
* If you selected NFSv4.1 and SMB for the dual-protocol volume versions, indicate whether you want to enable **Kerberos** encryption for the volume.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na Previously updated : 12/14/2021 Last updated : 01/14/2022
Azure NetApp Files is updated regularly. This article provides a summary of the latest new features and enhancements.
+## January 2022
+
+* [LDAP search scope](configure-ldap-extended-groups.md#ldap-search-scope)
+
+    You might be using the Unix security style with a dual-protocol volume or the LDAP with extended groups feature in combination with large LDAP topologies. In this case, you might encounter "access denied" errors on Linux clients when interacting with such Azure NetApp Files volumes. You can now use the **LDAP Search Scope** option to avoid these errors.
+
+* [Active Directory Domain Services (ADDS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) now generally available (GA)
+
+ The ADDS LDAP user-mapping with NFS extended groups feature is now generally available. You no longer need to register the feature before using it.
+
## December 2021

* [NFS protocol version conversion](convert-nfsv3-nfsv41.md) (Preview)
azure-portal Azure Portal Markdown Tile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-markdown-tile.md
You can add a markdown tile to your Azure dashboards to display custom, static c
1. Select **Dashboard** from the Azure portal menu.
-
1. In the dashboard view, select the dashboard where the custom markdown tile should appear, then select **Edit**.
azure-portal Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quick-create-template.md
Title: Create an Azure portal dashboard by using an Azure Resource Manager templ
description: Learn how to create an Azure portal dashboard by using an Azure Resource Manager template. Previously updated : 03/15/2021 Last updated : 01/13/2022 # Quickstart: Create a dashboard in the Azure portal by using an ARM template
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites

-- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- An existing VM.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Create a virtual machine

The dashboard you create in the next part of this quickstart requires an existing VM. Create a VM by following these steps.
-1. In the Azure portal, select **Cloud Shell**.
+1. In the Azure portal, select **Cloud Shell** from the global controls at the top of the page.
- ![Select Cloud shell from the Azure portal ribbon](media/quick-create-template/cloud-shell.png)
+ :::image type="content" source="media/quick-create-template/cloud-shell.png" alt-text="Screenshot showing the Cloud Shell option in the Azure portal.":::
1. In the **Cloud Shell** window, select **PowerShell**.
- ![Select PowerShell in the terminal window](media/quick-create-template/powershell.png)
+ :::image type="content" source="media/quick-create-template/powershell.png" alt-text="Screenshot showing the PowerShell option in Cloud Shell.":::
1. Copy the following command and enter it at the command prompt to create a resource group.
The dashboard you create in the next part of this quickstart requires an existin
New-AzResourceGroup -Name SimpleWinVmResourceGroup -Location EastUS
```
- ![Copy a command into the command prompt](media/quick-create-template/command-prompt.png)
-
-1. Copy the following command and enter it at the command prompt to create a VM in the resource group.
+1. Next, copy the following command and enter it at the command prompt to create a VM in your new resource group.
```powershell
New-AzVm `
   -ResourceGroupName "SimpleWinVmResourceGroup" `
- -Name "SimpleWinVm" `
+ -Name "myVM1" `
-Location "East US" ``` 1. Enter a username and password for the VM. This is a new user name and password; it's not, for example, the account you use to sign in to Azure. For more information, see [username requirements](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-) and [password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
- The VM deployment now starts and typically takes a few minutes to complete. After deployment completes, move on to the next section.
+ After the VM has been created, move on to the next section.
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-portal-dashboard/). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/azuredeploy.json). One Azure resource is defined in the template, [Microsoft.Portal/dashboards](/azure/templates/microsoft.portal/dashboards) - Create a dashboard in the Azure portal.
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-portal-dashboard/). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/azuredeploy.json). The template defines one Azure resource, a dashboard that displays data about the VM you created.
## Deploy the template
+This example uses the Azure portal to deploy the template. You can also use other methods to deploy ARM templates, such as [Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md), [Azure CLI](../azure-resource-manager/templates/deploy-cli.md), or [REST API](../azure-resource-manager/templates/deploy-rest.md).
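For example, deploying the same template with Azure PowerShell might look like the following sketch. The parameter names (`virtualMachineName`, `virtualMachineResourceGroup`) are assumptions based on the portal form shown below and may not match the template's actual parameter names:

```powershell
# Deploy the quickstart template directly from its public URI.
New-AzResourceGroupDeployment `
  -ResourceGroupName "SimpleWinVmResourceGroup" `
  -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/azuredeploy.json" `
  -virtualMachineName "myVM1" `
  -virtualMachineResourceGroup "SimpleWinVmResourceGroup"
```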
+ 1. Select the following image to sign in to Azure and open a template. [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.portal%2Fazure-portal-dashboard%2Fazuredeploy.json) 1. Select or enter the following values, then select **Review + create**.
- ![ARM template, create dashboard, deploy portal](media/quick-create-template/create-dashboard-using-template-portal.png)
+ :::image type="content" source="media/quick-create-template/create-dashboard-using-template-portal.png" alt-text="Screenshot of the dashboard template deployment screen in the Azure portal.":::
Unless otherwise specified, use the default values to create the dashboard.
- * **Subscription**: select an Azure subscription.
- * **Resource group**: select **SimpleWinVmResourceGroup**.
- * **Location**: select **East US**.
- * **Virtual Machine Name**: enter **SimpleWinVm**.
- * **Virtual Machine Resource Group**: enter **SimpleWinVmResourceGroup**.
-
-1. Select **Create** or **Purchase**. After the dashboard has been deployed successfully, you get a notification:
-
- ![ARM template, create dashboard, deploy portal notification](media/quick-create-template/resource-manager-template-portal-deployment-notification.png)
+ - **Subscription**: select your Azure subscription.
+ - **Resource group**: select **SimpleWinVmResourceGroup**.
+ - **Location**: if not automatically selected, choose **East US**.
+ - **Virtual Machine Name**: enter **myVM1**.
+ - **Virtual Machine Resource Group**: enter **SimpleWinVmResourceGroup**.
-The Azure portal was used to deploy the template. In addition to the Azure portal, you can also use Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md).
+1. Select **Create**. You'll see a notification confirming when the dashboard has been deployed successfully.
## Review deployed resources
If you want to remove the VM and associated dashboard, delete the resource group
1. On the **SimpleWinVmResourceGroup** page, select **Delete resource group**, enter the resource group name to confirm, then select **Delete**.
- ![Delete resource group](media/quick-create-template/delete-resource-group.png)
+> [!CAUTION]
+> Deleting a resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted.
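If you prefer to clean up from the command line, the equivalent PowerShell is a one-liner; the same caution applies:

```powershell
# Deletes SimpleWinVmResourceGroup and every resource it contains.
Remove-AzResourceGroup -Name "SimpleWinVmResourceGroup"
```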
## Next steps
azure-portal Quickstart Portal Dashboard Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-azure-cli.md
Title: Create an Azure portal dashboard with Azure CLI
description: "Quickstart: Learn how to create a dashboard in the Azure portal using the Azure CLI. A dashboard is a focused and organized view of your cloud resources." Previously updated : 12/4/2020 Last updated : 01/13/2022 # Quickstart: Create an Azure portal dashboard with Azure CLI
-A dashboard in the Azure portal is a focused and organized view of your cloud resources. This
-article focuses on the process of using Azure CLI to create a dashboard.
-The dashboard shows the performance of a virtual machine (VM), as well as some static information
-and links.
+A dashboard in the Azure portal is a focused and organized view of your cloud resources. This article shows you how to use Azure CLI to create a dashboard. In this example, the dashboard shows the performance of a virtual machine (VM), as well as some static information and links.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] - If you have multiple Azure subscriptions, choose the appropriate subscription in which to bill the resources.
-Select a subscription by using the [az account set](/cli/azure/account#az_account_set) command:
+Select a subscription by using the [az account set](/cli/azure/account#az-account-set) command:
```azurecli az account set --subscription 00000000-0000-0000-0000-000000000000 ``` -- Create an [Azure resource group](../azure-resource-manager/management/overview.md) by using the [az group create](/cli/azure/group#az_group_create) command or use an existing resource group:
+- Create an [Azure resource group](../azure-resource-manager/management/overview.md#resource-groups) by using the [az group create](/cli/azure/group#az-group-create) command (or use an existing resource group):
```azurecli az group create --name myResourceGroup --location centralus ```
- A resource group is a logical container in which Azure resources are deployed and managed as a group.
- ## Create a virtual machine
-Create a virtual machine by using the [az vm create](/cli/azure/vm#az_vm_create) command:
+Create a virtual machine by using the [az vm create](/cli/azure/vm#az-vm-create) command:
```azurecli
-az vm create --resource-group myResourceGroup --name SimpleWinVM --image win2016datacenter \
+az vm create --resource-group myResourceGroup --name myVM1 --image win2016datacenter \
--admin-username azureuser --admin-password 1StrongPassword$ ``` > [!Note]
-> The password must be complex.
-> This is a new user name and password.
-> It's not, for example, the account you use to sign in to Azure.
-> For more information, see [username requirements](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-)
+> This is a new username and password (not the account you use to sign in to Azure). The password must be complex. For more information, see [username requirements](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-)
and [password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-). The deployment starts and typically takes a few minutes to complete.
-After deployment completes, move on to the next section.
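Optionally, you can verify the VM before continuing. For example, this query returns the power state:

```azurecli
az vm show --resource-group myResourceGroup --name myVM1 \
  --show-details --query powerState --output tsv
```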
## Download the dashboard template
-Since Azure dashboards are resources, they can be represented as JSON.
-For more information, see [The structure of Azure Dashboards](./azure-portal-dashboards-structure.md).
+Since Azure dashboards are resources, they can be represented as JSON. For more information, see [The structure of Azure dashboards](./azure-portal-dashboards-structure.md).
-Download the following file: [portal-dashboard-template-testvm.json](https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/azure-portal/portal-dashboard-template-testvm.json).
+Download the file [portal-dashboard-template-testvm.json](https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/azure-portal/portal-dashboard-template-testvm.json).
-Customize the downloaded template by changing the following values to your values:
+Then, customize the downloaded template file by changing the following placeholders to your own values:
-* `<subscriptionID>`: Your subscription
-* `<rgName>`: Resource group, for example `myResourceGroup`
-* `<vmName>`: Virtual machine name, for example `SimpleWinVM`
-* `<dashboardTitle>`: Dashboard title, for example `Simple VM Dashboard`
-* `<location>`: Your Azure region, for example, `centralus`
+- `<subscriptionID>`: Your subscription
+- `<rgName>`: Resource group, for example `myResourceGroup`
+- `<vmName>`: Virtual machine name, for example `myVM1`
+- `<dashboardTitle>`: Dashboard title, for example `Simple VM Dashboard`
+- `<location>`: Your Azure region, for example `centralus`
For more information, see [Microsoft portal dashboards template reference](/azure/templates/microsoft.portal/dashboards).
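One way to make these substitutions from the shell is with `sed`. This is a sketch assuming a bash environment such as Cloud Shell; substitute your own values:

```azurecli
# Replace the template placeholders in place.
subscriptionID=$(az account show --query id --output tsv)
sed -i \
  -e "s|<subscriptionID>|$subscriptionID|g" \
  -e "s|<rgName>|myResourceGroup|g" \
  -e "s|<vmName>|myVM1|g" \
  -e "s|<dashboardTitle>|Simple VM Dashboard|g" \
  -e "s|<location>|centralus|g" \
  portal-dashboard-template-testvm.json
```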
For more information, see [Microsoft portal dashboards template reference](/azur
You can now deploy the template from within Azure CLI.
-1. Run the [az portal dashboard create](/cli/azure/portal/dashboard#az_portal_dashboard_create) command to deploy the template:
+1. Run the [az portal dashboard create](/cli/azure/portal/dashboard#az-portal-dashboard-create) command to deploy the template:
```azurecli az portal dashboard create --resource-group myResourceGroup --name 'Simple VM Dashboard' \ --input-path portal-dashboard-template-testvm.json --location centralus ```
-1. Check that the dashboard was created successfully by running the [az portal dashboard show](/cli/azure/portal/dashboard#az_portal_dashboard_show) command:
+1. Check that the dashboard was created successfully by running the [az portal dashboard show](/cli/azure/portal/dashboard#az-portal-dashboard-show) command:
```azurecli az portal dashboard show --resource-group myResourceGroup --name 'Simple VM Dashboard' ```
-To see all the dashboards for the current subscription, use [az portal dashboard list](/cli/azure/portal/dashboard#az_portal_dashboard_list):
+To see all the dashboards for the current subscription, use [az portal dashboard list](/cli/azure/portal/dashboard#az-portal-dashboard-list):
```azurecli az portal dashboard list ```
-You can also see all the dashboards for a resource group:
+You can also see all the dashboards for a specific resource group:
```azurecli az portal dashboard list --resource-group myResourceGroup ```
-You can update a dashboard by using the [az portal dashboard update](/cli/azure/portal/dashboard#az_portal_dashboard_update) command:
+To update a dashboard, use the [az portal dashboard update](/cli/azure/portal/dashboard#az-portal-dashboard-update) command:
```azurecli az portal dashboard update --resource-group myResourceGroup --name 'Simple VM Dashboard' \ --input-path portal-dashboard-template-testvm.json --location centralus ```
+## Review deployed resources
+ [!INCLUDE [azure-portal-review-deployed-resources](../../includes/azure-portal-review-deployed-resources.md)] ## Clean up resources
-To remove the virtual machine and associated dashboard, delete the resource group that contains them.
+To remove the virtual machine and associated dashboard that you created, delete the resource group that contains them.
> [!CAUTION]
-> The following example deletes the specified resource group and all resources contained within it.
-> If resources outside the scope of this article exist in the specified resource group, they will also be deleted.
+> Deleting the resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted.
```azurecli az group delete --name myResourceGroup
az portal dashboard delete --resource-group myResourceGroup --name "Simple VM Da
## Next steps
-For more information about Azure CLI support for dashboards, see [az portal dashboard](/cli/azure/portal/dashboard).
+For more information about Azure CLI commands for dashboards, see:
+
+> [!div class="nextstepaction"]
+> [Azure CLI: az portal dashboard](/cli/azure/portal/dashboard)
azure-portal Quickstart Portal Dashboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-powershell.md
Title: Create an Azure portal dashboard with PowerShell
description: Learn how to create a dashboard in the Azure portal using Azure PowerShell. Previously updated : 03/25/2021 Last updated : 01/13/2022 # Quickstart: Create an Azure portal dashboard with PowerShell
-A dashboard in the Azure portal is a focused and organized view of your cloud resources. This
-article focuses on the process of using the Az.Portal PowerShell module to create a dashboard.
-The dashboard shows the performance of a virtual machine (VM), as well as some static information
+A dashboard in the Azure portal is a focused and organized view of your cloud resources. This article focuses on the process of using the Az.Portal PowerShell module to create a dashboard. The dashboard shows the performance of a virtual machine (VM), as well as some static information
and links. ## Requirements
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
-module and connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)
-cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
-
-> [!IMPORTANT]
-> While the **Az.Portal** PowerShell module is in preview, you must install it separately from
-> the Az PowerShell module using the `Install-Module` cmdlet. Once this PowerShell module becomes
-> generally available, it becomes part of future Az PowerShell module releases and available
-> natively from within Azure Cloud Shell.
-
-```azurepowershell-interactive
-Install-Module -Name Az.Portal
-```
+- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
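For a local session, that setup amounts to something like the following minimal sketch (skip it in Cloud Shell, where the Az module is preinstalled):

```azurepowershell
# One-time install of the Az module, then sign in.
Install-Module -Name Az -Repository PSGallery -Scope CurrentUser
Connect-AzAccount
```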
[!INCLUDE [cloud-shell-try-it](../../includes/cloud-shell-try-it.md)]
$dashboardName = $dashboardTitle -replace '\s'
$subscriptionID = (Get-AzContext).Subscription.Id # Name of test VM
-$vmName = 'SimpleWinVM'
+$vmName = 'myVM1'
``` ## Create a resource group
$Content = $Content -replace '<location>', $location
$Content | Out-File -FilePath $myPortalDashboardTemplatePath -Force ```
-For more information, see [Microsoft portal dashboards template reference](/azure/templates/microsoft.portal/dashboards).
+For more information about the dashboard template structure, see [Microsoft portal dashboards template reference](/azure/templates/microsoft.portal/dashboards).
## Deploy the dashboard template
Get-AzPortalDashboard -Name $dashboardName -ResourceGroupName $resourceGroupName
To remove the VM and associated dashboard, delete the resource group that contains them. > [!CAUTION]
-> The following example deletes the specified resource group and all resources contained within it.
-> If resources outside the scope of this article exist in the specified resource group, they will
-> also be deleted.
+> Deleting the resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted.
```azurepowershell-interactive Remove-AzResourceGroup -Name $resourceGroupName
Remove-Item -Path "$HOME\portal-dashboard-template-testvm.json"
For more information about the cmdlets contained in the Az.Portal PowerShell module, see: > [!div class="nextstepaction"]
-> [Microsoft Azure PowerShell: Portal Dashboard cmdlets](/powershell/module/Az.Portal/)
+> [Microsoft Azure PowerShell: Portal Dashboard cmdlets](/powershell/module/Az.Portal/#portal)
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise s
[!INCLUDE [notification-hub-limits](../../../includes/notification-hub-limits.md)]
-## Purview limits
+## Azure Purview limits
The latest values for Azure Purview quotas can be found in the [Azure Purview quota page](../../purview/how-to-manage-quotas.md).
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/overview.md
To implement infrastructure as code for your Azure solutions, use Azure Resource
To learn about how you can get started with ARM templates, see the following video.
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Enablement/How-and-why-to-learn-about-ARM-templates/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Enablement/How-and-why-to-learn-about-ARM-templates/player]
## Why choose ARM templates?
azure-sql-edge Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/overview.md
Azure SQL Edge is an optimized relational database engine geared for IoT and IoT
Azure SQL Edge is built on the latest versions of the [SQL Server Database Engine](/sql/sql-server/sql-server-technical-documentation), which provides industry-leading performance, security and query processing capabilities. Since Azure SQL Edge is built on the same engine as [SQL Server](/sql/sql-server/sql-server-technical-documentation) and [Azure SQL](../azure-sql/index.yml), it provides the same Transact-SQL (T-SQL) programming surface area that makes development of applications or solutions easier and faster, and makes application portability between IoT Edge devices, data centers and the cloud straight forward. What is Azure SQL Edge video on Channel 9:
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/What-is-Azure-SQL-Edge/player]
+> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/What-is-Azure-SQL-Edge/player]
## Deployment Models
azure-sql-edge Tutorial Renewable Energy Demo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-renewable-energy-demo.md
This Azure SQL Edge demo is based on Contoso Renewable Energy, a wind turbine
This demo walks you through resolving an alert raised because wind turbulence was detected at the device. You will train a model and deploy it to SQL DB Edge to correct the detected wind wake and ultimately optimize power output. Azure SQL Edge - Renewable Energy demo video on Channel 9:
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player]
+> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player]
## Setting up the demo on your local computer Git will be used to copy all files from the demo to your local computer.
azure-sql Azure Sql Iaas Vs Paas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/azure-sql-iaas-vs-paas-what-is-overview.md
Azure SQL is built upon the familiar SQL Server engine, so you can migrate appli
Learn how each product fits into Microsoft's Azure SQL data platform to match the right option for your business requirements. Whether you prioritize cost savings or minimal administration, this article can help you decide which approach delivers against the business requirements you care about most.
-If you're new to Azure SQL, check out the *What is Azure SQL* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/What-is-Azure-SQL-3-of-61/player]
+If you're new to Azure SQL, check out the *What is Azure SQL* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners/?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/What-is-Azure-SQL-3-of-61/player]
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
You can manage Azure SQL Database auditing using [Azure Resource Manager](../../
## See also -- Data Exposed episode [What's New in Azure SQL Auditing](https://channel9.msdn.com/Shows/Data-Exposed/Whats-New-in-Azure-SQL-Auditing) on Channel 9.
+- Data Exposed episode [What's New in Azure SQL Auditing](/Shows/Data-Exposed/Whats-New-in-Azure-SQL-Auditing) on Channel 9.
- [Auditing for SQL Managed Instance](../managed-instance/auditing-configure.md) - [Auditing for SQL Server](/sql/relational-databases/security/auditing/sql-server-audit-database-engine)
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import.md
You can import a SQL Server database into Azure SQL Database or SQL Managed Inst
Watch this video to see how to import from a BACPAC file in the Azure portal or continue reading below:
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/Its-just-SQL-Restoring-a-database-to-Azure-SQL-DB-from-backup/player?WT.mc_id=dataexposed-c9-niner]
+> [!VIDEO https://docs.microsoft.com/Shows/Data-Exposed/Its-just-SQL-Restoring-a-database-to-Azure-SQL-DB-from-backup/player?WT.mc_id=dataexposed-c9-niner]
The [Azure portal](https://portal.azure.com) *only* supports creating a single database in Azure SQL Database and *only* from a BACPAC file stored in Azure Blob storage.
Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $Se
- Importing to a database in elastic pool isn't supported. You can import data into a single database and then move the database to an elastic pool. - Import Export Service does not work when Allow access to Azure services is set to OFF. However you can work around the problem by manually running sqlpackage.exe from an Azure VM or performing the export directly in your code by using the DACFx API. - Import does not support specifying a backup storage redundancy while creating a new database and creates with the default geo-redundant backup storage redundancy. To workaround, first create an empty database with desired backup storage redundancy using Azure portal or PowerShell and then import the BACPAC into this empty database.
+- Storage behind a firewall is currently not supported.
> [!NOTE] > Azure SQL Database Configurable Backup Storage Redundancy is currently available in public preview in Southeast Asia Azure region only.
azure-sql Dynamic Data Masking Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/dynamic-data-masking-overview.md
To learn more about permissions when using dynamic data masking with T-SQL comma
## See also - [Dynamic Data Masking](/sql/relational-databases/security/dynamic-data-masking) for SQL Server.-- Data Exposed episode about [Granular Permissions for Azure SQL Dynamic Data Masking](https://channel9.msdn.com/Shows/Data-Exposed/Granular-Permissions-for-Azure-SQL-Dynamic-Data-Masking) on Channel 9.
+- Data Exposed episode about [Granular Permissions for Azure SQL Dynamic Data Masking](/Shows/Data-Exposed/Granular-Permissions-for-Azure-SQL-Dynamic-Data-Masking) on Channel 9.
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Set these additional parameter values for use in creating the elastic pool.
### Create elastic pool on primary server
-Use this script to create an elastic pool with the [az sql elastic-pool create](/cli/azure/sql/elastic-poolt#az_sql_elastic_pool_create) command.
+Use this script to create an elastic pool with the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="29-31":::
This portion of the tutorial uses the following Azure CLI cmdlets:
| Command | Notes | |||
-| [az sql elastic-pool create](/cli/azure/sql/elastic-poolt#az_sql_elastic_pool_create) | Creates an elastic pool. |
+| [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) | Creates an elastic pool. |
| [az sql db update](/cli/azure/sql/db#az_sql_db_update) | Updates a database|
Use this script to create a secondary server with the [az sql server create](/cl
### Create elastic pool on secondary server
-Use this script to create an elastic pool on the secondary server with the [az sql elastic-pool create](/cli/azure/sql/elastic-poo#az_sql_elastic_pool_create) command.
+Use this script to create an elastic pool on the secondary server with the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="38-40":::
This portion of the tutorial uses the following Azure CLI cmdlets:
| Command | Notes | ||| | [az sql server create](/cli/azure/sql/server#az_sql_server_create) | Creates a server that hosts databases and elastic pools. |
-| [az sql elastic-pool create](/cli/azure/sql/elastic-poo#az_sql_elastic_pool_create) | Creates an elastic pool.|
+| [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) | Creates an elastic pool.|
| [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) | Creates a failover group. | | [az sql failover-group update](/cli/azure/sql/failover-group#az_sql_failover_group_update) | Updates a failover group.|
azure-sql Network Access Controls Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/network-access-controls-overview.md
You can also allow private access to the database from [virtual networks](../../
See the below video for a high-level explanation of these access controls and what they do:
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/Data-Exposed--SQL-Database-Connectivity-Explained/player?WT.mc_id=dataexposed-c9-niner]
+> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/Data-Exposed--SQL-Database-Connectivity-Explained/player?WT.mc_id=dataexposed-c9-niner]
## Allow Azure services
Ip based firewall is a feature of the logical SQL server in Azure that prevents
In addition to IP rules, the server firewall allows you to define *virtual network rules*. To learn more, see [Virtual network service endpoints and rules for Azure SQL Database](vnet-service-endpoint-rule-overview.md) or watch this video:
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/Data-Exposed--Demo--Vnet-Firewall-Rules-for-SQL-Database/player?WT.mc_id=dataexposed-c9-niner]
+> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/Data-Exposed--Demo--Vnet-Firewall-Rules-for-SQL-Database/player?WT.mc_id=dataexposed-c9-niner]
### Azure Networking terminology
azure-sql Saas Dbpertenant Dr Geo Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/saas-dbpertenant-dr-geo-restore.md
This tutorial uses features of Azure SQL Database and the Azure platform to addr
* [Azure Resource Manager templates](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md), to reserve all needed capacity as quickly as possible. Azure Resource Manager templates are used to provision a mirror image of the original servers and elastic pools in the recovery region. A separate server and pool are also created for provisioning new tenants. * [Elastic Database Client Library](elastic-database-client-library.md) (EDCL), to create and maintain a tenant database catalog. The extended catalog includes periodically refreshed pool and database configuration information. * [Shard management recovery features](elastic-database-recovery-manager.md) of the EDCL, to maintain database location entries in the catalog during recovery and repatriation.
-* [Geo-restore](../../key-vault/general/disaster-recovery-guidance.md), to recover the catalog and tenant databases from automatically maintained geo-redundant backups.
+* [Geo-restore](recovery-using-backups.md#geo-restore), to recover the catalog and tenant databases from automatically maintained geo-redundant backups.
* [Asynchronous restore operations](../../azure-resource-manager/management/async-operations.md), sent in tenant-priority order, are queued for each pool by the system and processed in batches so the pool isn't overloaded. These operations can be canceled before or during execution if necessary. * [Geo-replication](active-geo-replication-overview.md), to repatriate databases to the original region after the outage. There is no data loss and minimal impact on the tenant when you use geo-replication. * [SQL server DNS aliases](./dns-alias-overview.md), to allow the catalog sync process to connect to the active catalog regardless of its location.
azure-sql Saas Tenancy Video Index Wingtip Brk3120 20171011 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/saas-tenancy-video-index-wingtip-brk3120-20171011.md
Clicking any screenshot image takes you to the exact time location in the video.
[video-on-youtube-com-478y]: https://www.youtube.com/watch?v=jjNmcKBVjrc&t=1
-[video-on-channel9-479c]: https://channel9.msdn.com/Events/Ignite/Microsoft-Ignite-Orlando-2017/BRK3120
-- [resource-blog-saas-patterns-app-dev-sql-db-768h]: https://azure.microsoft.com/blog/saas-patterns-accelerate-saas-application-development-on-sql-database/
azure-sql Setup Geodr Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-group-cli.md
This script uses the following commands. Each command in the table links to comm
| Command | Description | ||| | [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) | Creates a failover group. |
-| [az sql failover-group set-primary](/cli/azure/sql/failover-groupt#az_sql_failover_group_set_primary) | Set the primary of the failover group by failing over all databases from the current primary server |
+| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Set the primary of the failover group by failing over all databases from the current primary server |
| [az sql failover-group show](/cli/azure/sql/failover-group) | Gets a failover group | | [az sql failover-group delete](/cli/azure/sql/failover-group) | Deletes a failover group |
azure-sql Sql Database Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-paas-overview.md
Azure SQL Database is based on the latest stable version of the [Microsoft SQL S
SQL Database enables you to easily define and scale performance within two different purchasing models: a [vCore-based purchasing model](service-tiers-vcore.md) and a [DTU-based purchasing model](service-tiers-dtu.md). SQL Database is a fully managed service that has built-in high availability, backups, and other common maintenance operations. Microsoft handles all patching and updating of the SQL and operating system code. You don't have to manage the underlying infrastructure.
-If you're new to Azure SQL Database, check out the *Azure SQL Database Overview* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/Azure-SQL-Database-Overview-7-of-61/player]
+If you're new to Azure SQL Database, check out the *Azure SQL Database Overview* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners/?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/Azure-SQL-Database-Overview-7-of-61/player]
azure-sql Understand Resolve Blocking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/understand-resolve-blocking.md
The Waittype, Open_Tran, and Status columns refer to information returned by [sy
## Next steps
-* [Azure SQL Database: Improving Performance Tuning with Automatic Tuning](https://channel9.msdn.com/Shows/Data-Exposed/Azure-SQL-Database-Improving-Performance-Tuning-with-Automatic-Tuning)
+* [Azure SQL Database: Improving Performance Tuning with Automatic Tuning](/Shows/Data-Exposed/Azure-SQL-Database-Improving-Performance-Tuning-with-Automatic-Tuning)
* [Deliver consistent performance with Azure SQL](/learn/modules/azure-sql-performance/) * [Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-common-errors-issues.md) * [Transient Fault Handling](/aspnet/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/transient-fault-handling)
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
Last updated 01/14/2021
Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the broadest SQL Server database engine compatibility with all the benefits of a fully managed and evergreen platform as a service. SQL Managed Instance has near 100% compatibility with the latest SQL Server (Enterprise Edition) database engine, providing a native [virtual network (VNet)](../../virtual-network/virtual-networks-overview.md) implementation that addresses common security concerns, and a [business model](https://azure.microsoft.com/pricing/details/sql-database/) favorable for existing SQL Server customers. SQL Managed Instance allows existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal application and database changes. At the same time, SQL Managed Instance preserves all PaaS capabilities (automatic patching and version updates, [automated backups](../database/automated-backups-overview.md), [high availability](../database/high-availability-sla.md)) that drastically reduce management overhead and TCO.
-If you're new to Azure SQL Managed Instance, check out the *Azure SQL Managed Instance* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/Azure-SQL-Managed-Instance-Overview-6-of-61/player]
+If you're new to Azure SQL Managed Instance, check out the *Azure SQL Managed Instance* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners/?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/Azure-SQL-Managed-Instance-Overview-6-of-61/player]
> [!IMPORTANT] > For a list of regions where SQL Managed Instance is currently available, see [Supported regions](resource-limits.md#supported-regions).
azure-sql Sql Server On Linux Vm What Is Iaas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/linux/sql-server-on-linux-vm-what-is-iaas-overview.md
SQL Server on Azure Virtual Machines enables you to use full versions of SQL Ser
Azure virtual machines run in many different [geographic regions](https://azure.microsoft.com/regions/) around the world. They also offer a variety of [machine sizes](../../../virtual-machines/sizes.md). The virtual machine image gallery allows you to create a SQL Server VM with the right version, edition, and operating system. This makes virtual machines a good option for a many different SQL Server workloads.
-If you're new to Azure SQL, check out the *SQL Server on Azure VM Overview* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-Overview-4-of-61/player]
+If you're new to Azure SQL, check out the *SQL Server on Azure VM Overview* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-Overview-4-of-61/player]
## <a id="create"></a> Get started with SQL Server VMs
azure-sql Sql Server On Azure Vm Iaas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md
Azure virtual machines run in many different [geographic regions](https://azure.microsoft.com/regions/) around the world. They also offer a variety of [machine sizes](../../../virtual-machines/sizes.md). The virtual machine image gallery allows you to create a SQL Server VM with the right version, edition, and operating system. This makes virtual machines a good option for many different SQL Server workloads.
-If you're new to SQL Server on Azure VMs, check out the *SQL Server on Azure VM Overview* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-Overview-4-of-61/player]
+If you're new to SQL Server on Azure VMs, check out the *SQL Server on Azure VM Overview* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-Overview-4-of-61/player]
## Automated updates
azure-video-analyzer Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/edge/direct-methods.md
Following are some of the error codes used at the detail level.
|409| ResourceValidationError| Referenced resource (example: video resource) is not in a valid state.| ## Supported direct methods
-Following are the direct methods exposed by the Video Analyzer edge module. The schema for the direct methods can be found [here](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.1.0/AzureVideoAnalyzerSdkDefinitions.json).
+Following are the direct methods exposed by the Video Analyzer edge module. The schema for the direct methods can be found [here](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.1.0/AzureVideoAnalyzerSdkDefinitions.json).
### pipelineTopologyList
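As a hedged illustration of invoking one of these direct methods, the Azure CLI call below lists the topologies on a module through IoT Hub. The hub, device, and module names are placeholders, and the command requires the `azure-iot` CLI extension:

```azurecli
az iot hub invoke-module-method \
  --hub-name myHub \
  --device-id myEdgeDevice \
  --module-id avaedge \
  --method-name pipelineTopologyList \
  --method-payload '{"@apiVersion": "1.1"}'
```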
azure-video-analyzer Manage Retention Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/manage-retention-policy.md
The retention period is typically set in the properties of a video sink node whe
} ```
-You can also set or update the `retentionPeriod` property of a video resource, using Azure portal, or via the [REST API](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/resource-manager/Microsoft.Media/preview/2021-11-01-preview/Videos.json). Below is an example of setting a 3-day retention policy.
+You can also set or update the `retentionPeriod` property of a video resource, using Azure portal, or via the [REST API](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/videoanalyzer/resource-manager/Microsoft.Media/preview/2021-11-01-preview/Videos.json). Below is an example of setting a 3-day retention policy.
``` "archival":
azure-video-analyzer Viewing Videos How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/viewing-videos-how-to.md
You can also use the Video Analyzer service to create videos using CVR. You can
## Accessing videos
-You can query the ARM API [`Videos`](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/resource-manager/Microsoft.Medi) shows you how.
+You can query the ARM API [`Videos`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/videoanalyzer/resource-manager/Microsoft.Medi) shows you how.
## Determining that a video recording is ready for viewing
When you export a portion of a video recording to an MP4 file, the resulting vid
## Recording and playback latencies
-When using Video Analyzer edge module to record to a video resource, you will specify a [`segmentLength` property](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.0.0/AzureVideoAnalyzer.json) in your pipeline topology which tells the module to aggregate a minimum duration of video (in seconds) before it is written to the cloud. For example, if `segmentLength` is set to 300, then the module will accumulate 5 minutes worth of video before uploading one 5 minutes ΓÇ£chunkΓÇ¥, then go into accumulation mode for the next 5 minutes, and upload again. Increasing the `segmentLength` has the benefit of lowering your Azure Storage transaction costs, as the number of reads and writes will be no more frequent than once every `segmentLength` seconds. If you are using Video Analyzer service, the pipeline topology has the same [`segmentLength` property](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/resource-manager/Microsoft.Media/preview/2021-11-01-preview/PipelineTopologies.json).
+When using the Video Analyzer edge module to record to a video resource, you will specify a [`segmentLength` property](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.1.0/AzureVideoAnalyzer.json) in your pipeline topology, which tells the module to aggregate a minimum duration of video (in seconds) before it is written to the cloud. For example, if `segmentLength` is set to 300, the module will accumulate 5 minutes' worth of video before uploading one 5-minute "chunk", then go into accumulation mode for the next 5 minutes, and upload again. Increasing the `segmentLength` has the benefit of lowering your Azure Storage transaction costs, as the number of reads and writes will be no more frequent than once every `segmentLength` seconds. If you are using the Video Analyzer service, the pipeline topology has the same [`segmentLength` property](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/videoanalyzer/resource-manager/Microsoft.Media/preview/2021-11-01-preview/PipelineTopologies.json).
Consequently, streaming of the video from your Video Analyzer account will be delayed by at least that much time.
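As a trimmed, illustrative fragment of where this property sits in a video sink node (node and video names are placeholders, and the ISO 8601 duration form is an assumption; in that form, 300 seconds is written as `PT5M`):

```json
{
  "@type": "#Microsoft.VideoAnalyzer.VideoSink",
  "name": "videoSink",
  "videoName": "sample-cvr-video",
  "videoCreationProperties": {
    "segmentLength": "PT5M"
  },
  "inputs": [
    { "nodeName": "rtspSource" }
  ]
}
```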
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reserved-instance.md
These requirements apply to buying a reserved dedicated host instance:
### Buy reserved instances for a CSP subscription
-CSPs that want to purchase reserved instances for their customers must use the **Admin On Behalf Of** (AOBO) procedure from the [Partner Center documentation](/partner-center/azure-plan-manage). For more information, view the [Admin on behalf of (AOBO)](https://channel9.msdn.com/Series/cspdev/Module-11-Admin-On-Behalf-Of-AOBO) video.
+CSPs that want to purchase reserved instances for their customers must use the **Admin On Behalf Of** (AOBO) procedure from the [Partner Center documentation](/partner-center/azure-plan-manage). For more information, view the Admin on behalf of (AOBO) video.
1. Sign in to [Partner Center](https://partner.microsoft.com).
backup Automation Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/automation-backup.md
Once you assign an Azure Policy to a scope, all VMs that meet your criteria are
The following video illustrates how Azure Policy works for backup: <br><br>
-> [!VIDEO https://channel9.msdn.com/Shows/IT-Ops-Talk/Configure-backups-at-scale-using-Azure-Policy/player]
+> [!VIDEO https://docs.microsoft.com/shows/IT-Ops-Talk/Configure-backups-at-scale-using-Azure-Policy/player]
### Export backup-operational data
For more information on how to set up this runbook, see [Automatic retry of fail
The following video provides an end-to-end walk-through of the scenario: <br><br>
- > [!VIDEO https://channel9.msdn.com/Shows/IT-Ops-Talk/Automatically-retry-failed-backup-jobs-using-Azure-Resource-Graph-and-Azure-Automation-Runbooks/player]
+ > [!VIDEO https://docs.microsoft.com/shows/IT-Ops-Talk/Automatically-retry-failed-backup-jobs-using-Azure-Resource-Graph-and-Azure-Automation-Runbooks/player]
## Additional resources
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-sql-server-database-azure-vms.md
Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Previously updated : 11/02/2021 Last updated : 01/14/2022
You can also use the following FQDNs to allow access to the required services fr
| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | 443 | Azure AD | Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online) | As applicable
+#### Allow connectivity for servers behind internal load balancers
+
+When you use an internal load balancer, you need to allow outbound connectivity from the virtual machines behind the internal load balancer to perform backups. To do so, you can use a combination of internal and external standard load balancers to create outbound connectivity. [Learn more](/azure/load-balancer/egress-only) about the configuration to create an _egress only_ setup for VMs in the backend pool of the internal load balancer.
+ #### Use an HTTP proxy server to route traffic When you back up a SQL Server database on an Azure VM, the backup extension on the VM uses the HTTPS APIs to send management commands to Azure Backup and data to Azure Storage. The backup extension also uses Azure AD for authentication. Route the backup extension traffic for these three services through the HTTP proxy. Use the list of IPs and FQDNs mentioned above for allowing access to the required services. Authenticated proxy servers aren't supported.
When you back up a SQL Server database on an Azure VM, the backup extension on t
- Multiple databases on the same SQL instance with casing difference aren't supported. -- Changing the casing of a SQL database isn't supported after configuring protection.
+- Changing the casing of an SQL database isn't supported after configuring protection.
>[!NOTE] >The **Configure Protection** operation for databases with special characters, such as '+' or '&', in their name isn't supported. You can change the database name or enable **Auto Protection**, which can successfully protect these databases.
How to discover databases running on a VM:
1. Azure Backup discovers all SQL Server databases on the VM. During discovery, the following elements occur in the background: * Azure Backup registers the VM with the vault for workload backup. All databases on the registered VM can be backed up to this vault only.
- * Azure Backup installs the AzureBackupWindowsWorkload extension on the VM. No agent is installed on a SQL database.
+ * Azure Backup installs the AzureBackupWindowsWorkload extension on the VM. No agent is installed on an SQL database.
* Azure Backup creates the service account NT Service\AzureWLBackupPluginSvc on the VM. * All backup and restore operations use the service account. * NT Service\AzureWLBackupPluginSvc requires SQL sysadmin permissions. All SQL Server VMs created in the Marketplace come with the SqlIaaSExtension installed. The AzureBackupWindowsWorkload extension uses the SQLIaaSExtension to automatically get the required permissions.
backup Manage Monitor Sql Database Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-monitor-sql-database-backup.md
Title: Manage and monitor SQL Server DBs on an Azure VM description: This article describes how to manage and monitor SQL Server databases that are running on an Azure VM. Previously updated : 11/02/2021 Last updated : 01/14/2022
If you haven't yet configured backups for your SQL Server databases, see [Back u
## Monitor backup jobs in the portal
-Azure Backup shows all scheduled and on-demand operations under **Backup jobs** in **Backup center** in the Azure portal, except the scheduled log backups since they can be very frequent. The jobs you see in this portal include database discovery and registration, configure backup, and backup and restore operations.
+Azure Backup shows all scheduled and on-demand operations under **Backup jobs** in **Backup center** in the Azure portal, except the scheduled log backups since they can be very frequent. The jobs you see in this portal include database discovery and registration, configure backup, and backup and restore operations.
:::image type="content" source="./media/backup-azure-sql-database/backup-operations-in-backup-center-jobs-inline.png" alt-text="Screenshot showing the Backup jobs under Backup jobs." lightbox="./media/backup-azure-sql-database/backup-operations-in-backup-center-jobs-expanded.png":::
You can fix the policy version for all the impacted items in one click:
## Unregister a SQL Server instance
-Unregister a SQL Server instance after you disable protection but before you delete the vault:
+Before you unregister the server, [disable soft delete](/azure/backup/backup-azure-security-feature-cloud#disabling-soft-delete-using-azure-portal), and then delete all backup items.
+
+>[!NOTE]
+>Deleting backup items with soft delete enabled leads to a 14-day retention period, and you'll need to wait before the items are completely removed. However, if you've deleted the backup items with soft delete enabled, you can undelete them, disable soft delete, and then delete them again for immediate removal. [Learn more](/azure/backup/backup-azure-security-feature-cloud#permanently-deleting-soft-deleted-backup-items).
+
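If you script these steps, disabling soft delete looks something like the following sketch in Azure PowerShell; the resource group and vault names are placeholders:

```azurepowershell
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "myResourceGroup" -Name "myVault"
Set-AzRecoveryServicesVaultProperty -VaultId $vault.ID -SoftDeleteFeatureState Disable
```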
+Unregister a SQL Server instance after you disable protection but before you delete the vault.
1. On the vault dashboard, under **Manage**, select **Backup Infrastructure**.
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/multi-user-authorization.md
Now that the Backup admin has the Reader role on the Resource Guard, they can ea
1. Go to the Recovery Services vault. Navigate to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
+ :::image type="content" source="./media/multi-user-authorization/test-vault-properties.png" alt-text="Screenshot showing the Recovery services vault-properties.":::
1. Now you are presented with the option to enable MUA and choose a Resource Guard in one of the following ways: 1. You can either specify the URI of the Resource Guard; make sure you specify the URI of a Resource Guard that you have **Reader** access to and that is in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard in its **Overview** screen:
Depicted below is an illustration of what happens when the Backup admin tries to
1. Select the directory containing the Resource Guard and authenticate yourself. This step may not be required if the Resource Guard is in the same directory as the vault. 1. Proceed to click **Save**. The request fails with an error informing you that you don't have sufficient permissions on the Resource Guard to perform this operation.
+ :::image type="content" source="./media/multi-user-authorization/test-vault-properties-security-settings-inline.png" alt-text="Screenshot showing the Test Vault properties security settings." lightbox="./media/multi-user-authorization/test-vault-properties-security-settings-expanded.png":::
+ ## Authorize critical (protected) operations using Azure AD Privileged Identity Management The following sub-sections discuss authorizing these requests using PIM. There are cases where you may need to perform critical operations on your backups and MUA can help you ensure that these are performed only when the right approvals or permissions exist. As discussed earlier, the Backup admin needs to have a Contributor role on the Resource Guard to perform critical operations that are in the Resource Guard scope. One of the ways to allow just-in-time for such operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
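For comparison, a standing (non-PIM) assignment of the Contributor role scoped to the Resource Guard could be made as in the sketch below. The assignee and resource IDs are placeholders; the Resource Guard's resource ID can be copied from its Overview page:

```azurecli
az role assignment create \
  --assignee "backupadmin@contoso.com" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataProtection/resourceGuards/<resource-guard-name>"
```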
Once the Backup admin's request for the Contributor role on the Resource Guard
>[!NOTE] > If the access was assigned using a JIT mechanism, the Contributor role is retracted at the end of the approved period. Else, the Security admin manually removes the **Contributor** role assigned to the Backup admin to perform the critical operation.
+The following screenshot shows an example of disabling soft delete for an MUA-enabled vault.
++ ## Disable MUA on a Recovery Services vault Disabling MUA is a protected operation, and hence, is protected using MUA. This means that the Backup admin must have the required Contributor role in the Resource Guard. Details on obtaining this role are described here. Following is a summary of steps to disable MUA on a vault.
Disabling MUA is a protected operation, and hence, is protected using MUA. This
1. Click **Update** 1. Uncheck the Protect with Resource Guard check box 1. Choose the Directory that contains the Resource Guard and verify access using the Authenticate button (if applicable).
- 1. After **authentication**, click **Save**. With the right access, the request should be successfully completed.
+ 1. After **authentication**, click **Save**. With the right access, the request should be successfully completed.
+
+ :::image type="content" source="./media/multi-user-authorization/disable-mua.png" alt-text="Screenshot showing to disable multi-user authentication.":::
backup Microsoft Azure Recovery Services Powershell All https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/scripts/microsoft-azure-recovery-services-powershell-all.md
$WC = New-Object System.Net.WebClient
$WC.DownloadFile($MarsAURL,'C:\downloads\MARSAgentInstaller.EXE') C:\Downloads\MARSAgentInstaller.EXE /q
-MARSAgentInstaller.exe /q # Please note the commandline install options available here: https://docs.microsoft.com/en-us/azure/backup/backup-client-automation#installation-options
+MARSAgentInstaller.exe /q # Please note the commandline install options available here: https://docs.microsoft.com/azure/backup/backup-client-automation#installation-options
# Registering Windows Server or Windows client machine to a Recovery Services Vault $CredsPath = "C:\downloads"
Set-OBMachineSetting -NoThrottle
# Encryption settings $PassPhrase = ConvertTo-SecureString -String "Complex!123_STRING" -AsPlainText -Force Set-OBMachineSetting -EncryptionPassPhrase $PassPhrase -SecurityPin "<generatedPIN>" #NOTE: You must generate a security pin by selecting Generate, under Settings > Properties > Security PIN in the Recovery Services vault section of the Azure portal.
-# See: https://docs.microsoft.com/en-us/rest/api/backup/securitypins/get
-# See: https://docs.microsoft.com/en-us/powershell/module/azurerm.keyvault/Add-AzureKeyVaultKey?view=azurermps-6.13.0
+# See: https://docs.microsoft.com/rest/api/backup/securitypins/get
+# See: https://docs.microsoft.com/powershell/module/azurerm.keyvault/Add-AzureKeyVaultKey?view=azurermps-6.13.0
# Back up files and folders $NewPolicy = New-OBPolicy
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/configuration-settings.md
Previously updated : 11/29/2021 Last updated : 01/14/2022
The sections in this article discuss the resources and settings for Azure Bastio
A SKU is also known as a Tier. Azure Bastion supports two SKU types: Basic and Standard. The SKU is configured in the Azure portal during the workflow when you configure Bastion. You can [upgrade a Basic SKU to a Standard SKU](#upgradesku).
-* The **Basic SKU** provides base functionality, enabling Azure Bastion to manage RDP/SSH connectivity to Virtual Machines (VMs) without exposing public IP addresses on the target application VMs.
-* The Standard SKU enables premium features that allow Azure Bastion to manage remote connectivity at a larger scale.
+* The **Basic SKU** provides base functionality, enabling Azure Bastion to manage RDP/SSH connectivity to virtual machines (VMs) without exposing public IP addresses on the target application VMs.
+* The **Standard SKU** enables premium features that allow Azure Bastion to manage remote connectivity at a larger scale.
The following table shows features and corresponding SKUs.

[!INCLUDE [Azure Bastion SKUs](../../includes/bastion-sku.md)]
-### Configuration methods
- Currently, you must use the Azure portal if you want to specify the Standard SKU. If you use the Azure CLI or Azure PowerShell to configure Bastion, the SKU can't be specified and defaults to the Basic SKU. | Method | Value | Links |
Azure Bastion supports upgrading from a Basic to a Standard SKU.
> Downgrading from a Standard SKU to a Basic SKU is not supported. To downgrade, you must delete and recreate Azure Bastion. >
-#### Configuration methods
- You can configure this setting using the following method: | Method | Value | Links | | | | | | Azure portal |Tier | [Upgrade a SKU](upgrade-sku.md)|
-## <a name="instance"></a>Instances and host scaling
-
-An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Standard SKU, you can specify the number of instances. This is called **host scaling**.
-
-Each instance can support 10 concurrent RDP connections and 50 concurrent SSH connections. The number of connections per instances depends on what actions you are taking when connected to the client VM. For example, if you are doing something data intensive, it creates a larger load for the instance to process. Once the concurrent sessions are exceeded, an additional scale unit (instance) is required.
-
-Instances are created in the AzureBastionSubnet. To allow for host scaling, the AzureBastionSubnet should be /26 or larger. Using a smaller subnet limits the number of instances you can create. For more information about the AzureBastionSubnet, see the [subnets](#subnet) section in this article.
-
-### Configuration methods
-
-You can configure this setting using the following methods:
-
-| Method | Value | Links |
-| | | |
-| Azure portal |Instance count | [Azure portal steps](configure-host-scaling.md)|
-| Azure PowerShell | ScaleUnit | [PowerShell steps](configure-host-scaling-powershell.md) |
## <a name="subnet"></a>Azure Bastion subnet

>[!IMPORTANT]
>For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work, but we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future.
>
-Azure Bastion requires a dedicated subnet: **AzureBastionSubnet**. This subnet needs to be created in the same Virtual Network that Azure Bastion is deployed to. The subnet must have the following configuration:
+Azure Bastion requires a dedicated subnet: **AzureBastionSubnet**. You must create this subnet in the same virtual network that you want to deploy Azure Bastion to. The subnet must have the following configuration:
* Subnet name must be *AzureBastionSubnet*.
* Subnet size must be /26 or larger (/25, /24, etc.).
Azure Bastion requires a dedicated subnet: **AzureBastionSubnet**. This subnet n
* The subnet must be in the same VNet and resource group as the bastion host.
* The subnet cannot contain additional resources.
-### Configuration methods
- You can configure this setting using the following methods: | Method | Value | Links |
Azure Bastion requires a Public IP address. The Public IP must have the followin
* The Public IP address name is the resource name by which you want to refer to this public IP address.
* You can choose to use a public IP address that you already created, as long as it meets the criteria required by Azure Bastion and is not already in use.
-### Configuration methods
You can configure this setting using the following methods:

| Method | Value | Links |
| --- | --- | --- |
| Azure portal | Public IP address | [Azure portal](https://portal.azure.com) |
| Azure PowerShell | -PublicIpAddress | [cmdlet](/powershell/module/az.network/new-azbastion#parameters) |
-| Azure CLI | --public-ip create |[command](/cli/azure/network/public-ip)
-|
+| Azure CLI | --public-ip create |[command](/cli/azure/network/public-ip) |
+
+## <a name="instance"></a>Instances and host scaling
+
+An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Standard SKU, you can specify the number of instances. This is called **host scaling**.
+
+Each instance can support 10 concurrent RDP connections and 50 concurrent SSH connections. The number of connections per instance depends on what actions you are taking when connected to the client VM. For example, if you are doing something data intensive, it creates a larger load for the instance to process. Once these concurrent session limits are exceeded, an additional scale unit (instance) is required. As a rough worked example, the Basic SKU's two instances support about 20 concurrent RDP or 100 concurrent SSH sessions, fewer under data-intensive workloads.
+
+Instances are created in the AzureBastionSubnet. To allow for host scaling, the AzureBastionSubnet should be /26 or larger. Using a smaller subnet limits the number of instances you can create. For more information about the AzureBastionSubnet, see the [subnets](#subnet) section in this article.
+
+You can configure this setting using the following methods:
+
+| Method | Value | Links |
+| | | |
+| Azure portal |Instance count | [Azure portal steps](configure-host-scaling.md)|
+| Azure PowerShell | ScaleUnit | [PowerShell steps](configure-host-scaling-powershell.md) |
+
+## <a name="ports"></a>Custom ports
+
+You can specify the port that you want to use to connect to your VMs. By default, the inbound ports used to connect are 3389 for RDP and 22 for SSH. If you configure a custom port value, you need to specify that value when you connect to the VM.
+
+Custom port values are supported for the Standard SKU only. If your Bastion deployment uses the Basic SKU, you can easily [upgrade a Basic SKU to a Standard SKU](#upgradesku).
## Next steps
batch Batch Js Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-js-get-started.md
Now, let us follow the process step by step to build the JavaScript client:
You can install Azure Batch SDK for JavaScript using the npm install command.
-`npm install azure-batch`
+`npm install @azure/batch`
This command installs the latest version of the Azure Batch JavaScript SDK.
Following code snippet first imports the azure-batch JavaScript module and then
```javascript
// Initializing Azure Batch variables
-var batch = require('azure-batch');
+import { BatchServiceClient, BatchSharedKeyCredentials } from "@azure/batch";
-var accountName = '<azure-batch-account-name>';
+// Replace values below with Batch Account details
+const batchAccountName = '<batch-account-name>';
+const batchAccountKey = '<batch-account-key>';
+const batchEndpoint = '<batch-account-url>';
-var accountKey = '<account-key-downloaded>';
-
-var accountUrl = '<account-url>'
-
-// Create Batch credentials object using account name and account key
-
-var credentials = new batch.SharedKeyCredentials(accountName,accountKey);
-
-// Create Batch service client
-
-var batch_client = new batch.ServiceClient(credentials,accountUrl);
+const credentials = new BatchSharedKeyCredentials(batchAccountName, batchAccountKey);
+const batchClient = new BatchServiceClient(credentials, batchEndpoint);
```
The following code snippet creates the configuration parameter objects.
```javascript
// Creating Image reference configuration for Ubuntu Linux VM
-var imgRef = {publisher:"Canonical",offer:"UbuntuServer",sku:"14.04.2-LTS",version:"latest"}
-
+const imgRef = {
+ publisher: "Canonical",
+ offer: "UbuntuServer",
+ sku: "18.04-LTS",
+ version: "latest"
+}
// Creating the VM configuration object with the SKUID
-var vmconfig = {imageReference:imgRef,nodeAgentSKUId:"batch.node.ubuntu 14.04"}
-
-// Setting the VM size to Standard F4
-var vmSize = "STANDARD_F4"
-
-//Setting number of VMs in the pool to 4
-var numVMs = 4
+const vmConfig = {
+ imageReference: imgRef,
+ nodeAgentSKUId: "batch.node.ubuntu 18.04"
+};
+// Number of VMs to create in a pool
+const numVms = 4;
+
+// Setting the VM size
+const vmSize = "STANDARD_D1_V2";
```

> [!TIP]
The following code snippet creates an Azure Batch pool.
```javascript
// Create a unique Azure Batch pool ID
-var poolid = "pool" + customerDetails.customerid;
-var poolConfig = {id:poolid, displayName:poolid,vmSize:vmSize,virtualMachineConfiguration:vmconfig,targetDedicatedComputeNodes:numVms,enableAutoScale:false };
-// Creating the Pool for the specific customer
-var pool = batch_client.pool.add(poolConfig,function(error,result){
+const now = new Date();
+const poolId = `processcsv_${now.getFullYear()}${now.getMonth()}${now.getDay()}${now.getHours()}${now.getSeconds()}`;
+
+const poolConfig = {
+ id: poolId,
+ displayName: "Processing csv files",
+ vmSize: vmSize,
+ virtualMachineConfiguration: vmConfig,
+ targetDedicatedNodes: numVms,
+ enableAutoScale: false
+};
+
+// Creating the Pool
+var pool = batchClient.pool.add(poolConfig, function (error, result){
  if(error!=null){console.log(error.response)};
});
```
var pool = batch_client.pool.add(poolConfig,function(error,result){
You can check the status of the created pool and ensure that the state is "active" before going ahead with submission of a job to that pool.

```javascript
-var cloudPool = batch_client.pool.get(poolid,function(error,result,request,response){
+var cloudPool = batchClient.pool.get(poolId,function(error,result,request,response){
if(error == null) {
var cloudPool = batch_client.pool.get(poolid,function(error,result,request,respo
Following is a sample result object returned by the pool.get function.

```
-{ id: 'processcsv_201721152',
- displayName: 'processcsv_201721152',
- url: 'https://<batch-account-name>.centralus.batch.azure.com/pools/processcsv_201721152',
- eTag: '<eTag>',
- lastModified: 2017-03-27T10:28:02.398Z,
- creationTime: 2017-03-27T10:28:02.398Z,
+{
+ id: 'processcsv_2022002321',
+ displayName: 'Processing csv files',
+ url: 'https://<batch-account-name>.westus.batch.azure.com/pools/processcsv_2022002321',
+ eTag: '0x8D9D4088BC56FA1',
+ lastModified: 2022-01-10T07:12:21.943Z,
+ creationTime: 2022-01-10T07:12:21.943Z,
state: 'active',
- stateTransitionTime: 2017-03-27T10:28:02.398Z,
- allocationState: 'resizing',
- allocationStateTransitionTime: 2017-03-27T10:28:02.398Z,
- vmSize: 'standard_a1',
- virtualMachineConfiguration:
- { imageReference:
- { publisher: 'Canonical',
- offer: 'UbuntuServer',
- sku: '14.04.2-LTS',
- version: 'latest' },
- nodeAgentSKUId: 'batch.node.ubuntu 14.04' },
- resizeTimeout:
- { [Number: 900000]
- _milliseconds: 900000,
- _days: 0,
- _months: 0,
- _data:
- { milliseconds: 0,
- seconds: 0,
- minutes: 15,
- hours: 0,
- days: 0,
- months: 0,
- years: 0 },
- _locale:
- Locale {
- _calendar: [Object],
- _longDateFormat: [Object],
- _invalidDate: 'Invalid date',
- ordinal: [Function: ordinal],
- _ordinalParse: /\d{1,2}(th|st|nd|rd)/,
- _relativeTime: [Object],
- _months: [Object],
- _monthsShort: [Object],
- _week: [Object],
- _weekdays: [Object],
- _weekdaysMin: [Object],
- _weekdaysShort: [Object],
- _meridiemParse: /[ap]\.?m?\.?/i,
- _abbr: 'en',
- _config: [Object],
- _ordinalParseLenient: /\d{1,2}(th|st|nd|rd)|\d{1,2}/ } },
- currentDedicated: 0,
- targetDedicated: 4,
+ stateTransitionTime: 2022-01-10T07:12:21.943Z,
+ allocationState: 'steady',
+ allocationStateTransitionTime: 2022-01-10T07:13:35.103Z,
+ vmSize: 'standard_d1_v2',
+ virtualMachineConfiguration: {
+ imageReference: {
+ publisher: 'Canonical',
+ offer: 'UbuntuServer',
+ sku: '18.04-LTS',
+ version: 'latest'
+ },
+ nodeAgentSKUId: 'batch.node.ubuntu 18.04'
+ },
+ resizeTimeout: 'PT15M',
+ currentDedicatedNodes: 4,
+ currentLowPriorityNodes: 0,
+ targetDedicatedNodes: 4,
+ targetLowPriorityNodes: 0,
  enableAutoScale: false,
  enableInterNodeCommunication: false,
  taskSlotsPerNode: 1,
- taskSchedulingPolicy: { nodeFillType: 'Spread' } }
+ taskSchedulingPolicy: { nodeFillType: 'Spread' }}
```

### Step 4: Submit an Azure Batch job
An Azure Batch job is a logical group of similar tasks. In our scenario, it is "
These tasks run in parallel and are deployed across multiple nodes, orchestrated by the Azure Batch service.

> [!TIP]
-> You can use the taskSlotsPerNode property to specify maximum number of tasks that can run concurrently on a single node.
+> You can use the [taskSlotsPerNode](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/batch/arm-batch/src/models/index.ts#L1190-L1191) property to specify maximum number of tasks that can run concurrently on a single node.
#### Preparation task

The VM nodes created are blank Ubuntu nodes. Often, you need to install a set of programs as prerequisites. Typically, for Linux nodes you can have a shell script that installs the prerequisites before the actual tasks run. However, it could be any programmable executable.
-The [shell script](https://github.com/shwetams/azure-batchclient-sample-nodejs/blob/master/startup_prereq.sh) in this example installs Python-pip and the Azure Storage SDK for Python.
+The [shell script](https://github.com/Azure-Samples/azure-batch-samples/blob/master/JavaScript/Node.js/startup_prereq.sh) in this example installs Python-pip and the Azure Storage Blob SDK for Python.
You can upload the script to an Azure Storage account and generate a SAS URI to access the script. This process can also be automated using the Azure Storage JavaScript SDK, as shown in the sketch after the following tip.

> [!TIP]
-> A preparation task for a job runs only on the VM nodes where the specific task needs to run. If you want prerequisites to be installed on all nodes irrespective of the tasks that run on it, you can use the startTask property while adding a pool. You can use the following preparation task definition for reference.
+> A preparation task for a job runs only on the VM nodes where the specific task needs to run. If you want prerequisites to be installed on all nodes irrespective of the tasks that run on it, you can use the [startTask](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/batch/batch/src/models/index.ts#L1432) property while adding a pool. You can use the following preparation task definition for reference.
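The following is a minimal sketch of that automation, assuming the `@azure/storage-blob` package (v12) and placeholder storage account values; the container, blob, and function names are illustrative only:

```javascript
// Hypothetical helper: upload the startup script and return a read-only SAS URI.
// Assumes: npm install @azure/storage-blob, and the placeholder account values below.
const {
  BlobServiceClient,
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
  BlobSASPermissions
} = require("@azure/storage-blob");

const account = "<storage-account-name>";   // placeholder
const accountKey = "<storage-account-key>"; // placeholder
const sharedKeyCredential = new StorageSharedKeyCredential(account, accountKey);
const blobServiceClient = new BlobServiceClient(
  `https://${account}.blob.core.windows.net`,
  sharedKeyCredential
);

async function uploadScriptAndGetSasUri(containerName, localFilePath, blobName) {
  const containerClient = blobServiceClient.getContainerClient(containerName);
  await containerClient.createIfNotExists();
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);
  await blockBlobClient.uploadFile(localFilePath);

  // Generate a read-only SAS, valid for 24 hours.
  const sas = generateBlobSASQueryParameters(
    {
      containerName,
      blobName,
      permissions: BlobSASPermissions.parse("r"),
      expiresOn: new Date(Date.now() + 24 * 60 * 60 * 1000)
    },
    sharedKeyCredential
  ).toString();

  return `${blockBlobClient.url}?${sas}`;
}

// Example usage:
// uploadScriptAndGetSasUri("scripts", "startup_prereq.sh", "startup_prereq.sh")
//   .then(uri => console.log(uri));
```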
-A preparation task is specified during the submission of Azure Batch job. Following are the preparation task configuration parameters:
+A preparation task is specified during the submission of Azure Batch job. Following are some configurable preparation task parameters:
- **ID**: A unique identifier for the preparation task
- **commandLine**: Command line to execute the task executable
- **resourceFiles**: Array of objects that provide details of files needed to be downloaded for this task to run. Following are its options
- - blobSource: The SAS URI of the file
+ - httpUrl: The URL of the file to download
 - filePath: Local path to download and save the file
 - fileMode: Only applicable for Linux nodes; fileMode is in octal format with a default value of 0770
- **waitForSuccess**: If set to true, the task does not run on preparation task failures
A preparation task is specified during the submission of Azure Batch job. Follow
Following code snippet shows the preparation task script configuration sample:

```javascript
-var job_prep_task_config = {id:"installprereq",commandLine:"sudo sh startup_prereq.sh > startup.log",resourceFiles:[{'blobSource':'Blob SAS URI','filePath':'startup_prereq.sh'}],waitForSuccess:true,runElevated:true}
+var jobPrepTaskConfig = {
+  id: "installprereq",
+  commandLine: "sudo sh startup_prereq.sh > startup.log",
+  resourceFiles: [{ 'httpUrl': 'Blob sh url', 'filePath': 'startup_prereq.sh' }],
+  waitForSuccess: true,
+  runElevated: true,
+  userIdentity: { autoUser: { elevationLevel: "admin", scope: "pool" } }
+};
```

If there are no prerequisites to be installed for your tasks to run, you can skip the preparation task. The following code creates a job with the display name "process csv files."

```javascript
- // Setting up Batch pool configuration
- var pool_config = {poolId:poolid}
- // Setting up Job configuration along with preparation task
- var jobId = "processcsvjob"
- var job_config = {id:jobId,displayName:"process csv files",jobPreparationTask:job_prep_task_config,poolInfo:pool_config}
+ // Setting Batch Pool ID
+const poolInfo = { poolId: poolId };
+// Batch job configuration object
+const jobId = "processcsvjob";
+const jobConfig = {
+ id: jobId,
+ displayName: "process csv files",
+ jobPreparationTask: jobPrepTaskConfig,
+ poolInfo: poolInfo
+};
// Adding Azure batch job to the pool
- var job = batch_client.job.add(job_config,function(error,result){
- if(error != null)
- {
- console.log("Error submitting job : " + error.response);
- }});
+ const job = batchClient.job.add(jobConfig, function (error, result) {
+ if (error !== null) {
+ console.log("An error occurred while creating the job...");
+ console.log(error.response);
+ }
+ });
```

### Step 5: Submit Azure Batch tasks for a job

Now that our process csv job is created, let us create tasks for that job. Assuming we have four containers, we have to create four tasks, one for each container.
-If we look at the [Python script](https://github.com/shwetams/azure-batchclient-sample-nodejs/blob/master/processcsv.py), it accepts two parameters:
+If we look at the [Python script](https://github.com/Azure-Samples/azure-batch-samples/blob/master/JavaScript/Node.js/processcsv.py), it accepts two parameters:
- container name: The Storage container to download files from
- pattern: An optional parameter of file name pattern
-Assuming we have four containers "con1", "con2", "con3","con4" following code shows submitting for tasks to the Azure batch job "process csv" we created earlier.
+Assuming we have four containers, "con1", "con2", "con3", and "con4", the following code shows how to submit four tasks to the Azure Batch job "process csv" that we created earlier.
```javascript
// storing container names in an array
-var container_list = ["con1","con2","con3","con4"]
- container_list.forEach(function(val,index){
-
- var container_name = val;
- var taskID = container_name + "_process";
- var task_config = {id:taskID,displayName:'process csv in ' + container_name,commandLine:'python processcsv.py --container ' + container_name,resourceFiles:[{'blobSource':'<blob SAS URI>','filePath':'processcsv.py'}]}
- var task = batch_client.task.add(poolid,task_config,function(error,result){
- if(error != null)
- {
- console.log(error.response);
- }
- else
- {
- console.log("Task for container : " + container_name + "submitted successfully");
- }
---
- });
-
+const containerList = ["con1", "con2", "con3", "con4"]; //Replace with list of blob containers within storage account
+containerList.forEach(function (val, index) {
+ console.log("Submitting task for container : " + val);
+ const containerName = val;
+ const taskID = containerName + "_process";
+ // Task configuration object
+ const taskConfig = {
+ id: taskID,
+ displayName: 'process csv in ' + containerName,
+ commandLine: 'python processcsv.py --container ' + containerName,
+ resourceFiles: [{ 'httpUrl': 'Blob script url', 'filePath': 'processcsv.py' }]
+ };
+
+ const task = batchClient.task.add(jobId, taskConfig, function (error, result) {
+ if (error !== null) {
+ console.log("Error occurred while creating task for container " + containerName + ". Details: " + error.response);
+ }
+ else {
+ console.log("Task for container : " + containerName + " submitted successfully");
+ }
});
+});
```

The code adds multiple tasks to the pool. Each task is executed on a node in the pool of VMs created. If the number of tasks exceeds the number of VMs in a pool or the taskSlotsPerNode property, the tasks wait until a node is made available. This orchestration is handled by Azure Batch automatically.
-The portal has detailed views on the tasks and job statuses. You can also use the list and get functions in the Azure JavaScript SDK..
+The portal has detailed views on the tasks and job statuses. You can also use the list and get functions in the Azure JavaScript SDK. Details are provided in the [documentation](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/batch/batch/src/operations/job.ts#L114-L149).
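For example, a minimal sketch of task monitoring with the same `batchClient` and `jobId` as above; the callback style mirrors the earlier snippets, and the exact result shape is an assumption to verify against your SDK version:

```javascript
// List all tasks in the job and print each task's current state.
batchClient.task.list(jobId, function (error, result) {
    if (error !== null) {
        console.log(error.response);
    }
    else {
        // result is expected to be an array of CloudTask objects
        result.forEach(function (task) {
            console.log("Task " + task.id + " : " + task.state);
        });
    }
});
```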
## Next steps
cloud-services Cloud Services Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-get-started.md
Here are some cloud service sample applications that demonstrate more real-world
For general information about developing for the cloud, see [Building Real-World Cloud Apps with Azure](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/introduction).
-For a video introduction to Azure Storage best practices and patterns, see [Microsoft Azure Storage ΓÇô What's New, Best Practices and Patterns](https://channel9.msdn.com/Events/Build/2014/3-628).
+For a video introduction to Azure Storage best practices and patterns, see Microsoft Azure Storage – What's New, Best Practices and Patterns.
For more information, see the following resources:
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/whats-new.md
We've also added links to some user-generated content. Those items will be marke
## Videos
-* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](https://channel9.msdn.com/Shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detection APIs with Tony Xing and Seth Juarez
-* April 20, 2021 [AI Show Live | Episode 11| New to Anomaly Detector: Multivariate Capabilities](https://channel9.msdn.com/Shows/AI-Show/AI-Show-Live-Episode-11-Whats-new-with-Anomaly-Detector) - AI Show live recording with Tony Xing and Seth Juarez
-* May 18, 2020 [Inside Anomaly Detector](https://channel9.msdn.com/Shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez
+* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](/shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detection APIs with Tony Xing and Seth Juarez
+* April 20, 2021 AI Show Live | Episode 11| New to Anomaly Detector: Multivariate Capabilities - AI Show live recording with Tony Xing and Seth Juarez
+* May 18, 2020 [Inside Anomaly Detector](/shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez
* September 19, 2019 **[UGC]** [Detect Anomalies in Your Data with the Anomaly Detector](https://www.youtube.com/watch?v=gfb63wvjnYQ) - Video by Jon Wood
-* September 3, 2019 [Anomaly detection on streaming data using Azure Databricks](https://channel9.msdn.com/Shows/AI-Show/Anomaly-detection-on-streaming-data-using-Azure-Databricks) - AI Show with Qun Ying
-* August 27, 2019 [Anomaly Detector v1.0 Best Practices](https://channel9.msdn.com/Shows/AI-Show/Anomaly-Detector-v10-Best-Practices) - AI Show on univariate anomaly detection best practices with Qun Ying
-* August 20, 2019 [Bring Anomaly Detector on-premises with containers support](https://channel9.msdn.com/Shows/AI-Show/Bring-Anomaly-Detector-on-premise-with-containers-support) - AI Show with Qun Ying and Seth Juarez
-* August 13, 2019 [Introducing Azure Anomaly Detector](https://channel9.msdn.com/Shows/AI-Show/Introducing-Azure-Anomaly-Detector?WT.mc_id=ai-c9-niner) - AI Show with Qun Ying and Seth Juarez
+* September 3, 2019 [Anomaly detection on streaming data using Azure Databricks](/shows/AI-Show/Anomaly-detection-on-streaming-data-using-Azure-Databricks) - AI Show with Qun Ying
+* August 27, 2019 [Anomaly Detector v1.0 Best Practices](/shows/AI-Show/Anomaly-Detector-v10-Best-Practices) - AI Show on univariate anomaly detection best practices with Qun Ying
+* August 20, 2019 [Bring Anomaly Detector on-premises with containers support](/shows/AI-Show/Bring-Anomaly-Detector-on-premise-with-containers-support) - AI Show with Qun Ying and Seth Juarez
+* August 13, 2019 [Introducing Azure Anomaly Detector](/shows/AI-Show/Introducing-Azure-Anomaly-Detector?WT.mc_id=ai-c9-niner) - AI Show with Qun Ying and Seth Juarez
## Service updates
cognitive-services Facebook Post Moderation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/facebook-post-moderation.md
+
+ Title: "Tutorial: Moderate Facebook content - Content Moderator"
+
+description: In this tutorial, you will learn how to use machine-learning-based Content Moderator to help moderate Facebook posts and comments.
+ Last updated : 01/29/2021
+#Customer intent: As the moderator of a Facebook page, I want to use Azure's machine learning technology to automate and streamline the process of post moderation.
++
+# Tutorial: Moderate Facebook posts and comments with Azure Content Moderator
++
+In this tutorial, you will learn how to use Azure Content Moderator to help moderate the posts and comments on a Facebook page. Facebook will send the content posted by visitors to the Content Moderator service. Then your Content Moderator workflows will either publish the content or create reviews within the Review tool, depending on the content scores and thresholds.
+
+> [!IMPORTANT]
+> In 2018, Facebook implemented a more strict vetting policy for Facebook Apps. You will not be able to complete the steps of this tutorial if your app has not been reviewed and approved by the Facebook review team.
+
+This tutorial shows you how to:
+
+> [!div class="checklist"]
+> * Create a Content Moderator team.
+> * Create Azure Functions that listen for HTTP events from Content Moderator and Facebook.
+> * Link a Facebook page to Content Moderator using a Facebook application.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+This diagram illustrates each component of this scenario:
+
+![Diagram of Content Moderator receiving information from Facebook through "FBListener" and sending information through "CMListener"](images/tutorial-facebook-moderation.png)
+
+## Prerequisites
+
+- A Content Moderator subscription key. Follow the instructions in [Create a Cognitive Services account](../cognitive-services-apis-create-account.md) to subscribe to the Content Moderator service and get your key.
+- A [Facebook account](https://www.facebook.com/).
+
+## Create a review team
+
+Refer to the [Try Content Moderator on the web](quick-start.md) quickstart for instructions on how to sign up for the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) and create a review team. Take note of the **Team ID** value on the **Credentials** page.
+
+## Configure image moderation workflow
+
+Refer to the [Define, test, and use workflows](review-tool-user-guide/workflows.md) guide to create a custom image workflow. Content Moderator will use this workflow to automatically check images on Facebook and send some to the Review tool. Take note of the workflow **name**.
+
+## Configure text moderation workflow
+
+Again, refer to the [Define, test, and use workflows](review-tool-user-guide/workflows.md) guide; this time, create a custom text workflow. Content Moderator will use this workflow to automatically check text content. Take note of the workflow **name**.
+
+![Configure Text Workflow](images/text-workflow-configure.PNG)
+
+Test your workflow using the **Execute Workflow** button.
+
+![Test Text Workflow](images/text-workflow-test.PNG)
+
+## Create Azure Functions
+
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps:
+
+1. Create an Azure Function App as shown on the [Azure Functions](../../azure-functions/functions-create-function-app-portal.md) page.
+1. Go to the newly created Function App.
+1. Within the App, go to the **Platform features** tab and select **Configuration**. In the **Application settings** section of the next page, select **New application setting** to add the following key/value pairs:
+
+ | App Setting name | value |
+ | -- |-|
+ | `cm:TeamId` | Your Content Moderator TeamId |
+ | `cm:SubscriptionKey` | Your Content Moderator subscription key - See [Credentials](./review-tool-user-guide/configure.md#credentials) |
+ | `cm:Region` | Your Content Moderator region name, without the spaces. You can find this name in the **Location** field of the **Overview** tab of your Azure resource.|
+ | `cm:ImageWorkflow` | Name of the workflow to run on Images |
+ | `cm:TextWorkflow` | Name of the workflow to run on Text |
+ | `cm:CallbackEndpoint` | URL for the CMListener Function App that you will create later in this guide |
+ | `fb:VerificationToken` | A secret token that you create, used to subscribe to the Facebook feed events |
+ | `fb:PageAccessToken` | The Facebook Graph API access token, which does not expire and allows the function to hide or delete posts on your behalf. You will get this token in a later step. |
+
+ Click the **Save** button at the top of the page.
+
+1. Go back to the **Platform features** tab. Use the **+** button on the left pane to bring up the **New function** pane. The function you are about to create will receive events from Facebook.
+
+ ![Azure Functions pane with the Add Function button highlighted.](images/new-function.png)
+
+ 1. Click on the tile that says **Http trigger**.
+ 1. Enter the name **FBListener**. The **Authorization Level** field should be set to **Function**.
+ 1. Click **Create**.
+ 1. Replace the contents of the **run.csx** with the contents from **FbListener/run.csx**. (A JavaScript sketch of the verification handshake this function answers appears after this section.)
+
+ [!code-csharp[FBListener: csx file](~/samples-fbPageModeration/FbListener/run.csx?range=1-154)]
+
+1. Create a new **Http trigger** function named **CMListener**. This function receives events from Content Moderator. Replace the contents of the **run.csx** with the contents from **CMListener/run.csx**
+
+ [!code-csharp[FBListener: csx file](~/samples-fbPageModeration/CmListener/run.csx?range=1-110)]
+++
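Before delivering feed events, Facebook verifies the callback URL with a GET handshake, echoing back `hub.challenge` when `hub.verify_token` matches. The tutorial's functions are C#; the following is a hypothetical JavaScript equivalent of just that handshake, with the app setting name and response shapes labeled as assumptions:

```javascript
// Hypothetical Azure Functions (JavaScript) handler sketching the Facebook
// webhook verification handshake that FBListener must answer.
module.exports = async function (context, req) {
    if (req.method === "GET") {
        const mode = req.query["hub.mode"];
        const token = req.query["hub.verify_token"];
        const challenge = req.query["hub.challenge"];
        // Compare against the fb:VerificationToken app setting you created earlier
        // (setting name shown for illustration; naming rules vary by platform).
        if (mode === "subscribe" && token === process.env["fb:VerificationToken"]) {
            context.res = { status: 200, body: challenge }; // echo the challenge back
        } else {
            context.res = { status: 403 };
        }
        return;
    }
    // POST requests carry the page feed events (posts and comments) that the
    // real FBListener forwards to your Content Moderator workflows.
    context.res = { status: 200 };
};
```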
+## Configure the Facebook page and App
+
+1. Create a Facebook App.
+
+ ![facebook developer page](images/facebook-developer-app.png)
+
+ 1. Navigate to the [Facebook developer site](https://developers.facebook.com/)
+ 1. Go to **My Apps**.
+ 1. Add a New App.
+ 1. Provide a name
+ 1. Select **Webhooks -> Set Up**
+ 1. Select **Page** in the dropdown menu and select **Subscribe to this object**
+ 1. Provide the **FBListener Url** as the Callback URL and the **Verify Token** you configured under the **Function App Settings**
+ 1. Once subscribed, scroll down to feed and select **subscribe**.
+ 1. Select the **Test** button of the **feed** row to send a test message to your FBListener Azure Function, then hit the **Send to My Server** button. You should see the request being received on your FBListener.
+
+1. Create a Facebook Page.
+
+ > [!IMPORTANT]
+ > In 2018, Facebook implemented a more strict vetting of Facebook apps. You will not be able to execute sections 2, 3 and 4 if your app has not been reviewed and approved by the Facebook review team.
+
+ 1. Navigate to [Facebook](https://www.facebook.com/pages) and create a **new Facebook Page**.
+ 1. Allow the Facebook App to access this page by following these steps:
+ 1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/).
+ 1. Select **Application**.
+ 1. Select **Page Access Token**, Send a **Get** request.
+ 1. Select the **Page ID** in the response.
+ 1. Now append **/subscribed_apps** to the URL and send a **Get** request (the response is empty).
+ 1. Submit a **Post** request. You get the response as **success: true**.
+
+3. Create a non-expiring Graph API access token.
+
+ 1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/).
+ 2. Select the **Application** option.
+ 3. Select the **Get User Access Token** option.
+ 4. Under the **Select Permissions**, select **manage_pages** and **publish_pages** options.
+ 5. We will use the **access token** (Short Lived Token) in the next step.
+
+4. We use Postman for the next few steps.
+
+ 1. Open **Postman** (or get it [here](https://www.getpostman.com/)).
+ 2. Import these two files:
+ 1. [Postman Collection](https://github.com/MicrosoftContentModerator/samples-fbPageModeration/blob/master/Facebook%20Permanant%20Page%20Access%20Token.postman_collection.json)
+ 2. [Postman Environment](https://github.com/MicrosoftContentModerator/samples-fbPageModeration/blob/master/FB%20Page%20Access%20Token%20Environment.postman_environment.json)
+ 3. Update these environment variables:
+
+ | Key | Value |
+ | -- |-|
+ | appId | Insert your Facebook App Identifier here |
+ | appSecret | Insert your Facebook App's secret here |
+ | short_lived_token | Insert the short lived user access token you generated in the previous step |
+ 4. Now run the 3 APIs listed in the collection:
+ 1. Select **Generate Long-Lived Access Token** and click **Send**.
+ 2. Select **Get User ID** and click **Send**.
+ 3. Select **Get Permanent Page Access Token** and click **Send**.
+ 5. Copy the **access_token** value from the response and assign it to the App setting, **fb:PageAccessToken**.
+
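As an alternative to the Postman collection, the same long-lived token exchange can be scripted. Here is a hypothetical Node.js sketch against the documented `oauth/access_token` endpoint; the Graph API version and the placeholder values are assumptions:

```javascript
// Exchange a short-lived user token for a long-lived one.
const https = require("https");

const appId = "<appId>";                       // placeholder
const appSecret = "<appSecret>";               // placeholder
const shortLivedToken = "<short_lived_token>"; // placeholder

// Graph API version shown is an assumption; use the version your app targets.
const url = "https://graph.facebook.com/v3.2/oauth/access_token" +
    "?grant_type=fb_exchange_token" +
    "&client_id=" + appId +
    "&client_secret=" + appSecret +
    "&fb_exchange_token=" + shortLivedToken;

https.get(url, res => {
    let body = "";
    res.on("data", chunk => (body += chunk));
    res.on("end", () => {
        // The response JSON includes access_token and expires_in.
        console.log(JSON.parse(body).access_token);
    });
});
```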
+The solution sends all images and text posted on your Facebook page to Content Moderator. Then the workflows that you configured earlier are invoked. The content that does not pass your criteria defined in the workflows gets passed to reviews within the review tool. The rest of the content gets published automatically.
+
+## Next steps
+
+In this tutorial, you set up a program to analyze the posts and comments on a Facebook page, moderate them with Content Moderator workflows, and allow a review team to make informed decisions about content moderation. Next, learn more about the details of image moderation.
+
+> [!div class="nextstepaction"]
+> [Image moderation](./image-moderation-api.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/whats-new.md
Learn what's new in the service. These items include release notes, videos, blog
* [Continuous integration tools](developer-reference-resource.md#continuous-integration-tools) * Workshop - learn best practices for [_Natural Language Understanding_ (NLU) using LUIS](developer-reference-resource.md#workshops) * [Customer managed keys](./encrypt-data-at-rest.md) - encrypt all the data you use in LUIS by using your own key
-* [AI show](https://channel9.msdn.com/Shows/AI-Show/New-Features-in-Language-Understanding) (video) - see the new features in LUIS
+* [AI show](/Shows/AI-Show/New-Features-in-Language-Understanding) (video) - see the new features in LUIS
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/whats-new.md
Learn what's new with QnA Maker.
* New version of QnA Maker launched in free Public Preview. Read more [here](https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575).
-> [!VIDEO https://channel9.msdn.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player]
+> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player]
* Simplified resource creation
* End to End region support
* Deep learnt ranking model
cognitive-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams.md
Title: Stream codec-compressed audio with the Speech SDK - Speech service
-description: Learn how to stream compressed audio to the Speech service with the Speech SDK. Available for C++, C#, and Java for Linux, Java in Android and Objective-C in iOS.
+description: Learn how to stream compressed audio to the Speech service with the Speech SDK.
++ Previously updated : 03/30/2020 Last updated : 01/13/2022 ms.devlang: cpp, csharp, golang, java, python zone_pivot_groups: programming-languages-set-twenty-eight
-# Use codec-compressed audio input
+# Stream codec-compressed audio
-The Speech SDK and Speech CLI can accept compressed audio formats using GStreamer. GStreamer decompresses the audio before it's sent over the wire to the Speech service as raw PCM.
+The Speech SDK and Speech CLI use GStreamer to support different kinds of input audio formats. GStreamer decompresses the audio before it's sent over the wire to the Speech service as raw PCM.
++
+## Installing GStreamer
+
+Choose a platform for installation instructions.
Platform | Languages | Supported GStreamer version
:--- | :--- | :---:
+Android | Java | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/android/1.18.3/)
Linux | C++, C#, Java, Python, Go | [Supported Linux distributions and target architectures](~/articles/cognitive-services/speech-service/speech-sdk.md)
Windows (excluding UWP) | C++, C#, Java, Python | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/windows/1.18.3/msvc/gstreamer-1.0-msvc-x86_64-1.18.3.msi)
-Android | Java | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/android/1.18.3/)
-## Installing GStreamer on Linux
+### [Android](#tab/android)
+
+See [GStreamer configuration by programming language](#gstreamer-configuration) for the details about building libgstreamer_android.so.
+
+For more information, see [Android installation instructions](https://gstreamer.freedesktop.org/documentation/installing/for-android-development.html?gi-language=c).
+
+### [Linux](#tab/linux)
For more information, see [Linux installation instructions](https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c).
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly
```
-## Installing GStreamer on Windows
+### [Windows](#tab/windows)
Make sure that packages of the same platform (x64 or x86) are installed. For example, if you installed the x64 package for Python, then you need to install the x64 GStreamer package. The instructions below are for the x64 packages.
Make sure that packages of the same platform (x64 or x86) are installed. For exa
For more information about GStreamer, see [Windows installation instructions](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c).
-## Using GStreamer in Android
-Look at the Java tab above for the details about building libgstreamer_android.so
+***
-For more information see [Android installation instructions](https://gstreamer.freedesktop.org/documentation/installing/for-android-development.html?gi-language=c).
-
-## Speech SDK version required for compressed audio input
-* Speech SDK version 1.10.0 or later is required for RHEL 8 and CentOS 8
-* Speech SDK version 1.11.0 or later is required for Windows.
-* Speech SDK version 1.16.0 or later for the latest GStreamer on Windows and Android.
-
+## GStreamer configuration
-## GStreamer required to handle compressed audio
+> [!NOTE]
+> GStreamer configuration requirements vary by programming language. For details, choose your programming language at the top of this page. The contents of this section will be updated.
::: zone pivot="programming-language-csharp"

[!INCLUDE [prerequisites](includes/how-to/compressed-audio-input/csharp/prerequisites.md)]
For more information see [Android installation instructions](https://gstreamer.f
[!INCLUDE [prerequisites](includes/how-to/compressed-audio-input/go/prerequisites.md)]

::: zone-end
-## Example code using codec compressed audio input
+## Example
::: zone pivot="programming-language-csharp"

[!INCLUDE [prerequisites](includes/how-to/compressed-audio-input/csharp/examples.md)]
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
Title: Speech phonetic sets - Speech service
+ Title: Speech phonetic alphabets - Speech service
-description: Learn how to the Speech service phonetic alphabet maps to the International Phonetic Alphabet (IPA), and when to use which set.
+description: Speech service phonetic alphabet and International Phonetic Alphabet (IPA) examples.
-+ Previously updated : 03/04/2020 Last updated : 01/13/2022
-# Speech service phonetic sets
+# SSML phonetic alphabets
-The Speech service defines phonetic alphabets ("phone sets" for short), consisting of seven languages; `en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`. The Speech service phone sets typically map to the <a href="https://en.wikipedia.org/wiki/International_Phonetic_Alphabet" target="_blank">International Phonetic Alphabet (IPA) </a>. Speech service phone sets are used in conjunction with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md), as part of the Text-to-speech service offering. In this article, you'll learn how these phone sets are mapped and when to use which phone set.
+Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve pronunciation of Text-to-speech voices. See [Use phonemes to improve pronunciation](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation) to learn when and how to use each alphabet.
-# [en-US](#tab/en-US)
+## Speech service phonetic alphabet
-### English suprasegmentals
+For some locales, the Speech service defines its own phonetic alphabets that typically map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The 7 locales that support `sapi` are: `en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`.
-| Example 1 (Onset for consonant, word initial for vowel) | Example 2 (Intervocalic for consonant, word medial nucleus for vowel) | Example 3 (Coda for consonant, word final for vowel) | Comments |
+You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
+
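As an illustration, here is a minimal JavaScript sketch of using a `phoneme` element in SSML, assuming the `microsoft-cognitiveservices-speech-sdk` package and placeholder key, region, and voice values:

```javascript
// Synthesize "tomato" with an explicit IPA pronunciation.
const sdk = require("microsoft-cognitiveservices-speech-sdk");

const speechConfig = sdk.SpeechConfig.fromSubscription("<subscription-key>", "<region>");
const synthesizer = new sdk.SpeechSynthesizer(speechConfig);

// The phoneme element overrides the default pronunciation; alphabet may be
// "ipa" or, for the seven locales listed above, "sapi".
const ssml = `
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    <phoneme alphabet="ipa" ph="təˈmeɪtoʊ">tomato</phoneme>
  </voice>
</speak>`;

synthesizer.speakSsmlAsync(
  ssml,
  result => { synthesizer.close(); },
  error => { console.log(error); synthesizer.close(); }
);
```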
+### [en-US](#tab/en-US)
+
+#### English suprasegmentals
+
+|Example 1 (Onset for consonant, word initial for vowel)|Example 2 (Intervocalic for consonant, word medial nucleus for vowel)|Example 3 (Coda for consonant, word final for vowel)|Comments|
|--|--|--|--|
| burger /b er **1** r - g ax r/ | falafel /f ax - l aa **1** - f ax l/ | guitar /g ih - t aa **1** r/ | Speech service phone set puts stress after the vowel of the stressed syllable |
| inopportune /ih **2** - n aa - p ax r - t uw 1 n/ | dissimilarity /d ih - s ih **2** - m ax - l eh 1 - r ax - t iy/ | workforce /w er 1 r k - f ao **2** r s/ | Speech service phone set puts stress after the vowel of the sub-stressed syllable |
-### English vowels
+#### English vowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|-||--|--|
The Speech service defines phonetic alphabets ("phone sets" for short), consisti
| y uw | `ju` | **Yu**ma | h**u**man | f**ew** |
| ax | `ə` | **a**go | wom**a**n | are**a** |
-### English R-colored vowels
+#### English R-colored vowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|-|--|-||
The Speech service defines phonetic alphabets ("phone sets" for short), consisti
| er r | `ɝ` | **ear**th | b**ir**d | f**ur** |
| ax r | `ɚ` | | all**er**gy | supp**er** |
-### English Semivowels
+#### English Semivowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|||--|
| w | `w` | **w**ith, s**ue**de | al**w**ays | |
| y | `j` | **y**ard, f**e**w | on**i**on | |
-### English aspirated oral stops
+#### English aspirated oral stops
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|-|--|-||
The Speech service defines phonetic alphabets ("phone sets" for short), consisti
| k | `k` | **c**ut | sla**ck**er | Ira**q** |
| g | `g` | **g**o | a**g**o | dra**g** |
-### English Nasal stops
+#### English Nasal stops
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|-|||-|
The Speech service defines phonetic alphabets ("phone sets" for short), consisti
| n | `n` | **n**o, s**n**ow | te**n**t | chicke**n** |
| ng | `ŋ` | | li**n**k | s**ing** |
-### English fricatives
+#### English fricatives
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|-|-|||
The Speech service defines phonetic alphabets ("phone sets" for short), consisti
| zh | `ʒ` | **J**acques | plea**s**ure | gara**g**e |
| h | `h` | **h**elp | en**h**ance | a-**h**a! |
-### English affricates
+#### English affricates
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|--|--||
| ch | `tʃ` | **ch**in | fu**t**ure | atta**ch** |
| jh | `dʒ` | **j**oy | ori**g**inal | oran**g**e |
-### English approximants
+#### English approximants
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|--||--|
| l | `l` | **l**id, g**l**ad | pa**l**ace | chi**ll** |
| r | `ɹ` | **r**ed, b**r**ing | bo**rr**ow | ta**r** |
-# [fr-FR](#tab/fr-FR)
+### [fr-FR](#tab/fr-FR)
-### French suprasegmentals
+#### French suprasegmentals
The Speech service phone set puts stress after the vowel of the stressed syllable, however; the `fr-FR` Speech service phone set doesn't support the IPA substress 'ˌ'. If the IPA substress is needed, you should use the IPA directly.
-### French vowels
+#### French vowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|-||--|--|
The Speech service phone set puts stress after the vowel of the stressed syllabl
| uw | `u` | **ou**trage | intr**ou**vable | **ou** |
| uy | `y` | **u**ne | p**u**nir | él**u** |
-### French consonants
+#### French consonants
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|-|-||-|
The Speech service phone set puts stress after the vowel of the stressed syllabl
> [!TIP]
> The `fr-FR` Speech service phone set doesn't support the following French liaisons: `n‿`, `t‿`, and `z‿`. If they are needed, you should consider using the IPA directly.
-# [de-DE](#tab/de-DE)
+### [de-DE](#tab/de-DE)
-### German suprasegmentals
+#### German suprasegmentals
| Example 1 (Onset for consonant, word initial for vowel) | Example 2 (Intervocalic for consonant, word medial nucleus for vowel) | Example 3 (Coda for consonant, word final for vowel) | Comments |
|--|--|--|--|
| anders /a **1** n - d ax r s/ | Multiplikationszeichen /m uh l - t iy - p l iy - k a - ts y ow **1** n s - ts ay - c n/ | Biologie /b iy - ow - l ow - g iy **1**/ | Speech service phone set puts stress after the vowel of the stressed syllable |
| Allgemeinwissen /a **2** l - g ax - m ay 1 n - v ih - s n/ | Abfallentsorgungsfirma /a 1 p - f a l - ^ eh n t - z oh **2** ax r - g uh ng s - f ih ax r - m a/ | Computertomographie /k oh m - p y uw 1 - t ax r - t ow - m ow - g r a - f iy **2**/ | Speech service phone set puts stress after the vowel of the sub-stressed syllable |
-### German vowels
+#### German vowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|--||||
The Speech service phone set puts stress after the vowel of the stressed syllabl
<a id="de-v-2"></a> **2** *Word-initially only in words of foreign origin such as **A**ppointment. Syllable-initially in: 'v**e**rstauen.*
-### German diphthong
+#### German diphthong
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|-|--|--|--|
The Speech service phone set puts stress after the vowel of the stressed syllabl
| aw | `au` | **au**ßen | abb**au**st | St**au** | | oy | `ɔy`, `ɔʏ̯` | **Eu**phorie | tr**äu**mt | sch**eu** |
-### German semivowels
+#### German semivowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|--|--||
| ax r | `ɐ` | | abänd**er**n | lock**er** |
-### German consonants
+#### German consonants
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|--|--|--|--|
The Speech service phone set puts stress after the vowel of the stressed syllabl
<a id="de-c-12"></a> **12** *Word-initially only in words of foreign origin, such as: **J**uan. Syllable-initially also in words like: Ba**ch**erach.*<br>
-### German oral consonants
+#### German oral consonants
| `sapi` | `ipa` | Example 1 |
|--|-|--|
| ^ | `ʔ` | beachtlich /b ax - ^ a 1 x t - l ih c/ |

> [!NOTE]
-> We need to add a [gs\] phone between two distinct vowels, except the two vowels are a genuine diphthong. This oral consonant is a glottal stop, for more information, see <a href="http://en.wikipedia.org/wiki/Glottal_stop" target="_blank">glottal stop <span class="docon docon-navigate-external x-hidden-focus"></a></a>.
+> We need to add a [gs\] phone between two distinct vowels, except when the two vowels form a genuine diphthong. This oral consonant is a glottal stop; for more information, see [glottal stop](http://en.wikipedia.org/wiki/Glottal_stop).
-# [es-ES](#tab/es-ES)
+### [es-ES](#tab/es-ES)
-### Spanish vowels
+#### Spanish vowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|-|--||--|
The Speech service phone set puts stress after the vowel of the stressed syllabl
| o | `o` | **o**caso | enc**o**ntrar | ocasenc**o** |
| u | `u` | **u**sted | p**u**nta | Juanl**u** |
-### Spanish consonants
+#### Spanish consonants
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 | |--|||-|-|
The Speech service phone set puts stress after the vowel of the stressed syllabl
> [!TIP] > The `es-ES` Speech service phone set doesn't support the following Spanish IPA, `β`, `ð`, and `ɣ`. If they are needed, you should consider using the IPA directly.
-# [zh-CN](#tab/zh-CN)
+### [zh-CN](#tab/zh-CN)
-The Speech service phone set for `zh-CN` is based on the native phone <a href="https://en.wikipedia.org/wiki/Pinyin" target="_blank">Pinyin </a> set.
+The Speech service phone set for `zh-CN` is based on the native phone [Pinyin](https://en.wikipedia.org/wiki/Pinyin).
-### Tone
+#### Tone
| Pinyin tone | `sapi` | Character example | |-|--|-|
The Speech service phone set for `zh-CN` is based on the native phone <a href="h
| 累进 | lei 3 -jin 4 |
| 西宅巷 | xi 1 - zhai 2 - xiang 4 |
-# [zh-TW](#tab/zh-TW)
+### [zh-TW](#tab/zh-TW)
-The Speech service phone set for `zh-TW` is based on the native phone <a href="https://en.wikipedia.org/wiki/Bopomofo" target="_blank">Bopomofo </a> set.
+The Speech service phone set for `zh-TW` is based on the native phone [Bopomofo](https://en.wikipedia.org/wiki/Bopomofo).
-### Tone
+#### Tone
| Speech service tone | Bopomofo tone | Example (word) | Speech service phones | Bopomofo | Pinyin (拼音) | |||-|--|-|-|
The Speech service phone set for `zh-TW` is based on the native phone <a href="h
| 然后 | ㄖㄢˊㄏㄡˋ |
| 剪掉 | ㄐㄧㄢˇㄉㄧㄠˋ |
-# [ja-JP](#tab/ja-JP)
+### [ja-JP](#tab/ja-JP)
-The Speech service phone set for `ja-JP` is based on the native phone <a href="https://en.wikipedia.org/wiki/Kana" target="_blank">Kana </a> set.
+The Speech service phone set for `ja-JP` is based on the native phone [Kana](https://en.wikipedia.org/wiki/Kana) set.
-### Stress
+#### Stress
| `sapi` | `ipa` | |--|-|
The Speech service phone set for `ja-JP` is based on the native phone <a href="h
| 所有者 | ショュ'ウ?ャ | ɕjojɯˈwɯɕja |
| 最適化 | サィテキカ+ | sajitecikaˌ |

***
+## International Phonetic Alphabet
+
+For the locales below, the Speech service uses the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet).
+
+You set `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
+
+These locales all use the same IPA stress and syllables described here.
+
+|`ipa` | Symbol |
+|-|-|
+| `ˈ` | Primary stress |
+| `ˌ` | Secondary stress |
+| `.` | Syllable boundary |
++
+Select a tab for the IPA phonemes specific to each locale.
+
+### [ca-ES](#tab/ca-ES)
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-|-||-|
+| `a` | **a**men | am**a**ro | est**à** |
+| `ɔ` | **o**dre | ofert**o**ri | microt**ò** |
+| `ə` | **e**stan | s**e**ré | aigu**a** |
+| `b` | **b**aba | do**b**la | |
+| `β` | **v**ià | ba**b**a | |
+| `t͡ʃ` | **tx**adià | ma**tx**ucs | fa**ig** |
+| `d̪` | **d**edicada | con**d**uïa | navida**d** |
+| `ð` | **Th**e_Sun | de**d**icada | trinida**d** |
+| `e` | **é**rem | f**e**ta | ser**é** |
+| `ɛ` | **e**cosistema | incorr**e**cta | hav**er** |
+| `f` | **f**acilitades | a**f**ectarà | àgra**f** |
+| `g` | **g**racia | con**g**ratula | |
+| `ɣ` | | ai**g**ua | |
+| `i` | **i**tinerants | it**i**nerants | zomb**i** |
+| `j` | **hi**ena | espla**i**a | cofo**i** |
+| `d͡ʒ` | **dj**akarta | composta**tg**e | geor**ge** |
+| `k` | **c**urós | dode**c**à | doble**c** |
+| `l` | **l**aberint | mio**l**ar | preva**l** |
+| `ʎ` | **ll**igada | mi**ll**orarà | perbu**ll** |
+| `m` | **m**acadàmies | fe**m**ar | subli**m** |
+| `n` | **n**ecessaris | sa**n**itaris | alterame**nt** |
+| `ŋ` | | algo**n**quí | albe**nc** |
+| `ɲ` | **ny**asa | reme**n**jar | alema**ny** |
+| `o` | **o**mbra | ret**o**ndre | omissi**ó** |
+| `p` | **p**egues | este**p**a | ca**p** |
+| `ɾ` | | ca**r**o | càrte**r** |
+| `r` | **r**abada | ca**rr**o | lofòfo**r** |
+| `s` | **c**eri | cur**s**ar | cu**s** |
+| `ʃ` | **x**acar | micro**x**ip | midra**ix** |
+| `t̪` | **t**abacaires | es**t**ratifica | debatu**t** |
+| `θ` | **c**eará | ve**c**inos | Álvare**z** |
+| `u` | **u**niversitaris | candidat**u**res | cron**o** |
+| `w` | **w**estfalià | ina**u**gurar | inscri**u** |
+| `x` | **j**uanita | mu**j**eres | heinri**ch** |
+| `z` | **z**elar | bra**s**ils | alian**ze** |
++
+### [en-GB](#tab/en-GB)
+
+#### Vowels
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||--|-|
+| `ɑː` | | f**a**st | br**a** |
+| `æ` | | f**a**t | |
+| `ʌ` | | b**u**g | |
+| `ɛə` | | | h**air** |
+| `aʊ` | **ou**t | m**ou**th | h**ow** |
+| `ə` | **a** | | driv**er** |
+| `aɪ` | | f**i**ve | |
+| `ɛ` | **e**gg | dr**e**ss | |
+| `ɜː` | **er**nest | sh**ir**t | f**ur** |
+| `eɪ` | **ai**lment | l**a**ke | p**ay** |
+| `ɪ` | | add**i**ng | |
+| `ɪə` | | b**ear**d | h**ear** |
+| `iː` | **ea**t | s**ee**d | s**ee** |
+| `ɒ` | | p**o**d | |
+| `ɔː` | | d**aw**n | |
+| `əʊ` | | c**o**de | pill**ow** |
+| `ɔɪ` | | p**oi**nt | b**oy** |
+| `ʊ` | | l**oo**k | |
+| `ʊə` | | | t**our** |
+| `uː` | | f**oo**d | t**wo** |
+
+#### Consonants
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||--|-|
+| `b ` | **b**ike | ri**bb**on | ri**b** |
+| `tʃ ` | **ch**allenge | na**t**ure | ri**ch** |
+| `d ` | **d**ate | ca**dd**y | sli**d** |
+| `ð` | **th**is | fa**th**er | brea**the** |
+| `f ` | **f**ace | lau**gh**ing | enou**gh** |
+| `g ` | **g**old | bra**gg**ing | be**g** |
+| `h ` | **h**urry | a**h**ead | |
+| `j` | **y**es | | |
+| `dʒ` | **g**in | ba**dg**er | bri**dge** |
+| `k ` | **c**at | lu**ck**y | tru**ck** |
+| `l ` | **l**eft | ga**ll**on | fi**ll** |
+| `m ` | **m**ile | li**m**it | ha**m** |
+| `n ` | **n**ose | pho**n**etic | ti**n** |
+| `ŋ ` | | si**ng**er | lo**ng** |
+| `p ` | **p**rice | su**p**er | ti**p** |
+| `ɹ` | **r**ate | ve**r**y | |
+| `s ` | **s**ay | si**ss**y | pa**ss** |
+| `ʃ ` | **sh**op | ca**sh**ier | lea**sh** |
+| `t ` | **t**op | ki**tt**en | be**t** |
+| `θ` | **th**eatre | ma**the**matics | brea**th** |
+| `v` | **v**ery | li**v**er | ha**ve** |
+| `w ` | **w**ill | | |
+| `z ` | **z**ero | bli**zz**ard | ro**se** |
++
+### [es-MX](#tab/es-MX)
+
+#### Vowels
+
+| `ipa` | Example 1 | Example 2 | Example 3|
+|-||-|-|
+| `ɑ` | **a**zúcar | tom**a**te | rop**a** |
+| `e` | **e**so | rem**e**ro | am**é** |
+| `i` | h**i**lo | liqu**i**do | ol**í** |
+| `o` | h**o**gar | ol**o**te | cas**o** |
+| `u` | **u**no | ning**u**no | tab**ú** |
+
+#### Consonants
+
+| `ipa` | Example 1 | Example 2 | Example 3|
+|-||-|-|
+| `b` | **b**ote | | |
+| `β` | ór**b**ita | envol**v**ente | |
+| `t͡ʃ` | **ch**ico | ha**ch**a | |
+| `d` | **d**átil | | |
+| `ð` | or**d**en | o**d**a | |
+| `f` | **f**oco | o**f**icina | |
+| `g` | **g**ajo | | |
+| `ɣ` | a**g**ua | ho**gu**era | |
+| `j` | **i**odo | cal**i**ente | re**y** |
+| `j͡j` | | o**ll**a | |
+| `k` | **c**asa | á**c**aro | |
+| `l` | **l**oco | a**l**a | |
+| `ʎ` | **ll**ave | en**y**ugo | |
+| `m` | **m**ata | a**m**ar | |
+| `n` | **n**ada | a**n**o | |
+| `ɲ` | **ñ**oño | a**ñ**o | |
+| `p` | **p**apa | pa**p**a | |
+| `ɾ` | | a**r**o | |
+| `r` | **r**ojo | pe**rr**o | |
+| `s` | **s**illa | a**s**a | |
+| `t` | **t**omate | | sof**t** |
+| `w` | h**u**evo | | |
+| `x` | **j**arra | ho**j**a | |
++
+### [it-IT](#tab/it-IT)
+
+#### Vowels
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||--|--|
+| `a` | **a**mo | s**a**no | scort**a** |
+| `ai` | **ai**cs | abb**ai**no | m**ai** |
+| `aʊ` | **au**dio | r**au**co | b**au** |
+| `e` | **e**roico | v**e**nti / numb**e**r | sapor**e** |
+| `ɛ` | **e**lle | avv**e**nto | lacch**è** |
+| `ej` | **ei**ra | em**ai**l | l**ei** |
+| `ɛu` | **eu**ro | n**eu**ro | |
+| `ei` | | as**ei**tà | scultor**ei** |
+| `eu` | **eu**ropeo | f**eu**dale | |
+| `i` | **i**taliano | v**i**no | sol**i** |
+| `u` | **u**nico | l**u**na | zeb**ù** |
+| `o` | **o**besità | stra**o**rdinari | amic**o** |
+| `ɔ` | **o**tto | b**o**tte / str**o**kes | per**ò** |
+| `oj` | | oppi**oi**di | |
+| `oi` | **oi**bò | intellettual**oi**de | Gameb**oy** |
+| `ou` | | sh**ow** | talksh**ow** |
+
+#### Consonants
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||--|--|
+| `b` | **b**ene | e**b**anista | Euroclu**b** |
+| `bː` | | go**bb**a | |
+| `ʧ` | **c**enare | a**c**ido | fren**ch** |
+| `tʃː` | | bra**cc**io | |
+| `kː` | | pa**cc**o | Innsbru**ck** |
+| `d` | **d**ente | a**d**orare | interlan**d** |
+| `dː` | | ca**dd**e | |
+| `ʣ` | **z**ero | or**z**o | |
+| `ʣː` | | me**zz**o | |
+| `f` | **f**ame | a**f**a | ale**f** |
+| `fː` | | be**ff**a | blu**ff** |
+| `ʤ` | **g**ente | a**g**ire | bei**ge** |
+| `ʤː` | | o**gg**i | |
+| `g` | **g**ara | al**gh**e | smo**g** |
+| `gː` | | fu**gg**a | Zue**gg** |
+| `ʎ` | **gl**i | ammira**gl**i | |
+| `ʎː` | | fo**gl**ia | |
+| `ɲː` | | ba**gn**o | |
+| `ɲ` | **gn**occo | padri**gn**o | Montai**gne** |
+| `j` | **i**eri | p**i**ede | freewif**i** |
+| `k` | **c**aro | an**ch**e | ti**c** ta**c** |
+| `l` | **l**ana | a**l**ato | co**l** |
+| `lː` | | co**ll**a | fu**ll** |
+| `m` | **m**ano | a**m**are | Ada**m** |
+| `mː` | | gra**mm**o | |
+| `n` | **n**aso | la**n**a | no**n** |
+| `nː` | | pa**nn**a | |
+| `p` | **p**ane | e**p**ico | sto**p** |
+| `pː` | | co**pp**a | |
+| `ɾ` | **r**ana | moto**r**e | pe**r** |
+| `r.r` | | ca**rr**o | Sta**rr** |
+| `s` | **s**ano | ca**s**cata | lapi**s** |
+| `sː` | | ca**ss**a | cordle**ss** |
+| `ʃ` | **sc**emo | Gram**sc**i | sla**sh** |
+| `ʃː` | | a**sc**ia | fich**es** |
+| `t` | **t**ana | e**t**erno | al**t** |
+| `tː` | | zi**tt**o | |
+| `ʦ` | **ts**unami | turbolen**z**a | subtes**ts** |
+| `ʦː` | | bo**zz**a | |
+| `v` | **v**ento | a**v**aro | Asimo**v** |
+| `vː` | | be**vv**i | |
+| `w` | **u**ovo | d**u**omo | Marlo**we** |
+
+### [pt-BR](#tab/pt-BR)
+
+#### Vowels
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-|--||--|
+| `i` | **i**lha | f**i**car | com**i** |
+| `ĩ` | **in**tacto | p**in**tar | aberd**een** |
+| `ɑ` | **á**gua | d**a**da | m**á** |
+| `ɔ` | **o**ra | p**o**rta | cip**ó** |
+| `u` | **u**fanista | m**u**la | per**u** |
+| `ũ` | **un**s | p**un**gente | k**uhn** |
+| `o` | **o**rtopedista | f**o**fo | av**ô** |
+| `e` | **e**lefante | el**e**fante | voc**ê** |
+| `ɐ̃` | **an**ta | c**an**ta | amanh**ã** |
+| `ɐ` | **a**qui | am**a**ciar | dad**a** |
+| `ɛ` | **e**la | s**e**rra | at**é** |
+| `ẽ` | **en**dorfina | p**en**der | |
+| `õ` | **on**tologia | c**on**to | |
+
+#### Consonants
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-|--||--|
+| `w̃` | | | atualizaçã**o** |
+| `w` | **w**ashington | ág**u**a | uso**u** |
+| `p` | **p**ato | ca**p**ital | |
+| `b` | **b**ola | ca**b**eça | |
+| `t` | **t**ato | ra**t**o | |
+| `d` | **d**ado | ama**d**o | |
+| `g` | **g**ato | mara**g**ato | |
+| `m` | **m**ato | co**m**er | |
+| `n` | **n**o | a**n**o | |
+| `ŋ` | **nh**oque | ni**nh**o | |
+| `f` | **f**aca | a**f**ago | |
+| `v` | **v**aca | ca**v**ar | |
+| `ɹ` | | pa**r**a | ama**r** |
+| `s` | **s**atisfeito | amas**s**ado | casado**s** |
+| `z` | **z**ebra | a**z**ar | |
+| `ʃ` | **ch**eirar | ma**ch**ado | |
+| `ʒ` | **j**aca | in**j**usta | |
+| `x` | **r**ota | ca**rr**eta | |
+| `tʃ` | **t**irar | a**t**irar | |
+| `dʒ` | **d**ia | a**d**iar | |
+| `l` | **l**ata | a**l**eto | |
+| `ʎ` | **lh**ama | ma**lh**ado | |
+| `j̃` | | inabalavelme**n**te | hífe**n** |
+| `j` | | ca**i**xa | sa**i** |
+| `k` | **c**asa | ensa**c**ado | |
++
+### [pt-PT](#tab/pt-PT)
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-|-|--||
+| `a` | **á**bdito | consul**a**r | medir**á** |
+| `ɐ` | **a**bacaxi | dom**a**ção | long**a** |
+| `ɐ͡j` | **ei**dético | dir**ei**ta | detect**ei** |
+| `ɐ̃` | **an**verso | viaj**an**te | af**ã** |
+| `ɐ͡j̃`| **an**gels | viag**en**s | tamb**ém** |
+| `ɐ͡w̃`| **hão** | significaç**ão**zinha | gab**ão** |
+| `ɐ͡w` | | s**au**dar | hell**o** |
+| `a͡j` | **ai**rosa | cultur**ai**s | v**ai** |
+| `ɔ` | **ho**ra | dep**ó**sito | l**ó** |
+| `ɔ͡j` | **ói**s | her**ói**co | d**ói** |
+| `a͡w` | **ou**tlook | inc**au**to | p**au** |
+| `ə` | **e**xtremo | sapr**e**mar | noit**e** |
+| `b` | **b**acalhau | ta**b**aco | clu**b** |
+| `d` | **d**ado | da**d**o | ban**d** |
+| `ɾ` | **r**ename | ve**r**ás | chuta**r** |
+| `e` | **e**clipse | hav**e**r | buff**et** |
+| `ɛ` | **e**co | hib**é**rnios | pat**é** |
+| `ɛ͡w` | | pirin**éu**s | escarc**éu** |
+| `ẽ` | **em**baçado | dirim**en**te | ám**en** |
+| `e͡w` | **eu** | d**eu**s | beb**eu** |
+| `f` | **f**im | e**f**icácia | gol**f** |
+| `g` | **g**adinho | ape**g**o | blo**g** |
+| `i` | **i**greja | aplaud**i**do | escrev**i** |
+| `ĩ` | **im**paciente | esp**in**çar | manequ**im** |
+| `i͡w` | | n**iu**e | garant**iu** |
+| `j` | **i**ode | desassoc**i**ado | substitu**i** |
+| `k` | **k**iwi | trafi**c**ado | sna**ck** |
+| `l` | **l**aborar | pe**l**ada | fu**ll** |
+| `ɫ` | | po**l**vo | brasi**l** |
+| `ʎ` | **lh**anamente | anti**lh**as | |
+| `m` | **m**aça | a**m**anhã | mode**m** |
+| `n` | **n**utritivo | campa**n**a | sca**n** |
+| `ɲ` | **nh**ambu-grande | toalhi**nh**a | pe**nh** |
+| `o` | **o**fir | consumad**o**r | stacatt**o** |
+| `o͡j` | **oi**rar | n**oi**te | f**oi** |
+| `õ` | **om**brão | barr**on**da | d**om** |
+| `o͡j̃`| | ocupaç**õe**s | exp**õe** |
+| `p` | **p**ai | crá**p**ula | lapto**p** |
+| `ʀ` | **r**ecordar | gue**rr**a | chauffeu**r** |
+| `s` | **s**eco | gro**ss**eira | bo**ss** |
+| `ʃ` | **ch**uva | du**ch**ar | médio**s** |
+| `t` | **t**abaco | pelo**t**a | inpu**t** |
+| `u` | **u**bi | fac**u**ltativo | fad**o** |
+| `u͡j` | **ui**var | arr**ui**vado | f**ui** |
+| `ũ` | **um**bilical | f**un**cionar | fór**um** |
+| `u͡j̃`| | m**ui**to | |
+| `v` | **v**aca | combatí**v**el | pavlo**v** |
+| `w` | **w**affle | restit**u**ir | katofi**o** |
+| `z` | **z**âmbia | pra**z**er | ja**zz** |
++
+### [ru-RU](#tab/ru-RU)
+
+#### Vowels
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||-|-|
+| `a` | **а**дрес | р**а**дость | бед**а** |
+| `ʌ` | **о**блаков | з**а**стенчивость | внучк**а** |
+| `ə` | | ябл**о**чн**о**го | |
+| `ɛ` | **э**пос | б**е**лка | каф**е** |
+| `i` | **и**ней | л**и**ст | соловь**и** |
+| `ɪ` | **и**гра | м**е**дведь | мгновень**е** |
+| `ɨ` | **э**нергия | л**ы**с**ы**й | вес**ы** |
+| `ɔ` | **о**крик | м**о**т | весл**о** |
+| `u` | **у**жин | к**у**ст | пойд**у** |
+
+#### Consonants
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||-|-|
+| `p` | **п**рофессор | по**п**лавок | укро**п** |
+| `pʲ` | **П**етербург | осле**п**ительно | сте**пь** |
+| `b` | **б**ольшой | со**б**ака | |
+| `bʲ` | **б**елый | у**б**едить | |
+| `t` | **т**айна | с**т**аренький | тви**д** |
+| `tʲ` | **т**епло | учи**т**ель | сине**ть** |
+| `d` | **д**оверчиво | не**д**алеко | |
+| `dʲ` | **д**ядя | е**д**иница | |
+| `k` | **к**рыло | ку**к**уруза | кустарни**к** |
+| `kʲ` | **к**ипяток | неяр**к**ий | |
+| `g` | **г**роза | немно**г**о | |
+| `gʲ` | **г**ерань | помо**г**ите | |
+| `x` | **х**ороший | по**х**од | ду**х** |
+| `xʲ` | **х**илый | хи**х**иканье | |
+| `f` | **ф**антазия | шка**ф**ах | кро**в** |
+| `fʲ` | **ф**естиваль | ко**ф**е | вер**фь** |
+| `v` | **в**нучка | сине**в**а | |
+| `vʲ` | **в**ертеть | с**в**ет | |
+| `s` | **с**казочник | ле**с**ной | карапу**з** |
+| `sʲ` | **с**еять | по**с**ередине | зажгли**сь** |
+| `z` | **з**аяц | зве**з**да | |
+| `zʲ` | **з**емляника | со**з**ерцал | |
+| `ʂ` | **ш**уметь | п**ш**ено | мы**шь** |
+| `ʐ` | **ж**илище | кру**ж**евной | |
+| `t͡s` | **ц**елитель | Вене**ц**ия | незнакоме**ц** |
+| `t͡ɕ` | **ч**асы | о**ч**арование | мя**ч** |
+| `ɕː` | **щ**елчок | о**щ**у**щ**ать | ле**щ** |
+| `m` | **м**олодежь | нес**м**отря | то**м** |
+| `mʲ` | **м**еч | ды**м**ить | се**мь** |
+| `n` | **н**ачало | око**н**це | со**н** |
+| `nʲ` | **н**ебо | ли**н**ялый | тюле**нь** |
+| `l` | **л**ужа | до**л**гожитель | ме**л** |
+| `lʲ` | **л**ицо | неда**л**еко | со**ль** |
+| `r` | **р**адость | со**р**ока | дво**р** |
+| `rʲ` | **р**ябина | набе**р**ежная | две**рь** |
+| `j` | **е**сть | ма**я**к | игрушечны**й** |
+
+***
+
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Phonetic alphabets are composed of phones, which are made up of letters, numbers
| Attribute | Description | Required / Optional | |--|-||
-| `alphabet` | Specifies the phonetic alphabet to use when synthesizing the pronunciation of the string in the `ph` attribute. The string specifying the alphabet must be specified in lowercase letters. The following are the possible alphabets that you can specify.<ul><li>`ipa` &ndash; <a href="https://en.wikipedia.org/wiki/International_Phonetic_Alphabet" target="_blank">International Phonetic Alphabet </a></li><li>`sapi` &ndash; [Speech service phonetic alphabet](speech-ssml-phonetic-sets.md)</li><li>`ups` &ndash;<a href="https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm" target="_blank"> Universal Phone Set</a></li></ul><br>The alphabet applies only to the `phoneme` in the element.| Optional |
+| `alphabet` | Specifies the phonetic alphabet to use when synthesizing the pronunciation of the string in the `ph` attribute. The string specifying the alphabet must be specified in lowercase letters. The following are the possible alphabets that you can specify.<ul><li>`ipa` &ndash; [International Phonetic Alphabet](speech-ssml-phonetic-sets.md#speech-service-phonetic-alphabet)</li><li>`sapi` &ndash; [Speech service phonetic alphabet](speech-ssml-phonetic-sets.md#speech-service-phonetic-alphabet)</li><li>`ups` &ndash; [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li></ul><br>The alphabet applies only to the `phoneme` in the element.| Optional |
| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, the Text-to-Speech (TTS) service rejects the entire SSML document and produces none of the speech output specified in the document. | Required if using phonemes. | **Examples**
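As an illustration, here's a minimal C# sketch of using the `phoneme` element with the `ipa` alphabet; the subscription key, region, and voice name are placeholders, and only the Speech SDK's standard `SpeakSsmlAsync` call is assumed:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class PhonemeExample
{
    static async Task Main()
    {
        // Placeholder subscription values; substitute your own key and region.
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        using var synthesizer = new SpeechSynthesizer(config);

        // The ph attribute pins the pronunciation of "tomato" using IPA phones.
        string ssml = @"<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='en-US-JennyNeural'>
    <phoneme alphabet='ipa' ph='təˈmeɪtoʊ'>tomato</phoneme>
  </voice>
</speak>";

        var result = await synthesizer.SpeakSsmlAsync(ssml);
        Console.WriteLine(result.Reason); // SynthesizingAudioCompleted on success
    }
}
```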
To define how multiple entities are read, you can create a custom lexicon, which
</lexicon> ```
-The `lexicon` element contains at least one `lexeme` element. Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `grapheme` element contains text describing the <a href="https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography" target="_blank">orthography </a>. The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term. The `phoneme` element provides text describing how the `lexeme` is pronounced. When `alias` and `phoneme` element are provided with the same `grapheme` element, `alias` has higher priority.
+The `lexicon` element contains at least one `lexeme` element. Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `grapheme` element contains text describing the [orthography](https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography). The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term. The `phoneme` element provides text describing how the `lexeme` is pronounced. When `alias` and `phoneme` element are provided with the same `grapheme` element, `alias` has higher priority.
> [!IMPORTANT] > The `lexeme` element is case sensitive in a custom lexicon. For example, if you only provide a phoneme for `lexeme` 'Hello', it will not work for `lexeme` 'hello'.
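To make the element relationships concrete, here's a sketch of a small lexicon and the SSML that references it, written as C# string literals; the graphemes, phoneme values, and hosting URL are illustrative placeholders only:

```csharp
// A sketch of a custom lexicon (PLS XML) as a C# string. In practice, host this
// file at a publicly accessible URL and reference it from SSML via <lexicon uri="..."/>.
string lexiconXml = @"<?xml version='1.0' encoding='UTF-8'?>
<lexicon version='1.0'
      xmlns='http://www.w3.org/2005/01/pronunciation-lexicon'
      alphabet='ipa' xml:lang='en-US'>
  <lexeme>
    <grapheme>BTW</grapheme>
    <alias>By the way</alias> <!-- alias takes priority if a phoneme is also given -->
  </lexeme>
  <lexeme>
    <grapheme>Benigni</grapheme>
    <phoneme>bɛˈniːnji</phoneme>
  </lexeme>
</lexicon>";

// SSML referencing the hosted lexicon; remember that lexeme matching is case sensitive.
string ssml = @"<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='en-US-JennyNeural'>
    <lexicon uri='https://contoso.blob.core.windows.net/lexicons/custom-lexicon.xml'/>
    BTW, we will arrive on time.
  </voice>
</speak>";
```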
You can subscribe to the `BookmarkReached` event in Speech SDK to get the bookma
# [C#](#tab/csharp)
-For more information, see <a href="/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkreached" target="_blank"> `BookmarkReached` </a>.
+For more information, see [`BookmarkReached`](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkreached).
```csharp synthesizer.BookmarkReached += (s, e) =>
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [C++](#tab/cpp)
-For more information, see <a href="/cpp/cognitive-services/speech/speechsynthesizer#bookmarkreached" target="_blank"> `BookmarkReached` </a>.
+For more information, see [`BookmarkReached`](/cpp/cognitive-services/speech/speechsynthesizer#bookmarkreached).
```cpp synthesizer->BookmarkReached += [](const SpeechSynthesisBookmarkEventArgs& e)
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [Java](#tab/java)
-For more information, see <a href="/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkReached#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_BookmarkReached" target="_blank"> `BookmarkReached` </a>.
+For more information, see [`BookmarkReached`](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkReached#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_BookmarkReached).
```java synthesizer.BookmarkReached.addEventListener((o, e) -> {
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [Python](#tab/python)
-For more information, see <a href="/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#bookmark-reached" target="_blank"> `bookmark_reached` </a>.
+For more information, see [`bookmark_reached`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#bookmark-reached).
```python # The unit of evt.audio_offset is tick (1 tick = 100 nanoseconds), divide it by 10,000 to convert to milliseconds.
Bookmark reached, audio offset: 1462.5ms, bookmark text: flower_2.
# [JavaScript](#tab/javascript)
-For more information, see <a href="/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesizer#bookmarkReached" target="_blank"> `bookmarkReached`</a>.
+For more information, see [`bookmarkReached`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesizer#bookmarkReached).
```javascript synthesizer.bookmarkReached = function (s, e) {
For the example SSML above, the `bookmarkReached` event will be triggered twice,
# [Objective-C](#tab/objectivec)
-For more information, see <a href="/objectivec/cognitive-services/speech/spxspeechsynthesizer#addbookmarkreachedeventhandler" target="_blank"> `addBookmarkReachedEventHandler` </a>.
+For more information, see [`addBookmarkReachedEventHandler`](/objectivec/cognitive-services/speech/spxspeechsynthesizer#addbookmarkreachedeventhandler).
```objectivec [synthesizer addBookmarkReachedEventHandler: ^ (SPXSpeechSynthesizer *synthesizer, SPXSpeechSynthesisBookmarkEventArgs *eventArgs) {
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [Swift](#tab/swift)
-For more information, see <a href="/objectivec/cognitive-services/speech/spxspeechsynthesizer" target="_blank"> `addBookmarkReachedEventHandler` </a>.
+For more information, see [`addBookmarkReachedEventHandler`](/objectivec/cognitive-services/speech/spxspeechsynthesizer).
cognitive-services Improve Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/improve-model.md
After you have reviewed your [model's evaluation](view-model-evaluation.md), you
> [!NOTE] > This guide focuses on data from the [validation set](train-model.md#data-split) that was created during training.
-### Review validation set
+### Review test set
Using Language Studio, you can review how your model performs against how you expected it to perform. You can review predicted and tagged classes for each model you have trained.
Using Language Studio, you can review how your model performs against how you ex
2. Select **Improve model** from the left side menu.
-3. Select **Review validation set**.
+3. Select **Review test set**.
4. Choose your trained model from **Model** drop-down menu.
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: applicat
"value": [ { "displayName": "source1",
- "sourceUri": "https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/overview",
+ "sourceUri": "https://docs.microsoft.com/azure/cognitive-services/qnamaker/overview/overview",
"sourceKind": "url", "lastUpdatedDateTime": "2021-05-01T15:13:22Z" },
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
This documentation contains the following types of articles:
* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways. * The [**conceptual articles**](concepts/health-entity-categories.md) provide in-depth explanations of the service's functionality and features.
-> [!VIDEO https://channel9.msdn.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
+> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
## Features
confidential-computing Confidential Enclave Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
spec:
restartPolicy: Never backoffLimit: 0 ```
+Alternatively, you can use node pool selection for your container deployments through node affinity, as shown below:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: sgx-test
+spec:
+  template:
+    metadata:
+      labels:
+        app: sgx-test
+    spec:
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+            - matchExpressions:
+              - key: agentpool
+                operator: In
+                values:
+                - acc # the name of your confidential computing node pool
+                - acc_second # the name of a second confidential computing node pool
+      containers:
+      - name: sgx-test
+        image: oeciteam/oe-helloworld:1.0
+        resources:
+          limits:
+            kubernetes.azure.com/sgx_epc_mem_in_MiB: 10
+          requests:
+            kubernetes.azure.com/sgx_epc_mem_in_MiB: 10
+      restartPolicy: "Never"
+  backoffLimit: 0
+```
Now use the `kubectl apply` command to create a sample job that will open in a secure enclave, as shown in the following example output:
confidential-computing Confidential Nodes Aks Addon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-addon.md
Azure Kubernetes Service (AKS) provides a plugin for Azure confidential computin
The SGX Device plugin implements the Kubernetes device plugin interface for Enclave Page Cache (EPC) memory. In effect, this plugin makes EPC memory another resource type in Kubernetes. Users can specify limits on EPC just like other resources. Apart from the scheduling function, the device plugin helps assign SGX device driver permissions to confidential workload containers. [A sample implementation of the EPC memory-based deployment](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/helloworld/helm/templates/helloworld.yaml) (`kubernetes.azure.com/sgx_epc_mem_in_MiB`) is available.
-## PSM with SGX quote helper
+## PSW with SGX quote helper
Enclave applications that do remote attestation need to generate a quote. The quote provides cryptographic proof of the identity and the state of the application, along with the enclave's host environment. Quote generation relies on certain trusted software components from Intel, which are part of the SGX Platform Software Components (PSW/DCAP). This PSW is packaged as a daemon set that runs per node. You can use the PSW when requesting attestation quote from enclave apps. Using the AKS provided service helps better maintain the compatibility between the PSW and other SW components in the host. Read the feature details below.
Enclave applications that do remote attestation need to generate a quote. The qu
Intel supports two attestation modes to run the quote generation. For how to choose which type, see the [attestation type differences](#attestation-type-differences). -- **in-proc**: hosts the trusted software components inside the enclave application process
+- **in-proc**: hosts the trusted software components inside the enclave application process. This method is useful when you are performing local attestation (between two enclave apps in a single VM node).
-- **out-of-proc**: hosts the trusted software components outside of the enclave application.
+- **out-of-proc**: hosts the trusted software components outside of the enclave application. This is the preferred method when performing remote attestation.
SGX applications built using Open Enclave SDK by default use in-proc attestation mode. SGX-based applications allow out-of-proc and require extra hosting. These applications expose the required components such as Architectural Enclave Service Manager (AESM), external to the application.
You don't have to check for backward compatibility with PSW and DCAP. The provid
### Out-of-proc attestation for confidential workloads
-The out-of-proc attestation model works for confidential workloads. The quote requestor and quote generation are executed separately, but on the same physical machine. The quote generation happens in a centralized manner and serves requests for QUOTES from all entities. Properly define the interface, and make the interface discoverable for any entity to request quotes.
+The out-of-proc attestation model works for confidential workloads. The quote requestor and quote generation are executed separately, but on the same physical machine. The quote generation happens in a centralized manner and serves requests for QUOTES from all entities. Properly define the interface and make the interface discoverable for any entity to request quotes.
![Diagram of quote requestor and quote generation interface.](./media/confidential-nodes-out-of-proc-attestation/aesmmanager.png)
Each container needs to opt in to use out-of-proc quote generation by setting th
An application can still use the in-proc attestation as before. However, you can't simultaneously use both in-proc and out-of-proc within an application. The out-of-proc infrastructure is available by default and consumes resources.
+> [!NOTE]
+> If you are using Intel SGX wrapper software (OSS/ISV) to run your unmodified containers, the attestation interaction with the hardware is typically handled for your higher-level apps. Refer to the attestation implementation of each provider.
+ ### Sample implementation The Docker file below is a sample for an Open Enclave-based application. Set the `SGX_AESM_ADDR=1` environment variable in the Docker file, or set the variable in the deployment file. Follow this sample for the Docker file and deployment YAML details.
confidential-computing Confidential Nodes Aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-overview.md
# Confidential computing nodes on Azure Kubernetes Service
-[Azure confidential computing](overview.md) allows you to protect your sensitive data while it's in use. The underlying confidential computing infrastructure protects this data from other applications, administrators, and cloud providers with a hardware backed trusted execution container environments. Adding confidential computing nodes allow you to target container application to run in an isolated, hardware protected and attestable environment.
+[Azure confidential computing](overview.md) allows you to protect your sensitive data while it's in use. The underlying confidential computing infrastructure protects this data from other applications, administrators, and cloud providers with hardware-backed trusted execution container environments. Adding confidential computing nodes allows you to target container applications to run in an isolated, hardware-protected, integrity-protected, and attestable environment.
## Overview
-Azure Kubernetes Service (AKS) supports adding [DCsv2 confidential computing nodes](confidential-computing-enclaves.md) powered by Intel SGX. These nodes allow you to run sensitive workloads within a hardware-based trusted execution environment (TEE). TEE's allow user-level code from containers to allocate private regions of memory to execute the code with CPU directly. These private memory regions that execute directly with CPU are called enclaves. Enclaves help protect the data confidentiality, data integrity and code integrity from other processes running on the same nodes. The Intel SGX execution model also removes the intermediate layers of Guest OS, Host OS and Hypervisor thus reducing the attack surface area. The *hardware based per container isolated execution* model in a node allows applications to directly execute with the CPU, while keeping the special block of memory encrypted per container. Confidential computing nodes with confidential containers are a great addition to your zero trust security planning and defense-in-depth container strategy.
+Azure Kubernetes Service (AKS) supports adding [DCsv2 confidential computing nodes](confidential-computing-enclaves.md) powered by Intel SGX. These nodes allow you to run sensitive workloads within a hardware-based trusted execution environment (TEE). TEEs allow user-level code from containers to allocate private regions of memory to execute the code with CPU directly. These private memory regions that execute directly with CPU are called enclaves. Enclaves help protect the data confidentiality, data integrity and code integrity from other processes running on the same nodes, as well as the Azure operator. The Intel SGX execution model also removes the intermediate layers of Guest OS, Host OS and Hypervisor thus reducing the attack surface area. The *hardware based per container isolated execution* model in a node allows applications to directly execute with the CPU, while keeping the special block of memory encrypted per container. Confidential computing nodes with confidential containers are a great addition to your zero-trust security planning and defense-in-depth container strategy.
:::image type="content" source="./media/confidential-nodes-aks-overview/sgx-aks-node.png" alt-text="Graphic of AKS Confidential Compute Node, showing confidential containers with code and data secured inside.":::
Azure Kubernetes Service (AKS) supports adding [DCsv2 confidential computing nod
- Linux Containers support through Ubuntu 18.04 Gen 2 VM worker nodes ## Confidential Computing add-on for AKS
-The add-on feature enables extra capability on AKS when running confidential computing node pools on the cluster. This add-on enables the features below.
+The add-on feature enables extra capability on AKS when running Intel SGX capable confidential computing node pools on the cluster. The "confcom" add-on on AKS enables the features below.
#### Azure Device Plugin for Intel SGX <a id="sgx-plugin"></a>
-The device plugin implements the Kubernetes device plugin interface for Encrypted Page Cache (EPC) memory and exposes the device drivers from the nodes. Effectively, this plugin makes EPC memory as another resource type in Kubernetes. Users can specify limits on this resource just as other resources. Apart from the scheduling function, the device plugin helps assign Intel SGX device driver permissions to confidential workload containers. With this plugin developer can avoid mounting the Intel SGX driver volumes in the deployment files. A sample implementation of the EPC memory-based deployment (`kubernetes.azure.com/sgx_epc_mem_in_MiB`) sample is [here](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/helloworld/helm/templates/helloworld.yaml)
+The device plugin implements the Kubernetes device plugin interface for Enclave Page Cache (EPC) memory and exposes the device drivers from the nodes. Effectively, this plugin makes EPC memory another resource type in Kubernetes. Users can specify limits on this resource just as on other resources. Apart from the scheduling function, the device plugin helps assign Intel SGX device driver permissions to confidential container deployments. With this plugin, developers can avoid mounting the Intel SGX driver volumes in the deployment files. This add-on runs as a daemonset on each Intel SGX capable VM node in the AKS cluster. A sample implementation of the EPC memory-based deployment (`kubernetes.azure.com/sgx_epc_mem_in_MiB`) is available [here](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/helloworld/helm/templates/helloworld.yaml).
+#### Intel SGX Quote Helper with Platform Software Components <a id="sgx-quote-helper"></a>
+
+As part of the plugin, another daemonset is deployed to each Intel SGX capable VM node on the AKS cluster. This daemonset helps your confidential container apps when a remote out-of-proc attestation request is invoked.
+
+Enclave applications that do remote attestation need to generate a quote. The quote provides cryptographic proof of the identity and the state of the application, along with the enclave's host environment. Quote generation relies on certain trusted software components from Intel, which are part of the SGX Platform Software Components (PSW/DCAP). This PSW is packaged as a daemon set that runs per node. You can use the PSW when requesting an attestation quote from enclave apps. Using the AKS-provided service helps maintain compatibility between the PSW and other software components in the host, including the Intel SGX drivers that are part of the AKS VM nodes. Read more about how your apps can use this daemonset, without having to package the attestation primitives as part of your container deployments, in [PSW with SGX quote helper](confidential-nodes-aks-addon.md#psw-with-sgx-quote-helper).
## Programming models
Confidential computing nodes on AKS also support containers that are programmed
[Quick starter confidential container samples](https://github.com/Azure-Samples/confidential-container-samples)
-[Intel SGX Confidential VM's - DCsv2 SKU List](../virtual-machines/dcv2-series.md)
+[Intel SGX Confidential VMs - DCsv2 SKU List](../virtual-machines/dcv2-series.md)
-[Intel SGX Confidential VM's - DCsv3 SKU List](../virtual-machines/dcv3-series.md)
+[Intel SGX Confidential VMs - DCsv3 SKU List](../virtual-machines/dcv3-series.md)
[Defense-in-depth with confidential containers webinar session](https://www.youtube.com/watch?reload=9&v=FYZxtHI_Or0&feature=youtu.be)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/concepts-limits.md
You can provision throughput at a container-level or a database-level in terms o
| Minimum RU/s required per 1 GB | 10 RU/s<br>**Note:** this minimum can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program) | > [!NOTE]
-> To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md).
+> To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md). If your workload has already reached the logical partition limit of 20 GB in production, it is recommended to re-architect your application with a different partition key as a long-term solution. To help give you time for this, you can request a temporary increase in the logical partition key limit for your existing application. [File an Azure support ticket](create-support-request-quota-increase.md) and select quota type **Temporary increase in container's logical partition key size**. Note this is intended as a temporary mitigation and not recommended as a long-term solution, as SLA guarantees are not honored when the limit is increased. To remove the configuration, file a support ticket and select quota type **Restore container's logical partition key size to default (20 GB)**. This can be done after you have either deleted data to fit the 20 GB logical partition limit or have re-architected your application with a different partition key.
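As a rough sketch of that synthetic partition key approach (the `Order` type, property names, and account values below are hypothetical), combining two properties into one partition key value spreads writes across more logical partitions:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class Order
{
    public string id { get; set; }
    public string userId { get; set; }
    public string orderDate { get; set; }
    // Synthetic partition key: userId plus date keeps any one user's data
    // from piling up inside a single 20 GB logical partition.
    public string partitionKey => $"{userId}-{orderDate}";
}

class Program
{
    static async Task Main()
    {
        // Placeholder endpoint and key; the container is assumed to be created
        // with /partitionKey as its partition key path.
        using var client = new CosmosClient("https://your-account.documents.azure.com:443/", "<your-key>");
        Container container = client.GetContainer("your-database", "orders");

        var order = new Order { id = Guid.NewGuid().ToString(), userId = "user42", orderDate = "2022-01-15" };
        await container.CreateItemAsync(order, new PartitionKey(order.partitionKey));
    }
}
```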
### Minimum throughput limits
cosmos-db Large Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/large-partition-keys.md
Previously updated : 09/28/2019 Last updated : 12/8/2019
# Create containers with large partition key [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-Azure Cosmos DB uses hash-based partitioning scheme to achieve horizontal scaling of data. All Azure Cosmos containers created before May 3 2019 use a hash function that computes hash based on the first 100 bytes of the partition key. If there are multiple partition keys that have the same first 100 bytes, then those logical partitions are considered as the same logical partition by the service. This can lead to issues like partition size quota being incorrect, and unique indexes being applied across the partition keys. Large partition keys are introduced to solve this issue. Azure Cosmos DB now supports large partition keys with values up to 2 KB.
+Azure Cosmos DB uses a hash-based partitioning scheme to achieve horizontal scaling of data. All Azure Cosmos containers created before May 3, 2019 use a hash function that computes the hash based on the first 101 bytes of the partition key. If there are multiple partition keys that have the same first 101 bytes, then those logical partitions are considered the same logical partition by the service. This can lead to issues like the partition size quota being incorrect, unique indexes being incorrectly applied across the partition keys, and uneven distribution of storage. Large partition keys are introduced to solve this issue. Azure Cosmos DB now supports large partition keys with values up to 2 KB.
-Large partition keys are supported by using the functionality of an enhanced version of the hash function, which can generate a unique hash from large partition keys up to 2 KB. This hash version is also recommended for scenarios with high partition key cardinality irrespective of the size of the partition key. A partition key cardinality is defined as the number of unique logical partitions, for example in the order of ~30000 logical partitions in a container. This article describes how to create a container with a large partition key using the Azure portal and different SDKs.
+Large partition keys are supported by enabling an enhanced version of the hash function, which can generate a unique hash from large partition keys up to 2 KB.
+As a best practice, unless you need support for an [older Cosmos SDK or application that does not support this feature](#supported-sdk-versions), it is always recommended to configure your container with support for large partition keys.
## Create a large partition key (Azure portal)
-To create a large partition key, when you create a new container using the Azure portal, check the **My partition key is larger than 100-bytes** option. Unselect the checkbox if you don't need large partition keys or if you have applications running on SDKs version older than 1.18.
+To create a large partition key, when you create a new container using the Azure portal, check the **My partition key is larger than 101-bytes** option. Unselect the checkbox if you don't need large partition keys or if you have applications running on SDK versions older than 1.18.
:::image type="content" source="./media/large-partition-keys/large-partition-key-with-portal.png" alt-text="Create large partition keys using Azure portal":::
To create a container with large partition key support see,
* [Create an Azure Cosmos container with a large partition key size](manage-with-powershell.md#create-container-big-pk)
-## Create a large partition key (.Net SDK)
+## Create a large partition key (.NET SDK)
To create a container with a large partition key using the .NET SDK, specify the `PartitionKeyDefinitionVersion.V2` property. The following example shows how to specify the Version property within the PartitionKeyDefinition object and set it to PartitionKeyDefinitionVersion.V2.
+> [!NOTE]
+> By default, containers created using the .NET SDK V2 do not support large partition keys, while containers created using the .NET SDK V3 do.
+ # [.NET SDK V3](#tab/dotnetv3) ```csharp
The Large partition keys are supported with the following minimum versions of SD
|SDK type | Minimum version | |||
-|.Net | 1.18 |
+|.NET | 1.18 |
|Java sync | 2.4.0 | |Java Async | 2.5.0 | | REST API | version higher than `2017-05-03` by using the `x-ms-version` request header.|
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-nodejs.md
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
Watch this video for a complete walkthrough of the content in this article.
-> [!VIDEO https://channel9.msdn.com/Shows/Docs-Azure/Quickstart-Use-Nodejs-to-connect-and-query-data-from-Azure-Cosmos-DB-SQL-API-account/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Quickstart-Use-Nodejs-to-connect-and-query-data-from-Azure-Cosmos-DB-SQL-API-account/player]
## Prerequisites
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/manage-with-templates.md
To create any of the Azure Cosmos DB resources below, copy the following example
This template creates an Azure Cosmos account in two regions with options for consistency and failover, with database and container configured for autoscale throughput that has most policy options enabled. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+> [!NOTE]
+> You can use Azure Resource Manager templates to create new autoscale databases/containers and change the autoscale max RU/s setting on an existing database/container that is already configured with autoscale. By design, migrating between manual and autoscale throughput is not supported with Azure Resource Manager templates. To do this programmatically, you can use [Azure CLI](how-to-provision-autoscale-throughput.md#azure-cli) or [PowerShell](how-to-provision-autoscale-throughput.md#azure-powershell).
+ [:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-autoscale%2Fazuredeploy.json) :::code language="json" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-sql-autoscale/azuredeploy.json":::
This template creates an Azure Cosmos account, database and container with
## Azure Cosmos DB account with Azure AD and RBAC
-This template will create a SQL Cosmos account, a natively maintained Role Definition, and a natively maintained Role Assignment for an AAD identity. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+This template will create a SQL Cosmos account, a natively maintained Role Definition, and a natively maintained Role Assignment for an Azure AD identity. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-rbac%2Fazuredeploy.json)
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/migrate-dotnet-v3.md
Previously updated : 10/19/2021 Last updated : 01/13/2022 ms.devlang: csharp
catch (CosmosClientException ex)
### Diagnostics
-Where the v2 SDK had Direct-only diagnostics available through the `ResponseDiagnosticsString` property, the v3 SDK uses `Diagnostics` available in all responses and exceptions, which are richer and not restricted to Direct mode. They include not only the time spent on the SDK for the operation, but also the regions the operation contacted:
+Where the v2 SDK had Direct-only diagnostics available through the `RequestDiagnosticsString` property, the v3 SDK uses `Diagnostics` available in all responses and exceptions, which are richer and not restricted to Direct mode. They include not only the time spent on the SDK for the operation, but also the regions the operation contacted:
```csharp try
cost-management-billing Reporting Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/reporting-get-started.md
+
+ Title: Get started with Cost Management + Billing reporting - Azure
+description: This article helps you to get started with Cost Management + Billing to understand, report on, and analyze your invoiced Microsoft Cloud and AWS costs.
++ Last updated : 01/13/2022++++++
+# Get started with Cost Management + Billing reporting
+
+Cost Management + Billing includes several tools to help you understand, report on, and analyze your invoiced Microsoft Cloud and AWS costs. The following sections describe the major reporting components.
+
+## Cost analysis
+
+Cost analysis should be your first stop in the Azure portal when it comes to understanding what you're spending and where you're spending. Cost analysis helps you:
+
+- Visualize and analyze your organizational costs
+- Share cost views with others using custom alerts
+- View aggregated costs by organization to understand where costs occur over time and identify spending trends
+- View accumulated costs over time to estimate monthly, quarterly, or even yearly cost trends against a budget
+- Create budgets to provide adherence to financial constraints
+- Use budgets to view daily or monthly costs and help isolate spending irregularities
+
+Cost analysis is available from every resource group, subscription, management group, and billing account in the Azure portal. If you manage one of these scopes, you can start there and select **Cost analysis** from the menu. If you manage multiple scopes, you may want to start directly within Cost Management:
+
+Sign in to the Azure portal > select **Home** in the menu > scroll down under **Tools** and select **Cost Management** > select a scope at the top of the page > in the left menu, select **Cost analysis**.
++
+For more information about cost analysis, see [Explore and analyze costs with cost analysis](quick-acm-cost-analysis.md).
+
+## Power BI
+
+While cost analysis offers a rich, interactive experience for analyzing and surfacing insights about your costs, there are times when you need to build more extensive dashboards and complex reports or combine costs with internal data. The Cost Management template app for Power BI is a great way to get up and running with Power BI quickly. For more information about the template app, see [Analyze Azure costs with the Power BI App](analyze-cost-data-azure-cost-management-power-bi-template-app.md).
++
+Need to go beyond the basics with Power BI? The Cost Management connector for Power BI lets you choose the data you need to help you seamlessly integrate costs with your own datasets or easily build out more complete dashboards and reports to meet your organization's needs. For more information about the connector, see [Connect to Azure Cost Management data in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
+
+## Usage details and exports
+
+If you're looking for raw data to automate business processes or integrate with other systems, start by exporting data to a storage account. Scheduled exports allow you to automatically publish your raw cost data to a storage account on a daily, weekly, or monthly basis. With special handling for large datasets, scheduled exports are the most scalable option for building first-class cost data integration. For more information, see [Create and manage exported data](tutorial-export-acm-data.md).
+
+If you need more fine-grained control over your data requests, the Usage Details API offers a bit more flexibility to pull raw data the way you need it. For more information, see the [Usage Details REST API](/rest/api/consumption/usage-details/list).
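For example, here's a hedged C# sketch of calling that API directly; the subscription ID, bearer token, and `api-version` value are placeholders to be checked against the REST reference:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class UsageDetailsExample
{
    static async Task Main()
    {
        // Hypothetical values: supply your subscription ID, a valid Azure AD bearer
        // token, and the api-version listed in the Usage Details REST reference.
        string subscriptionId = "00000000-0000-0000-0000-000000000000";
        string bearerToken = "<bearer-token>";
        string url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
                     "/providers/Microsoft.Consumption/usageDetails?api-version=2021-10-01";

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", bearerToken);

        HttpResponseMessage response = await http.GetAsync(url);
        response.EnsureSuccessStatusCode();

        string json = await response.Content.ReadAsStringAsync();
        Console.WriteLine(json); // Results are paged; follow the nextLink property for more rows.
    }
}
```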
++
+## Invoices and credits
+
+Cost analysis is a great tool for reviewing estimated, unbilled charges or for tracking historical cost trends, but it may not show your total billed amount because credits, taxes, and other refunds and charges are not available in Cost Management. To estimate your projected bill at the end of the month, start in cost analysis to understand your forecasted costs, then review any available credit or prepaid commitment balance from **Credits** or **Payment methods** for your billing account or billing profile within the Azure portal. To review your final billed charges after the invoice is available, see **Invoices** for your billing account or billing profile.
+
+Here's an example that shows credits on the Credits tab on the Credits + Commitments page.
++
+For more information about your invoice, see [View and download your Microsoft Azure invoice](../understand/download-azure-invoice.md).
+
+For more information about credits, see [Track Microsoft Customer Agreement Azure credit balance](../manage/mca-check-azure-credits-balance.md).
+
+## Next steps
+
+- [Explore and analyze costs with cost analysis](quick-acm-cost-analysis.md).
+- [Analyze Azure costs with the Power BI App](analyze-cost-data-azure-cost-management-power-bi-template-app.md).
+- [Connect to Azure Cost Management data in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
+- [Create and manage exported data](tutorial-export-acm-data.md).
cost-management-billing Azure Plan Subscription Transfer Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/azure-plan-subscription-transfer-partners.md
Access to existing users, groups, or service principals that were assigned using
Consequently, it's important that you remove Azure RBAC access for the old partner and add access for the new partner. For more information about giving your new partner access, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) For more information about removing your previous partner's Azure RBAC access, see [Remove Azure role assignments](../../role-based-access-control/role-assignments-remove.md).
-Additionally, your new partner doesn't automatically get [Admin on Behalf Of (AOBO)](https://channel9.msdn.com/Series/cspdev/Module-11-Admin-On-Behalf-Of-AOBO) access to your subscriptions. AOBO is necessary for your partner to manage the Azure subscriptions on your behalf. For more information about Azure privileges, see [Obtain permissions to manage a customer's service or subscription](/partner-center/customers-revoke-admin-privileges).
+Additionally, your new partner doesn't automatically get Admin on Behalf Of (AOBO) access to your subscriptions. AOBO is necessary for your partner to manage the Azure subscriptions on your behalf. For more information about Azure privileges, see [Obtain permissions to manage a customer's service or subscription](/partner-center/customers-revoke-admin-privileges).
## Stop a transfer
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/mpa-request-ownership.md
Azure Reservations don't automatically move with subscriptions. Either you can k
Access for existing users, groups, or service principals that was assigned using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) isn't affected during the transition. The partner won't get any new Azure RBAC access to the subscriptions.
-The partners should work with the customer to get access to subscriptions. The partners need to get either [Admin on Behalf Of - AOBO](https://channel9.msdn.com/Series/cspdev/Module-11-Admin-On-Behalf-Of-AOBO) or [Azure Lighthouse](../../lighthouse/concepts/cloud-solution-provider.md) access open support tickets.
+The partners should work with the customer to get access to subscriptions. The partners need to get either Admin on Behalf Of (AOBO) or [Azure Lighthouse](../../lighthouse/concepts/cloud-solution-provider.md) access to open support tickets.
### Power BI connectivity
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-global-parameters.md
There are two ways to integrate global parameters in your continuous integration
* Include global parameters in the ARM template * Deploy global parameters via a PowerShell script
-For general use cases, it is recommended to include global parameters in the ARM template. This integrates natively with the solution outlined in [the CI/CD doc](continuous-integration-delivery.md). In case of automatic publishing and Purview connection, **PowerShell script** method is required. You can find more about PowerShell script method later. Global parameters will be added as an ARM template parameter by default as they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub.
+For general use cases, it is recommended to include global parameters in the ARM template. This integrates natively with the solution outlined in [the CI/CD doc](continuous-integration-delivery.md). In case of automatic publishing and Azure Purview connection, the **PowerShell script** method is required. You can find more about the PowerShell script method later. Global parameters will be added as an ARM template parameter by default as they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub.
:::image type="content" source="media/author-global-parameters/include-arm-template.png" alt-text="Include in ARM template"::: > [!NOTE]
-> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode. In case of automatic publishing or Purview connection, do not use Include global parameters method; use PowerShell script method.
+> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode. In case of automatic publishing or Azure Purview connection, do not use the Include global parameters method; use the PowerShell script method.
> [!WARNING] >You cannot use '-' in the parameter name. You will receive an error code "{"code":"BadRequest","message":"ErrorCode=InvalidTemplate,ErrorMessage=The expression >'pipeline().globalParameters.myparam-dbtest-url' is not valid: .....}". But, you can use '_' in the parameter name.
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Following section is not valid because package.json folder is not valid.
``` It should have DataFactory included in customCommand like *'run build validate $(Build.Repository.LocalPath)/DataFactory/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName'*. Make sure the generated YAML file for higher stage should have required JSON artifacts.
-### Git Repository or Purview Connection Disconnected
+### Git Repository or Azure Purview Connection Disconnected
#### Issue When deploying a service instance, the Git repository or Azure Purview connection is disconnected.
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connect-data-factory-to-azure-purview.md
You need to have **Owner** or **Contributor** role on your data factory to conne
To establish the connection on Data Factory authoring UI:
-1. In the ADF authoring UI, go to **Manage** -> **Azure Purview**, and select **Connect to a Purview account**.
+1. In the ADF authoring UI, go to **Manage** -> **Azure Purview**, and select **Connect to an Azure Purview account**.
- :::image type="content" source="./media/data-factory-purview/register-purview-account.png" alt-text="Screenshot for registering a Purview account.":::
+ :::image type="content" source="./media/data-factory-purview/register-purview-account.png" alt-text="Screenshot for registering an Azure Purview account.":::
2. Choose **From Azure subscription** or **Enter manually**. **From Azure subscription**, you can select the account that you have access to.
-3. Once connected, you can see the name of the Purview account in the tab **Purview account**.
+3. Once connected, you can see the name of the Azure Purview account in the tab **Azure Purview account**.
-If your Purview account is protected by firewall, create the managed private endpoints for Purview. Learn more about how to let Data Factory [access a secured Purview account](how-to-access-secured-purview-account.md). You can either do it during the initial connection or edit an existing connection later.
+If your Azure Purview account is protected by firewall, create the managed private endpoints for Azure Purview. Learn more about how to let Data Factory [access a secured Azure Purview account](how-to-access-secured-purview-account.md). You can either do it during the initial connection or edit an existing connection later.
-The Purview connection information is stored in the data factory resource like the following. To establish the connection programmatically, you can update the data factory and add the `purviewConfiguration` settings. When you want to push lineage from SSIS activities, also add `catalogUri` tag additionally.
+The Azure Purview connection information is stored in the data factory resource like the following. To establish the connection programmatically, you can update the data factory and add the `purviewConfiguration` settings. When you want to push lineage from SSIS activities, also add the `catalogUri` tag.
```json {
For how to register Data Factory in Azure Purview, see [How to connect Azure Dat
## Set up authentication
-Data factory's managed identity is used to authenticate lineage push operations from data factory to Purview.
+Data factory's managed identity is used to authenticate lineage push operations from data factory to Azure Purview.
-Grant the data factory's managed identity **Data Curator** role on your Purview **root collection**. Learn more about [Access control in Azure Purview](../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
+Grant the data factory's managed identity **Data Curator** role on your Azure Purview **root collection**. Learn more about [Access control in Azure Purview](../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
-When connecting data factory to Purview on authoring UI, ADF tries to add such role assignment automatically. If you have **Collection admins** role on the Purview root collection and have access to Purview account from your network, this operation is done successfully.
+When connecting data factory to Azure Purview on authoring UI, ADF tries to add such role assignment automatically. If you have **Collection admins** role on the Azure Purview root collection and have access to Azure Purview account from your network, this operation is done successfully.
-## Monitor Purview connection
+## Monitor Azure Purview connection
-Once you connect the data factory to a Purview account, you see the following page with details on the enabled integration capabilities.
+Once you connect the data factory to an Azure Purview account, you see the following page with details on the enabled integration capabilities.
For **Data Lineage - Pipeline**, you may see one of the below statuses:
-- **Connected**: The data factory is successfully connected to the Purview account. Note this indicates data factory is associated with a Purview account and has permission to push lineage to it. If your Purview account is protected by firewall, you also need to make sure the integration runtime used to execute the activities and conduct lineage push can reach the Purview account. Learn more from [Access a secured Azure Purview account from Azure Data Factory](how-to-access-secured-purview-account.md).
-- **Disconnected**: The data factory cannot push lineage to Purview because Purview Data Curator role is not granted to data factory's managed identity. To fix this issue, go to your Purview account to check the role assignments, and manually grant the role as needed. Learn more from [Set up authentication](#set-up-authentication) section.
+- **Connected**: The data factory is successfully connected to the Azure Purview account. Note that this indicates the data factory is associated with an Azure Purview account and has permission to push lineage to it. If your Azure Purview account is protected by a firewall, you also need to make sure the integration runtime used to execute the activities and push lineage can reach the Azure Purview account. Learn more from [Access a secured Azure Purview account from Azure Data Factory](how-to-access-secured-purview-account.md).
+- **Disconnected**: The data factory cannot push lineage to Azure Purview because the Azure Purview **Data Curator** role is not granted to the data factory's managed identity. To fix this issue, go to your Azure Purview account to check the role assignments, and manually grant the role as needed. Learn more from the [Set up authentication](#set-up-authentication) section.
- **Unknown**: Data Factory cannot check the status. Possible reasons are:
- - Cannot reach the Purview account from your current network because the account is protected by firewall. You can launch the ADF UI from a private network that has connectivity to your Purview account instead.
- - You don't have permission to check role assignments on the Purview account. You can contact the Purview account admin to check the role assignments for you. Learn about the needed Purview role from [Set up authentication](#set-up-authentication) section.
+ - Cannot reach the Azure Purview account from your current network because the account is protected by a firewall. You can launch the ADF UI from a private network that has connectivity to your Azure Purview account instead.
+ - You don't have permission to check role assignments on the Azure Purview account. You can contact the Azure Purview account admin to check the role assignments for you. Learn about the needed Azure Purview role in the [Set up authentication](#set-up-authentication) section.
## Report lineage data to Azure Purview
-Once you connect the data factory to a Purview account, when you execute pipelines, Data Factory push lineage information to the Purview account. For detailed supported capabilities, see [Supported Azure Data Factory activities](../purview/how-to-link-azure-data-factory.md#supported-azure-data-factory-activities). For an end to end walkthrough, refer to [Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md).
+Once you connect the data factory to an Azure Purview account, Data Factory pushes lineage information to the Azure Purview account when you execute pipelines. For detailed supported capabilities, see [Supported Azure Data Factory activities](../purview/how-to-link-azure-data-factory.md#supported-azure-data-factory-activities). For an end-to-end walkthrough, refer to [Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md).
-## Discover and explore data using Purview
+## Discover and explore data using Azure Purview
-Once you connect the data factory to a Purview account, you can use the search bar at the top center of Data Factory authoring UI to search for data and perform actions. Learn more from [Discover and explore data in ADF using Purview](how-to-discover-explore-purview-data.md).
+Once you connect the data factory to an Azure Purview account, you can use the search bar at the top center of the Data Factory authoring UI to search for data and perform actions. Learn more from [Discover and explore data in ADF using Azure Purview](how-to-discover-explore-purview-data.md).
## Next steps

[Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md)
-[Discover and explore data in ADF using Purview](how-to-discover-explore-purview-data.md)
+[Discover and explore data in ADF using Azure Purview](how-to-discover-explore-purview-data.md)
-[Access a secured Purview account](how-to-access-secured-purview-account.md)
+[Access a secured Azure Purview account](how-to-access-secured-purview-account.md)
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
Title: Copy data in Dynamics (Microsoft Dataverse)
+ Title: Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM
-description: Learn how to copy data from Microsoft Dynamics CRM or Microsoft Dynamics 365 (Microsoft Dataverse) to supported sink data stores or from supported source data stores to Dynamics CRM or Dynamics 365 by using a copy activity in an Azure Data Factory or Azure Synapse Analytics pipeline.
+description: Learn how to copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics.
Previously updated : 12/31/2021 Last updated : 01/10/2022
-# Copy data from and to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM
+# Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use a copy activity in Azure Data Factory or Synapse pipelines to copy data from and to Microsoft Dynamics 365 and Microsoft Dynamics CRM. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of a copy activity.
+This article outlines how to use a copy activity in Azure Data Factory or Synapse pipelines to copy data from and to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM, and use a data flow to transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM. To learn more, read the [Azure Data Factory](introduction.md) and the [Azure Synapse Analytics](..\synapse-analytics\overview-what-is.md) introduction articles.
## Supported capabilities

This connector is supported for the following activities:

- [Copy activity](copy-activity-overview.md) with [supported source and sink matrix](copy-activity-overview.md)
+- [Mapping data flow](concepts-data-flow-overview.md)
- [Lookup activity](control-flow-lookup-activity.md)

You can copy data from Dynamics 365 (Microsoft Dataverse) or Dynamics CRM to any supported sink data store. You also can copy data from any supported source data store to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM. For a list of data stores that a copy activity supports as sources and sinks, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
If all of your source records map to the same target entity and your source data
:::image type="content" source="./media/connector-dynamics-crm-office-365/connector-dynamics-add-entity-reference-column.png" alt-text="Dynamics lookup-field adding an entity-reference column":::
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read from and write to tables in Dynamics. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows. You can choose to use a Dynamics dataset or an [inline dataset](data-flow-source.md#inline-datasets) as the source and sink type.
+
+### Source transformation
+
+The below table lists the properties supported by a Dynamics source. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Table | If you select Table as input, data flow fetches all the data from the table specified in the dataset. | No | - | tableName |
+| Query |FetchXML is a proprietary query language that is used in Dynamics online and on-premises. See the following example. To learn more, see [Build queries with FetchXML](/previous-versions/dynamicscrm-2016/developers-guide/gg328332(v=crm.8)). | No | String | query |
+| Entity | The logical name of the entity to retrieve. | Yes when inline mode is used | - | entity |
+
+> [!Note]
+> If you select **Query** as the input type, the column types can't be retrieved from the table. They are treated as strings by default.
+
+#### Dynamics source script example
+
+When you use Dynamics as the source type, the associated data flow script is:
+
+```
+source(
+ output(
+ new_name as string,
+ new_dataflowtestid as string
+ ),
+ store: 'dynamics',
+ format: 'dynamicsformat',
+ baseUrl: $baseUrl,
+ cloudType:'AzurePublic',
+ servicePrincipalId:$servicePrincipalId,
+ servicePrincipalCredential:$servicePrincipalCredential,
+ entity: 'new_datalowtest',
+ query: '<fetch mapping="logical" count="3" paging-cookie=""><entity name="new_dataflow_crud_test"><attribute name="new_name"/><attribute name="new_releasedate"/></entity></fetch>'
+ ) ~> movies
+
+```
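+
+If you use **Query** as the input type, the columns arrive as strings (see the note above), so you may want to cast them downstream with a derived column transformation. A minimal sketch building on the `movies` stream above (the `toDate` format is an assumption about the sample data):
+
+```
+movies derive(releaseDate = toDate(new_releasedate, 'yyyy-MM-dd')) ~> CastTypes
+```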
+
+### Sink transformation
+
+The below table lists the properties supported by a Dynamics sink. You can edit these properties in the **Sink options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Entity | The logical name of the entity to retrieve. | Yes when inline mode is used | - | entity |
+| Request interval | The interval between API requests, in milliseconds. | No | - | requestInterval |
+| Update method | Specify what operations are allowed on your database destination. The default is to only allow inserts.<br>To update, upsert, or delete rows, an [Alter row transformation](data-flow-alter-row.md) is required to tag rows for those actions. | Yes | `true` or `false` | insertable <br/>updateable<br/>upsertable<br>deletable|
+| Alternate key name | The alternate key name defined on your entity to do an update, upsert or delete. | No | - | alternateKeyName |
+
+#### Dynamics sink script example
+
+When you use Dynamics as the sink type, the associated data flow script is:
+
+```
+moviesAltered sink(
+ input(new_name as string,
+ new_id as string,
+ new_releasedate as string
+ ),
+ store: 'dynamics',
+ format: 'dynamicsformat',
+ baseUrl: $baseUrl,
+ cloudType: 'AzurePublic',
+ servicePrincipalId: $servicePrincipalId,
+ servicePrincipalCredential: $servicePrincipalCredential,
+ updateable: true,
+ upsertable: true,
+ insertable: true,
+ deletable: true,
+ alternateKey: 'new_testalternatekey',
+ entity: 'new_dataflow_crud_test',
+ requestInterval: 1000
+ ) ~> movieDB
+```
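+
+The `moviesAltered` input stream in the example above is expected to come from an upstream [Alter row transformation](data-flow-alter-row.md) that tags rows for update, upsert, or delete. A minimal sketch (the stream names and the condition are illustrative, not part of the original sample):
+
+```
+movies alterRow(upsertIf(not(isNull(new_name)))) ~> moviesAltered
+```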
+
## Lookup activity properties

To learn details about the properties, see [Lookup activity](control-flow-lookup-activity.md).
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-overview.md
Previously updated : 10/14/2021 Last updated : 01/10/2022
data-factory Connector Troubleshoot Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-azure-blob-storage.md
This article provides suggestions to troubleshoot common problems with the Azure
## Error code: FIPSModeIsNotSupport

-- **Message**: `Fail to read data form Azure Blob Storage for Azure Blob connector needs MD5 algorithm which can't co-work with FIPS mode. Please change diawp.exe.config in self-hosted integration runtime install directory to disable FIPS policy following https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/runtime/enforcefipspolicy-element.`
+- **Message**: `Fail to read data form Azure Blob Storage for Azure Blob connector needs MD5 algorithm which can't co-work with FIPS mode. Please change diawp.exe.config in self-hosted integration runtime install directory to disable FIPS policy following https://docs.microsoft.com/dotnet/framework/configure-apps/file-schema/runtime/enforcefipspolicy-element.`
- **Cause**: The FIPS policy is enabled on the VM where the self-hosted integration runtime was installed.
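As the message suggests, one way to resolve this is to disable FIPS policy enforcement in the `diawp.exe.config` file in the self-hosted integration runtime install directory. A minimal sketch of the relevant fragment (back up the file before editing; `enforceFIPSPolicy` is the standard .NET runtime element referenced in the message):

```xml
<configuration>
  <runtime>
    <!-- Disable FIPS policy enforcement for the diawp.exe process only -->
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>
```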
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-azure-function-activity.md
The Azure Function activity allows you to run [Azure Functions](../azure-functio
For an eight-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/shows/azure-friday/Run-Azure-Functions-from-Azure-Data-Factory-pipelines/player]
+> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Run-Azure-Functions-from-Azure-Data-Factory-pipelines/player]
## Create an Azure Function activity with UI
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
When the processor and available RAM aren't well utilized, but the execution of
### TLS/SSL certificate requirements
-Here are the requirements for the TLS/SSL certificate that you use to secure communication between integration runtime nodes:
-
-- The certificate must be a publicly trusted X509 v3 certificate. We recommend that you use certificates that are issued by a public partner certification authority (CA).
-- Each integration runtime node must trust this certificate.
-- We don't recommend Subject Alternative Name (SAN) certificates because only the last SAN item is used. All other SAN items are ignored. For example, if you have a SAN certificate whose SANs are **node1.domain.contoso.com** and **node2.domain.contoso.com**, you can use this certificate only on a machine whose fully qualified domain name (FQDN) is **node2.domain.contoso.com**.
-- The certificate can use any key size supported by Windows Server 2012 R2 for TLS/SSL certificates.
-- Certificates that use CNG keys aren't supported.
+ If you want to enable remote access from the intranet with a TLS/SSL certificate (Advanced) to secure communication between integration runtime nodes, you can follow the steps in [Enable remote access from intranet with TLS/SSL certificate](tutorial-enable-remote-access-intranet-tls-ssl-certificate.md).
> [!NOTE]
> This certificate is used:
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
You can reuse an existing self-hosted integration runtime infrastructure that yo
To see an introduction and demonstration of this feature, watch the following 12-minute video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Hybrid-data-movement-across-multiple-Azure-Data-Factories/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Hybrid-data-movement-across-multiple-Azure-Data-Factories/player]
### Terminology
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-sink.md
Previously updated : 10/14/2021 Last updated : 01/10/2022

# Sink transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Azure SQL Database](connector-azure-sql-database.md#mapping-data-flow-properties) | | ✓/✓ |
| [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#mapping-data-flow-properties) | | ✓/- |
| [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md#mapping-data-flow-properties) | | ✓/- |
+| [Dataverse](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Dynamics 365](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Dynamics CRM](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
| [Snowflake](connector-snowflake.md) | | ✓/✓ |
| [SQL Server](connector-sql-server.md) | | ✓/✓ |
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-source.md
Previously updated : 12/08/2021 Last updated : 01/10/2022

# Source transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Azure SQL Database](connector-azure-sql-database.md#mapping-data-flow-properties) | | ✓/✓ |
| [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#mapping-data-flow-properties) | | ✓/✓ |
| [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Dataverse](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Dynamics 365](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Dynamics CRM](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
| [Hive](connector-hive.md#mapping-data-flow-properties) | | -/✓ |
| [Snowflake](connector-snowflake.md) | | ✓/✓ |
| [SQL Server](connector-sql-server.md) | | ✓/✓ |
data-factory How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-access-secured-purview-account.md
This article describes how to access a secured Azure Purview account from Azure
## Azure Purview private endpoint deployment scenarios
-You can use [Azure private endpoints](../private-link/private-endpoint-overview.md) for your Azure Purview accounts to allow secure access from a virtual network (VNet) to the catalog over a Private Link. Purview provides different types of private points for various access need: *account* private endpoint, *portal* private endpoint, and *ingestion* private endpoints. Learn more from [Purview private endpoints conceptual overview](../purview/catalog-private-link.md#conceptual-overview).
+You can use [Azure private endpoints](../private-link/private-endpoint-overview.md) for your Azure Purview accounts to allow secure access from a virtual network (VNet) to the catalog over a Private Link. Azure Purview provides different types of private endpoints for various access needs: *account* private endpoint, *portal* private endpoint, and *ingestion* private endpoints. Learn more from [Azure Purview private endpoints conceptual overview](../purview/catalog-private-link.md#conceptual-overview).
-If your Purview account is protected by firewall and denies public access, make sure you follow below checklist to set up the private endpoints so Data Factory can successfully connect to Purview.
+If your Azure Purview account is protected by a firewall and denies public access, make sure you follow the below checklist to set up the private endpoints so Data Factory can successfully connect to Azure Purview.
-| Scenario | Required Purview private endpoints |
+| Scenario | Required Azure Purview private endpoints |
| | |
-| [Run pipeline and report lineage to Purview](tutorial-push-lineage-to-purview.md) | For Data Factory pipeline to push lineage to Purview, Purview ***account*** and ***ingestion*** private endpoints are required. <br>- When using **Azure Integration Runtime**, follow the steps in [Managed private endpoints for Purview](#managed-private-endpoints-for-purview) section to create managed private endpoints in the Data Factory managed virtual network.<br>- When using **Self-hosted Integration Runtime**, follow the steps in [this section](../purview/catalog-private-link-end-to-end.md#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts) to create the *account* and *ingestion* private endpoints in your integration runtime's virtual network. |
-| [Discover and explore data using Purview on ADF UI](how-to-discover-explore-purview-data.md) | To use the search bar at the top center of Data Factory authoring UI to search for Purview data and perform actions, you need to create Purview ***account*** and ***portal*** private endpoints in the virtual network that you launch the Data Factory Studio. Follow the steps in [Enable *account* and *portal* private endpoint](../purview/catalog-private-link-account-portal.md#option-2enable-account-and-portal-private-endpoint-on-existing-azure-purview-accounts). |
+| [Run pipeline and report lineage to Azure Purview](tutorial-push-lineage-to-purview.md) | For a Data Factory pipeline to push lineage to Azure Purview, Azure Purview ***account*** and ***ingestion*** private endpoints are required. <br>- When using **Azure Integration Runtime**, follow the steps in the [Managed private endpoints for Azure Purview](#managed-private-endpoints-for-azure-purview) section to create managed private endpoints in the Data Factory managed virtual network.<br>- When using **Self-hosted Integration Runtime**, follow the steps in [this section](../purview/catalog-private-link-end-to-end.md#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts) to create the *account* and *ingestion* private endpoints in your integration runtime's virtual network. |
+| [Discover and explore data using Azure Purview on ADF UI](how-to-discover-explore-purview-data.md) | To use the search bar at the top center of the Data Factory authoring UI to search for Azure Purview data and perform actions, you need to create Azure Purview ***account*** and ***portal*** private endpoints in the virtual network from which you launch the Data Factory Studio. Follow the steps in [Enable *account* and *portal* private endpoint](../purview/catalog-private-link-account-portal.md#option-2enable-account-and-portal-private-endpoint-on-existing-azure-purview-accounts). |
-## Managed private endpoints for Purview
+## Managed private endpoints for Azure Purview
-[Managed private endpoints](managed-virtual-network-private-endpoint.md#managed-private-endpoints) are private endpoints created in the Azure Data Factory Managed Virtual Network establishing a private link to Azure resources. When you run pipeline and report lineage to a firewall protected Azure Purview account, create an Azure Integration Runtime with "Virtual network configuration" option enabled, then create the Purview ***account*** and ***ingestion*** managed private endpoints as follows.
+[Managed private endpoints](managed-virtual-network-private-endpoint.md#managed-private-endpoints) are private endpoints created in the Azure Data Factory Managed Virtual Network establishing a private link to Azure resources. When you run a pipeline and report lineage to a firewall-protected Azure Purview account, create an Azure Integration Runtime with the "Virtual network configuration" option enabled, then create the Azure Purview ***account*** and ***ingestion*** managed private endpoints as follows.
### Create managed private endpoints
-To create managed private endpoints for Purview on Data Factory authoring UI:
+To create managed private endpoints for Azure Purview on Data Factory authoring UI:
-1. Go to **Manage** -> **Azure Purview**, and click **Edit** to edit your existing connected Purview account or click **Connect to a Purview account** to connect to a new Purview account.
+1. Go to **Manage** -> **Azure Purview**, and click **Edit** to edit your existing connected Azure Purview account or click **Connect to an Azure Purview account** to connect to a new Azure Purview account.
2. Select **Yes** for **Create managed private endpoints**. You need to have at least one Azure Integration Runtime with "Virtual network configuration" option enabled in the data factory to see this option.
-3. Click **+ Create all** button to batch create the needed Purview private endpoints, including the ***account*** private endpoint and the ***ingestion*** private endpoints for the Purview managed resources - Blob storage, Queue storage, and Event Hubs namespace. You need to have at least **Reader** role on your Purview account for Data Factory to retrieve the Purview managed resources' information.
+3. Click the **+ Create all** button to batch create the needed Azure Purview private endpoints, including the ***account*** private endpoint and the ***ingestion*** private endpoints for the Azure Purview managed resources - Blob storage, Queue storage, and Event Hubs namespace. You need to have at least the **Reader** role on your Azure Purview account for Data Factory to retrieve the Azure Purview managed resources' information.
- :::image type="content" source="./media/how-to-access-secured-purview-account/purview-create-all-managed-private-endpoints.png" alt-text="Create managed private endpoint for your connected Purview account.":::
+ :::image type="content" source="./media/how-to-access-secured-purview-account/purview-create-all-managed-private-endpoints.png" alt-text="Create managed private endpoint for your connected Azure Purview account.":::
4. In the next page, specify a name for the private endpoint. The name is also used to generate names for the ingestion private endpoints, with suffixes appended.
- :::image type="content" source="./media/how-to-access-secured-purview-account/name-purview-private-endpoints.png" alt-text="Name the managed private endpoints for your connected Purview account.":::
+ :::image type="content" source="./media/how-to-access-secured-purview-account/name-purview-private-endpoints.png" alt-text="Name the managed private endpoints for your connected Azure Purview account.":::
-5. Click **Create** to create the private endpoints. After creation, 4 private endpoint requests will be generated that must [get approved by an owner of Purview](#approve-private-endpoint-connections).
+5. Click **Create** to create the private endpoints. After creation, four private endpoint requests are generated that must [be approved by an owner of Azure Purview](#approve-private-endpoint-connections).
-Such batch managed private endpoint creation is provided on the Purview UI only. If you want to create the managed private endpoints programmatically, you need to create those PEs individually. You can find Purview managed resources' information from Azure portal -> your Purview account -> Managed resources.
+Such batch managed private endpoint creation is provided on the Data Factory authoring UI only. If you want to create the managed private endpoints programmatically, you need to create those private endpoints individually. You can find Azure Purview managed resources' information from the Azure portal -> your Azure Purview account -> Managed resources.
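For reference, a minimal sketch of the request body for creating one such managed private endpoint through the Data Factory REST API (a `managedVirtualNetworks/managedPrivateEndpoints` sub-resource; the resource ID is a placeholder, and `groupId` is `account` for the *account* endpoint):

```json
{
  "properties": {
    "privateLinkResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Purview/accounts/<purview-account-name>",
    "groupId": "account"
  }
}
```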
### Approve private endpoint connections
-After you create the managed private endpoints for Purview, you see "Pending" state first. The Purview owner need to approve the private endpoint connections for each resource.
+After you create the managed private endpoints for Azure Purview, they're in a "Pending" state at first. The Azure Purview owner needs to approve the private endpoint connections for each resource.
-If you have permission to approve the Purview private endpoint connection, from Data Factory UI:
+If you have permission to approve the Azure Purview private endpoint connection, from the Data Factory UI:
1. Go to **Manage** -> **Azure Purview** -> **Edit**
2. In the private endpoint list, click the **Edit** (pencil) button next to each private endpoint name
If you have permission to approve the Purview private endpoint connection, from
4. On the given resource, go to **Networking** -> **Private endpoint connection** to approve it. The private endpoint is named as `data_factory_name.your_defined_private_endpoint_name` with description as "Requested by data_factory_name".
5. Repeat this operation for all private endpoints.
-If you don't have permission to approve the Purview private endpoint connection, ask the Purview account owner to do as follows.
+If you don't have permission to approve the Azure Purview private endpoint connection, ask the Azure Purview account owner to approve it as follows.
-- For *account* private endpoint, go to Azure portal -> your Purview account -> Networking -> Private endpoint connection to approve.
-- For *ingestion* private endpoints, go to Azure portal -> your Purview account -> Managed resources, click into the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
+- For *account* private endpoint, go to Azure portal -> your Azure Purview account -> Networking -> Private endpoint connection to approve.
+- For *ingestion* private endpoints, go to Azure portal -> your Azure Purview account -> Managed resources, click into the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
### Monitor managed private endpoints
-You can monitor the created managed private endpoints for Purview at two places:
+You can monitor the created managed private endpoints for Azure Purview at two places:
-- Go to **Manage** -> **Azure Purview** -> **Edit** to open your existing connected Purview account. To see all the relevant private endpoints, you need to have at least **Reader** role on your Purview account for Data Factory to retrieve the Purview managed resources' information. Otherwise, you only see *account* private endpoint with warning.
-- Go to **Manage** -> **Managed private endpoints** where you see all the managed private endpoints created under the data factory. If you have at least **Reader** role on your Purview account, you see Purview relevant private endpoints being grouped together. Otherwise, they show up separately in the list.
+- Go to **Manage** -> **Azure Purview** -> **Edit** to open your existing connected Azure Purview account. To see all the relevant private endpoints, you need to have at least the **Reader** role on your Azure Purview account for Data Factory to retrieve the Azure Purview managed resources' information. Otherwise, you only see the *account* private endpoint, with a warning.
+- Go to **Manage** -> **Managed private endpoints**, where you see all the managed private endpoints created under the data factory. If you have at least the **Reader** role on your Azure Purview account, you see the Azure Purview relevant private endpoints grouped together. Otherwise, they show up separately in the list.
## Next steps

- [Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md)
- [Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md)
-- [Discover and explore data in ADF using Purview](how-to-discover-explore-purview-data.md)
+- [Discover and explore data in ADF using Azure Purview](how-to-discover-explore-purview-data.md)
data-factory How To Configure Azure Ssis Ir Enterprise Edition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition.md
Some of these features require you to install additional components to customize
| **Enterprise Features** | **Descriptions** |
|---|---|
| CDC components | The CDC Source, Control Task, and Splitter Transformation are preinstalled on the Azure-SSIS IR Enterprise Edition. To connect to Oracle, you also need to install the CDC Designer and Service on another computer. |
-| Oracle connectors | The Oracle Connection Manager, Source, and Destination are preinstalled on the Azure-SSIS IR Enterprise Edition. You also need to install the Oracle Call Interface (OCI) driver, and if necessary configure the Oracle Transport Network Substrate (TNS), on the Azure-SSIS IR. For more info, see [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md). |
+| Oracle connectors | You need to install the Oracle Connection Manager, Source, and Destination, as well as the Oracle Call Interface (OCI) driver, on the Azure-SSIS IR Enterprise Edition. If necessary, you can also configure the Oracle Transport Network Substrate (TNS) on the Azure-SSIS IR. For more info, see [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md). |
| Teradata connectors | You need to install the Teradata Connection Manager, Source, and Destination, as well as the Teradata Parallel Transporter (TPT) API and Teradata ODBC driver, on the Azure-SSIS IR Enterprise Edition. For more info, see [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md). | | SAP BW connectors | The SAP BW Connection Manager, Source, and Destination are preinstalled on the Azure-SSIS IR Enterprise Edition. You also need to install the SAP BW driver on the Azure-SSIS IR. These connectors support SAP BW 7.0 or earlier versions. To connect to later versions of SAP BW or other SAP products, you can purchase and install SAP connectors from third-party ISVs on the Azure-SSIS IR. For more info about how to install additional components, see [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md). | | Analysis Services components | The Data Mining Model Training Destination, the Dimension Processing Destination, and the Partition Processing Destination, as well as the Data Mining Query Transformation, are preinstalled on the Azure-SSIS IR Enterprise Edition. All these components support SQL Server Analysis Services (SSAS), but only the Partition Processing Destination supports Azure Analysis Services (AAS). To connect to SSAS, you also need to [configure Windows Authentication credentials in SSISDB](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth). In addition to these components, the Analysis Services Execute DDL Task, the Analysis Services Processing Task, and the Data Mining Query Task are also preinstalled on the Azure-SSIS IR Standard/Enterprise Edition. |
Some of these features require you to install additional components to customize
- [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md)

-- [How to develop paid or licensed custom components for the Azure-SSIS integration runtime](how-to-develop-azure-ssis-ir-licensed-components.md)
+- [How to develop paid or licensed custom components for the Azure-SSIS integration runtime](how-to-develop-azure-ssis-ir-licensed-components.md)
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
Event-driven architecture (EDA) is a common data integration pattern that involv
For a ten-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Event-based-data-integration-with-Azure-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Event-based-data-integration-with-Azure-Data-Factory/player]
> [!NOTE]
> The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more info, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the `Microsoft.EventGrid/eventSubscriptions/*` action. This action is part of the EventGrid EventSubscription Contributor built-in role.
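For example, you can check and register the resource provider with the Azure CLI (run against the subscription that contains your data factory):

```azurecli
az provider show --namespace Microsoft.EventGrid --query registrationState
az provider register --namespace Microsoft.EventGrid
```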
data-factory How To Discover Explore Purview Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-discover-explore-purview-data.md
Title: Discover and explore data in ADF using Purview
-description: Learn how to discover, explore data in Azure Data Factory using Purview
+ Title: Discover and explore data in ADF using Azure Purview
+description: Learn how to discover, explore data in Azure Data Factory using Azure Purview
Last updated 08/10/2021
-# Discover and explore data in ADF using Purview
+# Discover and explore data in ADF using Azure Purview
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]

In this article, you will register an Azure Purview Account to a Data Factory. That connection allows you to discover Azure Purview assets and interact with them through ADF capabilities. You can perform the following tasks in ADF:

-- Use the search box at the top to find Purview assets based on keywords
+- Use the search box at the top to find Azure Purview assets based on keywords
- Understand the data based on metadata, lineage, annotations
- Connect that data to your data factory with linked services or datasets

## Prerequisites

-- [Azure Purview account](../purview/create-catalog-portal.md)
+- [Azure Purview account](../purview/create-catalog-portal.md)
- [Data Factory](./quickstart-create-data-factory-portal.md)
- [Connect an Azure Purview Account into Data Factory](./connect-data-factory-to-azure-purview.md)

## Using Azure Purview in Data Factory
-The use Azure Purview in Data Factory requires you to have access to that Purview account. Data Factory passes-through your Purview permission. As an example, if you have a curator permission role, you will be able to edit metadata scanned by Azure Purview.
+Using Azure Purview in Data Factory requires you to have access to that Azure Purview account. Data Factory passes through your Azure Purview permissions. As an example, if you have a curator permission role, you will be able to edit metadata scanned by Azure Purview.
### Data discovery: search datasets
data-factory Iterative Development Debugging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/iterative-development-debugging.md
Azure Data Factory and Synapse Analytics supports iterative development and debu
For an eight-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Iterative-development-and-debugging-with-Azure-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Iterative-development-and-debugging-with-Azure-Data-Factory/player]
## Debugging a pipeline
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-using-azure-monitor.md
Cloud applications are complex and have many moving parts. Monitors provide data
Azure Monitor provides base-level infrastructure metrics and logs for most Azure services. Azure diagnostic logs are emitted by a resource and provide rich, frequent data about the operation of that resource. Azure Data Factory (ADF) can write diagnostic logs in Azure Monitor. For a seven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Monitor-Data-Factory-pipelines-using-Operations-Management-Suite-OMS/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Monitor-Data-Factory-pipelines-using-Operations-Management-Suite-OMS/player]
For more information, see [Azure Monitor overview](../azure-monitor/overview.md).
data-factory Monitor Visually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-visually.md
You can raise alerts on supported metrics in Data Factory. Select **Monitor** >
For a seven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/shows/azure-friday/Monitor-your-Azure-Data-Factory-pipelines-proactively-with-alerts/player]
+> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Monitor-your-Azure-Data-Factory-pipelines-proactively-with-alerts/player]
### Create alerts
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/parameterize-linked-services.md
You can use the UI in the Azure portal or a programming interface to parameteriz
For a seven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/shows/azure-friday/Parameterize-connections-to-your-data-stores-in-Azure-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Parameterize-connections-to-your-data-stores-in-Azure-Data-Factory/player]
## Supported linked service types
data-factory Quickstart Create Data Factory Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-portal.md
This quickstart describes how to use the Azure Data Factory UI to create and mon
### Video

Watching this video helps you understand the Data Factory UI:
->[!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Visually-build-pipelines-for-Azure-Data-Factory-v2/Player]
+>[!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Visually-build-pipelines-for-Azure-Data-Factory-v2/Player]
## Create a data factory
data-factory Transform Data Databricks Jar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-databricks-jar.md
The Azure Databricks Jar Activity in a [pipeline](concepts-pipelines-activities.
For an eleven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
## Add a Jar activity for Azure Databricks to a pipeline with UI
For more information, see the [Databricks documentation](/azure/databricks/dev-t
## Next steps
-For an eleven-minute introduction and demonstration of this feature, watch the [video](https://channel9.msdn.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player).
+For an eleven-minute introduction and demonstration of this feature, watch the [video](https://docs.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player).
data-factory Transform Data Databricks Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-databricks-python.md
The Azure Databricks Python Activity in a [pipeline](concepts-pipelines-activiti
For an eleven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
## Add a Python activity for Azure Databricks to a pipeline with UI
data-factory Transform Data Machine Learning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-machine-learning-service.md
Run your Azure Machine Learning pipelines as a step in your Azure Data Factory a
The below video features a six-minute introduction and demonstration of this feature.
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/How-to-execute-Azure-Machine-Learning-service-pipelines-in-Azure-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/How-to-execute-Azure-Machine-Learning-service-pipelines-in-Azure-Data-Factory/player]
## Create a Machine Learning Execute Pipeline activity with UI
data-factory Transform Data Using Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-using-databricks-notebook.md
If you don't have an Azure subscription, create a [free account](https://azure.m
For an eleven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/ingest-prepare-and-transform-using-azure-databricks-and-data-factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/ingest-prepare-and-transform-using-azure-databricks-and-data-factory/player]
## Prerequisites
data-factory Tumbling Window Trigger Dependency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tumbling-window-trigger-dependency.md
In order to build a dependency chain and make sure that a trigger is executed on
For a demonstration on how to create dependent pipelines using tumbling window trigger, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Create-dependent-pipelines-in-your-Azure-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Create-dependent-pipelines-in-your-Azure-Data-Factory/player]
## Create a dependency in the UI
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-push-lineage-to-purview.md
Currently, lineage is supported for Copy, Data Flow, and Execute SSIS activities
* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
* **Azure Data Factory**. If you don't have an Azure Data Factory, see [Create an Azure Data Factory](./quickstart-create-data-factory-portal.md).
-* **Azure Purview account**. The Purview account captures all lineage data generated by data factory. If you don't have an Azure Purview account, see [Create an Azure Purview](../purview/create-catalog-portal.md).
+* **Azure Purview account**. The Azure Purview account captures all lineage data generated by the data factory. If you don't have an Azure Purview account, see [Create an Azure Purview account](../purview/create-catalog-portal.md).
## Run pipeline and push lineage data to Azure Purview
-### Step 1: Connect Data Factory to your Purview account
+### Step 1: Connect Data Factory to your Azure Purview account
-You can establish the connection between Data Factory and Purview account by following the steps in [Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md).
+You can establish the connection between Data Factory and an Azure Purview account by following the steps in [Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md).
### Step 2: Run pipeline in Data Factory
After you run the pipeline, in the [pipeline monitoring view](monitor-visually.m
:::image type="content" source="./media/data-factory-purview/monitor-lineage-reporting-status.png" alt-text="Monitor the lineage reporting status in pipeline monitoring view.":::
-### Step 4: View lineage information in your Purview account
+### Step 4: View lineage information in your Azure Purview account
-On Purview UI, you can browse assets and choose type "Azure Data Factory". You can also search the Data Catalog using keywords.
+On the Azure Purview UI, you can browse assets and choose the type "Azure Data Factory". You can also search the Data Catalog using keywords.
On the activity asset, click the **Lineage** tab to see all the lineage information.

- Copy activity:
- :::image type="content" source="./media/data-factory-purview/copy-lineage.png" alt-text="Screenshot of the Copy activity lineage in Purview." lightbox="./media/data-factory-purview/copy-lineage.png":::
+ :::image type="content" source="./media/data-factory-purview/copy-lineage.png" alt-text="Screenshot of the Copy activity lineage in Azure Purview." lightbox="./media/data-factory-purview/copy-lineage.png":::
- Data Flow activity:
- :::image type="content" source="./media/data-factory-purview/dataflow-lineage.png" alt-text="Screenshot of the Data Flow lineage in Purview." lightbox="./media/data-factory-purview/dataflow-lineage.png":::
+ :::image type="content" source="./media/data-factory-purview/dataflow-lineage.png" alt-text="Screenshot of the Data Flow lineage in Azure Purview." lightbox="./media/data-factory-purview/dataflow-lineage.png":::
> [!NOTE]
> For the lineage of Dataflow activity, we only support source and sink. The lineage for Dataflow transformation is not supported yet.

- Execute SSIS Package activity:
- :::image type="content" source="./media/data-factory-purview/ssis-lineage.png" alt-text="Screenshot of the Execute SSIS lineage in Purview." lightbox="./media/data-factory-purview/ssis-lineage.png":::
+ :::image type="content" source="./media/data-factory-purview/ssis-lineage.png" alt-text="Screenshot of the Execute SSIS lineage in Azure Purview." lightbox="./media/data-factory-purview/ssis-lineage.png":::
> [!NOTE]
> For the lineage of Execute SSIS Package activity, we only support source and destination. The lineage for transformation is not supported yet.
data-factory Data Factory Monitor Manage App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-monitor-manage-app.md
This article describes how to use the Monitoring and Management app to monitor,
> [!NOTE]
> The user interface shown in the video may not exactly match what you see in the portal. It's slightly older, but concepts remain the same.
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Azure-Data-Factory-Monitoring-and-Managing-Big-Data-Piplines/player]
->
## Launch the Monitoring and Management app

To launch the Monitor and Management app, click the **Monitor & Manage** tile on the **Data Factory** blade for your data factory.
data-lake-analytics Data Lake Analytics Data Lake Tools For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-for-vscode.md
Last updated 02/09/2018
In this article, learn how you can use Azure Data Lake Tools for Visual Studio Code (VS Code) to create, test, and run U-SQL scripts. The information is also covered in the following video:
-[![Video player: Azure Data Lake tools for VS Code](media/data-lake-analytics-data-lake-tools-for-vscode/data-lake-tools-for-vscode-video.png)](https://channel9.msdn.com/Series/AzureDataLake/Azure-Data-Lake-Tools-for-VSCode?term=ADL%20Tools%20for%20VSCode")
+![Video player: Azure Data Lake tools for VS Code](media/data-lake-analytics-data-lake-tools-for-vscode/data-lake-tools-for-vscode-video.png)
## Prerequisites
data-lake-store Data Lake Store Performance Tuning Hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/data-lake-store-performance-tuning-hive.md
Restart all the nodes/service for the config to take effect.
Here are a few blogs that will help tune your Hive queries:

* [Optimize Hive queries for Hadoop in HDInsight](../hdinsight/hdinsight-hadoop-optimize-hive-query.md)
* [Encoding the Hive query file in Azure HDInsight](/archive/blogs/bigdatasupport/encoding-the-hive-query-file-in-azure-hdinsight)
-* [Ignite talk on optimize Hive on HDInsight](https://channel9.msdn.com/events/Machine-Learning-and-Data-Sciences-Conference/Data-Science-Summit-2016/MSDSS25)
+* Ignite talk on optimizing Hive on HDInsight
data-lake-store Data Lake Store With Data Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/data-lake-store-with-data-catalog.md
Before you begin this tutorial, you must have the following:
## Register Data Lake Storage Gen1 as a source for Data Catalog
-> [!VIDEO https://channel9.msdn.com/Series/AzureDataLake/ADCwithADL/player]
-
1. Go to `https://azure.microsoft.com/services/data-catalog`, and click **Get started**.
1. Log into the Azure Data Catalog portal, and click **Publish data**.
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/information-protection.md
Last updated 11/09/2021
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-[Azure Purview](../purview/overview.md), Microsoft's data governance service, provides rich insights into the *sensitivity of your data*. With automated data discovery, sensitive data classification, and end-to-end data lineage, Purview helps organizations manage and govern data in hybrid and multi-cloud environments.
+[Azure Purview](../purview/overview.md), Microsoft's data governance service, provides rich insights into the *sensitivity of your data*. With automated data discovery, sensitive data classification, and end-to-end data lineage, Azure Purview helps organizations manage and govern data in hybrid and multi-cloud environments.
Microsoft Defender for Cloud customers using Azure Purview can benefit from an additional vital layer of metadata in alerts and recommendations: information about any potentially sensitive data involved. This knowledge helps solve the triage challenge and ensures security professionals can focus their attention on threats to sensitive data.
-This page explains the integration of Purview's data sensitivity classification labels within Defender for Cloud.
+This page explains the integration of Azure Purview's data sensitivity classification labels within Defender for Cloud.
## Availability

|Aspect|Details|
However, where possible, you'd want to focus the security team's efforts on risk
Azure Purview's data sensitivity classifications and data sensitivity labels provide that knowledge.

## Discover resources with sensitive data
-To provide the vital information about discovered sensitive data, and help ensure you have that information when you need it, Defender for Cloud displays information from Purview in multiple locations.
+To provide the vital information about discovered sensitive data, and help ensure you have that information when you need it, Defender for Cloud displays information from Azure Purview in multiple locations.
> [!TIP]
-> If a resource is scanned by multiple Purview accounts, the information shown in Defender for Cloud relates to the most recent scan.
+> If a resource is scanned by multiple Azure Purview accounts, the information shown in Defender for Cloud relates to the most recent scan.
### Alerts and recommendations pages
This vital additional layer of metadata helps solve the triage challenge and ens
### Inventory filters
-The [asset inventory page](asset-inventory.md) has a collection of powerful filters to group your resources with outstanding alerts and recommendations according to the criteria relevant for any scenario. These filters include **Data sensitivity classifications** and **Data sensitivity labels**. Use these filters to evaluate the security posture of resources on which Purview has discovered sensitive data.
+The [asset inventory page](asset-inventory.md) has a collection of powerful filters to group your resources with outstanding alerts and recommendations according to the criteria relevant for any scenario. These filters include **Data sensitivity classifications** and **Data sensitivity labels**. Use these filters to evaluate the security posture of resources on which Azure Purview has discovered sensitive data.
:::image type="content" source="./media/information-protection/information-protection-inventory-filters.png" alt-text="Screenshot of information protection filters in Microsoft Defender for Cloud's asset inventory page." lightbox="./media/information-protection/information-protection-inventory-filters.png":::
When you select a single resource - whether from an alert, recommendation, or th
The resource health page provides a snapshot view of the overall health of a single resource. You can review detailed information about the resource and all recommendations that apply to that resource. Also, if you're using any of the Microsoft Defender plans, you can see outstanding security alerts for that specific resource too.
-When reviewing the health of a specific resource, you'll see the Purview information on this page and can use it determine what data has been discovered on this resource alongside the Purview account used to scan the resource.
+When reviewing the health of a specific resource, you'll see the Azure Purview information on this page and can use it to determine what data has been discovered on this resource, alongside the Azure Purview account used to scan the resource.
:::image type="content" source="./media/information-protection/information-protection-resource-health.png" alt-text="Screenshot of Defender for Cloud's resource health page showing information protection labels and classifications from Azure Purview." lightbox="./media/information-protection/information-protection-resource-health.png":::
A graph shows the number of recommendations and alerts by classified resource ty
For related information, see:

- [What is Azure Purview?](../purview/overview.md)
-- [Purview's supported data sources and file types](../purview/sources-and-scans.md) and [supported data stores](../purview/purview-connector-overview.md)
+- [Azure Purview's supported data sources and file types](../purview/sources-and-scans.md) and [supported data stores](../purview/purview-connector-overview.md)
- [Azure Purview deployment best practices](../purview/deployment-best-practices.md)
- [How to label your data in Azure Purview](../purview/how-to-automatically-label-your-content.md)
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/overview-page.md
In the center of the page are the **feature tiles**, each linking to a high prof
- **Regulatory compliance** - Defender for Cloud provides insights into your compliance posture based on continuous assessments of your Azure environment. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md).
- **Firewall Manager** - This tile shows the status of your hubs and networks from [Azure Firewall Manager](../firewall-manager/overview.md).
- **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for servers, you'll also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).
-- **Information protection** - A graph on this tile shows the resource types that have been scanned by [Azure Purview](../purview/overview.md), found to contain sensitive data, and have outstanding recommendations and alerts. Follow the **scan** link to access the Azure Purview accounts and configure new scans, or select any other part of the tile to open the [asset inventory](asset-inventory.md) and view your resources according to your Purview data sensitivity classifications. [Learn more](information-protection.md).
+- **Information protection** - A graph on this tile shows the resource types that have been scanned by [Azure Purview](../purview/overview.md), found to contain sensitive data, and have outstanding recommendations and alerts. Follow the **scan** link to access the Azure Purview accounts and configure new scans, or select any other part of the tile to open the [asset inventory](asset-inventory.md) and view your resources according to your Azure Purview data sensitivity classifications. [Learn more](information-protection.md).
### Insights
defender-for-cloud Security Center Planning And Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/security-center-planning-and-operations-guide.md
This page shows the details regarding the time that the attack took place, the s
Once you identify the compromised system, you can run a [workflow automation](workflow-automation.md) that was previously created. These are a collection of procedures that can be executed from Defender for Cloud once triggered by an alert.
-In the [How to Leverage the Defender for Cloud & Microsoft Operations Management Suite for an Incident Response](https://channel9.msdn.com/Blogs/Taste-of-Premier/ToP1703) video, you can see some demonstrations that show how Defender for Cloud can be used in each one of those stages.
+In the How to Leverage the Defender for Cloud & Microsoft Operations Management Suite for an Incident Response video, you can see some demonstrations that show how Defender for Cloud can be used in each one of those stages.
> [!NOTE]
> Read [Managing and responding to security alerts in Defender for Cloud](managing-and-responding-alerts.md) for more information on how to use Defender for Cloud capabilities to assist you during your Incident Response process.
defender-for-cloud Security Center Readiness Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/security-center-readiness-roadmap.md
Articles
- [Protecting Azure SQL service and data in Defender for Cloud](./implement-security-recommendations.md)
-Video
-- [Mitigating Security Issues using Defender for Cloud](https://channel9.msdn.com/Blogs/Azure-Security-Videos/Mitigating-Security-Issues-using-Azure-Security-Center)
-
### Defender for Cloud for incident response

To reduce costs and damage, it's important to have an incident response plan in place before an attack takes place. You can use Defender for Cloud in different stages of an incident response. Use the following resources to understand how Defender for Cloud can be incorporated in your incident response process.

Videos
-* [Defender for Cloud in Incident Response](https://channel9.msdn.com/Blogs/Azure-Security-Videos/Azure-Security-Center-in-Incident-Response)
* [Respond quickly to threats with next-generation security operation, and investigation](https://youtu.be/e8iFCz5RM4g)

Articles
devops-project Azure Devops Project Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-sql-database.md
To learn more about the CI/CD pipeline, see:
## Videos
-> [!VIDEO https://channel9.msdn.com/Events/Build/2018/BRK3308/player]
+> [!VIDEO https://docs.microsoft.com/Events/Build/2018/BRK3308/player]
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-devtest-user.md
# Add owners and users in Azure DevTest Labs
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/How-to-set-security-in-your-DevTest-Lab/player]
->
->
Access in Azure DevTest Labs is controlled by [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). Using Azure RBAC, you can segregate duties within your team into *roles* where you grant only the amount of access necessary to users to perform their jobs. Three of these Azure roles are *Owner*, *DevTest Labs User*, and *Contributor*. In this article, you learn what actions can be performed in each of the three main Azure roles. From there, you learn how to add users to a lab - both via the portal and via a PowerShell script, and how to add users at the subscription level.
devtest-labs Image Factory Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/image-factory-create.md
The solution enables the speed of creating virtual machines from custom images w
<br/>
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Custom-Image-Factory-with-Azure-DevTest-Labs/player]
-
-
## High-level view of the solution

The solution enables the speed of creating virtual machines from custom images while eliminating extra ongoing maintenance costs. With this solution, you can automatically create custom images and distribute them to other DevTest Labs. You use Azure DevOps (formerly Visual Studio Team Services) as the orchestration engine for automating all the operations in the DevTest Labs.
event-grid Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/configure-private-endpoints.md
topicName="<TOPIC NAME>"
connectionName="<ENDPOINT CONNECTION NAME>"
endpointName="<ENDPOINT NAME>"
-# resource ID of the topic. replace <SUBSCRIPTION ID>, <RESOURCE GROUP NAME>, and <TOPIC NAME>
-topicResourceID="/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventGrid/topics/<TOPIC NAME>"
+# resource ID of the topic. replace <SUBSCRIPTION ID>, <RESOURCE GROUP NAME>, and <TOPIC NAME>
+topicResourceID="/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventGrid/topics/<TOPIC NAME>"
# select subscription
az account set --subscription $subscriptionID
az eventgrid topic show \
  --name $topicName

# create private endpoint for the topic you created
-az network private-endpoint create
+az network private-endpoint create \
  --resource-group $resourceGroupName \
  --name $endpointName \
  --vnet-name $vNetName \
event-hubs Create Schema Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/create-schema-registry.md
Title: Create an Azure Event Hubs schema registry
description: This article shows you how to create a schema registry in an Azure Event Hubs namespace.
Previously updated : 06/01/2021
Last updated : 01/13/2022
-# Create an Azure Event Hubs schema registry
-This article shows you how to create a schema group with schemas in a schema registry hosted by Azure Event Hubs. For an overview of the Schema Registry feature of Azure Event Hubs, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md).
+# Quickstart: Create an Azure Event Hubs schema registry using Azure portal
+
+**Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationship between schemas through a grouping construct (schema groups). For more information, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md).
+
+This article shows you how to create a schema group with schemas in a schema registry hosted by Azure Event Hubs.
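Although this quickstart walks through the Azure portal, schemas can also be registered programmatically. As a rough illustration only (not part of the portal steps), the following sketch registers an Avro schema with the `azure-schemaregistry` Python package; the namespace and schema group names are placeholders you'd replace with your own.

```python
# Illustrative sketch: register an Avro schema in an existing schema group.
# Placeholders: <your-namespace>, <your-schema-group>.
# Requires: pip install azure-schemaregistry azure-identity
from azure.identity import DefaultAzureCredential
from azure.schemaregistry import SchemaRegistryClient

AVRO_SCHEMA = """
{
    "type": "record",
    "name": "Order",
    "namespace": "com.example",
    "fields": [
        {"name": "id", "type": "string"},
        {"name": "amount", "type": "double"}
    ]
}
"""

client = SchemaRegistryClient(
    fully_qualified_namespace="<your-namespace>.servicebus.windows.net",
    credential=DefaultAzureCredential(),
)

# Registering an identical definition again returns the existing schema's properties.
properties = client.register_schema(
    group_name="<your-schema-group>",
    name="Order",
    definition=AVRO_SCHEMA,
    format="Avro",
)
print(properties.id)  # the schema ID producers/consumers reference
```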
> [!NOTE]
> - The feature isn't available in the **basic** tier.
In this section, you add a schema to the schema group using the Azure portal.
:::image type="content" source="./media/create-schema-registry/new-version.png" alt-text="Image showing the new version of schema":::

1. Select `1` to see the version 1 of the schema.
+## Clean up resources
+
+> [!NOTE]
+> Don't clean up resources if you want to continue to the next quick start linked from **Next steps**.
+
+1. Navigate to the **Event Hubs Namespace** page.
+1. Select **Schema Registry** on the left menu.
+1. Select the **schema group** you created in this quickstart.
+1. On the **Schema Group** page, select **Delete** on the toolbar.
+1. On the **Delete Schema Group** page, type the name of the schema group, and select **Delete**.
## Next steps
-For more information about schema registry, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md).
+
+> [!div class="nextstepaction"]
+> [Validate schema when sending and receiving events - AMQP and .NET](schema-registry-dotnet-send-receive-quickstart.md).
event-hubs Dynamically Add Partitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/dynamically-add-partitions.md
Title: Dynamically add partitions to an event hub in Azure Event Hubs
description: This article shows you how to dynamically add partitions to an event hub in Azure Event Hubs.
Previously updated : 10/20/2021
Last updated : 01/13/2022
-# Dynamically add partitions to an event hub (Apache Kafka topic) in Azure Event Hubs
+# Dynamically add partitions to an event hub (Apache Kafka topic)
Event Hubs provides message streaming through a partitioned consumer pattern in which each consumer only reads a specific subset, or partition, of the message stream. This pattern enables horizontal scale for event processing and provides other stream-focused features that are unavailable in queues and topics. A partition is an ordered sequence of events that is held in an event hub. As newer events arrive, they're added to the end of this sequence. For more information about partitions in general, see [Partitions](event-hubs-scalability.md#partitions).

You can specify the number of partitions at the time of creating an event hub. In some scenarios, you may need to add partitions after the event hub has been created. This article describes how to dynamically add partitions to an existing event hub. A toy sketch of why the partition count matters to key-based senders follows.
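The sketch below is purely illustrative: Event Hubs computes its own internal partition-key hash, but any hash-mod-N scheme behaves the same way when N changes, which is why adding partitions can remap existing keys.

```python
# Toy illustration: events with the same partition key land in the same
# partition, but adding partitions (changing N) can remap existing keys.
# Event Hubs' real partition-key hash is internal to the service.
from zlib import crc32

def partition_for(key: str, partition_count: int) -> int:
    return crc32(key.encode("utf-8")) % partition_count

for count in (4, 8):  # e.g., before and after adding partitions
    print(f"{count} partitions -> key 'device-42' maps to partition "
          f"{partition_for('device-42', count)}")
```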
event-hubs Schema Registry Dotnet Send Receive Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/schema-registry-dotnet-send-receive-quickstart.md
Title: Validate schema when sending and receiving events - AMQP and .NET
+ Title: Validate schema when sending or receiving events
description: This article provides a walkthrough to create a .NET Core application that sends/receives events to/from Azure Event Hubs with schema validation using Schema Registry.
Previously updated : 11/02/2021
Last updated : 01/12/2022
ms.devlang: csharp
-# Validate schema when sending and receiving events - AMQP and .NET
+# Quickstart: Validate schema when sending and receiving events - AMQP and .NET
+
+**Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationship between schemas through a grouping construct (schema groups). For more information, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md).
+ This quickstart shows how to send events to and receive events from an event hub with schema validation using the **Azure.Messaging.EventHubs** .NET library.

## Prerequisites
This section shows how to write a .NET Core console application that receives ev
## Next steps
-Check out [Azure Schema Registry client library for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/schemaregistry/Azure.Data.SchemaRegistry) for additional information.
+
+> [!div class="nextstepaction"]
+> Check out [Azure Schema Registry client library for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/schemaregistry/Azure.Data.SchemaRegistry)
event-hubs Schema Registry Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/schema-registry-overview.md
Title: Azure Schema Registry in Azure Event Hubs
description: This article provides an overview of Schema Registry support by Azure Event Hubs.
Previously updated : 11/02/2021
Last updated : 01/13/2022
In many event streaming and messaging scenarios, the event or message payload co
An event producer uses a schema to serialize event payload and publish it to an event broker such as Event Hubs. Event consumers read event payload from the broker and de-serialize it using the same schema. So, both producers and consumers can validate the integrity of the data with the same schema.

## What is Azure Schema Registry?

**Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationships between schemas through a grouping construct (schema groups). With schema-driven serialization frameworks like Apache Avro, moving serialization metadata into shared schemas can also help with **reducing the per-message overhead**. That's because each message won't need to have the metadata (type information and field names) as is the case with tagged formats such as JSON.
The information flow when you use schema registry is the same for all protocols
The following diagram shows how the information flows when event producers and consumers use Schema Registry with the **Kafka** protocol.

### Producer
The following diagram shows how the information flows when event producers and c
An Event Hubs namespace now can host schema groups alongside event hubs (or Kafka topics). It hosts a schema registry and can have multiple schema groups. In spite of being hosted in Azure Event Hubs, the schema registry can be used universally with all Azure messaging services and any other message or events broker. Each of these schema groups is a separately securable repository for a set of schemas. Groups can be aligned with a particular application or an organizational unit.

### Schema groups
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
This quickstart shows you how to create an ExpressRoute circuit using the Azure
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Review the [prerequisites](expressroute-prerequisites.md) and [workflows](expressroute-workflows.md) before you begin configuration.
-* You can [view a video](https://channel9.msdn.com/Blogs/Azure/Azure-ExpressRoute-How-to-create-an-ExpressRoute-circuit) before beginning to better understand the steps.
+* You can view a video before beginning to better understand the steps.
## <a name="create"></a>Create and provision an ExpressRoute circuit
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-erdirect.md
ExpressRoute Direct gives you the ability to directly connect to Microsoft's glo
## Before you begin
-Before using ExpressRoute Direct, you must first enroll your subscription. Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, please do the following via Azure PowerShell:
+Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, please do the following via Azure PowerShell:
1. Sign in to Azure and select the subscription you wish to enroll.

   ```azurepowershell-interactive
expressroute Expressroute Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-prerequisites.md
If you plan to enable Microsoft 365 on ExpressRoute, review the following docume
* [Network and migration planning for Microsoft 365](/microsoft-365/enterprise/network-and-migration-planning)
* [Microsoft 365 integration with on-premises environments](/microsoft-365/enterprise/microsoft-365-integration)
* [Stay up to date with Office 365 IP Address changes](/microsoft-365/enterprise/microsoft-365-ip-web-service)
-* [ExpressRoute on Office 365 advanced training videos](https://channel9.msdn.com/series/aer/)
+* ExpressRoute on Office 365 advanced training videos
## Next steps

* For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md).
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/overview.md
The following limitations exist for certain fields:
## Video overview

The following overview of Azure Blueprints is from Azure Fridays. For video download, visit
-[Azure Fridays - An overview of Azure Blueprints](https://channel9.msdn.com/Shows/Azure-Friday/An-overview-of-Azure-Blueprints)
+[Azure Fridays - An overview of Azure Blueprints](/Shows/Azure-Friday/An-overview-of-Azure-Blueprints)
on Channel 9.

> [!VIDEO https://www.youtube.com/embed/cQ9D-d6KkMY]
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/overview.md
resource. For more information about making existing resources compliant, see
### Video overview

The following overview of Azure Policy is from Build 2018. For slides or video download, visit
-[Govern your Azure environment through Azure Policy](https://channel9.msdn.com/events/Build/2018/THR2030)
+[Govern your Azure environment through Azure Policy](/events/Build/2018/THR2030)
on Channel 9.

> [!VIDEO https://www.youtube.com/embed/dxMaYF2GB7o]
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/reference/supported-tables-resources.md
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.powerplatform/enterprisepolicies
- microsoft.projectbabylon/accounts
- microsoft.providerhubdevtest/regionalstresstests
-- Microsoft.Purview/Accounts (Purview accounts)
+- Microsoft.Purview/Accounts (Azure Purview accounts)
- Microsoft.Quantum/Workspaces (Quantum Workspaces)
- Microsoft.RecommendationsService/accounts (Intelligent Recommendations Accounts)
- Microsoft.RecommendationsService/accounts/modeling (Modeling)
hdinsight Interactive Query Troubleshoot View Time Out https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/interactive-query/interactive-query-troubleshoot-view-time-out.md
This article describes troubleshooting steps and possible resolutions for issues
When running certain queries from the Apache Hive view, the following error may be encountered:

```
-result fetch timed out
+Result fetch timed out
+ java.util.concurrent.TimeoutException: deadline passed
+ at akka.actor.dsl.Inbox$InboxActor$$anonfun$receive$1.applyOrElse(Inbox.scala:117)
+ at scala.PartialFunction$AndThen.applyOrElse(PartialFunction.scala:189)
+ at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
+ at akka.actor.dsl.Inbox$InboxActor.aroundReceive(Inbox.scala:62)
+ at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
+ at akka.actor.ActorCell.invoke(ActorCell.scala:487)
+ at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
+ at akka.dispatch.Mailbox.run(Mailbox.scala:220)
+ at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
+ at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
+ at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
+ at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
+ at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
```

## Cause
The Hive View default timeout value may not be suitable for the query you are ru
views.request.read.timeout.millis=300000
views.ambari.hive.<HIVE_VIEW_INSTANCE_NAME>.result.fetch.timeout=300000
- The value of `HIVE_VIEW_INSTANCE_NAME` is available at the end of the Hive View URL.
+ The value of `HIVE_VIEW_INSTANCE_NAME` is available by clicking YOUR_USERNAME > Manage Ambari > Views > Names column. Do not use the URL name.
2. Restart the active Ambari server by running the following. If you get an error message saying it's not the active Ambari server, just ssh into the next headnode and repeat this step.

   ```
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Previously updated : 12/21/2021
Last updated : 01/11/2022
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
+## December 2021
+
+### **Features and enhancements**
+
+|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
+| :- | : |
+|Added Publisher to `CapabilityStatement.name` |You can now find the publisher in the capability statement at `CapabilityStatement.name`. [#2319](https://github.com/microsoft/fhir-server/pull/2319) |
+|Log `FhirOperation` linked to anonymous calls to Request metrics |We weren't logging operations that didn't require authentication. We extended the ability to get `FhirOperation` type in `RequestMetrics` for anonymous calls. [#2295](https://github.com/microsoft/fhir-server/pull/2295) |
+
+### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | : |
+|Fixed 500 error when `SearchParameter` Code is null |Fixed an issue where a `SearchParameter` with a null value for Code resulted in a 500 error. Now it results in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) |
+|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we will return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
+|`_sort` can cause `ChainedSearch` to return incorrect results |Previously, the sort options from the chained search's `SearchOption` object were not cleared, causing the sorting options to be passed through to the chained sub-search, where they are not valid. This could result in no results being returned when there should be matches. This bug is now fixed [#2347](https://github.com/microsoft/fhir-server/pull/2347). It addresses GitHub bug [#2344](https://github.com/microsoft/fhir-server/issues/2344). |
+
+
+
## November 2021

### **Features and enhancements**
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
| :- | : |
|Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](../../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. |
-|Added software name and version to capability statement |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Healthcare APIs. The software version will now specify which open source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
+|Added software name and version to capability statement |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Healthcare APIs. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
|Log 500's to `RequestMetric` |Previously, 500s or any unknown/unhandled errors were not getting logged in `RequestMetric`. They're now getting logged [#2240](https://github.com/microsoft/fhir-server/pull/2240). For more information, see [Enable diagnostic settings in Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md) | |Compress continuation tokens |In certain instances, the continuation token was too long to be able to follow the [next link](../../healthcare-apis/azure-api-for-fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). |
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
|Bug fixes |Related information |
| :-- | : |
|Resolved 500 error when the date was passed with a time zone. |This fixes a 500 error when a date with a time zone was passed into a datetime field [#2270](https://github.com/microsoft/fhir-server/pull/2270). |
-|Resolved issue where posting a bundle with incorrect Media Type returned a 500 error. |Previously when posting a search with a key that contains certain characters, a 500 error was returned. This fixes this issue [#2264](https://github.com/microsoft/fhir-server/pull/2264), and it addresses [#2148](https://github.com/microsoft/fhir-server/issues/2148). |
+|Resolved issue where posting a bundle with an incorrect Media Type returned a 500 error. |Previously, when posting a search with a key that contains certain characters, a 500 error was returned. This fixes the issue [#2264](https://github.com/microsoft/fhir-server/pull/2264), and it addresses [#2148](https://github.com/microsoft/fhir-server/issues/2148). |
+ ## October 2021
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/release-notes.md
Previously updated : 12/21/2021
Last updated : 01/11/2022
Azure Healthcare APIs is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Healthcare APIs including the different service types (FHIR service, DICOM service, and IoT connector) that seamlessly work with one another.
+## December 2021
+
+### Azure Healthcare APIs
+
+### **Features and enhancements**
+
+|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
+| :- | : |
+|Quota details for support requests |We've updated the quota details for customer support requests with the latest information. |
+|Local RBAC |We've updated the local RBAC documentation to clarify the use of the secondary tenant and the steps to disable it. |
+|Deploy and configure Healthcare APIs using scripts |We've started the process of providing PowerShell, CLI scripts, and ARM templates to configure app registration and role assignments. Note that scripts for deploying Healthcare APIs will be available after GA. |
+
+### FHIR service
+
+### **Features and enhancements**
+
+|Enhancements | Related information |
+| : | -: |
+|Added Publisher to `CapabilityStatement.name` |You can now find the publisher in the capability statement at `CapabilityStatement.name`. [#2319](https://github.com/microsoft/fhir-server/pull/2319) |
+|Log `FhirOperation` linked to anonymous calls to Request metrics |We weren't logging operations that didn't require authentication. We extended the ability to get `FhirOperation` type in `RequestMetrics` for anonymous calls. [#2295](https://github.com/microsoft/fhir-server/pull/2295) |
+
+### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | : |
+|Fixed 500 error when `SearchParameter` Code is null |Fixed an issue where a `SearchParameter` with a null value for Code resulted in a 500 error. Now it results in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) |
+|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we will return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
+|Handled SQL Timeout issue |If SQL Server timed out, the PUT `/resource{id}` returned a 500 error. Now we handle the 500 error and return a timeout exception with an operation outcome. [#2290](https://github.com/microsoft/fhir-server/pull/2290) |
+
## November 2021

### FHIR service
Azure Healthcare APIs is a set of managed API services based on open standards a
| Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Related information |
| :- | --: |
|Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](./../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. |
-|Added software name and version to capability statement. |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Healthcare APIs. The software version will now specify which open source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
+|Added software name and version to capability statement. |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Healthcare APIs. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
|Compress continuation tokens |In certain instances, the continuation token was too long to be able to follow the [next link](./../healthcare-apis/fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). | |FHIR service autoscale |The [FHIR service autoscale](./fhir/fhir-service-autoscale.md) is designed to provide optimized service scalability automatically to meet customer demands when they perform data transactions in consistent or various workloads at any time. It's available in all regions where the FHIR service is supported. |
Azure Healthcare APIs is a set of managed API services based on open standards a
|Bug fixes | Related information |
| :- | -: |
-|Implemented fix to resolve QIDO paging ordering issues | [#989](https://github.com/microsoft/dicom-server/pull/989) |
+|Implemented fix to resolve QIDO paging-ordering issues | [#989](https://github.com/microsoft/dicom-server/pull/989) |
### **IoT connector**
iot-fundamentals Iot Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-support-help.md
Here are suggestions for where you can get help when developing your Azure IoT s
## Create an Azure support request

<div class='icon is-large'>
- <img alt='Azure support' src='https://docs.microsoft.com/media/logos/logo_azure.svg'>
+ <img alt='Azure support' src='/media/logos/logo_azure.svg'>
</div>

Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
If you can't find an answer to your problem using search, submit a new question
## Post a question on Stack Overflow

<div class='icon is-large'>
- <img alt='Stack Overflow' src='https://docs.microsoft.com/media/logos/logo_stackoverflow.svg'>
+ <img alt='Stack Overflow' src='/media/logos/logo_stackoverflow.svg'>
</div>

For answers to your developer questions from the largest community developer ecosystem, ask your question on Stack Overflow.
If you do submit a new question to Stack Overflow, please use one or more of the
## Stay informed of updates and new releases <div class='icon is-large'>
- <img alt='Stay informed' src='https://docs.microsoft.com/media/common/i_blog.svg'>
+ <img alt='Stay informed' src='/media/common/i_blog.svg'>
</div>

Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=iot).
-News and information about Azure IoT is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/internet-of-things/) and on the [Internet of Things Show on Channel 9](https://channel9.msdn.com/Shows/Internet-of-Things-Show).
+News and information about Azure IoT is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/internet-of-things/) and on the [Internet of Things Show on Channel 9](/Shows/Internet-of-Things-Show).
Also, share your experiences, engage and learn from experts in the [Internet of Things Tech Community](https://techcommunity.microsoft.com/t5/Internet-of-Things-IoT/ct-p/IoT).
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-device-twins.md
In the previous example, the `telemetryConfig` device twin desired and reported
},
```
-2. The device app is notified of the change immediately if connected, or at the first reconnect. The device app then reports the updated configuration (or an error condition using the `status` property). Here is the portion of the reported properties:
+2. The device app is notified of the change immediately if the device is connected. If it's not connected, the device app follows the [device reconnection flow](#device-reconnection-flow) when it connects. The device app then reports the updated configuration (or an error condition using the `status` property). Here is the portion of the reported properties:
```json
"reported": {
In the previous example, the `telemetryConfig` device twin desired and reported
> [!NOTE]
> The preceding snippets are examples, optimized for readability, of one way to encode a device configuration and its status. IoT Hub does not impose a specific schema for the device twin desired and reported properties in the device twins.
->
+
+> [!IMPORTANT]
+> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot-develop/concepts-convention.md#writable-properties).
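For example, under those conventions a device acknowledges a writable-property update by reporting the value together with `ac` (status code), `av` (acknowledged version), and `ad` (description) fields. The snippet below is a hand-written sketch with a hypothetical `targetTemperature` property, not taken from a specific device model:

```json
{
    "desired": {
        "targetTemperature": 21.5,
        "$version": 4
    },
    "reported": {
        "targetTemperature": {
            "value": 21.5,
            "ac": 200,
            "av": 4,
            "ad": "completed"
        }
    }
}
```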
You can use twins to synchronize long-running operations such as firmware updates. For more information on how to use properties to synchronize and track a long running operation across devices, see [Use desired properties to configure devices](tutorial-device-twins.md).
IoT Hub does not preserve desired properties update notifications for disconnect
The device app can ignore all notifications with `$version` less than or equal to the version of the full retrieved document. This approach is possible because IoT Hub guarantees that versions always increment.
-> [!NOTE]
-> This logic is already implemented in the [Azure IoT device SDKs](iot-hub-devguide-sdks.md). This description is useful only if the device app cannot use any of Azure IoT device SDKs and must program the MQTT interface directly.
->
-
## Additional reference material

Other reference topics in the IoT Hub developer guide include:
iot-hub Iot Hub Devguide Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-module-twins.md
In the previous example, the `telemetryConfig` module twin desired and reported
...
```
-2. The module app is notified of the change immediately if connected, or at the first reconnect. The module app then reports the updated configuration (or an error condition using the `status` property). Here is the portion of the reported properties:
+2. The module app is notified of the change immediately if the module is connected. If it's not connected, the module app follows the [module reconnection flow](#module-reconnection-flow) when it connects. The module app then reports the updated configuration (or an error condition using the `status` property). Here is the portion of the reported properties:
```json "reported": {
In the previous example, the `telemetryConfig` module twin desired and reported
> [!NOTE]
> The preceding snippets are examples, optimized for readability, of one way to encode a module configuration and its status. IoT Hub does not impose a specific schema for the module twin desired and reported properties in the module twins.
->
->
+
+> [!IMPORTANT]
+> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot-develop/concepts-convention.md#writable-properties).
## Back-end operations

The solution back end operates on the module twin using the following atomic operations, exposed through HTTPS:
Module twin desired and reported properties do not have ETags, but have a `$vers
Versions are also useful when an observing agent (such as the module app observing the desired properties) must reconcile races between the result of a retrieve operation and an update notification. The section [Device reconnection flow](iot-hub-devguide-device-twins.md#device-reconnection-flow) provides more information.
+## Module reconnection flow
+
+IoT Hub does not preserve desired properties update notifications for disconnected modules. It follows that a module that is connecting must retrieve the full desired properties document, in addition to subscribing for update notifications. Given the possibility of races between update notifications and full retrieval, the following flow must be ensured:
+
+1. Module app connects to an IoT hub.
+2. Module app subscribes for desired properties update notifications.
+3. Module app retrieves the full document for desired properties.
+
+The module app can ignore all notifications with `$version` less than or equal to the version of the full retrieved document. This approach is possible because IoT Hub guarantees that versions always increment. A minimal sketch of this ordering rule follows.
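In the sketch below, the transport calls are faked stand-ins for your MQTT-level operations; only the `$version` comparison reflects the documented IoT Hub behavior.

```python
# Sketch of the reconnection ordering rule. The "transport" here is faked;
# only the $version comparison mirrors the documented behavior.
baseline_version = 0

def on_desired_patch(patch: dict) -> None:
    """Step 2 handler: apply a desired-properties notification only if it's new."""
    global baseline_version
    if patch["$version"] <= baseline_version:
        return  # stale: already covered by the full document retrieved in step 3
    baseline_version = patch["$version"]
    print("apply patch:", patch)

# Step 3: retrieve the full desired-properties document (faked here).
full_desired = {"telemetryConfig": {"sendFrequency": "5m"}, "$version": 7}
baseline_version = full_desired["$version"]

on_desired_patch({"telemetryConfig": {"sendFrequency": "5m"}, "$version": 7})  # ignored
on_desired_patch({"telemetryConfig": {"sendFrequency": "1m"}, "$version": 8})  # applied
```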
+
## Next steps

To try out some of the concepts described in this article, see the following IoT Hub tutorials:
iot-hub Iot Hub Device Sdk C Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-sdk-c-intro.md
There are a broad range of platforms on which the SDK has been tested (see the [
The following video presents an overview of the Azure IoT SDK for C:
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Azure-IoT-C-SDK-insights/Player]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-IoT-C-SDK-insights/Player]
This article introduces you to the architecture of the Azure IoT device SDK for C. It demonstrates how to initialize the device library, send data to IoT Hub, and receive messages from it. The information in this article should be enough to get started using the SDK, but also provides pointers to additional information about the libraries.
lighthouse Cloud Solution Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/cloud-solution-provider.md
# Azure Lighthouse and the Cloud Solution Provider program
-If you're a [CSP (Cloud Solution Provider)](/partner-center/csp-overview) partner, you can already access the Azure subscriptions created for your customers through the CSP program by using the [Administer On Behalf Of (AOBO)](https://channel9.msdn.com/Series/cspdev/Module-11-Admin-On-Behalf-Of-AOBO) functionality. This access allows you to directly support, configure, and manage your customers' subscriptions.
+If you're a [CSP (Cloud Solution Provider)](/partner-center/csp-overview) partner, you can already access the Azure subscriptions created for your customers through the CSP program by using the Administer On Behalf Of (AOBO) functionality. This access allows you to directly support, configure, and manage your customers' subscriptions.
With [Azure Lighthouse](../overview.md), you can use Azure delegated resource management along with AOBO. This helps improve security and reduces unnecessary access by enabling more granular permissions for your users. It also allows for greater efficiency and scalability, as your users can work across multiple customer subscriptions using a single login in your tenant.
load-testing How To Find Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-find-download-logs.md
Title: Download Apache JMeter logs for troubleshooting
+ Title: Troubleshoot load test errors
-description: Learn how you can troubleshoot Apache JMeter script problems by downloading the Azure Load Testing logs in the Azure portal.
+description: Learn how you can troubleshoot errors during your load test by downloading and analyzing the Apache JMeter logs in the Azure portal.
Previously updated : 11/30/2021
Last updated : 01/14/2022
-# Troubleshoot JMeter problems by downloading Azure Load Testing Preview logs
+# Troubleshoot load test errors by downloading Apache JMeter logs in Azure Load Testing Preview
-In this article, you'll learn how to download the Azure Load Testing Preview logs in the Azure portal to troubleshoot problems with the Apache JMeter script.
+In this article, you'll learn how to download the Apache JMeter logs for Azure Load Testing Preview in the Azure portal. You can use the logging information to troubleshoot problems while the Apache JMeter script runs.
-When you run a load test, the Azure Load Testing test engines execute your Apache JMeter test script. The Apache JMeter log can help you identify both problems in the JMX file and issues that occur during the test execution. For example, the application endpoint might be unavailable, or the JMX file might contain invalid credentials.
+The Apache JMeter log can help you identify problems in your JMX file, or run-time issues that occur while the test is running. For example, the application endpoint might be unavailable, or the JMX file might contain invalid credentials.
+
+When you run a load test, the Azure Load Testing test engines execute your Apache JMeter test script. While your load test is running, Apache JMeter stores detailed logging information in the worker node logs. You can download the JMeter worker node log for your load test run from the Azure portal to help you diagnose load test errors.
> [!IMPORTANT]
> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
When you run a load test, the Azure Load Testing test engines execute your Apach
## Prerequisites

- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- An Azure Load Testing resource that has a completed test run. If you need to create an Azure Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
+- An Azure load testing resource that has a completed test run. If you need to create an Azure load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
## Access and download logs for your load test
In this section, you retrieve and download the Azure Load Testing logs from the
1. On the dashboard, select **Download**, and then select **Logs**.
- :::image type="content" source="media/how-to-find-download-logs/logs.png" alt-text="Screenshot that shows how to download the load test logs from the test result page.":::
+ :::image type="content" source="media/how-to-find-download-logs/logs.png" alt-text="Screenshot that shows how to download the load test logs from the test run details page.":::
- The browser should now start downloading the execution logs as a zipped folder.
+ The browser should now start downloading the JMeter worker node log file *worker.log*.
-1. You can use any extraction tool to extract the zipped folder and access the logging information.
+1. You can use a text editor to open the log file.
:::image type="content" source="media/how-to-find-download-logs/jmeter-log.png" alt-text="Screenshot that shows the JMeter log file content.":::
+ The *worker.log* file can help you diagnose the root cause of a failing load test. In the previous screenshot, you can see that the test failed because a file is missing.
## Next steps

-- For more information about comparing test results, see [Compare multiple test results](./how-to-compare-multiple-test-runs.md).
+- Learn how to [Monitor server-side application metrics](./how-to-update-rerun-test.md).
+
+- Learn how to [Get detailed insights for Azure App Service based applications](./how-to-appservice-insights.md).
-- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- Learn how to [Compare multiple load test runs](./how-to-compare-multiple-test-runs.md).
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview.md
The following list describes just a few example tasks, business processes, and w
* Monitor tweets, analyze the sentiment, and create alerts or tasks for items that need review.
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Go-serverless-Enterprise-integration-with-Azure-Logic-Apps/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Go-serverless-Enterprise-integration-with-Azure-Logic-Apps/player]
Based on the logic app resource type that you choose and create, your logic apps run in multi-tenant Azure Logic Apps, [single-tenant Azure Logic Apps](single-tenant-overview-compare.md), or a dedicated [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md) when accessing an Azure virtual network. To run logic apps in containers, [create single-tenant based logic apps using Azure Arc enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md). For more information, review [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md) and [Resource type and host environment differences for logic apps](#resource-environment-differences).
You might also want to explore other quickstart guides for Azure Logic Apps:
Learn more about the Azure Logic Apps platform with these introductory videos:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Connect-and-extend-your-mainframe-to-the-cloud-with-Logic-Apps/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Connect-and-extend-your-mainframe-to-the-cloud-with-Logic-Apps/player]
## Next steps
machine-learning Deploy With Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-with-resource-manager-template.md
+
+ Title: 'ML Studio (classic): Deploy workspaces with Azure Resource Manager - Azure'
+description: How to deploy a workspace for Machine Learning Studio (classic) using Azure Resource Manager template
+
+ Last updated : 02/05/2018
+
+# Deploy Machine Learning Studio (classic) Workspace Using Azure Resource Manager
+
+**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
++
+Using an Azure Resource Manager deployment template saves you time by giving you a scalable way to deploy interconnected components with a validation and retry mechanism. To set up Machine Learning Studio (classic) Workspaces, for example, you need to first configure an Azure storage account and then deploy your workspace. Imagine doing this manually for hundreds of workspaces. An easier alternative is to use an Azure Resource Manager template to deploy a Studio (classic) Workspace and all its dependencies. This article takes you through this process step-by-step. For a great overview of Azure Resource Manager, see [Azure Resource Manager overview](../../azure-resource-manager/management/overview.md).
++
+## Step-by-step: create a Machine Learning Workspace
+We will create an Azure resource group, then deploy a new Azure storage account and a new Machine Learning Studio (classic) Workspace using a Resource Manager template. Once the deployment is complete, we will print out important information about the workspaces that were created (the primary key, the workspaceID, and the URL to the workspace).
+
+### Create an Azure Resource Manager template
+
+A Machine Learning Workspace requires an Azure storage account to store the dataset linked to it.
+The following template uses the name of the resource group to generate the storage account name and the workspace name. It also uses the storage account name as a property when creating the workspace.
+
+```json
+{
+ "contentVersion": "1.0.0.0",
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "variables": {
+ "namePrefix": "[resourceGroup().name]",
+ "location": "[resourceGroup().location]",
+ "mlVersion": "2016-04-01",
+ "stgVersion": "2015-06-15",
+ "storageAccountName": "[concat(variables('namePrefix'),'stg')]",
+ "mlWorkspaceName": "[concat(variables('namePrefix'),'mlwk')]",
+ "mlResourceId": "[resourceId('Microsoft.MachineLearning/workspaces', variables('mlWorkspaceName'))]",
+ "stgResourceId": "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
+ "storageAccountType": "Standard_LRS"
+ },
+ "resources": [
+ {
+ "apiVersion": "[variables('stgVersion')]",
+ "name": "[variables('storageAccountName')]",
+ "type": "Microsoft.Storage/storageAccounts",
+ "location": "[variables('location')]",
+ "properties": {
+ "accountType": "[variables('storageAccountType')]"
+ }
+ },
+ {
+ "apiVersion": "[variables('mlVersion')]",
+ "type": "Microsoft.MachineLearning/workspaces",
+ "name": "[variables('mlWorkspaceName')]",
+ "location": "[variables('location')]",
+ "dependsOn": ["[variables('stgResourceId')]"],
+ "properties": {
+ "UserStorageAccountId": "[variables('stgResourceId')]"
+ }
+ }
+ ],
+ "outputs": {
+ "mlWorkspaceObject": {"type": "object", "value": "[reference(variables('mlResourceId'), variables('mlVersion'))]"},
+ "mlWorkspaceToken": {"type": "string", "value": "[listWorkspaceKeys(variables('mlResourceId'), variables('mlVersion')).primaryToken]"},
+ "mlWorkspaceWorkspaceID": {"type": "string", "value": "[reference(variables('mlResourceId'), variables('mlVersion')).WorkspaceId]"},
+ "mlWorkspaceWorkspaceLink": {"type": "string", "value": "[concat('https://studio.azureml.net/Home/ViewWorkspace/', reference(variables('mlResourceId'), variables('mlVersion')).WorkspaceId)]"}
+ }
+}
+
+```
+Save this template as the file mlworkspace.json under c:\temp\.
+
+### Deploy the resource group based on the template
+
+* Open PowerShell
+* Install modules for Azure Resource Manager and Azure Service Management
+
+```powershell
+# Install the Azure Resource Manager modules from the PowerShell Gallery (press "A")
+Install-Module Az -Scope CurrentUser
+
+# Install the Azure Service Management modules from the PowerShell Gallery (press "A")
+Install-Module Azure -Scope CurrentUser
+```
+
+ These steps download and install the modules necessary to complete the remaining steps. This only needs to be done once in the environment where you are executing the PowerShell commands.
+
+* Authenticate to Azure
+
+```powershell
+# Authenticate (enter your credentials in the pop-up window)
+Connect-AzAccount
+```
+This step needs to be repeated for each session. Once authenticated, your subscription information should be displayed.
+
+![Azure Account](/articles/marketplace/media/test-drive/azure-subscriptions.png)
+
+Now that we have access to Azure, we can create the resource group.
+
+* Create a resource group
+
+```powershell
+$rg = New-AzResourceGroup -Name "uniquenamerequired523" -Location "South Central US"
+$rg
+```
+
+Verify that the resource group is correctly provisioned. **ProvisioningState** should be "Succeeded."
+The resource group name is used by the template to generate the storage account name. The storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only.
+
+
+* Using the resource group deployment, deploy a new Machine Learning Workspace.
+
+```powershell
+# Deploy the template to the resource group. TemplateFile is the location of the JSON template.
+$rgd = New-AzResourceGroupDeployment -Name "demo" -TemplateFile "C:\temp\mlworkspace.json" -ResourceGroupName $rg.ResourceGroupName
+```
+
+Once the deployment is complete, it is straightforward to access the properties of the workspace you deployed. For example, you can access the primary key token.
+
+```powershell
+# Access Machine Learning Studio (classic) Workspace Token after its deployment.
+$rgd.Outputs.mlWorkspaceToken.Value
+```
+
+Another way to retrieve the tokens of an existing workspace is to use the Invoke-AzResourceAction cmdlet. For example, you can list the primary and secondary tokens of all workspaces.
+
+```powershell
+# List the primary and secondary tokens of all workspaces
+Get-AzResource | Where-Object { $_.ResourceType -like "*MachineLearning/workspaces*" } | ForEach-Object { Invoke-AzResourceAction -ResourceId $_.ResourceId -Action listworkspacekeys -Force }
+```
+After the workspace is provisioned, you can also automate many Machine Learning Studio (classic) tasks using the [PowerShell Module for Machine Learning Studio (classic)](https://aka.ms/amlps).
+
+## Next steps
+
+* Learn more about [authoring Azure Resource Manager Templates](../../azure-resource-manager/templates/syntax.md).
+* Have a look at the [Azure Quickstart Templates Repository](https://github.com/Azure/azure-quickstart-templates).
+* See the [Resource Manager template reference help](/azure/templates/microsoft.machinelearning/allversions).
+
+<!--Link references-->
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-fpga-web-service.md
converted_model.delete()
+ Learn about FPGA and [Azure Machine Learning pricing and costs](https://azure.microsoft.com/pricing/details/machine-learning/).
-+ [Hyperscale hardware: ML at scale on top of Azure + FPGA: Build 2018 (video)](https://channel9.msdn.com/events/Build/2018/BRK3202)
-
-+ [Microsoft FPGA-based configurable cloud (video)](https://channel9.msdn.com/Events/Build/2017/B8063)
++ [Hyperscale hardware: ML at scale on top of Azure + FPGA: Build 2018 (video)](/events/Build/2018/BRK3202)
++ [Project Brainwave for real-time AI](https://www.microsoft.com/research/project/project-brainwave/)
machine-learning How To Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-deployment.md
After the image is successfully built, the system attempts to start a container
Use the info in the [Inspect the Docker log](how-to-troubleshoot-deployment-local.md#dockerlog) article.
+## Container azureml-fe-aci launch fails
+
+When deploying a service to an Azure Container Instance compute target, Azure Machine Learning attempts to create a front-end container named `azureml-fe-aci` for the inference request. If `azureml-fe-aci` crashes, you can view its logs by running `az container logs --name MyContainerGroup --resource-group MyResourceGroup --subscription MySubscription --container-name azureml-fe-aci`. Follow the error message in the logs to fix the issue.
+
+The most common failure for `azureml-fe-aci` is that the provided SSL certificate or key is invalid.
+ ## Function fails: get_model_path()

Often, in the `init()` function in the scoring script, the [Model.get_model_path()](/python/api/azureml-core/azureml.core.model.model#get-model-path-model-name--version-noneworkspace-none-) function is called to locate a model file or a folder of model files in the container. If the model file or folder cannot be found, the function fails. The easiest way to debug this error is to run Python code in the container shell.
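A minimal sketch, assuming a registered model named `my-model` (substitute your own model name):

```python
import logging
from azureml.core.model import Model

logging.basicConfig(level=logging.DEBUG)

# Print the path Azure ML resolves for the model inside the container;
# a failure here reproduces the get_model_path() error raised in init().
print(Model.get_model_path(model_name='my-model'))
```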
Learn more about deployment:
* [How to deploy and where](how-to-deploy-and-where.md)
* [Tutorial: Train & deploy models](tutorial-train-deploy-notebook.md)
-* [How to run and debug experiments locally](./how-to-debug-visual-studio-code.md)
+* [How to run and debug experiments locally](./how-to-debug-visual-studio-code.md)
marketplace Plan Azure Application Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-azure-application-offer.md
Review the following resources as you plan your Azure application offer for the
- [Azure PowerShell](../azure-resource-manager/managed-applications/powershell-samples.md)
- [Managed application solutions](../azure-resource-manager/managed-applications/sample-projects.md)
-The video [Building Solution Templates, and Managed Applications for Azure Marketplace](https://channel9.msdn.com/Events/Build/2018/BRK3603) gives a comprehensive introduction to the Azure application offer type:
+The video [Building Solution Templates, and Managed Applications for Azure Marketplace](/Events/Build/2018/BRK3603) gives a comprehensive introduction to the Azure application offer type:
- What offer types are available
- What technical assets are required
media-services Migrate V 2 V 3 Migration Scenario Based Publishing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-publishing.md
Major changes to the way content is published in v3 API. The new publishing mode
See publishing concepts, tutorials and how to guides below for specific steps.
+## Will v2 streaming locators continue to work after February 2024?
+
+Streaming locators created with the v2 API will continue to work after the v2 API is turned off. Once the Streaming Locator data is created in the Media Services backend database, there is no dependency on the v2 REST API for streaming. We will not remove v2-specific records from the database when v2 is turned off in February 2024.
+
+There are some properties of assets and locators created with v2 that cannot be accessed or updated using the new v3 API. For example, v2 exposes an **Asset Files** API that has no equivalent in the v3 API. This is not a problem for most customers, since it is not a widely used feature, and you can still stream old locators and delete them when they are no longer needed.
+
+After migration, you should avoid making any calls to the v2 API to modify streaming locators or assets.
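For illustration only, a minimal sketch using the v3 Python SDK (`azure-mgmt-media`) with placeholder subscription, resource group, and account names; it enumerates locators, including those originally created with v2, entirely through the v3 API:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices

# Placeholder names; substitute your own subscription, resource group, and account.
client = AzureMediaServices(DefaultAzureCredential(), "<subscription-id>")

# Locators created with the v2 API appear here too and keep streaming;
# delete them with the v3 API once they are no longer needed.
for locator in client.streaming_locators.list("<resource-group>", "<media-services-account>"):
    print(locator.name, locator.streaming_policy_name)
```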
+ ## Publishing concepts, tutorials and how to guides

### Concepts
media-services Samples Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/samples-overview.md
Previously updated : 03/24/2021 Last updated : 01/14/2022
# Media Services v3 samples

[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-This article contains a list of all the samples available for Media Services organized by method and SDK. Samples include .NET, Node.js (TypeScript), Python, Java, and also REST with Postman.
+This article contains a list of all the samples available for Media Services, organized by method and SDK. Samples include .NET, Node.js (TypeScript), Python, Java, and examples using REST with Postman.
## Samples by SDK

You'll find descriptions and links to the samples you may be looking for in each of the tabs.
+## [Node.JS (TypeScript)](#tab/node/)
+
+|Sample|Description|
+|||
+|[Create an account from code](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Account/CreateAccount/create-account.ts)|The sample shows how to create a Media Services account and set the primary storage account, in addition to advanced configuration settings including Key Delivery IP allowlist, Managed Identity, storage auth, and bring your own encryption key.|
+|[Create an account with user assigned managed identity code](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Account/CreateAccount/create-account_with_managed_identity.ts)|The sample shows how to create a Media Services account and set the primary storage account, in addition to advanced configuration settings including Key Delivery IP allowlist, user or system assigned Managed Identity, storage auth, and bring your own encryption key.|
+|[Hello World - list assets](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/HelloWorld-ListAssets/list-assets.ts)|Basic example of how to connect and list assets |
+|[Live streaming with Standard Passthrough](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Standard_Passthrough_Live_Event/index.ts)| Standard passthrough live streaming example. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
+|[Live streaming with Standard Passthrough with Event Hubs](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Standard_Passthrough_Live_Event_with_EventHub/index.ts)| Demonstrates how to use Event Hubs to subscribe to events on the live streaming channel. Events include encoder connections, disconnections, heartbeat, latency, discontinuity, and drift issues. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
+|[Live streaming with Basic Passthrough](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Basic_Passthrough_Live_Event/index.ts)| Shows how to set up the basic passthrough live event if you only need to broadcast a low-cost UGC channel. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
+|[Live streaming with 720P Standard encoding](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/720P_Encoding_Live_Event/index.ts)| Use live encoding in the cloud with the 720P HD adaptive bitrate encoding preset. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
+|[Live streaming with 1080P encoding](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/720P_Encoding_Live_Event/index.ts)| Use live encoding in the cloud with the 1080P HD adaptive bitrate encoding preset. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
+|[Upload and stream HLS and DASH](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/StreamFilesSample/index.ts)| Basic example for uploading a local file or encoding from a source URL. Sample shows how to use storage SDK to download content, and shows how to stream to a player |
+|[Upload and stream HLS and DASH with PlayReady and Widevine DRM](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/StreamFilesWithDRMSample/index.ts)| Demonstrates how to encode and stream using Widevine and PlayReady DRM |
+|[Upload and use AI to index videos and audio](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoAnalytics/index.ts)| Example of using the Video and Audio Analyzer presets to generate metadata and insights from a video or audio file |
+|[Create Transform, use Job preset overrides (v2-to-v3 API migration)](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/CreateTransform_Job_PresetOverride/index.ts)| If you need to submit custom preset jobs to a single queue, use this base sample. It shows how to create a (mostly) empty Transform and then use the preset override property on the Job to submit custom presets to the same transform. This lets you treat the v3 AMS API much like the legacy v2 API job queue.|
+|[Basic Encoding with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264/index.ts)| Shows how to use the standard encoder to encode a source file into H264 format with AAC audio and PNG thumbnails |
+|[Basic Encoding with H264 with Event Hubs/Event Grid](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264%20_with_EventHub/index.ts)| Shows how to use the standard encoder and receive and process Event Grid events from Media Services through an Event Hub. To use this sample, first set up an Event Grid subscription that pushes events into an Event Hub using the Azure portal or CLI. |
+|[Sprite Thumbnail (VTT) in JPG format](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_Sprite_Thumbnail/index.ts)| Shows how to generate a VTT Sprite Thumbnail in JPG format and how to set the columns and number of images. This also shows a speed encoding mode in H264 for a 720P layer. |
+|[Content Aware encoding with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_ContentAware/index.ts)| Example of using the standard encoder with Content Aware encoding to automatically generate the best quality adaptive bitrate streaming set based on an analysis of the source file's contents|
+|[Content Aware encoding Constrained with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_ContentAware_Constrained/index.ts)| Demonstrates how to control the output settings of the Content Aware encoding preset to make the outputs more deterministic to your encoding needs and costs. It still auto-generates the best quality adaptive bitrate streaming set based on an analysis of the source file's contents, but constrains the output to your desired ranges.|
+|[Overlay Image](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_OverlayImage/index.ts)| Shows how to upload an image file and overlay on top of video with output to MP4 container|
+|[Rotate Video](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_Rotate90degrees/index.ts)| Shows how to use the rotation filter to rotate a video by 90 degrees. |
+|[Output to Transport Stream format](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_To_TransportStream/index.ts)| Shows how to use the standard encoder to encode a source file and output to MPEG Transport Stream format using H264 format with AAC audio and PNG thumbnail|
+|[Basic Encoding with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC/index.ts)| Shows how to use the standard encoder to encode a source file into HEVC format with AAC audio and PNG thumbnails |
+|[Content Aware encoding with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC_ContentAware/index.ts)| Example of using the standard encoder with Content Aware encoding to automatically generate the best quality HEVC (H.265) adaptive bitrate streaming set based on an analysis of the source file's contents|
+|[Content Aware encoding Constrained with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC_ContentAware_Constrained/index.ts)| Demonstrates how to control the output settings of the Content Aware encoding preset to make the outputs more deterministic to your encoding needs and costs. It still auto-generates the best quality adaptive bitrate streaming set based on an analysis of the source file's contents, but constrains the output to your desired ranges.|
+|[Bulk encoding from a remote Azure storage account using SAS URLs](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_Bulk_Remote_Storage_Account_SAS/index.ts)| This sample shows how you can point to a remote Azure Storage account using a SAS URL and submit batches of encoding jobs to your account, monitor progress, and continue. You can modify the file extension types to scan for (for example, .mp4 or .mov) and control the batch size submitted. You can also modify the Transform used in the batch operation. This sample demonstrates the use of SAS URLs as ingest sources to a Job input. Make sure to configure the REMOTESTORAGEACCOUNTSAS environment variable in the .env file for this sample to work.|
+| [Video Analytics](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoAnalytics/index.ts)|This sample illustrates how to create a video and audio analyzer transform, upload a video file to an input asset, submit a job with the transform and download the results for verification.|
+| [Audio Analytics basic with per-job language override](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/AudioAnalytics/index.ts)|This sample illustrates how to create an audio analyzer transform using the basic mode. It also shows how you can override the preset language on a per-job basis to avoid creating a transform for every language, and how to upload a media file to an input asset, submit a job with the transform, and download the results for verification.|
+ ## [.NET](#tab/net/)

| Sample | Description |
You'll find description and links to the samples you may be looking for in each
| [HighAvailabilityEncodingStreaming](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/HighAvailabilityEncodingStreaming/) | This sample provides guidance and best practices for a production system using on-demand encoding or analytics. Readers should start with the companion article [High Availability with Media Services and VOD](architecture-high-availability-encoding-concept.md). There is a separate solution file provided for the [HighAvailabilityEncodingStreaming](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/HighAvailabilityEncodingStreaming/README.md) sample. | | [Azure Functions for Media Services](https://github.com/xpouyat/media-services-v3-dotnet-core-functions-integration/tree/main/Functions)|This project contains examples of Azure Functions that connect to Azure Media Services v3 for video processing. You can use Visual Studio 2019 or Visual Studio Code to develop and run the functions. An Azure Resource Manager (ARM) template and a GitHub Actions workflow are provided for the deployment of the Function resources and to enable continuous deployment.|
-## [Node.JS](#tab/node/)
-
-|Sample|Description|
-|||
-|[Create an account from code](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Account/create-account.ts)|The sample shows how to create a Media Services account and set the primary storage account, in addition to advanced configuration settings including Key Delivery IP allowlist, Managed Identity, storage auth, and bring your own encryption key.|
-|[Hello World - list assets](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/HelloWorld-ListAssets/list-assets.ts)|Basic example of how to connect and list assets |
-|[Live streaming with Standard Passthrough](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Standard_Passthrough_Live_Event/index.ts)| Standard passthrough live streaming example. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
-|[Live streaming with Standard Passthrough with Event Hubs](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Standard_Passthrough_Live_Event_with_EventHub/index.ts)| Demonstrates how to use Event Hubs to subscribe to events on the live streaming channel. Events include encoder connections, disconnections, heartbeat, latency, discontinuity, and drift issues. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
-|[Live streaming with Basic Passthrough](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Basic_Passthrough_Live_Event/index.ts)| Shows how to set up the basic passthrough live event if you only need to broadcast a low-cost UGC channel. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
-|[Live streaming with 720P Standard encoding](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/720P_Encoding_Live_Event/index.ts)| Use live encoding in the cloud with the 720P HD adaptive bitrate encoding preset. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
-|[Live streaming with 1080P encoding](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/720P_Encoding_Live_Event/index.ts)| Use live encoding in the cloud with the 1080P HD adaptive bitrate encoding preset. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
-|[Upload and stream HLS and DASH](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/StreamFilesSample/index.ts)| Basic example for uploading a local file or encoding from a source URL. Sample shows how to use storage SDK to download content, and shows how to stream to a player |
-|[Upload and stream HLS and DASH with PlayReady and Widevine DRM](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/StreamFilesWithDRMSample/index.ts)| Demonstrates how to encode and stream using Widevine and PlayReady DRM |
-|[Upload and use AI to index videos and audio](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoAnalytics/index.ts)| Example of using the Video and Audio Analyzer presets to generate metadata and insights from a video or audio file |
-|[Create Transform, use Job preset overrides (v2-to-v3 API migration)](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/CreateTransform_Job_PresetOverride/index.ts)| If you need a workflow where you desire to submit custom preset jobs to a single queue, you can use this base sample that shows how to create a (mostly) empty Transform, and then you can use the preset override property on the Job to submit custom presets to the same transform. This allows you to treat the v3 AMS API a lot more like the legacy v2 API Job queue if you desire.|
-|[Basic Encoding with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264/index.ts)| Shows how to use the standard encoder to encode a source file into H264 format with AAC audio and PNG thumbnails |
-|[Basic Encoding with H264 with Event Hubs/Event Grid](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264%20_with_EventHub/index.ts)| Shows how to use the standard encoder and receive and process Event Grid events from Media Services through an Event Hubs. First set up an Event Grid subscription that pushes events into an Event Hubs using the Azure portal or CLI to use this sample. |
-|[Content Aware encoding with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_ContentAware/index.ts)| Example of using the standard encoder with Content Aware encoding to automatically generate the best quality adaptive bitrate streaming set based on an analysis of the source files contents|
-|[Content Aware encoding Constrained with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_ContentAware_Constrained/index.ts)| Demonstrates how to control the output settings of the Content Aware encoding preset to make the outputs more deterministic to your encoding needs and costs. This will still auto generate the best quality adaptive bitrate streaming set based on an analysis of the source files contents, but constrain the output to your desired ranges.|
-|[Basic Encoding with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC/index.ts)| Shows how to use the standard encoder to encode a source file into HEVC format with AAC audio and PNG thumbnails |
-|[Content Aware encoding with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC_ContentAware/index.ts)| Example of using the standard encoder with Content Aware encoding to automatically generate the best quality HEVC (H.265) adaptive bitrate streaming set based on an analysis of the source files contents|
-|[Content Aware encoding Constrained with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC_ContentAware_Constrained/index.ts)| Demonstrates how to control the output settings of the Content Aware encoding preset to make the outputs more deterministic to your encoding needs and costs. This will still auto generate the best quality adaptive bitrate streaming set based on an analysis of the source files contents, but constrain the output to your desired ranges.|
## [Python](#tab/python)

|Sample|Description|
You'll find description and links to the samples you may be looking for in each
## REST Postman collection
-The [REST Postman](https://github.com/Azure-Samples/media-services-v3-rest-postman) samples include a Postman environment and collection for you to import into the Postman client. The Postman collection samples are recommended for getting familiar with the API structure and how it works with Azure Resource Management (ARM), and the structure of calls from the client SDKs.
+The [REST Postman](https://github.com/Azure-Samples/media-services-v3-rest-postman) samples include a Postman environment and collection for you to import into the Postman client. The Postman collection samples are recommended for getting familiar with the API structure and how it works with Azure Resource Management (ARM), and the structure of calls from the client SDKs.
[!INCLUDE [warning-rest-api-retry-policy.md](./includes/warning-rest-api-retry-policy.md)]
media-services Media Services Protect With Aes128 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-protect-with-aes128.md
To take advantage of dynamic encryption, you need to have an asset that contains
This article is useful to developers who work on applications that deliver protected media. The article shows you how to configure the key delivery service with authorization policies so that only authorized clients can receive encryption keys. It also shows how to use dynamic encryption. For information on how to encrypt content with the Advanced Encryption Standard (AES) for delivery to Safari on macOS, see [this blog post](https://azure.microsoft.com/blog/how-to-make-token-authorized-aes-encrypted-hls-stream-working-in-safari/).
-For an overview of how to protect your media content with AES encryption, see [this video](https://channel9.msdn.com/Shows/Azure-Friday/Azure-Media-Services-Protecting-your-Media-Content-with-AES-Encryption).
## AES-128 dynamic encryption and key delivery service workflow
media-services Media Services Workflow Designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-workflow-designer.md
Day 1 video covers:
* Basic Workflows – "Hello World"
* Creating multiple output MP4 files for use with Azure Media Services streaming
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Azure-Premium-Encoder-Workflow-Designer-Training-Videos-Day-1/player]
->
->
### Day 2
Day 2 video covers:
* Workflows with advanced Logic
* Graph stages
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Azure-Premium-Encoder-Workflow-Designer-Training-Videos-Day-2/player]
->
->
### Day 3
Day 3 video covers:
* Restrictions with the current Encoder
* Q&A
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Azure-Premium-Encoder-Workflow-Designer-Training-Videos-Day-3/player]
->
->
## Need help?

You can open a support ticket by navigating to [New support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest)
mysql Videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/videos.md
This page provides video content for learning about Azure Database for MySQL.
## Overview: Azure Database for PostgreSQL and MySQL
->[!VIDEO https://channel9.msdn.com/Events/Connect/2017/T147/player]
-[Open in Channel 9](https://channel9.msdn.com/Events/Connect/2017/T147)
+>[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T147/player]
+[Open in Channel 9](/Events/Connect/2017/T147)
Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and capabilities of a fully managed service, so you can focus on your apps instead of having to manage a database. Tune in to get a quick overview of the advantages of using the service, and see some of the capabilities in action.
Azure Database for PostgreSQL and Azure Database for MySQL are managed services
## Deep dive on managed service capabilities for MySQL and PostgreSQL
->[!VIDEO https://channel9.msdn.com/Events/Connect/2017/T148/player]
-[Open in Channel 9](https://channel9.msdn.com/Events/Connect/2017/T148)
+>[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T148/player]
+[Open in Channel 9](/Events/Connect/2017/T148)
Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and the capabilities of a fully managed service. Tune in to get a deep dive on how these services work: how we ensure high availability and fast scaling (within seconds), so you can meet your customers' needs. You'll also learn about some of the underlying investments in security and worldwide availability.

## How to get started with the new Azure Database for MySQL service
->[!VIDEO https://channel9.msdn.com/Events/Build/2017/B8045/player]
-[Open in Channel 9](https://channel9.msdn.com/events/Build/2017/B8045)
In this video from the May 2017 Microsoft //Build conference, learn about Microsoft's managed MySQL offering in Azure. The video walks through Microsoft's strategy for supporting open-source database systems in Azure, discusses what it means for you as a developer to develop or deploy applications that use MySQL in Azure, and shows an overview of the service architecture, demonstrating how Azure Database for MySQL integrates with other Azure services such as Web Apps.
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[BT](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)|[Network Transformation Consulting: 1-Hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/bt-americas-inc.network-transformation-consulting);[BT Cloud Connect Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-001?tab=Overview)|[BT Cloud Connect Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-003?tab=Overview)|[BT Cloud Connect Azure VWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-002?tab=Overview)||| |[BUI](https://www.bui.co.za/)|[a2zManaged Cloud Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.a2zmanagement?tab=Overview)||[BUI Managed Azure vWAN using VMware SD-WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_managed_vwan?tab=Overview)||[BUI CyberSoC](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.buicybersoc_msp?tab=Overview)| |[Coevolve](https://www.coevolve.com/services/azure-networking-services/)|||[Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.coevolve-managed-azure-vwan?tab=Overview);[Managed VMware SD-WAN Virtual Edge](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.managed-vmware-sdwan-edge?tab=Overview)|||
-|[Colt](https://www.colt.net/why-colt/partner-hub/microsoft/)|[Network optimization on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)|||||
+|[Colt](https://cloud.telekom.de/de/infrastruktur/microsoft-azure/azure-networking)|[Network optimization on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)|||||
+|[Deutsche Telekom](https://cloud.telekom.de/de/infrastruktur/microsoft-azure/azure-networking)|[Network connectivity to Azure: 2-Hr assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_netzwerkoptimierung_2_stunden?search=telekom&page=1); [Cloud Transformation with Azure: 1-Day Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_cloudtransformation_1_tag?search=telekom&page=1)|[Managed ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_intraselect_cloud_connect_implementation?search=telekom&page=1)|||[Azure Networking and Security: 1-Day Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_netzwerke_und_sicherheit_1_tag?search=telekom&page=1); [Intraselect SecureConnect: 1-Week Implementation](https://appsource.microsoft.com/de-de/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_intraselect_secure_connect_implementation?tab=Overview)|
|[Equinix](https://www.equinix.com/)|Cloud Optimized WAN Workshop|[ExpressRoute Connectivity Strategy Workshop](https://www.equinix.se/resources/data-sheets/expressroute-strategy-workshop); [Equinix Cloud Exchange Fabric](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.equinix_ecx_fabric?tab=Overview)|||| |[Federated Wireless](https://www.federatedwireless.com/caas/)||||[Federated Wireless Connectivity-as-a-Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/federatedwireless1580839623708.fw_caas?tab=Overview)| |[HCL](https://www.hcltech.com/)|[HCL Cloud Network Transformation- One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.clo?tab=Overview)|[1-Hour Briefing of HCL Azure ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazureexpressroute?tab=Overview)|[HCL Azure Virtual WAN Services - One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazurevitualwan?search=vWAN&page=1)|[HCL Azure Private LTE offering - One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclazureprivatelteoffering)|
Use the links in this section for more information about managed cloud networkin
|[Zertia](https://zertia.es/)||[Express Route – Intercloud connectivity](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-inter-conect-of103?tab=Overview)|[Enterprise Connectivity Suite - Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-vw-suite-of101?tab=Overview); [Manage Virtual WAN – SD-WAN Fortinet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-fortinet-of101?tab=Overview); [Manage Virtual WAN – SD-WAN Cisco Meraki](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-cisco-of101?tab=Overview); [Manage Virtual WAN – SD-WAN Citrix](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-citrix-of101?tab=Overview);|||
-[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/cognizant-digital-systems-technology/cloud-enablement-services); [Deutsche Telekom](https://www.telekom.com/en/media/media-information/archive/deutsche-telekom-offers-managed-network-services-for-microsoft-azure-598406); [InterCloud](https://intercloud.com/partners/microsoft-azure/); [KINX](https://www.kinx.net/service/cloud/?lang=en); [Netfosys](https://www.netfosys.com/services/azure-networking-services/); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute);
+[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/cognizant-digital-systems-technology/cloud-enablement-services); [InterCloud](https://intercloud.com/partners/microsoft-azure/); [KINX](https://www.kinx.net/service/cloud/?lang=en); [Netfosys](https://www.netfosys.com/services/azure-networking-services/); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute);
## <a name="expressroute"></a>ExpressRoute partners
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/nodejs-use-node-modules-azure-apps.md
Now that you understand how to use Node.js modules with Azure, learn how to [spe
For more information, see the [Node.js Developer Center](/azure/developer/javascript/). [specify the Node.js version]: ./app-service/overview.md
-[How to use the Azure Command-Line Interface for Mac and Linux]:cli-install-nodejs.md
-[Custom Website Deployment Scripts with Kudu]: https://channel9.msdn.com/Shows/Azure-Friday/Custom-Web-Site-Deployment-Scripts-with-Kudu-with-David-Ebbo
+[How to use the Azure Command-Line Interface for Mac and Linux]:cli-install-nodejs.md
object-anchors Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/best-practices.md
We recommend trying some of these steps to get the best results.
## Detection
-> [!VIDEO https://channel9.msdn.com/Shows/Docs-Mixed-Reality/Azure-Object-Anchors-Detection-and-Alignment-Best-Practices/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Docs-Mixed-Reality/Azure-Object-Anchors-Detection-and-Alignment-Best-Practices/player]
- The provided runtime SDK requires a user-provided search region to search for and detect the physical object(s). The search region could be a bounding box, a sphere, a view frustum, or any combination of them. To avoid a false detection,
open-datasets Dataset Boston Safety https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-boston-safety.md
Sample not available for this platform/package combination.
```
# This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import BostonSafety
from datetime import datetime
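# A hedged continuation (the dates are illustrative placeholders): request a
# bounded slice of the dataset and load it as a Spark DataFrame on the cluster.
end_date = datetime(2016, 1, 1)
start_date = datetime(2015, 5, 1)

safety = BostonSafety(start_date=start_date, end_date=end_date)
safety_df = safety.to_spark_dataframe()
display(safety_df.limit(10))
```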
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-high-availability.md
Previously updated : 11/15/2021 Last updated : 01/12/2022
# High availability in Azure Database for PostgreSQL – Hyperscale (Citus)
To take advantage of HA on the coordinator node, database applications need to
detect and retry dropped connections and failed transactions. The newly promoted coordinator will be accessible with the same connection string.
+## High availability states
+ Recovery can be broken into three stages: detection, failover, and full
-recovery. Hyperscale (Citus) runs periodic health checks on every node, and after four
-failed checks it determines that a node is down. Hyperscale (Citus) then promotes a
-standby to primary node status (failover), and provisions a new standby-to-be.
-Streaming replication begins, bringing the new node up-to-date. When all data
-has been replicated, the node has reached full recovery.
+recovery. Hyperscale (Citus) runs periodic health checks on every node, and
+after four failed checks it determines that a node is down. Hyperscale (Citus)
+then promotes a standby to primary node status (failover), and provisions a new
+standby-to-be. Streaming replication begins, bringing the new node up to date.
+When all data has been replicated, the node has reached full recovery.
+
+Hyperscale (Citus) displays its failover progress state on the Overview page
+for server groups in the Azure portal.
+
+* **Healthy**: HA is enabled and the node is fully replicated to its standby.
+* **Failover in progress**: A failure was detected on the primary node and
+ a failover to standby was initiated. This state will transition into
+ **Creating standby** once failover to the standby node is completed, and the
+ standby becomes the new primary.
+* **Creating standby**: The previous standby was promoted to primary, and a
+ new standby is being created for it. When the new standby is ready, this
+ state will transition into **Replication in progress**.
+* **Replication in progress**: The new standby node is provisioned and data
+ synchronization is in progress. Once all data is replicated to the new
+ standby, synchronous replication will be enabled between the primary and
+ standby nodes, and the nodes' state will transition back to **Healthy**.
+* **No**: HA is not enabled on this node.
-### Next steps
+## Next steps
- Learn how to [enable high availability](howto-high-availability.md) in a Hyperscale (Citus) server
postgresql Concepts Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-server-group.md
+
+ Title: Server group - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: What is a server group in Azure Database for PostgreSQL - Hyperscale (Citus)
+ Last updated : 01/13/2022
+# Hyperscale (Citus) server group
+
+## Nodes
+
+The Azure Database for PostgreSQL - Hyperscale (Citus) deployment option allows
+PostgreSQL servers (called nodes) to coordinate with one another in a "server
+group." The server group's nodes collectively hold more data and use more CPU
+cores than would be possible on a single server. The architecture also allows
+the database to scale by adding more nodes to the server group.
+
+To learn more about the types of Hyperscale (Citus) nodes, see [nodes and
+tables](concepts-nodes.md).
+
+### Node status
+
+Hyperscale (Citus) displays the status of nodes in a server group on the
+Overview page in the Azure portal. Each node can have one of these status
+values:
+
+* **Provisioning**: Initial node provisioning, either as a part of its server
+ group provisioning, or when a worker node is added.
+* **Available**: Node is in a healthy state.
+* **Need attention**: An issue is detected on the node. The node is attempting
+ to self-heal. If self-healing fails, an issue gets put in the queue for our
+ engineers to investigate.
+* **Dropping**: Server group deletion started.
+* **Disabled**: The server group's Azure subscription is in a Disabled
+ state. For more information about subscription states, see [this
+ page](../../cost-management-billing/manage/subscription-states.md).
+
+## Tiers
+
+The basic tier in Azure Database for PostgreSQL - Hyperscale (Citus) is a
+simple way to create a small server group that you can scale later. While
+server groups in the standard tier have a coordinator node and at least two
+worker nodes, the basic tier runs everything in a single database node.
+
+Other than using fewer nodes, the basic tier has all the features of the
+standard tier. Like the standard tier, it supports high availability, read
+replicas, and columnar table storage, among other features.
+
+### Choosing basic vs standard tier
+
+The basic tier can be an economical and convenient deployment option for
+initial development, testing, and continuous integration. It uses a single
+database node and presents the same SQL API as the standard tier. You can test
+applications with the basic tier and later [graduate to the standard
+tier](howto-scale-grow.md#add-worker-nodes) with confidence that the
+interface remains the same.
+
+The basic tier is also appropriate for smaller workloads in production. There
+is room to scale vertically *within* the basic tier by increasing the number of
+server vCores.
+
+When greater scale is required right away, use the standard tier. Its smallest
+allowed server group has one coordinator node and two workers. You can choose
+to use more nodes based on your use-case, as described in our [initial
+sizing](howto-scale-initial.md) how-to.
+
+## Next steps
+
+* Learn to [provision the basic tier](quickstart-create-basic-tier.md)
+* When you're ready, see [how to graduate](howto-scale-grow.md#add-worker-nodes) from the basic tier to the standard tier
+* The [columnar storage](concepts-columnar.md) option is available in both the basic and standard tier
postgresql Videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/videos.md
This page provides video content for learning about Azure Database for PostgreSQ
## Overview: Azure Database for PostgreSQL and MySQL
->[!VIDEO https://channel9.msdn.com/Events/Connect/2017/T147/player]
-[Open in Channel 9](https://channel9.msdn.com/Events/Connect/2017/T147)
+>[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T147/player]
+[Open in Channel 9](/Events/Connect/2017/T147)
Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and capabilities of a fully managed service, so you can focus on your apps instead of having to manage a database. Tune in to get a quick overview of the advantages of using the service, and see some of the capabilities in action.
Azure Database for PostgreSQL and Azure Database for MySQL are managed services
## Deep dive on managed service capabilities for MySQL and PostgreSQL
->[!VIDEO https://channel9.msdn.com/Events/Connect/2017/T148/player]
-[Open in Channel 9](https://channel9.msdn.com/Events/Connect/2017/T148)
+>[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T148/player]
+[Open in Channel 9](/Events/Connect/2017/T148)
Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and the capabilities of a fully managed service. Tune in to get a deep dive on how these services work: how we ensure high availability and fast scaling (within seconds), so you can meet your customers' needs. You'll also learn about some of the underlying investments in security and worldwide availability.

## Develop an intelligent analytics app with PostgreSQL
->[!VIDEO https://channel9.msdn.com/Events/Connect/2017/T149/player]
-[Open in Channel 9](https://channel9.msdn.com/Events/Connect/2017/T149)
+>[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T149/player]
+[Open in Channel 9](/Events/Connect/2017/T149)
Azure Database for PostgreSQL brings together the community edition database engine and the capabilities of a fully managed service, so you can focus on your apps instead of having to manage a database. Tune in to see in action how easy it is to create new experiences, like adding Cognitive Services to your apps, by virtue of being on Azure.

## How to get started with the new Azure Database for PostgreSQL service
->[!VIDEO https://channel9.msdn.com/Events/Build/2017/B8046/player]
-[Open in Channel 9](https://channel9.msdn.com/events/Build/2017/B8046)
In this video from the 2017 Microsoft //Build conference, learn from two early-adopter customers how they've used the Azure Database for PostgreSQL service to innovate faster. Learn how they migrated to the service and hear about their next steps in application development. The video walks through some of the key service features and discusses how you, as a developer, can migrate your existing applications or develop new applications that use this managed PostgreSQL service in Azure.
purview Abap Functions Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/abap-functions-deployment-guide.md
Last updated 12/20/2021
# SAP ABAP function module deployment guide
-When you scan [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md) sources in Azure Purview, you need to create the dependent ABAP function module in your SAP server. Purview invokes this function module to extract the metadata from your SAP system during scan.
+When you scan [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md) sources in Azure Purview, you need to create the dependent ABAP function module in your SAP server. Azure Purview invokes this function module to extract the metadata from your SAP system during scan.
This document details the steps required to deploy this module.

## Prerequisites
## Prerequisites
-Download the SAP ABAP function module source code from Purview Studio. After you register a source for [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md), you can find a download link on top as follows.
+Download the SAP ABAP function module source code from Azure Purview Studio. After you register a source for [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md), you can find a download link at the top of the page, as follows.
## Deployment of the Module
When the module has been created, specify the following information:
3. Navigate to the **Source code** tab. There are two ways to deploy code for the function:
- a. From the main menu, upload the text file you downloaded from Purview Studio as described in [Prerequisites](#prerequisites). To do so, select **Utilities**, **More Utilities**, then **Upload/Download**, then **Upload**.
+ a. From the main menu, upload the text file you downloaded from Azure Purview Studio as described in [Prerequisites](#prerequisites). To do so, select **Utilities**, **More Utilities**, then **Upload/Download**, then **Upload**.
b. Alternatively, open the file, copy its content and paste into **Source code** area.
purview Apply Classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/apply-classifications.md
This article discusses how to apply classifications on assets.
## Introduction
-Classifications can be system or custom types. System classifications are present in Purview by default. Custom classifications can be created based on a regular expression pattern. Classifications can be applied to assets either automatically or manually.
+Classifications can be system or custom types. System classifications are present in Azure Purview by default. Custom classifications can be created based on a regular expression pattern. Classifications can be applied to assets either automatically or manually.
This document explains how to apply classifications to your data.
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/asset-insights.md
Title: Asset insights on your data in Azure Purview
-description: This how-to guide describes how to view and use Purview Insights asset reporting on your data.
+description: This how-to guide describes how to view and use Azure Purview Insights asset reporting on your data.
Last updated 09/27/2021
# Asset insights on your data in Azure Purview
-This how-to guide describes how to access, view, and filter Purview Asset insight reports for your data.
+This how-to guide describes how to access, view, and filter Azure Purview Asset insight reports for your data.
> [!IMPORTANT]
> Azure Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
This how-to guide describes how to access, view, and filter Purview Asset insigh
In this how-to guide, you'll learn how to:

> [!div class="checklist"]
-> * View insights from your Purview account.
+> * View insights from your Azure Purview account.
> * Get a bird's eye view of your data.
> * Drill down for more asset count details.

## Prerequisites
-Before getting started with Purview insights, make sure that you've completed the following steps:
+Before getting started with Azure Purview insights, make sure that you've completed the following steps:
* Set up your Azure resources and populate the account with data.
Before getting started with Purview insights, make sure that you've completed th
For more information, see [Manage data sources in Azure Purview](manage-data-sources.md).
-## Use Purview Asset Insights
+## Use Azure Purview Asset Insights
In Azure Purview, you can register and scan source types. Once the scan is complete, you can view the asset distribution in Asset Insights, which tells you the state of your data estate by classification and resource sets. It also tells you if there is any change in data size.
In Azure Purview, you can register and scan source types. Once the scan is compl
1. Navigate to your Azure Purview resource in the Azure portal.
-1. On the **Overview** page, in the **Get Started** section, select the **Open Purview Studio** tile.
+1. On the **Overview** page, in the **Get Started** section, select the **Open Azure Purview Studio** tile.
- :::image type="content" source="./media/asset-insights/portal-access.png" alt-text="Launch Purview from the Azure portal":::
+ :::image type="content" source="./media/asset-insights/portal-access.png" alt-text="Launch Azure Purview from the Azure portal":::
-1. On the Purview **Home** page, select **Insights** on the left menu.
+1. On the Azure Purview **Home** page, select **Insights** on the left menu.
:::image type="content" source="./media/asset-insights/view-insights.png" alt-text="View your insights in the Azure portal":::
-1. In the **Insights** area, select **Assets** to display the Purview **Asset insights** report.
+1. In the **Insights** area, select **Assets** to display the Azure Purview **Asset insights** report.
### View Asset Insights
purview Catalog Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-asset-details.md
Last updated 09/27/2021
-# View, edit and delete assets in Purview catalog
+# View, edit and delete assets in Azure Purview catalog
This article discusses how you can view your assets and their relevant details. It also describes how you can edit and delete assets from your catalog. ## Prerequisites - Set up your data sources and scan the assets into your catalog.
-- *Or* Use the Purview Atlas APIs to ingest assets into the catalog.
+- *Or* Use the Azure Purview Atlas APIs to ingest assets into the catalog.
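For the API route, a minimal sketch of creating an entity through the Atlas v2 endpoint is shown below. The account name, type name, and qualified name are illustrative assumptions, not values from this article:

```bash
# Acquire a token for the Purview data plane (resource: https://purview.azure.net).
TOKEN=$(az account get-access-token --resource https://purview.azure.net --query accessToken -o tsv)

# Create or update a single entity via the Atlas v2 entity API.
# The type name and qualified name below are placeholders.
curl -X POST "https://<your-account>.purview.azure.com/catalog/api/atlas/v2/entity" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "entity": {
          "typeName": "azure_blob_path",
          "attributes": {
            "qualifiedName": "https://contosostorage.blob.core.windows.net/data/sales.csv",
            "name": "sales.csv"
          }
        }
      }'
```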
## Viewing asset details
-You can discover your assets in Purview by either:
+You can discover your assets in Azure Purview by either:
- [Browsing the Azure Purview Data Catalog](how-to-browse-catalog.md) - [Searching the Azure Purview Data Catalog](how-to-search-catalog.md)
If you edit an asset by adding a description, asset level classification, glossa
If you make some column level updates, like adding a description, column level classification, or glossary term, then subsequent scans will also update the asset schema (new columns and classifications will be detected by the scanner in subsequent scan runs).
-Even on edited assets, after a scan Azure Purview will reflect the truth of the source system. For example: if you edit a column and it's deleted from the source, it will be deleted from your asset in Purview.
+Even on edited assets, after a scan Azure Purview will reflect the truth of the source system. For example: if you edit a column and it's deleted from the source, it will be deleted from your asset in Azure Purview.
>[!NOTE] > If you update the **name or data type of a column** in an Azure Purview asset, later scans **will not** update the asset schema. New columns and classifications **will not** be detected.
You can delete an asset by selecting the delete icon under the name of the asset
### Delete behavior explained
-Any asset you delete using the delete button is permanently deleted in Azure Purview. However, if you run a **full scan** on the source from which the asset was ingested into the catalog, then the asset is reingested and you can discover it using the Purview catalog.
+Any asset you delete using the delete button is permanently deleted in Azure Purview. However, if you run a **full scan** on the source from which the asset was ingested into the catalog, then the asset is reingested and you can discover it using the Azure Purview catalog.
-If you have a scheduled scan (weekly or monthly) on the source, the **deleted asset will not get re-ingested** into the catalog unless the asset is modified by an end user since the previous run of the scan. For example, if a SQL table was deleted from Purview, but after the table was deleted a user added a new column to the table in SQL, at the next scan the asset will be rescanned and ingested into the catalog.
+If you have a scheduled scan (weekly or monthly) on the source, the **deleted asset will not get re-ingested** into the catalog unless the asset is modified by an end user since the previous run of the scan. For example, if a SQL table was deleted from Azure Purview, but after the table was deleted a user added a new column to the table in SQL, at the next scan the asset will be rescanned and ingested into the catalog.
-If you delete an asset, only that asset is deleted. Purview does not currently support cascaded deletes. For example, if you delete a storage account asset in your catalog - the containers, folders and files within them are not deleted.
+If you delete an asset, only that asset is deleted. Azure Purview does not currently support cascaded deletes. For example, if you delete a storage account asset in your catalog, the containers, folders, and files within it are not deleted.
## Next steps
purview Catalog Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-conditional-access.md
+
+ Title: Configure Azure AD Conditional Access for Azure Purview
+description: This article describes how to configure Azure AD Conditional Access for Azure Purview
+++++ Last updated : 01/14/2022
+# Customer intent: As an identity and security admin, I want to set up Azure Active Directory Conditional Access for Azure Purview, for secure access.
++
+# Conditional Access with Azure Purview
+
+[Azure Purview](/overview.md) supports Azure Active Directory (Azure AD) Conditional Access.
+
+The following steps show how to configure Azure Purview to enforce a Conditional Access policy.
+
+## Prerequisites
+
+- When multi-factor authentication is enabled, you must complete multi-factor authentication to sign in to Azure Purview Studio.
+
+## Configure conditional access
+
+1. Sign in to the Azure portal, select **Azure Active Directory**, and then select **Conditional Access**. For more information, see [Azure Active Directory Conditional Access technical reference](../active-directory/conditional-access/concept-conditional-access-conditions.md).
+
+ :::image type="content" source="media/catalog-conditional-access/conditional-access-blade.png" alt-text="Screenshot that shows Conditional Access blade" lightbox="media/catalog-conditional-access/conditional-access-blade.png":::
+
+2. In the **Conditional Access-Policies** blade, click **New policy**, provide a name, and then click **Configure rules**.
+3. Under **Assignments**, select **Users and groups**, check **Select users and groups**, and then select the user or group for Conditional Access. Click **Select**, and then click **Done** to accept your selection.
+
+ :::image type="content" source="media/catalog-conditional-access/select-users-and-groups.png" alt-text="Screenshot that shows User and Group selection" lightbox="media/catalog-conditional-access/select-users-and-groups.png":::
+
+4. Select **Cloud apps**, and then click **Select apps**. You see all apps available for Conditional Access. Select **Azure Purview**, click **Select** at the bottom, and then click **Done**.
+
+ :::image type="content" source="media/catalog-conditional-access/select-azure-purview.png" alt-text="Screenshot that shows Applications selection" lightbox="media/catalog-conditional-access/select-azure-purview.png":::
+
+5. Select **Access controls**, select **Grant**, and then check the policy you want to apply. For this example, we select **Require multi-factor authentication**.
+
+ :::image type="content" source="media/catalog-conditional-access/grant-access.png" alt-text="Screenshot that shows Grant access tab" lightbox="media/catalog-conditional-access/grant-access.png":::
+
+6. Set **Enable policy** to **On** and click **Create**.
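The equivalent policy can also be scripted. The following is a hedged sketch using `az rest` against the Microsoft Graph conditional access API; the group object ID and the Azure Purview application ID are placeholders you'd look up in your own tenant, and the call requires sufficient Graph permissions:

```azurecli
# Create a Conditional Access policy requiring MFA for Azure Purview.
# Both IDs below are placeholders.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "Require MFA for Azure Purview",
    "state": "enabled",
    "conditions": {
      "clientAppTypes": ["all"],
      "users": { "includeGroups": ["<group-object-id>"] },
      "applications": { "includeApplications": ["<azure-purview-app-id>"] }
    },
    "grantControls": { "operator": "OR", "builtInControls": ["mfa"] }
  }'
```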
+
+## Next steps
+
+- [Use Azure Purview Studio](/use-purview-studio.md)
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-lineage-user-guide.md
This article provides an overview of the data lineage features in Azure Purview
One of the platform features of Azure Purview is the ability to show the lineage between datasets created by data processes. Systems like Data Factory, Data Share, and Power BI capture the lineage of data as it moves. Custom lineage reporting is also supported via Atlas hooks and REST API. ## Lineage collection
- Metadata collected in Azure Purview from enterprise data systems are stitched across to show an end to end data lineage. Data systems that collect lineage into Purview are broadly categorized into following three types.
+ Metadata collected in Azure Purview from enterprise data systems is stitched together to show end-to-end data lineage. Data systems that collect lineage into Azure Purview are broadly categorized into the following three types.
### Data processing system
-Data integration and ETL tools can push lineage in to Azure Purview at execution time. Tools such as Data Factory, Data Share, Synapse, Azure Databricks, and so on, belong to this category of data systems. The data processing systems reference datasets as source from different databases and storage solutions to create target datasets. The list of data processing systems currently integrated with Purview for lineage are listed in below table.
+Data integration and ETL tools can push lineage into Azure Purview at execution time. Tools such as Data Factory, Data Share, Synapse, Azure Databricks, and so on, belong to this category of data systems. The data processing systems reference datasets as sources from different databases and storage solutions to create target datasets. The data processing systems currently integrated with Azure Purview for lineage are listed in the following table.
| Data processing system | Supported scope | | - | |
Data integration and ETL tools can push lineage in to Azure Purview at execution
| Azure Data Share | [Share snapshot](how-to-link-azure-data-share.md) | ### Data storage systems
-Databases & storage solutions such as SQL Server, Teradata, and SAP have query engines to transform data using scripting language. Data lineage from stored procedures is collected in to Purview and stitched with lineage from other systems.
+Databases & storage solutions such as SQL Server, Teradata, and SAP have query engines to transform data using scripting languages. Data lineage from stored procedures is collected into Azure Purview and stitched with lineage from other systems.
| Data storage system | Supported scope | | - | |
Data systems like Azure ML and Power BI report lineage into Azure Purview. These
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWxTAK]
-Lineage in Purview includes datasets and processes. Datasets are also referred to as nodes while processes can be also called edges:
+Lineage in Azure Purview includes datasets and processes. Datasets are also referred to as nodes while processes can be also called edges:
-* **Dataset (Node)**: A dataset (structured or unstructured) provided as an input to a process. For example, a SQL Table, Azure blob, and files (such as .csv and .xml), are all considered datasets. In the lineage section of Purview, datasets are represented by rectangular boxes.
+* **Dataset (Node)**: A dataset (structured or unstructured) provided as an input to a process. For example, a SQL Table, Azure blob, and files (such as .csv and .xml), are all considered datasets. In the lineage section of Azure Purview, datasets are represented by rectangular boxes.
-* **Process (Edge)**: An activity or transformation performed on a dataset is called a process. For example, ADF Copy activity, Data Share snapshot and so on. In the lineage section of Purview, processes are represented by round-edged boxes.
+* **Process (Edge)**: An activity or transformation performed on a dataset is called a process. For example, ADF Copy activity, Data Share snapshot and so on. In the lineage section of Azure Purview, processes are represented by round-edged boxes.
-To access lineage information for an asset in Purview, follow the steps:
+To access lineage information for an asset in Azure Purview, follow the steps:
1. In the Azure portal, go to the [Azure Purview accounts page](https://aka.ms/purviewportal).
-1. Select your Azure Purview account from the list, and then select **Open Purview Studio** from the **Overview** page.
+1. Select your Azure Purview account from the list, and then select **Open Azure Purview Studio** from the **Overview** page.
1. On the Azure Purview Studio **Home** page, search for a dataset name or a process name, such as ADF Copy or Data Flow activity, and then press Enter.
To see column-level lineage of a dataset, go to the **Lineage** tab of the curre
:::image type="content" source="./media/catalog-lineage-user-guide/use-toggle-to-filter-nodes.png" alt-text="Screenshot showing how to use the toggle to filter the list of nodes on the lineage page." lightbox="./media/catalog-lineage-user-guide/use-toggle-to-filter-nodes.png"::: ## Process column lineage
-Data process can take one or more input datasets to produce one or more outputs. In Purview, column level lineage is available for process nodes.
+A data process can take one or more input datasets and produce one or more outputs. In Azure Purview, column-level lineage is available for process nodes.
1. Switch between input and output datasets from a drop-down in the columns panel. 2. Select columns from one or more tables to see the lineage flowing from the input dataset to the corresponding output dataset.
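The same lineage graph can also be read programmatically. A minimal sketch against the Atlas v2 lineage API, assuming a placeholder account name and asset GUID:

```bash
# Acquire a Purview data-plane token, then fetch lineage for one asset.
TOKEN=$(az account get-access-token --resource https://purview.azure.net --query accessToken -o tsv)

# depth controls how many hops are returned; direction can be INPUT, OUTPUT, or BOTH.
curl -s "https://<your-account>.purview.azure.com/catalog/api/atlas/v2/lineage/<asset-guid>?depth=3&direction=BOTH" \
  -H "Authorization: Bearer $TOKEN"
```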
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-managed-vnet.md
Previously updated : 01/11/2022 Last updated : 01/13/2022
-# Customer intent: As a Purview admin, I want to set up Managed Virtual Network and managed private endpoints for my Purview account.
+# Customer intent: As an Azure Purview admin, I want to set up a Managed Virtual Network and managed private endpoints for my Azure Purview account.
# Use a Managed VNet with your Azure Purview account
> [!IMPORTANT] > Currently, Managed Virtual Network and managed private endpoints are available for Azure Purview accounts that are deployed in the following regions:
+> - Australia East
> - Canada Central > - East US 2 > - West Europe
This article describes how to configure Managed Virtual Network and managed priv
### Supported regions Currently, Managed Virtual Network and managed private endpoints are available for Azure Purview accounts that are deployed in the following regions:
+- Australia East
- Canada Central - East US 2 - West Europe
Additionally, you can deploy managed private endpoints for your Azure Key Vault
### Managed Virtual Network
-A Managed Virtual Network in Azure Purview is a virtual network which is deployed and managed by Azure inside the same region as Purview account to allow scanning Azure data sources inside a managed network, without having to deploy and manage any self-hosted integration runtime virtual machines by the customer in Azure.
+A Managed Virtual Network in Azure Purview is a virtual network that is deployed and managed by Azure in the same region as the Azure Purview account, so that you can scan Azure data sources inside a managed network without having to deploy and manage any self-hosted integration runtime virtual machines in Azure.
You can deploy an Azure Managed Integration Runtime within an Azure Purview Managed Virtual Network. From there, the Managed VNet Runtime will leverage private endpoints to securely connect to and scan supported data sources.
-Creating an Managed VNet Runtime within Managed Virtual Network ensures that data integration process is isolated and secure.
+Creating a Managed VNet Runtime within a Managed Virtual Network ensures that the data integration process is isolated and secure.
Benefits of using Managed Virtual Network:
Benefits of using Managed Virtual Network:
> [!Note] > You cannot switch a global Azure integration runtime or self-hosted integration runtime to a Managed VNet Runtime and vice versa.
-A Managed VNet is created for your Azure Purview account when you create a Managed VNet Runtime for the first time in your Purview account. You can't view or manage the Managed VNets.
+A Managed VNet is created for your Azure Purview account when you create a Managed VNet Runtime for the first time. You can't view or manage the Managed VNets.
### Managed private endpoints
-Managed private endpoints are private endpoints created in the Azure Purview Managed Virtual Network establishing a private link to Purview and Azure resources. Azure Purview manages these private endpoints on your behalf.
+Managed private endpoints are private endpoints created in the Azure Purview Managed Virtual Network establishing a private link to Azure Purview and Azure resources. Azure Purview manages these private endpoints on your behalf.
Azure Purview supports private links. Private link enables you to access Azure (PaaS) services (such as Azure Storage, Azure Cosmos DB, Azure Synapse Analytics).
Private endpoint uses a private IP address in the Managed Virtual Network to eff
> To reduce administrative overhead, it's recommended that you create managed private endpoints to scan all supported Azure data sources. > [!WARNING]
-> If an Azure PaaS data store (Blob, Azure Data Lake Storage Gen2, Azure Synapse Analytics) has a private endpoint already created against it, and even if it allows access from all networks, Purview would only be able to access it using a managed private endpoint. If a private endpoint does not already exist, you must create one in such scenarios.
+> If an Azure PaaS data store (Blob, Azure Data Lake Storage Gen2, Azure Synapse Analytics) has a private endpoint already created against it, and even if it allows access from all networks, Azure Purview would only be able to access it using a managed private endpoint. If a private endpoint does not already exist, you must create one in such scenarios.
A private endpoint connection is created in a "Pending" state when you create a managed private endpoint in Azure Purview. An approval workflow is initiated. The private link resource owner is responsible for approving or rejecting the connection.
Before deploying a Managed VNet and Managed VNet Runtime for an Azure Purview ac
1. An Azure Purview account deployed in one of the [supported regions](#supported-regions). 2. From Azure Purview roles, you must be a data curator at root collection level in your Azure Purview account.
-3. From Azure RBAC roles, you must be contributor on the Purview account and data source to approve private links.
+3. From Azure RBAC roles, you must be a contributor on the Azure Purview account and the data source to approve private links.
### Deploy Managed VNet Runtimes > [!NOTE] > The following guide shows how to register and scan an Azure Data Lake Storage Gen 2 using Managed VNet Runtime.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Purview accounts** page and select your _Purview account_.
+1. Go to the [Azure portal](https://portal.azure.com), navigate to the **Azure Purview accounts** page, and select your _Azure Purview account_.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-azure-portal.png" alt-text="Screenshot that shows the Purview account":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-azure-portal.png" alt-text="Screenshot that shows the Azure Purview account":::
-2. **Open Purview Studio** and navigate to the **Data Map --> Integration runtimes**.
+2. **Open Azure Purview Studio** and navigate to the **Data Map --> Integration runtimes**.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-vnet.png" alt-text="Screenshot that shows Purview Data Map menus":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-vnet.png" alt-text="Screenshot that shows Azure Purview Data Map menus":::
3. From the **Integration runtimes** page, select the **+ New** icon to create a new runtime. Select **Azure**, and then select **Continue**.
Before deploying a Managed VNet and Managed VNet Runtime for an Azure Purview ac
:::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-region.png" alt-text="Screenshot that shows to create a Managed VNet Runtime":::
-5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in Purview Studio for creating managed private endpoints for Azure Purview and its Managed Storage Account. Click on each workflow to approve the private endpoint for the corresponding Azure resource.
+5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in Azure Purview Studio for creating managed private endpoints for Azure Purview and its Managed Storage Account. Click on each workflow to approve the private endpoint for the corresponding Azure resource.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-workflows.png" alt-text="Screenshot that shows deployment of a Managed VNet Runtime":::
-6. In Azure portal, from your Purview account resource blade, approve the managed private endpoint. From Managed storage account blade approve the managed private endpoints for blob and queue
+6. In the Azure portal, from your Azure Purview account resource blade, approve the managed private endpoint. From the managed storage account blade, approve the managed private endpoints for blob and queue.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Purview":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Azure Purview":::
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview-approved.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Purview - approved":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview-approved.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Azure Purview - approved":::
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-managed-storage.png" alt-text="Screenshot that shows how to approve a managed private endpoint for managed storage account":::
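The same approvals can be done with the Azure CLI instead of the portal. A hedged sketch, with placeholder resource IDs:

```azurecli
# List pending private endpoint connections on the Azure Purview account,
# then approve one; both IDs below are placeholders.
az network private-endpoint-connection list \
  --id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Purview/accounts/<account-name>"

az network private-endpoint-connection approve \
  --id "<private-endpoint-connection-id>" \
  --description "Approved"
```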
Before deploying a Managed VNet and Managed VNet Runtime for an Azure Purview ac
7. From **Management**, select **Managed private endpoints** to validate that all managed private endpoints are successfully deployed and approved. All private endpoints must be approved.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list.png" alt-text="Screenshot that shows managed private endpoints in Purview":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list.png" alt-text="Screenshot that shows managed private endpoints in Azure Purview":::
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-approved.png" alt-text="Screenshot that shows managed private endpoints in Purview - approved ":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-approved.png" alt-text="Screenshot that shows managed private endpoints in Azure Purview - approved":::
### Deploy managed private endpoints for data sources
For more information, see [Manage data sources in Azure Purview](manage-data-sou
#### Scan data source
-You can use any of the following options to scan data sources using Purview Managed VNet Runtime:
+You can use any of the following options to scan data sources using Azure Purview Managed VNet Runtime:
- [Using Managed Identity](#scan-using-managed-identity) (Recommended) - As soon as the Azure Purview account is created, a system-assigned managed identity (SAMI) is created automatically in the Azure AD tenant. Depending on the type of resource, specific RBAC role assignments are required for the Azure Purview system-assigned managed identity (SAMI) to perform the scans, as sketched below.
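For example, scanning an Azure Storage or ADLS Gen2 source typically requires the **Storage Blob Data Reader** role for the Purview SAMI. A hedged Azure CLI sketch, with placeholder IDs:

```azurecli
# Grant the Purview system-assigned managed identity read access to a
# storage account; the object ID and resource IDs below are placeholders.
az role assignment create \
  --assignee-object-id <purview-sami-object-id> \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```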
You can use any of the following options to scan data sources using Purview Mana
##### Scan using Managed Identity
-To scan a data source using a Managed VNet Runtime and Purview managed identity perform these steps:
+To scan a data source using a Managed VNet Runtime and the Azure Purview managed identity, perform these steps:
-1. Select the **Data Map** tab on the left pane in the Purview Studio.
+1. Select the **Data Map** tab on the left pane in the Azure Purview Studio.
1. Select the data source that you registered.
To scan a data source using a Managed VNet Runtime and Purview managed identity
##### Scan using other authentication options
-You can also use other supported options to scan data sources using Purview Managed Runtime. This requires setting up a private connection to Azure Key Vault where the secret is stored.
+You can also use other supported options to scan data sources using the Azure Purview Managed VNet Runtime. Doing so requires setting up a private connection to the Azure Key Vault where the secret is stored.
To set up a scan using Account Key or SQL Authentication follow these steps:
To set up a scan using Account Key or SQL Authentication follow these steps:
6. Provide a name for the managed private endpoint, and select the Azure subscription and the Azure Key Vault from the drop-down lists. Select **Create**.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-create.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault in Purview Studio":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-create.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault in Azure Purview Studio":::
7. From the list of managed private endpoints, click the newly created managed private endpoint for your Azure Key Vault, and then click **Manage approvals in the Azure portal** to approve the private endpoint in the Azure portal.
To set up a scan using Account Key or SQL Authentication follow these steps:
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-3.png" alt-text="Screenshot that shows managed private endpoints including Azure Key Vault in Azure Purview Studio":::
-10. Select the **Data Map** tab on the left pane in the Purview Studio.
+10. Select the **Data Map** tab on the left pane in the Azure Purview Studio.
11. Select the data source that you registered.
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-permissions.md
Azure Purview uses **Collections** to organize and manage access across its sour
## Collections
-A collection is a tool Azure Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All access to Purview's resources are managed from collections in the Purview account itself.
+A collection is a tool Azure Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All access to Azure Purview's resources is managed from collections in the Azure Purview account itself.
> [!NOTE] > As of November 8th, 2021, ***Insights*** is accessible to Data Curators. Data Readers do not have access to Insights.
Azure Purview uses a set of predefined roles to control who can access what with
|I need to edit the glossary or set up new classification definitions|Data Curator| |I need to view Insights to understand the governance posture of my data estate|Data Curator| |My application's Service Principal needs to push data to Azure Purview|Data Curator|
-|I need to set up scans via the Purview Studio|Data Curator on the collection **or** Data Curator **And** Data Source Administrator where the source is registered|
+|I need to set up scans via the Azure Purview Studio|Data Curator on the collection **or** Data Curator **And** Data Source Administrator where the source is registered|
|I need to enable a Service Principal or group to set up and monitor scans in Azure Purview without allowing them to access the catalog's information |Data Source Admin| |I need to put users into roles in Azure Purview | Collection Admin | ## Understand how to use Azure Purview's roles and collections
-All access control is managed in Purview's collections. Purview's collections can be found in the [Purview Studio](https://web.purview.azure.com/resource/). Open your Purview account in the [Azure portal](https://portal.azure.com) and select the Purview Studio tile on the Overview page. From there, navigate to the data map on the left menu, and then select the 'Collections' tab.
+All access control is managed in Azure Purview's collections, which can be found in the [Azure Purview Studio](https://web.purview.azure.com/resource/). Open your Azure Purview account in the [Azure portal](https://portal.azure.com) and select the Azure Purview Studio tile on the Overview page. From there, navigate to the data map on the left menu, and then select the 'Collections' tab.
-When an Azure Purview account is created, it starts with a root collection that has the same name as the Purview account itself. The creator of the Purview account is automatically added as a Collection Admin, Data Source Admin, Data Curator, and Data Reader on this root collection, and can edit and manage this collection.
+When an Azure Purview account is created, it starts with a root collection that has the same name as the Azure Purview account itself. The creator of the Azure Purview account is automatically added as a Collection Admin, Data Source Admin, Data Curator, and Data Reader on this root collection, and can edit and manage this collection.
-Sources, assets, and objects can be added directly to this root collection, but so can other collections. Adding collections will give you more control over who has access to data across your Purview account.
+Sources, assets, and objects can be added directly to this root collection, but so can other collections. Adding collections will give you more control over who has access to data across your Azure Purview account.
All other users can only access information within the Azure Purview account if they, or a group they're in, are given one of the above roles. This means, when you create an Azure Purview account, no one but the creator can access or use its APIs until they are [added to one or more of the above roles in a collection](how-to-create-and-manage-collections.md#add-role-assignments). Users can only be added to a collection by a collection admin, or through permissions inheritance. The permissions of a parent collection are automatically inherited by its subcollections. However, you can choose to [restrict permission inheritance](how-to-create-and-manage-collections.md#restrict-inheritance) on any collection. If you do this, its subcollections will no longer inherit permissions from the parent and will need to be added directly, though collection admins that are automatically inherited from a parent collection can't be removed.
-You can assign Purview roles to users, security groups and service principals from your Azure Active Directory which is associated with your purview account's subscription.
+You can assign Azure Purview roles to users, security groups, and service principals from the Azure Active Directory that is associated with your Azure Purview account's subscription.
## Assign permissions to your users After creating an Azure Purview account, the first thing to do is create collections and assign users to roles within those collections. > [!NOTE]
-> If you created your Azure Purview account using a service principal, to be able to access the Purview Studio and assign permissions to users, you will need to grant a user collection admin permissions on the root collection.
+> If you created your Azure Purview account using a service principal, you will need to grant a user collection admin permissions on the root collection before they can access the Azure Purview Studio and assign permissions to users.
> You can use [this Azure CLI command](/cli/azure/purview/account#az_purview_account_add_root_collection_admin): > > ```azurecli
-> az purview account add-root-collection-admin --account-name [Purview Account Name] --resource-group [Resource Group Name] --object-id [User Object Id]
+> az purview account add-root-collection-admin --account-name [Azure Purview Account Name] --resource-group [Resource Group Name] --object-id [User Object Id]
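> # A hypothetical filled-in example; the account name, resource group, and object ID are placeholders:
> az purview account add-root-collection-admin --account-name contoso-purview --resource-group purview-rg --object-id 00000000-0000-0000-0000-000000000000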
> ``` > The object-id is optional. For more information and an example, see the [CLI command reference page](/cli/azure/purview/account#az_purview_account_add_root_collection_admin). ### Create collections
-Collections can be customized for structure of the sources in your Purview account, and can act like organized storage bins for these resources. When you're thinking about the collections you might need, consider how your users will access or discover information. Are your sources broken up by departments? Are there specialized groups within those departments that will only need to discover some assets? Are there some sources that should be discoverable by all your users?
+Collections can be customized to the structure of the sources in your Azure Purview account, and can act like organized storage bins for these resources. When you're thinking about the collections you might need, consider how your users will access or discover information. Are your sources broken up by departments? Are there specialized groups within those departments that will only need to discover some assets? Are there some sources that should be discoverable by all your users?
This will inform the collections and subcollections you may need to most effectively organize your data map.
Now that we have a base understanding of collections, permissions, and how they
This is one way an organization might structure their data: Starting with their root collection (Contoso, in this example), collections are organized into regions, and then into departments and subdepartments. Data sources and assets can be added to any one of these collections to organize data resources by region and department, and to manage access control along those lines. There's one subdepartment, Revenue, that has strict access guidelines, so permissions will need to be tightly managed.
-The [data reader role](#roles) can access information within the catalog, but not manage or edit it. So for our example above, adding the Data Reader permission to a group on the root collection and allowing inheritance will give all users in that group reader permissions on Purview sources and assets. This makes these resources discoverable, but not editable, by everyone in that group. [Restricting inheritance](how-to-create-and-manage-collections.md#restrict-inheritance) on the Revenue group will control access to those assets. Users who need access to revenue information can be added separately to the Revenue collection.
+The [data reader role](#roles) can access information within the catalog, but not manage or edit it. So for our example above, adding the Data Reader permission to a group on the root collection and allowing inheritance will give all users in that group reader permissions on Azure Purview sources and assets. This makes these resources discoverable, but not editable, by everyone in that group. [Restricting inheritance](how-to-create-and-manage-collections.md#restrict-inheritance) on the Revenue group will control access to those assets. Users who need access to revenue information can be added separately to the Revenue collection.
Similarly, with the Data Curator and Data Source Admin roles, permissions for those groups will start at the collection where they're assigned and trickle down to subcollections that haven't restricted inheritance. Below we have assigned permissions for several groups at collection levels within the Americas subcollection. :::image type="content" source="./media/catalog-permissions/collection-permissions-example.png" alt-text="Chart showing a sample collections hierarchy broken up by region and department showing permissions distribution." border="true"::: ### Add users to roles
-Role assignment is managed through the collections. Only a user with the [collection admin role](#roles) can grant permissions to other users on that collection. When new permissions need to be added, a collection admin will access the [Purview Studio](https://web.purview.azure.com/resource/), navigate to data map, then the collections tab, and select the collection where a user needs to be added. From the Role Assignments tab they will be able to add and manage users who need permissions.
+Role assignment is managed through the collections. Only a user with the [collection admin role](#roles) can grant permissions to other users on that collection. When new permissions need to be added, a collection admin will access the [Azure Purview Studio](https://web.purview.azure.com/resource/), navigate to data map, then the collections tab, and select the collection where a user needs to be added. From the Role Assignments tab they will be able to add and manage users who need permissions.
For full instructions, see our [how-to guide for adding role assignments](how-to-create-and-manage-collections.md#add-role-assignments). ## Next steps
-Now that you have a base understanding of collections, and access control, follow the guides below to create and manage those collections, or get started with registering sources into your Purview Resource.
+Now that you have a base understanding of collections and access control, follow the guides below to create and manage those collections, or get started with registering sources into your Azure Purview resource.
- [How to create and manage collections](how-to-create-and-manage-collections.md)-- [Purview supported data sources](purview-connector-overview.md)
+- [Azure Purview supported data sources](purview-connector-overview.md)
purview Catalog Private Link Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-account-portal.md
Title: Connect privately and securely to your Purview account
-description: This article describes how you can set up a private endpoint to connect to your Purview account from restricted network.
+ Title: Connect privately and securely to your Azure Purview account
+description: This article describes how you can set up a private endpoint to connect to your Azure Purview account from restricted network.
Last updated 09/27/2021
# Customer intent: As an Azure Purview admin, I want to set up private endpoints for my Azure Purview account for secure access.
-# Connect privately and securely to your Purview account
-In this guide, you will learn how to deploy private endpoints for your Purview account to allow you to connect to your Azure Purview account only from VNets and private networks. To achieve this goal, you need to deploy _account_ and _portal_ private endpoints for your Azure Purview account.
+# Connect privately and securely to your Azure Purview account
+In this guide, you will learn how to deploy private endpoints for your Azure Purview account to allow you to connect to your Azure Purview account only from VNets and private networks. To achieve this goal, you need to deploy _account_ and _portal_ private endpoints for your Azure Purview account.
The Azure Purview _account_ private endpoint is used to add another layer of security by enabling scenarios where only client calls that originate from within the virtual network are allowed to access the Azure Purview account. This private endpoint is also a prerequisite for the portal private endpoint.
Using one of the deployment options from this guide, you can deploy a new Azure
- Deploy new Azure DNS zones using the steps explained further in this guide. - Add required DNS records to existing Azure DNS zones using the steps explained further in this guide. - After completing the steps in this guide, add required DNS A records in your existing DNS servers manually.
-3. Deploy a [new Purview account](#option-1deploy-a-new-azure-purview-account-with-account-and-portal-private-endpoints) with account and portal private endpoints, or deploy account and portal private endpoints for an [existing Purview account](#option-2enable-account-and-portal-private-endpoint-on-existing-azure-purview-accounts).
+3. Deploy a [new Azure Purview account](#option-1deploy-a-new-azure-purview-account-with-account-and-portal-private-endpoints) with account and portal private endpoints, or deploy account and portal private endpoints for an [existing Azure Purview account](#option-2enable-account-and-portal-private-endpoint-on-existing-azure-purview-accounts).
4. [Enable access to Azure Active Directory](#enable-access-to-azure-active-directory) if your private network has network security group rules set to deny for all public internet traffic. 5. After completing this guide, adjust DNS configurations if needed. 6. Validate your network and name resolution from management machine to Azure Purview. ## Option 1 - Deploy a new Azure Purview account with _account_ and _portal_ private endpoints
-1. Go to the [Azure portal](https://portal.azure.com), and then go to the **Purview accounts** page. Select **+ Create** to create a new Azure Purview account.
+1. Go to the [Azure portal](https://portal.azure.com), and then go to the **Azure Purview accounts** page. Select **+ Create** to create a new Azure Purview account.
2. Fill in the basic information, and on the **Networking** tab, set the connectivity method to **Private endpoint**. Set enable private endpoint to **Account and Portal only**.
Using one of the deployment options from this guide, you can deploy a new Azure
:::image type="content" source="media/catalog-private-link/purview-pe-deploy-account-portal.png" alt-text="Screenshot that shows create private endpoint for account and portal page selections.":::
-4. On the **Create a private endpoint** page, for **Purview sub-resource**, choose your location, provide a name for _account_ private endpoint and select **account**. Under **networking**, select your virtual network and subnet, and optionally, select **Integrate with private DNS zone** to create a new Azure Private DNS zone.
+4. On the **Create a private endpoint** page, for **Azure Purview sub-resource**, choose your location, provide a name for the _account_ private endpoint, and select **account**. Under **networking**, select your virtual network and subnet, and optionally, select **Integrate with private DNS zone** to create a new Azure Private DNS zone.
:::image type="content" source="media/catalog-private-link/purview-pe-deploy-account.png" alt-text="Screenshot that shows create account private endpoint page.":::
Using one of the deployment options from this guide, you can deploy a new Azure
5. Select **OK**.
-6. In **Create Purview account** wizard, select **+Add** again to add _portal_ private endpoint.
+6. In the **Create Azure Purview account** wizard, select **+Add** again to add the _portal_ private endpoint.
-7. On the **Create a private endpoint** page, for **Purview sub-resource**,choose your location, provide a name for _portal_ private endpoint and select **portal**. Under **networking**, select your virtual network and subnet, and optionally, select **Integrate with private DNS zone** to create a new Azure Private DNS zone.
+7. On the **Create a private endpoint** page, for **Azure Purview sub-resource**, choose your location, provide a name for the _portal_ private endpoint, and select **portal**. Under **networking**, select your virtual network and subnet, and optionally, select **Integrate with private DNS zone** to create a new Azure Private DNS zone.
:::image type="content" source="media/catalog-private-link/purview-pe-deploy-portal.png" alt-text="Screenshot that shows create portal private endpoint page.":::
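If you prefer scripting over the portal, a hedged CLI sketch of creating the _account_ private endpoint is shown below; all names and the resource ID are placeholders, and the _portal_ endpoint is created the same way with `--group-id portal`:

```azurecli
# Create the account private endpoint for an Azure Purview account.
# Every name and ID below is a placeholder.
az network private-endpoint create \
  --name purview-account-pe \
  --resource-group purview-rg \
  --vnet-name purview-vnet \
  --subnet default \
  --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Purview/accounts/<account-name>" \
  --group-id account \
  --connection-name purview-account-connection
```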
purview Catalog Private Link End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-end-to-end.md
Title: Connect to your Azure Purview and scan data sources privately and securely
-description: This article describes how you can set up a private endpoint to connect to your Purview account and scan data sources from restricted network for an end to end isolation
+description: This article describes how you can set up a private endpoint to connect to your Azure Purview account and scan data sources from restricted network for an end to end isolation
Using one of the deployment options explained further in this guide, you can dep
- Deploy new Azure DNS zones using the steps explained further in this guide. - Add required DNS records to existing Azure DNS zones using the steps explained further in this guide. - After completing the steps in this guide, add required DNS A records in your existing DNS servers manually.
-3. Deploy a [new Purview account](#option-1deploy-a-new-azure-purview-account-with-account-portal-and-ingestion-private-endpoints) with account, portal and ingestion private endpoints, or deploy private endpoints for an [existing Purview account](#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts).
+3. Deploy a [new Azure Purview account](#option-1deploy-a-new-azure-purview-account-with-account-portal-and-ingestion-private-endpoints) with account, portal and ingestion private endpoints, or deploy private endpoints for an [existing Azure Purview account](#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts).
4. [Enable access to Azure Active Directory](#enable-access-to-azure-active-directory) if your private network has network security group rules set to deny for all public internet traffic. 5. Deploy and register [Self-hosted integration runtime](#deploy-self-hosted-integration-runtime-ir-and-scan-your-data-sources) inside the same VNet or a peered VNet where Azure Purview account and ingestion private endpoints are deployed. 6. After completing this guide, adjust DNS configurations if needed.
Using one of the deployment options explained further in this guide, you can dep
## Option 1 - Deploy a new Azure Purview account with _account_, _portal_ and _ingestion_ private endpoints
-1. Go to the [Azure portal](https://portal.azure.com), and then go to the **Purview accounts** page. Select **+ Create** to create a new Azure Purview account.
+1. Go to the [Azure portal](https://portal.azure.com), and then go to the **Azure Purview accounts** page. Select **+ Create** to create a new Azure Purview account.
2. Fill in the basic information, and on the **Networking** tab, set the connectivity method to **Private endpoint**. Set enable private endpoint to **Account, Portal and ingestion**.
Using one of the deployment options explained further in this guide, you can dep
:::image type="content" source="media/catalog-private-link/purview-pe-deploy-end-to-end.png" alt-text="Screenshot that shows create private endpoint end-to-end page selections.":::
-4. On the **Create a private endpoint** page, for **Purview sub-resource**, choose your location, provide a name for _account_ private endpoint and select **account**. Under **networking**, select your virtual network and subnet, and optionally, select **Integrate with private DNS zone** to create a new Azure Private DNS zone.
+4. On the **Create a private endpoint** page, for **Azure Purview sub-resource**, choose your location, provide a name for the _account_ private endpoint, and select **account**. Under **networking**, select your virtual network and subnet, and optionally, select **Integrate with private DNS zone** to create a new Azure Private DNS zone.
:::image type="content" source="media/catalog-private-link/purview-pe-deploy-account.png" alt-text="Screenshot that shows create account private endpoint page.":::
Using one of the deployment options explained further in this guide, you can dep
6. Under the **Account and portal** wizard, select **+Add** again to add the _portal_ private endpoint.
-7. On the **Create a private endpoint** page, for **Purview sub-resource**,choose your location, provide a name for _portal_ private endpoint and select **portal**. Under **networking**, select your virtual network and subnet, and optionally, select **Integrate with private DNS zone** to create a new Azure Private DNS zone.
+7. On the **Create a private endpoint** page, for **Azure Purview sub-resource**, choose your location, provide a name for the _portal_ private endpoint, and select **portal**. Under **networking**, select your virtual network and subnet, and optionally, select **Integrate with private DNS zone** to create a new Azure Private DNS zone.
:::image type="content" source="media/catalog-private-link/purview-pe-deploy-portal.png" alt-text="Screenshot that shows create portal private endpoint page.":::
purview Catalog Private Link Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-faqs.md
Make sure you enable **Allow trusted Microsoft services** to access the resource
No. Connecting to Azure Purview from a public endpoint where **Public network access** is set to **Deny** results in the following error message:
-"Not authorized to access this Purview account. This Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Purview account's private endpoint."
+"Not authorized to access this Azure Purview account. This Azure Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Azure Purview account's private endpoint."
In this case, to open Azure Purview Studio, either use a machine that's deployed in the same virtual network as the Azure Purview portal private endpoint or use a VM that's connected to your CorpNet in which hybrid connectivity is allowed.
No. However, it's expected that the virtual machine running self-hosted integrat
### Why do I receive the following error message when I try to launch Azure Purview Studio from my machine?
-"This Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Purview account's private endpoint."
+"This Azure Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Azure Purview account's private endpoint."
It's likely your Azure Purview account is deployed by using Private Link and public access is disabled on your Azure Purview account. As a result, you have to browse Azure Purview Studio from a virtual machine that has internal network connectivity to Azure Purview.
purview Catalog Private Link Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-name-resolution.md
Title: Configure DNS Name Resolution for private endpoints
-description: This article describes an overview of how you can use a private end point for your Purview account
+description: This article describes an overview of how you can use a private end point for your Azure Purview account
Last updated 01/10/2022
-# Customer intent: As a Purview admin, I want to set up private endpoints for my Purview account, for secure access.
+# Customer intent: As an Azure Purview admin, I want to set up private endpoints for my Azure Purview account, for secure access.
# Configure and verify DNS Name Resolution for Azure Purview private endpoints
The following example shows Azure Purview DNS name resolution from outside the v
The following example shows Azure Purview DNS name resolution from inside the virtual network.
- :::image type="content" source="media/catalog-private-link/purview-name-resolution-private-link.png" alt-text="Screenshot that shows Purview name resolution from inside CorpNet.":::
+ :::image type="content" source="media/catalog-private-link/purview-name-resolution-private-link.png" alt-text="Screenshot that shows Azure Purview name resolution from inside CorpNet.":::
## Deployment options
To enable internal name resolution, you can deploy the required Azure DNS Zones
When you create ingestion, portal, and account private endpoints, the DNS CNAME resource records for Azure Purview are automatically updated to aliases in a few subdomains with the prefix `privatelink`: -- By default, during the deployment of _account_ private endpoint for your Purview account, we also create a [private DNS zone](../dns/private-dns-overview.md) that corresponds to the `privatelink` subdomain for Azure Purview as `privatelink.purview.azure.com` including DNS A resource records for the private endpoints.
+- By default, during the deployment of _account_ private endpoint for your Azure Purview account, we also create a [private DNS zone](../dns/private-dns-overview.md) that corresponds to the `privatelink` subdomain for Azure Purview as `privatelink.purview.azure.com` including DNS A resource records for the private endpoints.
-- During the deployment of _portal_ private endpoint for your Purview account, we also create a new private DNS zone that corresponds to the `privatelink` subdomain for Azure Purview as `privatelink.purviewstudio.azure.com` including DNS A resource records for _Web_.
+- During the deployment of _portal_ private endpoint for your Azure Purview account, we also create a new private DNS zone that corresponds to the `privatelink` subdomain for Azure Purview as `privatelink.purviewstudio.azure.com` including DNS A resource records for _Web_.
- If you enable ingestion private endpoints, additional DNS zones are required for managed resources.
Private endpoint |Private endpoint associated to |DNS Zone (new) |A Record (e
||||| |Account |Azure Purview |`privatelink.purview.azure.com` |Contoso-Purview | |Portal |Azure Purview |`privatelink.purviewstudio.azure.com` |Web |
-|Ingestion |Purview managed Storage Account - Blob |`privatelink.blob.core.windows.net` |scaneastusabcd1234 |
-|Ingestion |Purview managed Storage Account - Queue |`privatelink.queue.core.windows.net` |scaneastusabcd1234 |
-|Ingestion |Purview managed Storage Account - Event Hub |`privatelink.servicebus.windows.net` |atlas-12345678-1234-1234-abcd-123456789abc |
+|Ingestion |Azure Purview managed Storage Account - Blob |`privatelink.blob.core.windows.net` |scaneastusabcd1234 |
+|Ingestion |Azure Purview managed Storage Account - Queue |`privatelink.queue.core.windows.net` |scaneastusabcd1234 |
+|Ingestion |Azure Purview managed Storage Account - Event Hub |`privatelink.servicebus.windows.net` |atlas-12345678-1234-1234-abcd-123456789abc |
### Validate virtual network links on Azure Private DNS Zones
As an example, if an Azure Purview account name is 'Contoso-Purview', when it is
| Name | Type | Value | | - | -- | | | `Contoso-Purview.purview.azure.com` | CNAME | `Contoso-Purview.privatelink.purview.azure.com` |
-| `Contoso-Purview.privatelink.purview.azure.com` | CNAME | \<Purview public endpoint\> |
-| \<Purview public endpoint\> | A | \<Purview public IP address\> |
-| `Web.purview.azure.com` | CNAME | \<Purview Studio public endpoint\> |
+| `Contoso-Purview.privatelink.purview.azure.com` | CNAME | \<Azure Purview public endpoint\> |
+| \<Azure Purview public endpoint\> | A | \<Azure Purview public IP address\> |
+| `Web.purview.azure.com` | CNAME | \<Azure Purview Studio public endpoint\> |
The DNS resource records for Contoso-Purview, when resolved in the virtual network hosting the private endpoint, will be: | Name | Type | Value | | - | -- | | | `Contoso-Purview.purview.azure.com` | CNAME | `Contoso-Purview.privatelink.purview.azure.com` |
-| `Contoso-Purview.privatelink.purview.azure.com` | A | \<Purview account private endpoint IP address\> |
-| `Web.purview.azure.com` | CNAME | \<Purview portal private endpoint IP address\> |
+| `Contoso-Purview.privatelink.purview.azure.com` | A | \<Azure Purview account private endpoint IP address\> |
+| `Web.purview.azure.com` | CNAME | \<Azure Purview portal private endpoint IP address\> |
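To spot-check that the records above resolve privately, run a lookup from a VM inside the virtual network. A quick sketch using the article's example account name; the results should return the private endpoint IP addresses rather than public ones:

```bash
# From inside the VNet, both names should resolve to private IPs.
nslookup Contoso-Purview.purview.azure.com
nslookup Web.purview.azure.com
```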
## Option 2 - Use existing Azure Private DNS Zones
During the deployment of Azure purview private endpoints, you can choose _Privat
This scenario also applies if your organization uses a central or hub subscription for all Azure Private DNS Zones.
-The following list shows the required Azure DNS zones and A records for Purview private endpoints:
+The following list shows the required Azure DNS zones and A records for Azure Purview private endpoints:
> [!NOTE] > Replace the names `Contoso-Purview`, `scaneastusabcd1234`, and `atlas-12345678-1234-1234-abcd-123456789abc` with the corresponding Azure resource names in your environment. For example, instead of `scaneastusabcd1234`, use the name of your Azure Purview managed storage account.
Private endpoint |Private endpoint associated to |DNS Zone (existing) |A Reco
||||| |Account |Azure Purview |`privatelink.purview.azure.com` |Contoso-Purview | |Portal |Azure Purview |`privatelink.purviewstudio.azure.com` |Web |
-|Ingestion |Purview managed Storage Account - Blob |`privatelink.blob.core.windows.net` |scaneastusabcd1234 |
-|Ingestion |Purview managed Storage Account - Queue |`privatelink.queue.core.windows.net` |scaneastusabcd1234 |
-|Ingestion |Purview managed Storage Account - Event Hub |`privatelink.servicebus.windows.net` |atlas-12345678-1234-1234-abcd-123456789abc |
+|Ingestion |Azure Purview managed Storage Account - Blob |`privatelink.blob.core.windows.net` |scaneastusabcd1234 |
+|Ingestion |Azure Purview managed Storage Account - Queue |`privatelink.queue.core.windows.net` |scaneastusabcd1234 |
+|Ingestion |Azure Purview managed Storage Account - Event Hub |`privatelink.servicebus.windows.net` |atlas-12345678-1234-1234-abcd-123456789abc |
:::image type="content" source="media/catalog-private-link/purview-name-resolution-diagram.png" alt-text="Diagram that shows Azure Purview name resolution"lightbox="media/catalog-private-link/purview-name-resolution-diagram.png":::
As an example, if an Azure Purview account name is 'Contoso-Purview', when it is resolved from outside the virtual network, the DNS resource records will be:
| Name | Type | Value |
| ---- | ---- | ----- |
| `Contoso-Purview.purview.azure.com` | CNAME | `Contoso-Purview.privatelink.purview.azure.com` |
-| `Contoso-Purview.privatelink.purview.azure.com` | CNAME | \<Purview public endpoint\> |
-| \<Purview public endpoint\> | A | \<Purview public IP address\> |
-| `Web.purview.azure.com` | CNAME | \<Purview Studio public endpoint\> |
+| `Contoso-Purview.privatelink.purview.azure.com` | CNAME | \<Azure Purview public endpoint\> |
+| \<Azure Purview public endpoint\> | A | \<Azure Purview public IP address\> |
+| `Web.purview.azure.com` | CNAME | \<Azure Purview Studio public endpoint\> |
The DNS resource records for Contoso-Purview, when resolved in the virtual network hosting the private endpoint, will be:

| Name | Type | Value |
| ---- | ---- | ----- |
| `Contoso-Purview.purview.azure.com` | CNAME | `Contoso-Purview.privatelink.purview.azure.com` |
-| `Contoso-Purview.privatelink.purview.azure.com` | A | \<Purview account private endpoint IP address\> |
-| `Web.purview.azure.com` | CNAME | \<Purview portal private endpoint IP address\> |
+| `Contoso-Purview.privatelink.purview.azure.com` | A | \<Azure Purview account private endpoint IP address\> |
+| `Web.purview.azure.com` | CNAME | \<Azure Purview portal private endpoint IP address\> |
## Option 3 - Use your own DNS Servers
If you do not use DNS forwarders and instead you manage A records directly in your on-premises DNS servers, complete the following steps:

1. Create the following A records in your DNS servers:
 |Private endpoint |Private endpoint associated to |DNS Zone |A Record (example) |
 |||||
 |Account |Azure Purview |`privatelink.purview.azure.com` |Contoso-Purview |
 |Portal |Azure Purview |`privatelink.purviewstudio.azure.com` |Web |
- |Ingestion |Purview managed Storage Account - Blob |`privatelink.blob.core.windows.net` |scaneastusabcd1234 |
- |Ingestion |Purview managed Storage Account - Queue |`privatelink.queue.core.windows.net` |scaneastusabcd1234 |
- |Ingestion |Purview managed Storage Account - Event Hub |`privatelink.servicebus.windows.net` |atlas-12345678-1234-1234-abcd-123456789abc |
+ |Ingestion |Azure Purview managed Storage Account - Blob |`privatelink.blob.core.windows.net` |scaneastusabcd1234 |
+ |Ingestion |Azure Purview managed Storage Account - Queue |`privatelink.queue.core.windows.net` |scaneastusabcd1234 |
+ |Ingestion |Azure Purview managed Storage Account - Event Hub |`privatelink.servicebus.windows.net` |atlas-12345678-1234-1234-abcd-123456789abc |
2. Create [Virtual network links](../dns/private-dns-virtual-network-links.md) in your Azure Private DNS Zones for your Azure Virtual Networks to allow internal name resolution.
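As a sketch of step 2, assuming hypothetical resource names and the `Az.PrivateDns` PowerShell module, a virtual network link might be created like this:

```powershell
# Link a virtual network to the Azure Purview account's private DNS zone
# so resources in that network can resolve the private endpoint.
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'contoso-rg' -Name 'contoso-vnet'

New-AzPrivateDnsVirtualNetworkLink `
    -ResourceGroupName 'contoso-rg' `
    -ZoneName 'privatelink.purview.azure.com' `
    -Name 'contoso-vnet-link' `
    -VirtualNetworkId $vnet.Id
```

Repeat the link for each of the zones listed in the table above that applies to your scenario.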
purview Catalog Private Link Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-troubleshoot.md
Title: Troubleshooting private endpoint configuration for Purview accounts
-description: This article describes how to troubleshoot problems with your Purview account related to private endpoints configurations
+ Title: Troubleshooting private endpoint configuration for Azure Purview accounts
+description: This article describes how to troubleshoot problems with your Azure Purview account related to private endpoints configurations
Last updated 01/12/2022
-# Customer intent: As a Purview admin, I want to set up private endpoints for my Purview account, for secure access.
+# Customer intent: As an Azure Purview admin, I want to set up private endpoints for my Azure Purview account, for secure access.
-# Troubleshooting private endpoint configuration for Purview accounts
+# Troubleshooting private endpoint configuration for Azure Purview accounts
This guide summarizes known limitations related to using private endpoints for Azure Purview and provides a list of steps and solutions for troubleshooting some of the most common relevant issues.
This guide summarizes known limitations related to using private endpoints for A
## Recommended troubleshooting steps
-1. Once you deploy private endpoints for your Purview account, review your Azure environment to make sure private endpoint resources are deployed successfully. Depending on your scenario, one or more of the following Azure private endpoints must be deployed in your Azure subscription:
+1. Once you deploy private endpoints for your Azure Purview account, review your Azure environment to make sure private endpoint resources are deployed successfully. Depending on your scenario, one or more of the following Azure private endpoints must be deployed in your Azure subscription:
|Private endpoint |Private endpoint assigned to | Example|
||||
This guide summarizes known limitations related to using private endpoints for A
3. If Azure Private DNS Zones are used, make sure the required Azure DNS Zones are deployed and there is a DNS (A) record for each private endpoint.
-4. Test network connectivity and name resolution from management machine to Purview endpoint and purview web url. If account and portal private endpoints are deployed, the endpoints must be resolved through private IP addresses.
+4. Test network connectivity and name resolution from the management machine to the Azure Purview endpoint and the Azure Purview web URL. If account and portal private endpoints are deployed, the endpoints must be resolved through private IP addresses.
```powershell
# Example values; replace 'Contoso-Purview' with your Azure Purview account name.
# Both names should resolve to private IP addresses.
Resolve-DnsName -Name Contoso-Purview.purview.azure.com
Test-NetConnection -ComputerName web.purview.azure.com -Port 443
```
This guide summarizes known limitations related to using private endpoints for A
5. If you have created your Azure Purview account after 18 August 2021, make sure you download and install the latest version of self-hosted integration runtime from [Microsoft download center](https://www.microsoft.com/download/details.aspx?id=39717).
-6. From self-hosted integration runtime VM, test network connectivity and name resolution to Purview endpoint.
+6. From the self-hosted integration runtime VM, test network connectivity and name resolution to the Azure Purview endpoint.
7. From the self-hosted integration runtime, test network connectivity and name resolution to Azure Purview managed resources, such as the blob, queue, and Event Hub endpoints, through port 443 and private IP addresses. (Replace the managed storage account and Event Hubs namespace with the corresponding managed resource names assigned to your Azure Purview account.)
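A minimal sketch of that check, reusing the example managed resource names from the tables earlier in this article:

```powershell
# Each test should report TcpTestSucceeded : True and a private RemoteAddress.
Test-NetConnection -ComputerName scaneastusabcd1234.blob.core.windows.net -Port 443
Test-NetConnection -ComputerName scaneastusabcd1234.queue.core.windows.net -Port 443
Test-NetConnection -ComputerName atlas-12345678-1234-1234-abcd-123456789abc.servicebus.windows.net -Port 443
```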
This guide summarizes known limitations related to using private endpoints for A
```
TcpTestSucceeded : True
```
-8. From the network where data source is located, test network connectivity and name resolution to Purview endpoint and managed resources endpoints.
+8. From the network where the data source is located, test network connectivity and name resolution to the Azure Purview endpoint and managed resource endpoints.
9. If data sources are located in an on-premises network, review your DNS forwarder configuration. Test name resolution from within the same network where the data sources are located to the self-hosted integration runtime, Azure Purview endpoints, and managed resources. You should obtain a valid private IP address from the DNS query for each endpoint.
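One way to spot-check the forwarder from the on-premises network is to query it directly. A sketch, assuming a DNS forwarder at the hypothetical address `10.0.0.4`:

```powershell
# A private IP address in each answer confirms the forwarder hands the query
# to the Azure Private DNS zone; a public IP suggests a forwarding gap.
Resolve-DnsName -Name Contoso-Purview.purview.azure.com -Server 10.0.0.4
Resolve-DnsName -Name scaneastusabcd1234.blob.core.windows.net -Server 10.0.0.4
```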
This guide summarizes known limitations related to using private endpoints for A
10. If the management machine and self-hosted integration runtime VMs are deployed in an on-premises network and you have set up a DNS forwarder in your environment, verify the DNS and network settings in your environment.
-11. If ingestion private endpoint is used, make sure self-hosted integration runtime is registered successfully inside Purview account and shows as running both inside the self-hosted integration runtime VM and in the [Purview Studio](https://web.purview.azure.com/resource/) .
+11. If the ingestion private endpoint is used, make sure the self-hosted integration runtime is registered successfully inside the Azure Purview account and shows as running, both inside the self-hosted integration runtime VM and in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
## Common errors and messages
Review your existing Azure Policy Assignments and make sure deployment of the fo
### Issue
-Not authorized to access this Purview account. This Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Purview account's private endpoint.
+Not authorized to access this Azure Purview account. This Azure Purview account is behind a private endpoint. Please access the account from a client in the same virtual network (VNet) that has been configured for the Azure Purview account's private endpoint.
### Cause

The user is trying to connect to Azure Purview from a public endpoint, or is using Azure Purview public endpoints where **Public network access** is set to **Deny**.
purview Catalog Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link.md
Title: Use private endpoints for secure access to Purview
-description: This article describes a high level overview of how you can use a private end point for your Purview account
+ Title: Use private endpoints for secure access to Azure Purview
+description: This article provides a high-level overview of how you can use a private endpoint for your Azure Purview account
Last updated 01/10/2022
-# Customer intent: As a Purview admin, I want to set up private endpoints for my Purview account, for secure access.
+# Customer intent: As an Azure Purview admin, I want to set up private endpoints for my Azure Purview account, for secure access.
# Use private endpoints for your Azure Purview account
Last updated 01/10/2022
This article describes how to configure private endpoints for Azure Purview.

## Conceptual Overview
-You can use [Azure private endpoints](../private-link/private-endpoint-overview.md) for your Azure Purview accounts to allow users on a virtual network (VNet) to securely access the catalog over a Private Link. A private endpoint uses an IP address from the VNet address space for your Purview account. Network traffic between the clients on the VNet and the Purview account traverses over the VNet and a private link on the Microsoft backbone network.
+You can use [Azure private endpoints](../private-link/private-endpoint-overview.md) for your Azure Purview accounts to allow users on a virtual network (VNet) to securely access the catalog over a Private Link. A private endpoint uses an IP address from the VNet address space for your Azure Purview account. Network traffic between the clients on the VNet and the Azure Purview account traverses over the VNet and a private link on the Microsoft backbone network.
You can deploy an Azure Purview _account_ private endpoint to allow only client calls to Azure Purview that originate from within the private network.
Use the following recommended checklist to perform deployment of Azure Purview a
|Scenario |Objectives |
|||
|**Scenario 1** - [Connect to your Azure Purview and scan data sources privately and securely](./catalog-private-link-end-to-end.md) |You need to restrict access to your Azure Purview account only via a private endpoint, including access to Azure Purview Studio, Atlas APIs, and scanning of data sources in on-premises and Azure behind a virtual network using the self-hosted integration runtime, ensuring end-to-end network isolation. (Deploy _account_, _portal_ and _ingestion_ private endpoints.) |
-|**Scenario 2** - [Connect privately and securely to your Purview account](./catalog-private-link-account-portal.md) | You need to enable access to your Azure Purview account, including access to _Azure Purview Studio_ and Atlas API through private endpoints. (Deploy _account_ and _portal_ private endpoints). |
-|**Scenario 3** - [Scan data source securely using Managed Virtual Network](./catalog-managed-vnet.md) | You need to scan Azure data sources securely, without having to manage a virtual network or a self-hosted integration runtime VM. (Deploy managed private endpoint for Purview, managed storage account and Azure data sources). |
+|**Scenario 2** - [Connect privately and securely to your Azure Purview account](./catalog-private-link-account-portal.md) | You need to enable access to your Azure Purview account, including access to _Azure Purview Studio_ and Atlas API through private endpoints. (Deploy _account_ and _portal_ private endpoints). |
+|**Scenario 3** - [Scan data source securely using Managed Virtual Network](./catalog-managed-vnet.md) | You need to scan Azure data sources securely, without having to manage a virtual network or a self-hosted integration runtime VM. (Deploy managed private endpoint for Azure Purview, managed storage account and Azure data sources). |
## Support matrix for Scanning data sources through _ingestion_ private endpoint
For scenarios where _ingestion_ private endpoint is used in your Azure Purview a
For FAQs related to private endpoint deployments in Azure Purview, see [FAQ about Azure Purview private endpoints](./catalog-private-link-faqs.md).

## Troubleshooting guide
-For troubleshooting private endpoint configuration for Purview accounts, see [Troubleshooting private endpoint configuration for Purview accounts](./catalog-private-link-troubleshoot.md).
+For troubleshooting private endpoint configuration for Azure Purview accounts, see [Troubleshooting private endpoint configuration for Azure Purview accounts](./catalog-private-link-troubleshoot.md).
## Known limitations

To view a list of current limitations related to Azure Purview private endpoints, see [Azure Purview private endpoints known limitations](./catalog-private-link-troubleshoot.md#known-limitations).
To view list of current limitations related to Azure Purview private endpoints,
## Next steps

- [Deploy end to end private networking](./catalog-private-link-end-to-end.md)
-- [Deploy private networking for the Purview Studio](./catalog-private-link-account-portal.md)
+- [Deploy private networking for the Azure Purview Studio](./catalog-private-link-account-portal.md)
purview Classification Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/classification-insights.md
Title: Classification reporting on your data in Azure Purview using Purview Insights
-description: This how-to guide describes how to view and use Purview classification reporting on your data.
+ Title: Classification reporting on your data in Azure Purview using Azure Purview Insights
+description: This how-to guide describes how to view and use Azure Purview classification reporting on your data.
Last updated 09/27/2021
-# Customer intent: As a security officer, I need to understand how to use Purview Insights to learn about sensitive data identified and classified and labeled during scanning.
+# Customer intent: As a security officer, I need to understand how to use Azure Purview Insights to learn about sensitive data identified and classified and labeled during scanning.
# Classification insights about your data from Azure Purview
-This how-to guide describes how to access, view, and filter Purview Classification insight reports for your data.
+This how-to guide describes how to access, view, and filter Azure Purview Classification insight reports for your data.
> [!IMPORTANT]
> Azure Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Supported data sources include: Azure Blob Storage, Azure Data Lake Storage (ADL
In this how-to guide, you'll learn how to:

> [!div class="checklist"]
-> - Launch your Purview account from Azure
+> - Launch your Azure Purview account from Azure
> - View classification insights on your data
> - Drill down for more classification details on your data

## Prerequisites
-Before getting started with Purview insights, make sure that you've completed the following steps:
+Before getting started with Azure Purview insights, make sure that you've completed the following steps:
- Set up your Azure resources and populated the relevant accounts with test data
- Set up and completed a scan on the test data in each data source. For more information, see [Manage data sources in Azure Purview](manage-data-sources.md) and [Create a scan rule set](create-a-scan-rule-set.md).
-- Signed in to Purview with account with a [Data Reader or Data Curator role](catalog-permissions.md#roles).
+- Signed in to Azure Purview with an account that has a [Data Reader or Data Curator role](catalog-permissions.md#roles).
For more information, see [Manage data sources in Azure Purview](manage-data-sources.md).
-## Use Purview classification insights
+## Use Azure Purview classification insights
In Azure Purview, classifications are similar to subject tags, and are used to mark and identify data of a specific type that's found within your data estate during scanning.
-Purview uses the same sensitive information types as Microsoft 365, allowing you to stretch your existing security policies and protection across your entire data estate.
+Azure Purview uses the same sensitive information types as Microsoft 365, allowing you to stretch your existing security policies and protection across your entire data estate.
> [!NOTE]
> After you have scanned your source types, give **Classification** Insights a couple of hours to reflect the new assets.

**To view classification insights:**
-1. Go to the **Azure Purview** [instance screen in the Azure portal](https://aka.ms/purviewportal) and select your Purview account.
+1. Go to the **Azure Purview** [instance screen in the Azure portal](https://aka.ms/purviewportal) and select your Azure Purview account.
-1. On the **Overview** page, in the **Get Started** section, select the **Purview Studio** tile.
+1. On the **Overview** page, in the **Get Started** section, select the **Azure Purview Studio** tile.
-1. In Purview, select the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: menu item on the left to access your **Insights** area.
+1. In Azure Purview, select the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: menu item on the left to access your **Insights** area.
-1. In the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Classification** to display the Purview **Classification insights** report.
+1. In the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Classification** to display the Azure Purview **Classification insights** report.
:::image type="content" source="./media/insights/select-classification-labeling.png" alt-text="Classification insights report" lightbox="media/insights/select-classification-labeling.png":::
purview Concept Best Practices Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-accounts.md
Azure Purview is a unified data governance solution. You deploy an Azure Purview
Consider deploying the minimum number of Azure Purview accounts for the entire organization. This approach takes maximum advantage of "network effects," where the value of the platform increases exponentially as a function of the data that resides inside the platform.
-Use [Azure Purview collections hierarchy](./concept-best-practices-collections.md) to lay out your organization's data management structure inside a single Azure Purview account. In this scenario, one Purview account is deployed in an Azure subscription. Data sources from one or more Azure subscriptions can be registered and scanned inside the Azure Purview. You can also register and scan data sources from your on-premises or multi-cloud environments.
+Use [Azure Purview collections hierarchy](./concept-best-practices-collections.md) to lay out your organization's data management structure inside a single Azure Purview account. In this scenario, one Azure Purview account is deployed in an Azure subscription. Data sources from one or more Azure subscriptions can be registered and scanned inside the Azure Purview account. You can also register and scan data sources from your on-premises or multi-cloud environments.
:::image type="content" source="media/concept-best-practices/accounts-single-account.png" alt-text="Screenshot that shows the single Azure Purview account."lightbox="media/concept-best-practices/accounts-single-account.png":::
An exception applies to VM-based data sources and Power BI tenants. For more info
Review the [Azure Purview pricing model](https://azure.microsoft.com/pricing/details/azure-purview) when defining a budgeting model and designing the Azure Purview architecture for your organization. One bill is generated for a single Azure Purview account in the subscription where the Azure Purview account is deployed. This model also applies to other Azure Purview costs, such as scanning and classifying metadata inside the Azure Purview Data Map.
-Some organizations often have many business units (BUs) that operate separately, and, in some cases, they don't even share billing with each other. In those cases, the organization will end up creating a Azure Purview instance for each BU. This model is not ideal, however, may be necessary, especially because Business Units are often not willing to share Azure billing.
+Some organizations often have many business units (BUs) that operate separately and, in some cases, don't even share billing with each other. In those cases, the organization will end up creating an Azure Purview instance for each BU. This model is not ideal; however, it may be necessary, especially because business units are often not willing to share Azure billing.
For more information about cloud computing cost model in chargeback and showback models, see, [What is cloud accounting?](/azure/cloud-adoption-framework/strategy/cloud-accounting).
For more information about cloud computing cost model in chargeback and showback
- Review [Azure Purview prerequisites](./create-catalog-portal.md#prerequisites) before deploying any new Azure Purview accounts in your environment.

## Next steps

-- [Create a Purview account](./create-catalog-portal.md)
+- [Create an Azure Purview account](./create-catalog-portal.md)
purview Concept Best Practices Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-collections.md
Consider deploying collections in Azure Purview to fulfill the following require
- Consider security and access management as part of your design decision-making process when you build collections in Azure Purview.

-- Each collection has a name attribute and a friendly name attribute. If you use Azure [Purview Studio](https://web.purview.azure.com/resource/) to deploy a collection, the system automatically assigns a random six-letter name to the collection to avoid duplication. To reduce complexity, avoid using duplicated friendly names across your collections, especially in the same level.
+- Each collection has a name attribute and a friendly name attribute. If you use [Azure Purview Studio](https://web.purview.azure.com/resource/) to deploy a collection, the system automatically assigns a random six-letter name to the collection to avoid duplication. To reduce complexity, avoid using duplicated friendly names across your collections, especially in the same level.
- When you can, avoid duplicating your organizational structure into a deeply nested collection hierarchy. If you can't avoid doing so, be sure to use different names for every collection in the hierarchy to make the collections easy to distinguish.
purview Concept Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-security.md
When an Azure Purview account is deployed, in addition, a managed resource group
Azure Purview extracts only the metadata from different data source systems into [Azure Purview Data Map](concept-elastic-data-map.md) during the scanning process.
-You can deploy a Azure Purview account inside your Azure subscription in any [supported Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=purview&regions=all).
+You can deploy an Azure Purview account inside your Azure subscription in any [supported Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=purview&regions=all).
All metadata is stored in the Data Map inside your Azure Purview instance, which means the metadata is stored in the same region as your Azure Purview instance.
purview Concept Best Practices Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-sensitivity-labels.md
It also abstracts the data itself, so you use labels to track the type of data,
### Label considerations

-- If you already have Microsoft 365 sensitivity labels in use in your environment, it is recommended that you continue to use your existing labels rather than making duplicate or more labels for Purview. This approach allows you to maximize the investment you have already made in the Microsoft 365 compliance space and ensures consistent labeling across your data estate.
+- If you already have Microsoft 365 sensitivity labels in use in your environment, it is recommended that you continue to use your existing labels rather than making duplicate or more labels for Azure Purview. This approach allows you to maximize the investment you have already made in the Microsoft 365 compliance space and ensures consistent labeling across your data estate.
- If you have not yet created Microsoft 365 sensitivity labels, it is recommended that you review the documentation to [Get started with sensitivity labels](/microsoft-365/compliance/get-started-with-sensitivity-labels). Creating a classification schema is a tenant-wide operation and should be discussed thoroughly before enabling it within your organization.

### Label recommendations
purview Concept Business Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-business-glossary.md
Azure Purview supports eight out-of-the-box attributes for any business glossary
- Related terms
- Resources
-These attributes cannot be edited or deleted. However, these attributes are not sufficient to completely define a term in an organization. To solve this problem, Purview provides a feature where you can define custom attributes for your glossary.
+These attributes cannot be edited or deleted. However, these attributes are not sufficient to completely define a term in an organization. To solve this problem, Azure Purview provides a feature where you can define custom attributes for your glossary.
## Term templates
Classifications are annotations that can be assigned to entities. The flexibilit
- understanding the nature of data stored in the data assets
- defining access control policies
-Purview has more than 200 system classifiers today and you can define your own classifiers in catalog. As part of the scanning process, we automatically detect these classifications and apply them to data assets and schemas. However, you can override them at any point of time. The human overrides are never replaced by automated scans.
+Azure Purview has more than 200 system classifiers today, and you can define your own classifiers in the catalog. As part of the scanning process, we automatically detect these classifications and apply them to data assets and schemas. However, you can override them at any point in time. Human overrides are never replaced by automated scans.
### Sensitivity labels
-Sensitivity labels are a type of annotation that allows you to classify and protect your organization's data, without hindering productivity and collaboration. Sensitivity labels are used to identify the categories of classification types within your organizational data, and group the policies that you wish to apply to each category. Purview makes use of the same sensitive information types as Microsoft 365, which allows you to stretch your existing security policies and protection across your entire content and data estate. The same labels can be shared across Microsoft Office products and data assets in Purview.
+Sensitivity labels are a type of annotation that allows you to classify and protect your organization's data, without hindering productivity and collaboration. Sensitivity labels are used to identify the categories of classification types within your organizational data, and group the policies that you wish to apply to each category. Azure Purview makes use of the same sensitive information types as Microsoft 365, which allows you to stretch your existing security policies and protection across your entire content and data estate. The same labels can be shared across Microsoft Office products and data assets in Azure Purview.
## Next steps
purview Concept Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-classification.md
Custom classification rules can be based on a *regular expression* pattern or *d
* [Read about classification best practices](concept-best-practices-classification.md)
* [Create custom classifications](create-a-custom-classification-and-classification-rule.md)
* [Apply classifications](apply-classifications.md)
-* [Use the Purview Studio](use-purview-studio.md)
+* [Use the Azure Purview Studio](use-purview-studio.md)
purview Concept Data Lineage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-data-lineage.md
Last updated 09/27/2021
# Data lineage in Azure Purview Data Catalog client
-This article provides an overview of data lineage in Azure Purview Data Catalog. It also details how data systems can integrate with the catalog to capture lineage of data. Purview can capture lineage for data in different parts of your organization's data estate, and at different levels of preparation including:
+This article provides an overview of data lineage in Azure Purview Data Catalog. It also details how data systems can integrate with the catalog to capture lineage of data. Azure Purview can capture lineage for data in different parts of your organization's data estate, and at different levels of preparation including:
- Completely raw data staged from various platforms
- Transformed and prepared data
Data lineage is broadly understood as the lifecycle that spans the data's orig
## Lineage experience in Azure Purview Data Catalog
-Purview Data Catalog will connect with other data processing, storage, and analytics systems to extract lineage information. The information is combined to represent a generic, scenario-specific lineage experience in the Catalog.
+Azure Purview Data Catalog will connect with other data processing, storage, and analytics systems to extract lineage information. The information is combined to represent a generic, scenario-specific lineage experience in the Catalog.
:::image type="content" source="media/concept-lineage/lineage-end-end.png" alt-text="end-end lineage showing data copied from blob store all the way to Power BI dashboard":::
The following example is a typical use case of data moving across multiple syste
## Lineage granularity
-The following section covers the details about the granularity of which the lineage information is gathered by Purview. This granularity can vary based on the data systems supported in Purview.
+The following section covers the details of the granularity at which lineage information is gathered by Azure Purview. This granularity can vary based on the data systems supported in Azure Purview.
### Entity level lineage: Source(s) > Process > Target(s)
To support root cause analysis and data quality scenarios, we capture the execut
## Summary
-Lineage is a critical feature of the Purview Data Catalog to support quality, trust, and audit scenarios. The goal of a data catalog is to build a robust framework where all the data systems within your environment can naturally connect and report lineage. Once the metadata is available, the data catalog can bring together the metadata provided by data systems to power data governance use cases.
+Lineage is a critical feature of the Azure Purview Data Catalog to support quality, trust, and audit scenarios. The goal of a data catalog is to build a robust framework where all the data systems within your environment can naturally connect and report lineage. Once the metadata is available, the data catalog can bring together the metadata provided by data systems to power data governance use cases.
## Next steps

* [Quickstart: Create an Azure Purview account in the Azure portal](create-catalog-portal.md)
* [Quickstart: Create an Azure Purview account using Azure PowerShell/Azure CLI](create-catalog-powershell.md)
-* [Use the Purview Studio](use-purview-studio.md)
+* [Use the Azure Purview Studio](use-purview-studio.md)
purview Concept Default Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-default-purview-account.md
Having multiple Azure Purview accounts in a tenant now poses the challenge of wh
- [Create an Azure Purview account](create-catalog-portal.md)
- [Azure Purview Pricing](https://azure.microsoft.com/pricing/details/azure-purview/)
purview Concept Elastic Data Map https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-elastic-data-map.md
# Elastic data map in Azure Purview
-Azure Purview Data Map provides the foundation for data discovery and data governance. It captures metadata about enterprise data present in analytics, software-as-a-service (SaaS), and operation systems in hybrid, on-premises, and multi-cloud environments. Purview Data Map automatically stays up to date with its built-in scanning and classification system. With the UI, developers can further programmatically interact with the Data Map using open-source Apache Atlas APIs.
+Azure Purview Data Map provides the foundation for data discovery and data governance. It captures metadata about enterprise data present in analytics, software-as-a-service (SaaS), and operation systems in hybrid, on-premises, and multi-cloud environments. Azure Purview Data Map automatically stays up to date with its built-in scanning and classification system. With the UI, developers can further programmatically interact with the Data Map using open-source Apache Atlas APIs.
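As a quick illustration of that programmatic surface, the following sketch lists Atlas type definitions; the account name `Contoso-Purview` is an assumption, and the token acquisition relies on the Az PowerShell module:

```powershell
# Acquire a token for Azure Purview, then call the Atlas v2 typedefs endpoint.
$token = (Get-AzAccessToken -ResourceUrl 'https://purview.azure.net').Token
Invoke-RestMethod `
    -Uri 'https://Contoso-Purview.purview.azure.com/catalog/api/atlas/v2/types/typedefs' `
    -Headers @{ Authorization = "Bearer $token" }
```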
## Elastic data map
Elastic Data Map comes with operation throughput and storage components that are
### Operations
-Operations are the throughput measure of the Purview Data Map. They include the Create, Read, Write, Update, and Delete operations on metadata stored in the Data Map. Some examples of operations are listed below:
+Operations are the throughput measure of the Azure Purview Data Map. They include the Create, Read, Write, Update, and Delete operations on metadata stored in the Data Map. Some examples of operations are listed below:
- Create an asset in Data Map
- Add a relationship to an asset such as owner, steward, parent, lineage, etc.
Operations are the throughput measure of the Purview Data Map. They include the
Storage is the second component of Data Map and includes technical, business, operational, and semantic metadata.
-The technical metadata includes schema, data type, columns, and so on, that are discovered from Purview [scanning](concept-scans-and-ingestion.md). The business metadata includes automated (for example, promoted from Power BI datasets, or descriptions from SQL tables) and manual tagging of descriptions, glossary terms, and so on. Examples of semantic metadata include the collection mapping to data sources, or classifications. The operational metadata includes Data factory copy and data flow activity run status, and runs time.
+The technical metadata includes schema, data type, columns, and so on, that are discovered from Azure Purview [scanning](concept-scans-and-ingestion.md). The business metadata includes automated (for example, promoted from Power BI datasets, or descriptions from SQL tables) and manual tagging of descriptions, glossary terms, and so on. Examples of semantic metadata include the collection mapping to data sources, or classifications. The operational metadata includes Data Factory copy and data flow activity run status and run times.
## Work with elastic data map
The technical metadata includes schema, data type, columns, and so on, that are
## Scenario
Claudia is an Azure admin at Contoso who wants to provision a new Azure Purview account from Azure portal. While provisioning, she doesn't know the required size of Purview Data Map to support the future state of the platform. However, she knows that the Purview Data Map is billed by Capacity Units, which are affected by storage and operations throughput. She wants to provision the smallest Data Map to keep the cost low and grow the Data Map size elastically based on consumption.
+Claudia is an Azure admin at Contoso who wants to provision a new Azure Purview account from Azure portal. While provisioning, she doesn't know the required size of Azure Purview Data Map to support the future state of the platform. However, she knows that the Azure Purview Data Map is billed by Capacity Units, which are affected by storage and operations throughput. She wants to provision the smallest Data Map to keep the cost low and grow the Data Map size elastically based on consumption.
-Claudia can create a Purview account with the default Data Map size of 1 capacity unit that can automatically scale up and down. The autoscaling feature also allows for capacity to be tuned based on intermittent or planned data bursts during specific periods. Claudia follows the next steps in provisioning experience to set up network configuration and completes the provisioning.
+Claudia can create an Azure Purview account with the default Data Map size of 1 capacity unit that can automatically scale up and down. The autoscaling feature also allows for capacity to be tuned based on intermittent or planned data bursts during specific periods. Claudia follows the next steps in provisioning experience to set up network configuration and completes the provisioning.
-In the Azure monitor metrics page, Claudia can see the consumption of the Data Map storage and operations throughput. She can further set up an alert when the storage or operations throughput reaches a certain limit to monitor the consumption and billing of the new Purview account.
+In the Azure Monitor metrics page, Claudia can see the consumption of the Data Map storage and operations throughput. She can further set up an alert when the storage or operations throughput reaches a certain limit to monitor the consumption and billing of the new Azure Purview account.
## Data map billing
-Customers are billed for one capacity unit (25 ops/sec and 10 GB) and extra billing is based on the consumption of each extra capacity unit rolled up to the hour. The Data Map operations scale in the increments of 25 operations/sec and metadata storage scales in the increments of 10 GB size. Purview Data Map can automatically scale up and down within the elasticity window ([check current limits](how-to-manage-quotas.md)). However, to get the next level of elasticity window, a support ticket needs to be created.
+Customers are billed for one capacity unit (25 ops/sec and 10 GB) and extra billing is based on the consumption of each extra capacity unit rolled up to the hour. The Data Map operations scale in the increments of 25 operations/sec and metadata storage scales in the increments of 10 GB size. Azure Purview Data Map can automatically scale up and down within the elasticity window ([check current limits](how-to-manage-quotas.md)). However, to get the next level of elasticity window, a support ticket needs to be created.
Data Map capacity units come with a cap on operations throughput and storage. If storage exceeds the current capacity unit, customers are charged for the next capacity unit even if the operations throughput isn't used. The table below shows the Data Map capacity unit ranges. Contact support if the Data Map capacity needs to go beyond 100 capacity units.
Data Map capacity units come with a cap on operations throughput and storage. If
### Billing examples

-- Purview Data Map's operation throughput for the given hour is less than or equal to 25 Ops/Sec and storage size is 1 GB. Customers are billed for one capacity unit.
+- Azure Purview Data Map's operation throughput for the given hour is less than or equal to 25 Ops/Sec and storage size is 1 GB. Customers are billed for one capacity unit.
-- Purview Data Map's operation throughput for the given hour is less than or equal to 25 Ops/Sec and storage size is 15 GB. Customers are billed for two capacity units.
+- Azure Purview Data Map's operation throughput for the given hour is less than or equal to 25 Ops/Sec and storage size is 15 GB. Customers are billed for two capacity units.
-- Purview Data Map's operation throughput for the given hour is 50 Ops/Sec and storage size is 15 GB. Customers are billed for two capacity units.
+- Azure Purview Data Map's operation throughput for the given hour is 50 Ops/Sec and storage size is 15 GB. Customers are billed for two capacity units.
-- Purview Data Map's operation throughput for the given hour is 50 Ops/Sec and storage size is 25 GB. Customers are billed for three capacity units.
+- Azure Purview Data Map's operation throughput for the given hour is 50 Ops/Sec and storage size is 25 GB. Customers are billed for three capacity units.
-- Purview Data Map's operation throughput for the given hour is 250 Ops/Sec and storage size is 15 GB. Customers are billed for ten capacity units.
+- Azure Purview Data Map's operation throughput for the given hour is 250 Ops/Sec and storage size is 15 GB. Customers are billed for ten capacity units.
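As a rough sketch of the arithmetic in the examples above, assuming one capacity unit covers 25 operations/sec and 10 GB, and that billing takes the larger of the two requirements (the helper function is hypothetical, not an official tool):

```powershell
# One capacity unit = 25 ops/sec of throughput and 10 GB of metadata storage;
# the billed amount is driven by whichever dimension needs more units.
function Get-DataMapCapacityUnits {
    param([double]$OpsPerSec, [double]$StorageGB)
    $byOps     = [math]::Ceiling($OpsPerSec / 25)
    $byStorage = [math]::Ceiling($StorageGB / 10)
    [math]::Max($byOps, $byStorage)
}

Get-DataMapCapacityUnits -OpsPerSec 50  -StorageGB 25   # 3 capacity units
Get-DataMapCapacityUnits -OpsPerSec 250 -StorageGB 15   # 10 capacity units
```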
### Detailed billing example
Based on the Data Map operations/second and metadata storage consumption in this
:::image type="content" source="./media/concept-elastic-data-map/billing-capacity-hours.png" alt-text="Table depicting number of CU hours over time."::: >[!Important]
->Purview Data Map can automatically scale up and down within the elasticity window ([check current limits](how-to-manage-quotas.md)). To get the next level of the elasticity window, a support ticket needs to be created.
+>Azure Purview Data Map can automatically scale up and down within the elasticity window ([check current limits](how-to-manage-quotas.md)). To get the next level of the elasticity window, a support ticket needs to be created.
## Request capacity

If you're working with very large datasets or a massive environment and need higher capacity for your elastic data map, you can request a larger capacity of elasticity window by [creating a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-Select **Service and subscription limits (quota)** and complete the on screen instructions by choosing the Purview account that you'd like to request larger capacity for.
+Select **Service and subscription limits (quota)** and complete the on-screen instructions by choosing the Azure Purview account that you'd like to request larger capacity for.
:::image type="content" source="./media/concept-elastic-data-map/increase-limit.png" alt-text="Screen showing the support case creation, with limit increase options selected.":::
In the description, provide as much relevant information as you can about your e
The metrics _data map capacity units_ and the _data map storage size_ can be monitored in order to understand the data estate size and the billing.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Purview accounts** page and select your _Purview account_
+1. Go to the [Azure portal](https://portal.azure.com), navigate to the **Azure Purview accounts** page, and select your _Azure Purview account_
2. Select **Overview** and scroll down to the **Monitoring** section to observe the _Data Map Capacity Units_ and _Data Map Storage Size_ metrics over different time periods
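The same metrics can also be pulled programmatically. A sketch with Azure PowerShell; the resource ID placeholder and the metric names (`DataMapCapacityUnits`, `DataMapStorageSize`) are assumptions based on what the portal displays:

```powershell
# Hypothetical resource ID; replace with the full ID of your Azure Purview account.
$id = '/subscriptions/<sub-id>/resourceGroups/contoso-rg/providers/Microsoft.Purview/accounts/Contoso-Purview'

Get-AzMetric -ResourceId $id -MetricName 'DataMapCapacityUnits' -TimeGrain '01:00:00'
Get-AzMetric -ResourceId $id -MetricName 'DataMapStorageSize' -TimeGrain '01:00:00'
```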
The metrics _data map capacity units_ and the _data map storage size_ can be mon
## Summary
-With elastic Data Map, Purview provides low-cost barrier for customers to start their data governance journey.
-Purview DataMap can grow elastically with pay as you go model starting from as small as 1 Capacity unit.
+With the elastic Data Map, Azure Purview provides a low-cost entry point for customers to start their data governance journey.
+The Azure Purview Data Map can grow elastically with a pay-as-you-go model, starting from as small as 1 capacity unit.
Customers don't need to worry about choosing the correct Data Map size for their data estate at provision time, or about dealing with platform migrations in the future due to size limits.

## Next Steps

- [Create an Azure Purview account](create-catalog-portal.md)
-- [Purview Pricing](https://azure.microsoft.com/pricing/details/azure-purview/)
+- [Azure Purview Pricing](https://azure.microsoft.com/pricing/details/azure-purview/)
purview Concept Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-insights.md
Last updated 12/02/2020
This article provides an overview of the Insights feature in Azure Purview.
-Insights are one of the key pillars of Purview. The feature provides customers, a single pane of glass view into their catalog and further aims to provide specific insights to the data source administrators, business users, data stewards, data officer and, security administrators. Currently, Purview has the following Insights reports that will be available to customers during Insight's public preview.
+Insights are one of the key pillars of Azure Purview. The feature provides customers a single pane of glass view into their catalog, and further aims to provide specific insights to data source administrators, business users, data stewards, data officers, and security administrators. Currently, Azure Purview has the following Insights reports that will be available to customers during Insight's public preview.
> [!IMPORTANT]
> Azure Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The report provides broad insights through graphs and KPIs and later deep dive i
## Scan Insights
-The report enables Data Source Administrators to understand overall health of the scans - how many succeeded, how many failed, how many canceled. This report gives a status update on scans that have been executed in the Purview account within a time period of last seven days or last 30 days.
+The report enables Data Source Administrators to understand the overall health of the scans - how many succeeded, how many failed, and how many were canceled. This report gives a status update on scans that have been executed in the Azure Purview account within a time period of the last seven days or last 30 days.
The report also allows administrators to deep dive and explore which scans failed and on what specific source types. To further enable users to investigate, the report helps them navigate into the scan history page within the "Sources" experience.
purview Concept Resource Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-resource-sets.md
Using this strategy, Azure Purview would map the following resources to the same
### File types that Azure Purview will not detect as resource sets
-Purview intentionally doesn't try to classify most document file types like Word, Excel, or PDF as Resource Sets. The exception is CSV format since that is a common partitioned file format.
+Azure Purview intentionally doesn't try to classify most document file types like Word, Excel, or PDF as Resource Sets. The exception is CSV format since that is a common partitioned file format.
## How Azure Purview scans resource sets
purview Concept Scans And Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-scans-and-ingestion.md
# Scans and ingestion in Azure Purview
-This article provides an overview of the Scanning and Ingestion features in Azure Purview. These features connect your Purview account to your sources to populate the data map and data catalog so you can begin exploring and managing your data through Purview.
+This article provides an overview of the Scanning and Ingestion features in Azure Purview. These features connect your Azure Purview account to your sources to populate the data map and data catalog so you can begin exploring and managing your data through Azure Purview.
## Scanning
-After data sources are [registered](manage-data-sources.md) in your Purview account, the next step is to scan the data sources. The scanning process establishes a connection to the data source and captures technical metadata like names, file size, columns, and so on. It also extracts schema for structured data sources, applies classifications on schemas, and [applies sensitivity labels if your Purview account is connected to a Microsoft 365 Security and Compliance Center (SCC)](create-sensitivity-label.md). The scanning process can be triggered to run immediately or can be scheduled to run on a periodic basis to keep your Purview account up to date.
+After data sources are [registered](manage-data-sources.md) in your Azure Purview account, the next step is to scan the data sources. The scanning process establishes a connection to the data source and captures technical metadata like names, file size, columns, and so on. It also extracts schema for structured data sources, applies classifications on schemas, and [applies sensitivity labels if your Azure Purview account is connected to a Microsoft 365 Security and Compliance Center (SCC)](create-sensitivity-label.md). The scanning process can be triggered to run immediately or can be scheduled to run on a periodic basis to keep your Azure Purview account up to date.
For each scan there are customizations you can apply so that you're only scanning your sources for the information you need.

### Choose an authentication method for your scans
-Purview is secure by default. No passwords or secrets are stored directly in Purview, so you'll need to choose an authentication method for your sources. There are four possible ways to authenticate your Purview account, but not all methods are supported for each data source.
+Azure Purview is secure by default. No passwords or secrets are stored directly in Azure Purview, so you'll need to choose an authentication method for your sources. There are four possible ways to authenticate your Azure Purview account, but not all methods are supported for each data source.
- Managed Identity
- Service Principal
- SQL Authentication
- Account Key or Basic Authentication
-Whenever possible, a Managed Identity is the preferred authentication method because it eliminates the need for storing and managing credentials for individual data sources. This can greatly reduce the time you and your team spend setting up and troubleshooting authentication for scans. When you enable a managed identity for your Purview account, an identity is created in Azure Active Directory and is tied to the lifecycle of your account.
+Whenever possible, a Managed Identity is the preferred authentication method because it eliminates the need for storing and managing credentials for individual data sources. This can greatly reduce the time you and your team spend setting up and troubleshooting authentication for scans. When you enable a managed identity for your Azure Purview account, an identity is created in Azure Active Directory and is tied to the lifecycle of your account.
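For example, granting that managed identity read access to a storage account before a scan might look like the following sketch; the names are hypothetical, and the role a source requires varies by source type:

```powershell
# The account's system-assigned identity appears as a service principal with
# the same name as the Azure Purview account.
$purviewIdentity = (Get-AzADServicePrincipal -DisplayName 'Contoso-Purview').Id
$storageId = (Get-AzStorageAccount -ResourceGroupName 'contoso-rg' -Name 'contosodata').Id

New-AzRoleAssignment -ObjectId $purviewIdentity `
    -RoleDefinitionName 'Storage Blob Data Reader' `
    -Scope $storageId
```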
### Scope your scan
There are [system scan rule sets](create-a-scan-rule-set.md#system-scan-rule-set
### Schedule your scan
-Purview gives you a choice of scanning weekly or monthly at a specific time you choose. Weekly scans may be appropriate for data sources with structures that are actively under development or frequently change. Monthly scanning is more appropriate for data sources that change infrequently. A good best practice is to work with the administrator of the source you want to scan to identify a time when compute demands on the source are low.
+Azure Purview gives you a choice of scanning weekly or monthly at a specific time you choose. Weekly scans may be appropriate for data sources with structures that are actively under development or frequently change. Monthly scanning is more appropriate for data sources that change infrequently. A good best practice is to work with the administrator of the source you want to scan to identify a time when compute demands on the source are low.
### How scans detect deleted assets
When you enumerate large data stores like Data Lake Storage Gen2, there are mult
## Ingestion
-The technical metadata or classifications identified by the scanning process are then sent to Ingestion. The ingestion process is responsible for populating the data map and is managed by Purview. Ingestion analyses the input from scan, [applies resource set patterns](concept-resource-sets.md#how-azure-purview-detects-resource-sets), populates available [lineage](concept-data-lineage.md) information, and then loads the data map automatically. Assets/schemas can be discovered or curated only after ingestion is complete. So, if your scan is completed but you haven't seen your assets in the data map or catalog, you'll need to wait for the ingestion process to finish.
+The technical metadata or classifications identified by the scanning process are then sent to Ingestion. The ingestion process is responsible for populating the data map and is managed by Azure Purview. Ingestion analyses the input from scan, [applies resource set patterns](concept-resource-sets.md#how-azure-purview-detects-resource-sets), populates available [lineage](concept-data-lineage.md) information, and then loads the data map automatically. Assets/schemas can be discovered or curated only after ingestion is complete. So, if your scan is completed but you haven't seen your assets in the data map or catalog, you'll need to wait for the ingestion process to finish.
## Next steps
purview Concept Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-search.md
Last updated 09/27/2021
# Understand search features in Azure Purview
-This article provides an overview of the search experience in Azure Purview. Search is a core platform capability of Purview, that powers the data discovery and data use governance experiences in an organization.
+This article provides an overview of the search experience in Azure Purview. Search is a core platform capability of Azure Purview that powers the data discovery and data use governance experiences in an organization.
## Search
-The Purview search experience is powered by a managed search index. After a data source is registered with Purview, its metadata is indexed by the search service to allow easy discovery. The index provides search relevance capabilities and completes search requests by querying millions of metadata assets. Search helps you to discover, understand, and use the data to get the most value out of it.
+The Azure Purview search experience is powered by a managed search index. After a data source is registered with Azure Purview, its metadata is indexed by the search service to allow easy discovery. The index provides search relevance capabilities and completes search requests by querying millions of metadata assets. Search helps you to discover, understand, and use the data to get the most value out of it.
-The search experience in Purview is a three stage process:
+The search experience in Azure Purview is a three-stage process:
1. The search box shows the history containing recently used keywords and assets.
1. As you begin typing, the search suggests matching keywords and assets.
The goal of search in Azure Purview is to speed up the process of data discovery
## Recent search and suggestions
-Many times, you may be working on multiple projects at the same time. To make it easier to resume previous projects, Purview search provides the ability to see recent search keywords and suggestions. Also, you can manage the recent search history by selecting **View all** from the search box drop-down.
+You may be working on multiple projects at the same time. To make it easier to resume previous projects, Azure Purview search provides the ability to see recent search keywords and suggestions. You can also manage the recent search history by selecting **View all** from the search box drop-down.
## Filters
Relevance is the default sort order in the search result page. The search releva
* [Quickstart: Create an Azure Purview account in the Azure portal](create-catalog-portal.md)
* [Quickstart: Create an Azure Purview account using Azure PowerShell/Azure CLI](create-catalog-powershell.md)
-* [Use the Purview Studio](use-purview-studio.md)
+* [Use the Azure Purview Studio](use-purview-studio.md)
purview Create A Custom Classification And Classification Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-a-custom-classification-and-classification-rule.md
You also have the ability to create custom classifications, if any of the defaul
> Our [data sampling rules](sources-and-scans.md#sampling-within-a-file) are applied to both system and custom classifications.

> [!NOTE]
-> Purview custom classifications are applied only to structured data sources like SQL and CosmosDB, and to structured file types like CSV, JSON, and Parquet. Custom classification isn't applied to unstructured data file types like DOC, PDF, and XLSX.
+> Azure Purview custom classifications are applied only to structured data sources like SQL and CosmosDB, and to structured file types like CSV, JSON, and Parquet. Custom classification isn't applied to unstructured data file types like DOC, PDF, and XLSX.
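Before wiring a regular expression into a custom classification rule, it can help to test it locally against sample values. A short sketch in Python, using a hypothetical Contoso employee ID format:

```python
import re

# Hypothetical format: Contoso employee IDs look like "EMP-12345".
employee_id = re.compile(r"^EMP-\d{5}$")

for value in ["EMP-10482", "EMP-7", "CTR-10482"]:
    verdict = "classified" if employee_id.match(value) else "ignored"
    print(f"{value}: {verdict}")
# EMP-10482: classified
# EMP-7: ignored
# CTR-10482: ignored
```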
## Steps to create a custom classification
purview Create A Scan Rule Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-a-scan-rule-set.md
A scan rule set is a container for grouping a set of scan rules together so that
To create a scan rule set:
-1. From your Azure [Purview Studio](https://web.purview.azure.com/resource/), select **Data Map**.
+1. From your [Azure Purview Studio](https://web.purview.azure.com/resource/), select **Data Map**.
1. Select **Scan rule sets** from the left pane, and then select **New**.
In the above example:
Here are some more tips you can use to ignore patterns:

- While processing the regex, Azure Purview will add $ to the regex by default.
-- A good way to understand what url the scanning agent will compare with your regular expression is to browse through the Purview data catalog, find the asset you want to ignore in the future, and see its fully qualified name (FQN) in the **Overview** tab.
+- A good way to understand what URL the scanning agent will compare with your regular expression is to browse through the Azure Purview data catalog, find the asset you want to ignore in the future, and see its fully qualified name (FQN) in the **Overview** tab.
:::image type="content" source="./media/create-a-scan-rule-set/fully-qualified-name.png" alt-text="Screenshot showing the fully qualified name on an asset's overview tab.":::
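To see the effect of the implicit `$` anchoring described above, here is a small, illustrative Python sketch. The storage account and folder names are hypothetical.

```python
import re

fqn = "https://contosolake.dfs.core.windows.net/raw/temp/scratch.csv"

# An ignore pattern that covers everything under the /raw/temp/ folder.
ignore = r"https://contosolake\.dfs\.core\.windows\.net/raw/temp/.*"

# Simulate the $ that Azure Purview appends: the pattern must now match
# the entire fully qualified name, not just a prefix of it.
print(bool(re.match(ignore + "$", fqn)))  # True: the asset is ignored

# Without the trailing .*, the anchored pattern no longer matches the FQN.
partial = r"https://contosolake\.dfs\.core\.windows\.net/raw/temp/"
print(bool(re.match(partial + "$", fqn)))  # False: the asset is scanned
```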
purview Create Catalog Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-catalog-portal.md
Title: 'Quickstart: Create a Purview account in the Azure portal'
+ Title: 'Quickstart: Create an Azure Purview account in the Azure portal'
description: This Quickstart describes how to create an Azure Purview account and configure permissions to begin using it.
# Quickstart: Create an Azure Purview account in the Azure portal
-This quickstart describes the steps to create an Azure Purview account in the Azure portal and get started on the process of classifying, securing, and discovering your data in Purview!
+This quickstart describes the steps to create an Azure Purview account in the Azure portal and get started on the process of classifying, securing, and discovering your data in Azure Purview!
-Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end to end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure the right use of your data.
-For more information about Purview, [see our overview page](overview.md). For more information about deploying Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+For more information about Azure Purview, [see our overview page](overview.md). For more information about deploying Azure Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
[!INCLUDE [purview-quickstart-prerequisites](includes/purview-quickstart-prerequisites.md)]

## Create an Azure Purview account
-1. Go to the **Purview accounts** page in the [Azure portal](https://portal.azure.com).
+1. Go to the **Azure Purview accounts** page in the [Azure portal](https://portal.azure.com).
:::image type="content" source="media/create-catalog-portal/purview-accounts-page.png" alt-text="Screenshot showing the purview accounts page in the Azure portal":::
For more information about Purview, [see our overview page](overview.md). For mo
:::image type="content" source="media/create-catalog-portal/search-marketplace.png" alt-text="Screenshot showing Azure Purview in the Azure Marketplace, with the create button highlighted.":::
-1. On the new Create Purview account page, under the **Basics** tab, select the Azure subscription where you want to create your Purview account.
+1. On the new Create Azure Purview account page, under the **Basics** tab, select the Azure subscription where you want to create your Azure Purview account.
-1. Select an existing **resource group** or create a new one to hold your Purview account.
+1. Select an existing **resource group** or create a new one to hold your Azure Purview account.
To learn more about resource groups, see our article on [using resource groups to manage your Azure resources](../azure-resource-manager/management/manage-resource-groups-portal.md#what-is-a-resource-group).
-1. Enter a **Purview account name**. Spaces and symbols aren't allowed.
- The name of the Purview account must be globally unique. If you see the following error, change the name of Purview account and try creating again.
+1. Enter an **Azure Purview account name**. Spaces and symbols aren't allowed.
+ The name of the Azure Purview account must be globally unique. If you see the following error, change the name of the Azure Purview account and try creating it again.
- :::image type="content" source="media/create-catalog-portal/name-error.png" alt-text="Screenshot showing the Create Purview account screen with an account name that is already in use, and the error message highlighted.":::
+ :::image type="content" source="media/create-catalog-portal/name-error.png" alt-text="Screenshot showing the Create Azure Purview account screen with an account name that is already in use, and the error message highlighted.":::
1. Choose a **location**.
- The list shows only locations that support Purview. The location you choose will be the region where your Purview account and meta data will be stored. Sources can be housed in other regions.
+ The list shows only locations that support Azure Purview. The location you choose will be the region where your Azure Purview account and metadata will be stored. Sources can be housed in other regions.
> [!Note]
> Azure Purview does not support moving accounts across regions, so be sure to deploy to the correct region. You can find out more information about this in [move operation support for resources](../azure-resource-manager/management/move-support-resources.md).
-1. Select **Review & Create**, and then select **Create**. It takes a few minutes to complete the creation. The newly created Azure Purview account instance will appear in the list on your **Purview accounts** page.
+1. Select **Review & Create**, and then select **Create**. It takes a few minutes to complete the creation. The newly created Azure Purview account instance will appear in the list on your **Azure Purview accounts** page.
- :::image type="content" source="media/create-catalog-portal/create-resource.png" alt-text="Screenshot showing the Create Purview account screen with the Review + Create button highlighted":::
+ :::image type="content" source="media/create-catalog-portal/create-resource.png" alt-text="Screenshot showing the Create Azure Purview account screen with the Review + Create button highlighted":::
-## Open Purview Studio
+## Open Azure Purview Studio
-After your Azure Purview account is created, you'll use the Purview Studio to access and manage it. There are two ways to open Purview Studio:
+After your Azure Purview account is created, you'll use the Azure Purview Studio to access and manage it. There are two ways to open Azure Purview Studio:
-* Open your Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Purview Studio" tile on the overview page.
- :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Purview account overview page, with the Purview Studio tile highlighted.":::
+* Open your Azure Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Azure Purview Studio" tile on the overview page.
+ :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Azure Purview account overview page, with the Azure Purview Studio tile highlighted.":::
-* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Purview account, and sign in to your workspace.
+* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Azure Purview account, and sign in to your workspace.
## Next steps
-In this quickstart, you learned how to create an Azure Purview account and how to access it through the Purview Studio.
+In this quickstart, you learned how to create an Azure Purview account and how to access it through the Azure Purview Studio.
Next, you can create a user-assigned managed identity (UAMI) that will enable your new Azure Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication. To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
-Follow these next articles to learn how to navigate the Purview Studio, create a collection, and grant access to Purview:
+Follow these next articles to learn how to navigate the Azure Purview Studio, create a collection, and grant access to Azure Purview:
-* [Using the Purview Studio](use-purview-studio.md)
+* [Using the Azure Purview Studio](use-purview-studio.md)
* [Create a collection](quickstart-create-collection.md)
* [Add users to your Azure Purview account](catalog-permissions.md)
purview Create Catalog Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-catalog-powershell.md
Title: 'Quickstart: Create a Purview account with PowerShell/Azure CLI'
+ Title: 'Quickstart: Create an Azure Purview account with PowerShell/Azure CLI'
description: This Quickstart describes how to create an Azure Purview account using Azure PowerShell/Azure CLI.
# Quickstart: Create an Azure Purview account using Azure PowerShell/Azure CLI
-In this Quickstart, you'll create an Azure Purview account using Azure PowerShell/Azure CLI. [PowerShell reference for Purview](/powershell/module/az.purview/) is available, but this article will take you through all the steps needed to create an account with PowerShell.
+In this Quickstart, you'll create an Azure Purview account using Azure PowerShell/Azure CLI. [PowerShell reference for Azure Purview](/powershell/module/az.purview/) is available, but this article will take you through all the steps needed to create an account with PowerShell.
-Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end to end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure the right use of your data.
-For more information about Purview, [see our overview page](overview.md). For more information about deploying Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+For more information about Azure Purview, [see our overview page](overview.md). For more information about deploying Azure Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
[!INCLUDE [purview-quickstart-prerequisites](includes/purview-quickstart-prerequisites.md)]
For more information about Purview, [see our overview page](overview.md). For mo
-1. Create a resource group for your Purview account. You can skip this step if you already have one:
+1. Create a resource group for your Azure Purview account. You can skip this step if you already have one:
# [PowerShell](#tab/azure-powershell)
For more information about Purview, [see our overview page](overview.md). For mo
-1. Create or Deploy the Purview account
+1. Create or deploy the Azure Purview account
# [PowerShell](#tab/azure-powershell)
- Use the [New-AzPurviewAccount](/powershell/module/az.purview/new-azpurviewaccount) cmdlet to create the Purview account:
+ Use the [New-AzPurviewAccount](/powershell/module/az.purview/new-azpurviewaccount) cmdlet to create the Azure Purview account:
```azurepowershell
New-AzPurviewAccount -Name yourPurviewAccountName -ResourceGroupName myResourceGroup -Location eastus -IdentityType SystemAssigned -SkuCapacity 4 -SkuName Standard -PublicNetworkAccess Enabled
```
For more information about Purview, [see our overview page](overview.md). For mo
# [Azure CLI](#tab/azure-cli)
- 1. Create a Purview template file such as `purviewtemplate.json`. You can update `name`, `location`, and `capacity` (`4` or `16`):
+ 1. Create an Azure Purview template file such as `purviewtemplate.json`. You can update `name`, `location`, and `capacity` (`4` or `16`):
```json
{
For more information about Purview, [see our overview page](overview.md). For mo
}
```
- 1. Deploy Purview template
+ 1. Deploy the Azure Purview template
To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
For more information about Purview, [see our overview page](overview.md). For mo
1. If you deployed the Azure Purview account using a service principal instead of a user account, you will also need to run the below command in the Azure CLI:

   ```azurecli
- az purview account add-root-collection-admin --account-name [Purview Account Name] --resource-group [Resource Group Name] --object-id [User Object Id]
+ az purview account add-root-collection-admin --account-name [Azure Purview Account Name] --resource-group [Resource Group Name] --object-id [User Object Id]
```
- This command will grant the user account [collection admin](catalog-permissions.md#roles) permissions on the root collection in your Azure Purview account. This allows the user to access the Purview Studio and add permission for other users. For more information about permissions in Azure Purview, see our [permissions guide](catalog-permissions.md). For more information about collections, see our [manage collections article](how-to-create-and-manage-collections.md).
+ This command will grant the user account [collection admin](catalog-permissions.md#roles) permissions on the root collection in your Azure Purview account. This allows the user to access the Azure Purview Studio and add permissions for other users. For more information about permissions in Azure Purview, see our [permissions guide](catalog-permissions.md). For more information about collections, see our [manage collections article](how-to-create-and-manage-collections.md).
## Next steps

In this quickstart, you learned how to create an Azure Purview account.
-Follow these next articles to learn how to navigate the Purview Studio, create a collection, and grant access to Purview.
+Follow these next articles to learn how to navigate the Azure Purview Studio, create a collection, and grant access to Azure Purview.
-* [How to use the Purview Studio](use-purview-studio.md)
+* [How to use the Azure Purview Studio](use-purview-studio.md)
* [Add users to your Azure Purview account](catalog-permissions.md)
* [Create a collection](quickstart-create-collection.md)
purview Create Purview Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-purview-dotnet.md
Title: 'Quickstart: Create Purview Account using .NET SDK'
+ Title: 'Quickstart: Create an Azure Purview account using .NET SDK'
description: Create an Azure Purview Account using .NET SDK.
Last updated 09/27/2021
-# Quickstart: Create a Purview account using .NET SDK
+# Quickstart: Create an Azure Purview account using .NET SDK
In this quickstart, you'll use the [.NET SDK](/dotnet/api/overview/azure/purviewresourceprovider) to create an Azure Purview account.
-Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end to end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure the right use of your data.
-For more information about Purview, [see our overview page](overview.md). For more information about deploying Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+For more information about Azure Purview, [see our overview page](overview.md). For more information about deploying Azure Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
[!INCLUDE [purview-quickstart-prerequisites](includes/purview-quickstart-prerequisites.md)]
Next, create a C# .NET console application in Visual Studio:
Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
```
-## Create a Purview client
+## Create an Azure Purview client
1. Open **Program.cs**, and include the following statements to add references to namespaces.
Next, create a C# .NET console application in Visual Studio:
using Microsoft.IdentityModel.Clients.ActiveDirectory;
```
-2. Add the following code to the **Main** method that sets the variables. Replace the placeholders with your own values. For a list of Azure regions in which Purview is currently available, search on **Azure Purview** and select the regions that interest you on the following page: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
+2. Add the following code to the **Main** method that sets the variables. Replace the placeholders with your own values. For a list of Azure regions in which Azure Purview is currently available, search on **Azure Purview** and select the regions that interest you on the following page: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
```csharp
// Set variables
Next, create a C# .NET console application in Visual Studio:
"<specify the name of purview account to create. It must be globally unique.>"; ```
-3. Add the following code to the **Main** method that creates an instance of **PurviewManagementClient** class. You use this object to create a Purview Account.
+3. Add the following code to the **Main** method that creates an instance of the **PurviewManagementClient** class. You use this object to create an Azure Purview account.
```csharp
// Authenticate and create a purview management client
Next, create a C# .NET console application in Visual Studio:
};
```
-## Create a Purview account
+## Create an Azure Purview account
-Add the following code to the **Main** method that creates a **Purview Account**.
+Add the following code to the **Main** method that creates an **Azure Purview account**.
```csharp
// Create a purview Account
-Console.WriteLine("Creating Purview Account " + purviewAccountName + "...");
+Console.WriteLine("Creating Azure Purview Account " + purviewAccountName + "...");
Account account = new Account() { Location = region,
Console.ReadKey();
Build and start the application, then verify the execution.
-The console prints the progress of creating Purview Account.
+The console prints the progress of creating the Azure Purview account.
### Sample output

```json
-Creating Purview Account testpurview...
+Creating Azure Purview Account testpurview...
Succeeded
{
  "sku": {
Press any key to exit...
## Verify the output
-Go to the **Purview accounts** page in the [Azure portal](https://portal.azure.com) and verify the account created using the above code.
+Go to the **Azure Purview accounts** page in the [Azure portal](https://portal.azure.com) and verify the account created using the above code.
-## Delete Purview account
+## Delete Azure Purview account
-To programmatically delete a Purview Account, add the following lines of code to the program:
+To programmatically delete an Azure Purview Account, add the following lines of code to the program:
```csharp
-Console.WriteLine("Deleting the Purview Account");
+Console.WriteLine("Deleting the Azure Purview Account");
client.Accounts.Delete(resourceGroup, purviewAccountName);
```
-## Check if Purview account name is available
+## Check if Azure Purview account name is available
To check the availability of an Azure Purview account name, use the following code:
CheckNameAvailabilityRequest checkNameAvailabilityRequest = new CheckNameAvailabilityRequest()
{
    Name = purviewAccountName,
    Type = "Microsoft.Purview/accounts"
};
-Console.WriteLine("Check Purview account name");
+Console.WriteLine("Check Azure Purview account name");
Console.WriteLine(client.Accounts.CheckNameAvailability(checkNameAvailabilityRequest).NameAvailable);
```
The above code will print 'True' if the name is available and 'False' if the name is not available.
## Next steps
-The code in this tutorial creates a purview account, deletes a purview account and checks for name availability of purview account. You can now download the .NET SDK and learn about other resource provider actions you can perform for a Purview account.
+The code in this tutorial creates an Azure Purview account, deletes it, and checks the name availability of an account. You can now download the .NET SDK and learn about other resource provider actions you can perform for an Azure Purview account.
-Follow these next articles to learn how to navigate the Purview Studio, create a collection, and grant access to Purview.
+Follow these next articles to learn how to navigate the Azure Purview Studio, create a collection, and grant access to Azure Purview.
-* [How to use the Purview Studio](use-purview-studio.md)
+* [How to use the Azure Purview Studio](use-purview-studio.md)
* [Create a collection](quickstart-create-collection.md)
* [Add users to your Azure Purview account](catalog-permissions.md)
purview Create Purview Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-purview-portal-faq.md
Title: Create an Azure Policy exception for Azure Purview
-description: This article describes how to create an Azure Policy exception for Purview while leaving existing Policies in place to maintain security.
+description: This article describes how to create an Azure Policy exception for Azure Purview while leaving existing Policies in place to maintain security.
Last updated 08/26/2021
-# Create an Azure Policy exception for Purview
+# Create an Azure Policy exception for Azure Purview
-Many subscriptions have [Azure Policies](../governance/policy/overview.md) in place that restrict the creation of some resources. This is to maintain subscription security and cleanliness. However, Purview accounts deploy two other Azure resources when they are created: an Azure Storage account, and an Event Hub namespace. When you [create Purview Account](create-catalog-portal.md), these resources will be deployed. They will be managed by Azure, so you don't need to maintain them, but you will need to deploy them.
+Many subscriptions have [Azure Policies](../governance/policy/overview.md) in place that restrict the creation of some resources. This is to maintain subscription security and cleanliness. However, Azure Purview accounts deploy two other Azure resources when they are created: an Azure Storage account and an Event Hub namespace. When you [create an Azure Purview account](create-catalog-portal.md), these resources will be deployed. They will be managed by Azure, so you don't need to maintain them, but you will need to deploy them.
To maintain your policies in your subscription, but still allow the creation of these managed resources, you can create a policy exception.
-## Create a policy exception for Purview
+## Create a policy exception for Azure Purview
1. Navigate to the [Azure portal](https://portal.azure.com) and search for **Policy**
To maintain your policies in your subscription, but still allow the creation of
```

> [!Note]
- > The tag could be anything beside `resourceBypass` and it's up to you to define value when creating Purview in latter steps as long as the policy can detect the tag.
+ > The tag could be anything besides `resourceBypass`, and it's up to you to define the value when creating Azure Purview in later steps, as long as the policy can detect the tag.
:::image type="content" source="media/create-catalog-portal/policy-definition.png" alt-text="Screenshot showing how to create policy definition.":::
To maintain your policies in your subscription, but still allow the creation of
> [!Note]
> If you have **Azure Policy** and need to add an exception as in **Prerequisites**, you need to add the correct tag. For example, you can add a `resourceBypass` tag:
-> :::image type="content" source="media/create-catalog-portal/add-purview-tag.png" alt-text="Add tag to Purview account.":::
+> :::image type="content" source="media/create-catalog-portal/add-purview-tag.png" alt-text="Add tag to Azure Purview account.":::
## Next steps
purview Create Purview Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-purview-python.md
Title: 'Quickstart: Create a Purview account using Python'
+ Title: 'Quickstart: Create an Azure Purview account using Python'
description: Create an Azure Purview account using Python.
Last updated 09/27/2021
-# Quickstart: Create a Purview account using Python
+# Quickstart: Create an Azure Purview account using Python
-In this quickstart, you will create a Purview account programatically using Python. [Python reference for Purview](/python/api/azure-mgmt-purview/) is available, but this article will take you through all the steps needed to create an account with Python.
+In this quickstart, you will create an Azure Purview account programmatically using Python. [Python reference for Azure Purview](/python/api/azure-mgmt-purview/) is available, but this article will take you through all the steps needed to create an account with Python.
-Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end to end linage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
+Azure Purview is a data governance service that helps you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure the right use of your data.
-For more information about Purview, [see our overview page](overview.md). For more information about deploying Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+For more information about Azure Purview, [see our overview page](overview.md). For more information about deploying Azure Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
[!INCLUDE [purview-quickstart-prerequisites](includes/purview-quickstart-prerequisites.md)]
For more information about Purview, [see our overview page](overview.md). For mo
pip install azure-mgmt-resource
```
-3. To install the Python package for Purview, run the following command:
+3. To install the Python package for Azure Purview, run the following command:
```python
pip install azure-mgmt-purview
```
- The [Python SDK for Purview](https://github.com/Azure/azure-sdk-for-python) supports Python 2.7, 3.3, 3.4, 3.5, 3.6 and 3.7.
+ The [Python SDK for Azure Purview](https://github.com/Azure/azure-sdk-for-python) supports Python 2.7, 3.3, 3.4, 3.5, 3.6 and 3.7.
4. To install the Python package for Azure Identity authentication, run the following command:
For more information about Purview, [see our overview page](overview.md). For mo
# The purview name. It must be globally unique.
purview_name = '<purview account name>'
- # Location name, where Purview account must be created.
+ # Location name, where Azure Purview account must be created.
location = '<location name>'

# Specify your Active Directory client ID, client secret, and tenant ID
For more information about Purview, [see our overview page](overview.md). For mo
try:
    pa = (purview_client.accounts.begin_create_or_update(rg_name, purview_name, purview_resource)).result()
- print("location:", pa.location, " Purview Account Name: ", pa.name, " Id: " , pa.id ," tags: " , pa.tags)
+ print("location:", pa.location, " Azure Purview Account Name: ", pa.name, " Id: " , pa.id ," tags: " , pa.tags)
except: print("Error") print_item(pa)
For more information about Purview, [see our overview page](overview.md). For mo
pa = (purview_client.accounts.get(rg_name, purview_name))
print(getattr(pa,'provisioning_state'))
if getattr(pa,'provisioning_state') == "Failed" :
- print("Error in creating Purview account")
+ print("Error in creating Azure Purview account")
    break
time.sleep(30)
```
Here is the full Python code:
try:
    pa = (purview_client.accounts.begin_create_or_update(rg_name, purview_name, purview_resource)).result()
- print("location:", pa.location, " Purview Account Name: ", purview_name, " Id: " , pa.id ," tags: " , pa.tags)
+ print("location:", pa.location, " Azure Purview Account Name: ", purview_name, " Id: " , pa.id ," tags: " , pa.tags)
except: print("Error in submitting job to create account") print_item(pa)
Here is the full Python code:
pa = (purview_client.accounts.get(rg_name, purview_name))
print(getattr(pa,'provisioning_state'))
if getattr(pa,'provisioning_state') == "Failed" :
- print("Error in creating Purview account")
+ print("Error in creating Azure Purview account")
    break
time.sleep(30)
main()
## Run the code
-Build and start the application. The console prints the progress of Purview account creation. Wait until it is completed.
+Build and start the application. The console prints the progress of Azure Purview account creation. Wait until it is completed.
Here is the sample output:

```console
-location: southcentralus Purview Account Name: purviewpython7 Id: /subscriptions/8c2c7b23-848d-40fe-b817-690d79ad9dfd/resourceGroups/Demo_Catalog/providers/Microsoft.Purview/accounts/purviewpython7 tags: None
+location: southcentralus Azure Purview Account Name: purviewpython7 Id: /subscriptions/8c2c7b23-848d-40fe-b817-690d79ad9dfd/resourceGroups/Demo_Catalog/providers/Microsoft.Purview/accounts/purviewpython7 tags: None
Creating
Creating
Succeeded
Succeeded
## Verify the output
-Go to the **Purview accounts** page in the Azure portal and verify the account created using the above code.
+Go to the **Azure Purview accounts** page in the Azure portal and verify the account created using the above code.
-## Delete Purview account
+## Delete Azure Purview account
To delete the Azure Purview account, add the following code to the program, then run it:
pa = purview_client.accounts.begin_delete(rg_name, purview_name).result()
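You can also check whether an account name is free before creating it, mirroring the name-availability check in the .NET quickstart above. This sketch assumes the same `purview_client` setup as the earlier code; the model and operation names are assumptions to verify against your installed `azure-mgmt-purview` version.

```python
from azure.mgmt.purview.models import CheckNameAvailabilityRequest

# Assumes purview_client was created as shown earlier in this article.
request = CheckNameAvailabilityRequest(
    name=purview_name,
    type="Microsoft.Purview/accounts",
)
result = purview_client.accounts.check_name_availability(request)
print(result.name_available)  # True if the account name is free
```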
## Next steps
-The code in this tutorial creates a purview account and deletes a purview account. You can now download the python SDK and learn about other resource provider actions you can perform for a Purview account.
+The code in this tutorial creates an Azure Purview account and then deletes it. You can now download the Python SDK and learn about other resource provider actions you can perform for an Azure Purview account.
-Follow these next articles to learn how to navigate the Purview Studio, create a collection, and grant access to Purview.
+Follow these next articles to learn how to navigate the Azure Purview Studio, create a collection, and grant access to Azure Purview.
-* [How to use the Purview Studio](use-purview-studio.md)
+* [How to use the Azure Purview Studio](use-purview-studio.md)
* [Create a collection](quickstart-create-collection.md)
* [Add users to your Azure Purview account](catalog-permissions.md)
purview Create Sensitivity Label https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-sensitivity-label.md
Title: Labeling in Azure Purview
-description: Start utilizing sensitivity labels and classifications to enhance your Purview assets
+description: Start utilizing sensitivity labels and classifications to enhance your Azure Purview assets
For example, applying a sensitivity label 'highly confidential' to a documen
Azure Purview allows you to apply sensitivity labels to assets, enabling you to classify and protect your data.
-* **Label travels with the data:** The sensitivity labels created in Microsoft 365 can also be extended to Purview, SharePoint, Teams, Power BI, and SQL. When you apply a label on an office document and then scan it in Purview, the label will flow to Purview. While the label is applied to the actual file in M365, it is only added as metadata in the Purview catalog. While there are differences in how a label is applied to an asset across various services/applications, labels travel with the data and is recognized by all the services you extend it to.
-* **Overview of your data estate:** Purview provides insights into your data through pre-canned reports. When you scan data in Purview, we hydrate the reports with information on what assets you have, scan history, classifications found in your data, labels applied, glossary terms, etc.
+* **Label travels with the data:** The sensitivity labels created in Microsoft 365 can also be extended to Azure Purview, SharePoint, Teams, Power BI, and SQL. When you apply a label to an Office document and then scan it in Azure Purview, the label will flow to Azure Purview. While the label is applied to the actual file in M365, it is only added as metadata in the Azure Purview catalog. While there are differences in how a label is applied to an asset across various services/applications, labels travel with the data and are recognized by all the services you extend them to.
+* **Overview of your data estate:** Azure Purview provides insights into your data through pre-canned reports. When you scan data in Azure Purview, we hydrate the reports with information on what assets you have, scan history, classifications found in your data, labels applied, glossary terms, etc.
* **Automatic labeling:** Labels can be applied automatically based on sensitivity of the data. When an asset is scanned for sensitive data, autolabeling rules are used to decide which sensitivity label to apply. You can create autolabeling rules for each sensitivity label, defining which classification/sensitive information type constitutes a label.
* **Apply labels to files and database columns:** Labels can be applied to files in storage like Azure Data Lake, Azure Files, etc. and to schematized data like columns in Azure SQL DB, Cosmos DB, etc.
Sensitivity labels are tags that you can apply on assets to classify and protect
## How to apply labels to assets in Azure Purview

Being able to apply labels to your assets in Azure Purview requires you to perform the following steps:
Sensitivity labels are supported in Azure Purview for the following data sources
## Labeling for SQL databases
-In addition to Purview labeling for schematized data assets, Microsoft also supports labeling for SQL database columns using the SQL data classification in [SQL Server Management Studio (SSMS)](/sql/ssms/sql-server-management-studio-ssms). While Purview uses the global [sensitivity labels](/microsoft-365/compliance/sensitivity-labels), SSMS only uses labels defined locally.
+In addition to Azure Purview labeling for schematized data assets, Microsoft also supports labeling for SQL database columns using the SQL data classification in [SQL Server Management Studio (SSMS)](/sql/ssms/sql-server-management-studio-ssms). While Azure Purview uses the global [sensitivity labels](/microsoft-365/compliance/sensitivity-labels), SSMS only uses labels defined locally.
-Labeling in Purview and labeling in SSMS are separate processes that do not currently interact with each other. Therefore, **labels applied in SSMS are not shown in Purview, and vice versa**. We recommend Azure Purview for labeling SQL databases, as it uses global MIP labels that can be applied across multiple platforms.
+Labeling in Azure Purview and labeling in SSMS are separate processes that do not currently interact with each other. Therefore, **labels applied in SSMS are not shown in Azure Purview, and vice versa**. We recommend Azure Purview for labeling SQL databases, as it uses global MIP labels that can be applied across multiple platforms.
For more information, see the [SQL data discovery and classification documentation](/sql/relational-databases/security/sql-data-discovery-and-classification).
purview Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/disaster-recovery.md
Last updated 04/23/2021
-# Disaster recovery for Purview
+# Disaster recovery for Azure Purview
-This article explains how to configure a disaster recovery environment for Azure Purview. Azure data center outages are rare, but can last anywhere from a few minutes to hours. Data Center outages can cause disruption to environments that are being relied on for data governance. By following the steps detailed in this article, you can continue to govern your data in the event of a data center outage for the primary region of your Purview account.
+This article explains how to configure a disaster recovery environment for Azure Purview. Azure data center outages are rare, but can last anywhere from a few minutes to hours. Data center outages can cause disruption to environments that are being relied on for data governance. By following the steps detailed in this article, you can continue to govern your data in the event of a data center outage for the primary region of your Azure Purview account.
## Achieve business continuity for Azure Purview

Business continuity and disaster recovery (BCDR) in an Azure Purview instance refers to the mechanisms, policies, and procedures that enable your business to protect against data loss and continue operating in the face of disruption, particularly to its scanning, catalog, and insights tiers. This page explains how to configure a disaster recovery environment for Azure Purview.
-Today, Azure Purview does not support automated BCDR. Until that support is added, you are responsible to take care of backup and restore activities. You can manually create a secondary Purview account as a warm standby instance in another region.
+Today, Azure Purview does not support automated BCDR. Until that support is added, you are responsible for backup and restore activities. You can manually create a secondary Azure Purview account as a warm standby instance in another region.
The following steps show how you can achieve disaster recovery manually:
-1. Once the primary Purview account is created in a certain region, you must provision one or more secondary Purview accounts in separate regions from Azure portal.
+1. Once the primary Azure Purview account is created in a certain region, you must provision one or more secondary Azure Purview accounts in separate regions from Azure portal.
-2. All activities performed on the primary Purview account must be carried out on the secondary Purview accounts as well. This includes:
+2. All activities performed on the primary Azure Purview account must be carried out on the secondary Azure Purview accounts as well (a comparison sketch follows this list). This includes:
- Maintain Account information
- Create and maintain custom Scan rule sets, Classifications, and Classification rules
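One way to keep the accounts aligned is to periodically compare their configuration. Below is a minimal sketch that lists scan rule sets on both accounts and reports anything missing from the standby. The account names are hypothetical, and the scanning endpoint and API version are assumptions to verify against the current Azure Purview Scanning REST API reference.

```python
import requests
from azure.identity import DefaultAzureCredential

# Hypothetical primary and secondary (warm standby) accounts.
primary = "https://contoso-purview-primary.purview.azure.com"
secondary = "https://contoso-purview-secondary.purview.azure.com"

credential = DefaultAzureCredential()
token = credential.get_token("https://purview.azure.net/.default").token
headers = {"Authorization": f"Bearer {token}"}

def scan_rule_set_names(endpoint: str) -> set:
    # List the scan rule sets defined on an account.
    resp = requests.get(
        f"{endpoint}/scan/scanrulesets",
        params={"api-version": "2022-02-01-preview"},
        headers=headers,
    )
    resp.raise_for_status()
    return {item["name"] for item in resp.json().get("value", [])}

# Report anything defined on the primary that is missing from the standby.
missing = scan_rule_set_names(primary) - scan_rule_set_names(secondary)
print("Scan rule sets to recreate on the secondary:", sorted(missing))
```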
The following steps show how you can achieve disaster recovery manually:
As you develop your manual BCDR plan, keep the following points in mind:

-- You will be charged for primary and secondary Purview accounts.
+- You will be charged for primary and secondary Azure Purview accounts.
-- The primary and secondary Purview accounts cannot be configured to the same Azure Data Factory, Azure Data Share and Synapse Analytics accounts, if applicable. As a result, the lineage from Azure Data Factory and Azure Data Share cannot be seen in the secondary Purview accounts. Also, the Synapse Analytics workspace associated with the primary Purview account cannot be associated with secondary Purview accounts. This is a limitation today and will be addressed when automated BCDR is supported.
+- The primary and secondary Azure Purview accounts cannot be configured to the same Azure Data Factory, Azure Data Share and Synapse Analytics accounts, if applicable. As a result, the lineage from Azure Data Factory and Azure Data Share cannot be seen in the secondary Azure Purview accounts. Also, the Synapse Analytics workspace associated with the primary Azure Purview account cannot be associated with secondary Azure Purview accounts. This is a limitation today and will be addressed when automated BCDR is supported.
-- The integration runtimes are specific to a Purview account. Hence, if scans must run in primary and secondary Purview accounts in-parallel, multiple self-hosted integration runtimes must be maintained. This limitation will also be addressed when automated BCDR is supported.
+- The integration runtimes are specific to an Azure Purview account. Hence, if scans must run in primary and secondary Azure Purview accounts in parallel, multiple self-hosted integration runtimes must be maintained. This limitation will also be addressed when automated BCDR is supported.
-- Parallel execution of scans from both primary and secondary Purview accounts on the same source can affect the performance of the source. This can result in scan durations to vary across the Purview accounts.
+- Parallel execution of scans from both primary and secondary Azure Purview accounts on the same source can affect the performance of the source. This can cause scan durations to vary across the Azure Purview accounts.
## Related information
purview Glossary Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/glossary-insights.md
Title: Glossary report on your data using Purview Insights
-description: This how-to guide describes how to view and use Purview Insights glossary reporting on your data.
+ Title: Glossary report on your data using Azure Purview Insights
+description: This how-to guide describes how to view and use Azure Purview Insights glossary reporting on your data.
Last updated 09/27/2021
# Glossary insights on your data in Azure Purview
-This how-to guide describes how to access, view, and filter Purview Glossary insight reports for your data.
+This how-to guide describes how to access, view, and filter Azure Purview Glossary insight reports for your data.
> [!IMPORTANT]
> Azure Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
This how-to guide describes how to access, view, and filter Purview Glossary ins
In this how-to guide, you'll learn how to:

> [!div class="checklist"]
-> - Go to Insights from your Purview account
+> - Go to Insights from your Azure Purview account
> - Get a bird's eye view of your data

## Prerequisites
-Before getting started with Purview insights, make sure that you've completed the following steps:
+Before getting started with Azure Purview insights, make sure that you've completed the following steps:
- Set up your Azure resources and populate the account with data
Before getting started with Purview insights, make sure that you've completed th
For more information, see [Manage data sources in Azure Purview](manage-data-sources.md).
-## Use Purview Glossary Insights
+## Use Azure Purview Glossary Insights
In Azure Purview, you can create glossary terms and attach them to assets. Later, you can view the glossary distribution in Glossary Insights. This tells you the state of your glossary by terms attached to assets. It also tells you terms by status and distribution of roles by number of users.

**To view Glossary Insights:**
-1. Go to the **Azure Purview** [instance screen in the Azure portal](https://aka.ms/purviewportal) and select your Purview account.
+1. Go to the **Azure Purview** [instance screen in the Azure portal](https://aka.ms/purviewportal) and select your Azure Purview account.
-1. On the **Overview** page, in the **Get Started** section, select **Open Purview Studio** account tile.
+1. On the **Overview** page, in the **Get Started** section, select **Open Azure Purview Studio** account tile.
- :::image type="content" source="./media/glossary-insights/portal-access.png" alt-text="Launch Purview from the Azure portal":::
+ :::image type="content" source="./media/glossary-insights/portal-access.png" alt-text="Launch Azure Purview from the Azure portal":::
-1. On the Purview **Home** page, select **Insights** on the left menu.
+1. On the Azure Purview **Home** page, select **Insights** on the left menu.
:::image type="content" source="./media/glossary-insights/view-insights.png" alt-text="View your insights in the Azure portal":::
-1. In the **Insights** area, select **Glossary** to display the Purview **Glossary insights** report.
+1. In the **Insights** area, select **Glossary** to display the Azure Purview **Glossary insights** report.
**Glossary Insights** provides you, as a business user, valuable information to maintain a well-defined glossary for your organization.
-1. The report starts with **High-level KPIs** that shows ***Total terms*** in your Purview account, ***Approved terms without assets*** and ***Expired terms with assets***. Each of these values will help you identify the health of your Glossary.
+1. The report starts with **High-level KPIs** that show ***Total terms*** in your Azure Purview account, ***Approved terms without assets***, and ***Expired terms with assets***. Each of these values will help you identify the health of your glossary.
:::image type="content" source="./media/glossary-insights/glossary-kpi.png" alt-text="View glossary insights KPI":::
purview How To Access Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-access-policies-storage.md
Previously updated : 1/5/2022 Last updated : 1/14/2022
Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
>[!IMPORTANT]
> The access policy feature is only available on **new** Azure Purview and Azure Storage accounts.

- Create a new or use an existing isolated test subscription. You can [follow this guide to create one](../cost-management-billing/manage/create-subscription.md).
-- Create a new Azure Purview account. You can [follow our quick-start guide to create one](create-catalog-portal.md).
+- Create a new or use an existing Azure Purview account. You can [follow our quick-start guide to create one](create-catalog-portal.md).
- Create a new Azure Storage account in one of the regions listed below. You can [follow this guide to create one](../storage/common/storage-account-create.md). Only Storage account versions >= 81.x.x support policy enforcement.

[!INCLUDE [supported regions](./includes/storage-access-policy-regions.md)]
purview How To Automatically Label Your Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-automatically-label-your-content.md
The following steps extend your sensitivity labels and enable them to be availab
For example:

> [!TIP]
->If you don't see the button, and you're not sure if consent has been granted to extend labeling to assets in Purview, see [this FAQ](sensitivity-labels-frequently-asked-questions.yml#how-can-i-determine-if-consent-has-been-granted-to-extend-labeling-to-purview) item on how to determine the status.
+>If you don't see the button, and you're not sure if consent has been granted to extend labeling to assets in Azure Purview, see [this FAQ](sensitivity-labels-frequently-asked-questions.yml#how-can-i-determine-if-consent-has-been-granted-to-extend-labeling-to-azure-purview) item on how to determine the status.
>
-After you've extended labeling to assets in Azure Purview, all published sensitivity labels are available for use in Purview.
+After you've extended labeling to assets in Azure Purview, all published sensitivity labels are available for use in Azure Purview.
### Step 3: Create or modify an existing label to automatically label content
Once you create a label, you will need to Scan your data in Azure Purview to aut
## Scan your data to apply sensitivity labels automatically
-Scan your data in Azure Purview to automatically apply the labels you've created, based on the autolabeling rules you've defined. Allow up to 24 hours for sensitivity label changes to reflect in Purview.
+Scan your data in Azure Purview to automatically apply the labels you've created, based on the autolabeling rules you've defined. Allow up to 24 hours for sensitivity label changes to be reflected in Azure Purview.
For more information on how to set up scans on various assets in Azure Purview, see:
purview How To Browse Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-browse-catalog.md
Searching a data catalog is a great tool for data discovery if a data consumer k
To access the browse experience, select "Browse assets" from the data catalog home page.

## Browse by collection
Browse by collection allows you to explore the different collections you are a d
:::image type="content" source="media/how-to-browse-catalog/browse-by-collection.png" alt-text="Screenshot showing the browse by collection page" border="true":::
-Once a collection is selected, you will get a list of assets in that collection with the facets and filters available in search. As a collection can have thousands of assets, browse uses the Purview search relevance engine to boost the most important assets to the top.
+Once a collection is selected, you will get a list of assets in that collection with the facets and filters available in search. As a collection can have thousands of assets, browse uses the Azure Purview search relevance engine to boost the most important assets to the top.
:::image type="content" source="media/how-to-browse-catalog/browse-collection-results.png" alt-text="Screenshot showing the browse by collection results" border="true":::
A native browsing experience with hierarchical namespace is provided for each co
- [How to create, import, and export glossary terms](how-to-create-import-export-glossary.md) - [How to manage term templates for business glossary](how-to-manage-term-templates.md)-- [How to search the Purview data catalog](how-to-search-catalog.md)
+- [How to search the Azure Purview data catalog](how-to-search-catalog.md)
purview How To Create And Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-create-and-manage-collections.md
# Create and manage collections in Azure Purview
-Collections in Azure Purview can be used to organize assets and sources by your business's flow, but they are also the tool used to manage access across Purview. This guide will take you through the creation and management of these collections, as well as cover steps about how to register sources and add assets into your collections.
+Collections in Azure Purview can be used to organize assets and sources by your business's flow, but they are also the tool used to manage access across Azure Purview. This guide will take you through the creation and management of these collections, as well as cover steps about how to register sources and add assets into your collections.
## Prerequisites
Collections in Azure Purview can be used to organize assets and sources by your
* Your own [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
### Check permissions
-In order to create and manage collections in Purview, you will need to be a **Collection Admin** within Purview. We can check these permissions in the [Purview Studio](https://web.purview.azure.com/resource/). You can find the studio by going to your Purview resource in the [Azure portal](https://portal.azure.com), and selecting the Open Purview Studio tile on the overview page.
+In order to create and manage collections in Azure Purview, you will need to be a **Collection Admin** within Azure Purview. You can check these permissions in the [Azure Purview Studio](https://web.purview.azure.com/resource/). You can find the studio by going to your Azure Purview resource in the [Azure portal](https://portal.azure.com) and selecting the Open Azure Purview Studio tile on the overview page.
1. Select Data Map > Collections from the left pane to open the collection management page.
- :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
-1. Select your root collection. This is the top collection in your collection list and will have the same name as your Purview resource. In our example below, it is called Contoso Purview. Alternatively, if collections already exist you can select any collection where you want to create a subcollection.
+1. Select your root collection. This is the top collection in your collection list and will have the same name as your Azure Purview resource. In our example below, it is called Contoso Purview. Alternatively, if collections already exist, you can select any collection where you want to create a subcollection.
- :::image type="content" source="./media/how-to-create-and-manage-collections/select-root-collection.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/select-root-collection.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
1. Select **Role assignments** in the collection window.
- :::image type="content" source="./media/how-to-create-and-manage-collections/role-assignments.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/role-assignments.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
-1. To create a collection, you will need to be in the collection admin list under role assignments. If you created the Purview resource, you should be listed as a collection admin under the root collection already. If not, you will need to contact the collection admin to grant you permission.
+1. To create a collection, you will need to be in the collection admin list under role assignments. If you created the Azure Purview resource, you should be listed as a collection admin under the root collection already. If not, you will need to contact the collection admin to grant you permission.
- :::image type="content" source="./media/how-to-create-and-manage-collections/collection-admins.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/collection-admins.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
## Collection management
You will need to be a collection admin in order to create a collection. If you aren't one, contact your collection admin to grant you the permission.
1. Select Data Map > Collections from the left pane to open the collection management page.
- :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the Collections tab selected and open." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/find-collections.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the Collections tab selected and open." border="true":::
1. Select **+ Add a collection**. Again, note that only [collection admins](#check-permissions) can manage collections.
- :::image type="content" source="./media/how-to-create-and-manage-collections/select-add-a-collection.png" alt-text="Screenshot of Purview studio window, showing the new collection window, with the add a collection buttons highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/select-add-a-collection.png" alt-text="Screenshot of Azure Purview studio window, showing the new collection window, with the add a collection buttons highlighted." border="true":::
1. In the right panel, enter the collection name and description. If needed, you can also add users or groups as collection admins to the new collection.
1. Select **Create**.
- :::image type="content" source="./media/how-to-create-and-manage-collections/create-collection.png" alt-text="Screenshot of Purview studio window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/create-collection.png" alt-text="Screenshot of Azure Purview studio window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
1. The new collection's information will reflect on the page.
- :::image type="content" source="./media/how-to-create-and-manage-collections/created-collection.png" alt-text="Screenshot of Purview studio window, showing the newly created collection window." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/created-collection.png" alt-text="Screenshot of Azure Purview studio window, showing the newly created collection window." border="true":::
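If you need to script collection creation rather than use the studio, the Azure Purview account data plane exposes a collections API. The call below is a minimal sketch, assuming the 2019-11-01-preview API version and a hypothetical account name; verify the endpoint and version against the current REST reference before relying on it.

```bash
# Sketch: create a sub-collection via the Azure Purview account data plane.
# The account name, collection names, and api-version are assumptions.
ACCOUNT="contoso-purview"   # hypothetical Azure Purview account name

# Acquire a token for the Purview data plane (requires Azure CLI login).
TOKEN=$(az account get-access-token --resource https://purview.azure.net \
        --query accessToken -o tsv)

# PUT creates or updates the collection; parentCollection points at the root
# collection, which has the same name as the Azure Purview resource.
curl -X PUT "https://${ACCOUNT}.purview.azure.com/account/collections/mySubCollection?api-version=2019-11-01-preview" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"friendlyName\": \"My sub-collection\", \"parentCollection\": {\"referenceName\": \"${ACCOUNT}\"}}"
```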
### Edit a collection
1. Select **Edit** either from the collection detail page, or from the collection's drop-down menu.
- :::image type="content" source="./media/how-to-create-and-manage-collections/edit-collection.png" alt-text="Screenshot of Purview studio window, open to collection window, with the 'edit' button highlighted both in the selected collection window, and under the ellipsis button next to the name of the collection." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/edit-collection.png" alt-text="Screenshot of Azure Purview studio window, open to collection window, with the 'edit' button highlighted both in the selected collection window, and under the ellipsis button next to the name of the collection." border="true":::
1. Currently, the collection description and collection admins can be edited. Make any changes, then select **Save** to save your changes.
- :::image type="content" source="./media/how-to-create-and-manage-collections/edit-description.png" alt-text="Screenshot of Purview studio window with the edit collection window open, a description added to the collection, and the save button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/edit-description.png" alt-text="Screenshot of Azure Purview studio window with the edit collection window open, a description added to the collection, and the save button highlighted." border="true":::
### View collections
1. Select the triangle icon beside the collection's name to expand or collapse the collection hierarchy. Select the collection names to navigate.
- :::image type="content" source="./media/how-to-create-and-manage-collections/subcollection-menu.png" alt-text="Screenshot of Purview studio collection window, with the button next to the collection name highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/subcollection-menu.png" alt-text="Screenshot of Azure Purview studio collection window, with the button next to the collection name highlighted." border="true":::
1. Type in the filter box at the top of the list to filter collections.
- :::image type="content" source="./media/how-to-create-and-manage-collections/filter-collections.png" alt-text="Screenshot of Purview studio collection window, with the filter above the collections highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/filter-collections.png" alt-text="Screenshot of Azure Purview studio collection window, with the filter above the collections highlighted." border="true":::
1. Select **Refresh** in the root collection's contextual menu to reload the collection list.
- :::image type="content" source="./media/how-to-create-and-manage-collections/refresh-collections.png" alt-text="Screenshot of Purview studio collection window, with the button next to the Resource name selected, and the refresh button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/refresh-collections.png" alt-text="Screenshot of Azure Purview studio collection window, with the button next to the Resource name selected, and the refresh button highlighted." border="true":::
1. Select **Refresh** on the collection detail page to reload the single collection.
- :::image type="content" source="./media/how-to-create-and-manage-collections/refresh-single-collection.png" alt-text="Screenshot of Purview studio collection window, with the refresh button under the collection window highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/refresh-single-collection.png" alt-text="Screenshot of Azure Purview studio collection window, with the refresh button under the collection window highlighted." border="true":::
### Delete a collection
You will need to be a collection admin in order to delete a collection. If you aren't one, contact your collection admin to grant you the permission.
1. Select **Delete** from the collection detail page.
- :::image type="content" source="./media/how-to-create-and-manage-collections/delete-collections.png" alt-text="Screenshot of Purview studio window to delete a collection" border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/delete-collections.png" alt-text="Screenshot of Azure Purview studio window to delete a collection" border="true":::
2. Select **Confirm** when prompted, **Are you sure you want to delete this collection?**
- :::image type="content" source="./media/how-to-create-and-manage-collections/delete-collection-confirmation.png" alt-text="Screenshot of Purview studio window showing confirmation message to delete a collection" border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/delete-collection-confirmation.png" alt-text="Screenshot of Azure Purview studio window showing confirmation message to delete a collection" border="true":::
-3. Verify deletion of the collection from your Purview Data Map.
+3. Verify deletion of the collection from your Azure Purview Data Map.
## Add roles and restrict access through collections
-Since permissions are managed through collections in Purview, it is important to understand the roles and what permissions they will give your users. A user granted permissions on a collection will have access to sources and assets associated with that collection, as well as inherit permissions to subcollections. Inheritance [can be restricted](#restrict-inheritance), but is allowed by default.
+Since permissions are managed through collections in Azure Purview, it is important to understand the roles and what permissions they will give your users. A user granted permissions on a collection will have access to sources and assets associated with that collection, and will also inherit those permissions on subcollections. Inheritance [can be restricted](#restrict-inheritance), but is allowed by default.
The guide below will discuss the roles, how to manage them, and permissions inheritance.
The guide below will discuss the roles, how to manage them, and permissions inhe
All assigned roles apply to sources, assets, and other objects within the collection where the role is applied.
-* **Collection admins** - can edit the collection, its details, and add subcollections. They can also add data curators, data readers, and other Purview roles to a collection scope. Collection admins that are automatically inherited from a parent collection can't be removed.
+* **Collection admins** - can edit the collection, its details, and add subcollections. They can also add data curators, data readers, and other Azure Purview roles to a collection scope. Collection admins that are automatically inherited from a parent collection can't be removed.
* **Data source admins** - can manage data sources and data scans.
* **Data curators** - can perform create, read, modify, and delete actions on catalog data objects and establish relationships between objects.
* **Data readers** - can access but not modify catalog data objects.
All assigned roles apply to sources, assets, and other objects within the collec
1. Select the **Role assignments** tab to see all the roles in a collection. Only a collection admin can manage role assignments.
- :::image type="content" source="./media/how-to-create-and-manage-collections/select-role-assignments.png" alt-text="Screenshot of Purview studio collection window, with the role assignments tab highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/select-role-assignments.png" alt-text="Screenshot of Azure Purview studio collection window, with the role assignments tab highlighted." border="true":::
1. Select **Edit role assignments** or the person icon to edit each role member.
- :::image type="content" source="./media/how-to-create-and-manage-collections/edit-role-assignments.png" alt-text="Screenshot of Purview studio collection window, with the edit role assignments dropdown list selected." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/edit-role-assignments.png" alt-text="Screenshot of Azure Purview studio collection window, with the edit role assignments dropdown list selected." border="true":::
1. Type in the textbox to search for users you want to add as role members. Select **X** to remove members you don't want to add.
- :::image type="content" source="./media/how-to-create-and-manage-collections/search-user-permissions.png" alt-text="Screenshot of Purview studio collection collection admin window with the search bar highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/search-user-permissions.png" alt-text="Screenshot of Azure Purview studio collection admin window with the search bar highlighted." border="true":::
1. Select **OK** to save your changes, and you will see the new users reflected in the role assignments list.
All assigned roles apply to sources, assets, and other objects within the collec
1. Select the **X** button next to a user's name to remove a role assignment.
- :::image type="content" source="./media/how-to-create-and-manage-collections/remove-role-assignment.png" alt-text="Screenshot of Purview studio collection window, with the role assignments tab selected, and the x button beside one of the names highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/remove-role-assignment.png" alt-text="Screenshot of Azure Purview studio collection window, with the role assignments tab selected, and the x button beside one of the names highlighted." border="true":::
1. Select **Confirm** if you're sure you want to remove the user.
All assigned roles apply to sources, assets, and other objects within the collec
### Restrict inheritance
-Collection permissions are inherited automatically from the parent collection. For example, any permissions on the root collection (the collection at the top of the list that has the same name as your Purview resource), will be inherited by all collections below it. You can restrict inheritance from a parent collection at any time, using the restrict inherited permissions option.
+Collection permissions are inherited automatically from the parent collection. For example, any permissions on the root collection (the collection at the top of the list that has the same name as your Azure Purview resource), will be inherited by all collections below it. You can restrict inheritance from a parent collection at any time, using the restrict inherited permissions option.
Once you restrict inheritance, you will need to add users directly to the restricted collection to grant them access.
1. Navigate to the collection where you want to restrict inheritance and select the **Role assignments** tab.
1. Select **Restrict inherited permissions** and select **Restrict access** in the popup dialog to remove inherited permissions from this collection and any subcollections. Note that collection admin permissions won't be affected.
- :::image type="content" source="./media/how-to-create-and-manage-collections/restrict-access-inheritance.png" alt-text="Screenshot of Purview studio collection window, with the role assignments tab selected, and the restrict inherited permissions slide button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/restrict-access-inheritance.png" alt-text="Screenshot of Azure Purview studio collection window, with the role assignments tab selected, and the restrict inherited permissions slide button highlighted." border="true":::
1. After restriction, inherited members are removed from the roles except for collection admin.
1. Select the **Restrict inherited permissions** toggle button again to revert.
- :::image type="content" source="./media/how-to-create-and-manage-collections/remove-restriction.png" alt-text="Screenshot of Purview studio collection window, with the role assignments tab selected, and the unrestrict inherited permissions slide button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/remove-restriction.png" alt-text="Screenshot of Azure Purview studio collection window, with the role assignments tab selected, and the unrestrict inherited permissions slide button highlighted." border="true":::
## Register source to a collection
1. Select **Register** or the register icon on a collection node to register a data source. Note that only a data source admin can register sources.
- :::image type="content" source="./media/how-to-create-and-manage-collections/register-by-collection.png" alt-text="Screenshot of the data map Purview studio window with the register button highlighted both at the top of the page and under a collection."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/register-by-collection.png" alt-text="Screenshot of the data map Azure Purview studio window with the register button highlighted both at the top of the page and under a collection."border="true":::
1. Fill in the data source name and other source information. The bottom of the form lists all the collections where you have scan permission. You can select one collection. All assets under this source will belong to the collection you select.
Once you restrict inheritance, you will need to add users directly to the restri
1. The created data source will be put under the selected collection. Select **View details** to see the data source.
- :::image type="content" source="./media/how-to-create-and-manage-collections/see-registered-source.png" alt-text="Screenshot of the data map Purview studio window with the newly added source card highlighted."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/see-registered-source.png" alt-text="Screenshot of the data map Azure Purview studio window with the newly added source card highlighted."border="true":::
1. Select **New scan** to create a scan under the data source.
- :::image type="content" source="./media/how-to-create-and-manage-collections/new-scan.png" alt-text="Screenshot of a source Purview studio window with the new scan button highlighted."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/new-scan.png" alt-text="Screenshot of a source Azure Purview studio window with the new scan button highlighted."border="true":::
1. Similarly, at the bottom of the form, you can select a collection, and all assets scanned will be included in the collection. Note that the collections listed here are restricted to subcollections of the data source collection.
Note that the collections listed here are restricted to subcollections of the da
1. Back in the collection window, you will see the data sources linked to the collection on the sources card.
- :::image type="content" source="./media/how-to-create-and-manage-collections/source-under-collection.png" alt-text="Screenshot of the data map Purview studio window with the newly added source card highlighted in the map."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/source-under-collection.png" alt-text="Screenshot of the data map Azure Purview studio window with the newly added source card highlighted in the map."border="true":::
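Registration can also be scripted against the scanning data plane. The sketch below is not the path documented here: the source kind, endpoint, and 2018-12-01-preview api-version are assumptions, included only to illustrate how a registered source lands in a specific collection.

```bash
# Sketch: register an Azure Blob Storage source into a collection via REST.
# All names and the api-version are assumptions; check the Purview scanning
# API reference for the body shape matching your source kind.
ACCOUNT="contoso-purview"
TOKEN=$(az account get-access-token --resource https://purview.azure.net \
        --query accessToken -o tsv)

curl -X PUT "https://${ACCOUNT}.purview.azure.com/scan/datasources/myBlobSource?api-version=2018-12-01-preview" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "kind": "AzureStorage",
        "properties": {
          "endpoint": "https://mystorageaccount.blob.core.windows.net/",
          "collection": { "referenceName": "mySubCollection", "type": "CollectionReference" }
        }
      }'
```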
## Add assets to collections
Assets and sources are also associated with collections. During a scan, if the s
1. Check the collection information in asset details. You can find collection information in the **Collection path** section at the upper-right corner of the asset details page.
- :::image type="content" source="./media/how-to-create-and-manage-collections/collection-path.png" alt-text="Screenshot of Purview studio asset window, with the collection path highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/collection-path.png" alt-text="Screenshot of Azure Purview studio asset window, with the collection path highlighted." border="true":::
1. Permissions in the asset details page:
    1. Check the collection-based permission model by following the [add roles and restrict access on collections guide above](#add-roles-and-restrict-access-through-collections).
- 1. If you don't have read permission on a collection, the assets under that collection will not be listed in search results. If you get the direct URL of one asset and open it, you will see the no access page. In this case please contact your Purview admin to grant you the access. You can select the **Refresh** button to check the permission again.
+ 1. If you don't have read permission on a collection, the assets under that collection will not be listed in search results. If you get the direct URL of one asset and open it, you will see the no-access page. In this case, please contact your Azure Purview admin to grant you access. You can select the **Refresh** button to check the permission again.
- :::image type="content" source="./media/how-to-create-and-manage-collections/no-access.png" alt-text="Screenshot of Purview studio asset window where the user has no permissions, and has no access to information or options." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/no-access.png" alt-text="Screenshot of Azure Purview studio asset window where the user has no permissions, and has no access to information or options." border="true":::
1. If you have read permission on a collection but don't have write permission, you can browse the asset details page, but the following operations are disabled:
    * Edit the asset. The **Edit** button will be disabled.
Assets and sources are also associated with collections. During a scan, if the s
    * Move the asset to another collection. The ellipsis button at the upper-right corner of the Collection path section will be hidden.
1. The assets in the **Hierarchy** section are also affected by permissions. Assets without read permission will be grayed out.
- :::image type="content" source="./media/how-to-create-and-manage-collections/hierarchy-permissions.png" alt-text="Screenshot of Purview studio hierarchy window where the user has only read permissions, and has no access to options." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/hierarchy-permissions.png" alt-text="Screenshot of Azure Purview studio hierarchy window where the user has only read permissions, and has no access to options." border="true":::
### Move asset to another collection
1. Select the ellipsis button at the upper-right corner of the Collection path section.
- :::image type="content" source="./media/how-to-create-and-manage-collections/move-asset.png" alt-text="Screenshot of Purview studio asset window with the collection path highlighted and the ellipsis button next to collection path selected." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/move-asset.png" alt-text="Screenshot of Azure Purview studio asset window with the collection path highlighted and the ellipsis button next to collection path selected." border="true":::
1. Select the **Move to another collection** button.
1. In the right panel, choose the target collection you want to move to. Note that you can only see the collections where you have write permissions. The asset can also only be added to the subcollections of the data source collection.
- :::image type="content" source="./media/how-to-create-and-manage-collections/move-select-collection.png" alt-text="Screenshot of Purview studio pop-up window with the select a collection dropdown menu highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/move-select-collection.png" alt-text="Screenshot of Azure Purview studio pop-up window with the select a collection dropdown menu highlighted." border="true":::
1. Select the **Move** button at the bottom of the window to move the asset.
Assets and sources are also associated with collections. During a scan, if the s
### Search by collection
-1. In Azure Purview, the search bar is located at the top of the Purview studio UX.
+1. In Azure Purview, the search bar is located at the top of the Azure Purview studio UX.
:::image type="content" source="./media/how-to-create-and-manage-collections/purview-search-bar.png" alt-text="Screenshot showing the location of the Azure Purview search bar" border="true":::
Assets and sources are also associated with collections. During a scan, if the s
1. You can browse data assets by selecting **Browse assets** on the homepage.
- :::image type="content" source="./media/how-to-create-and-manage-collections/browse-by-collection.png" alt-text="Screenshot of the catalog Purview studio window with the browse assets button highlighted." border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/browse-by-collection.png" alt-text="Screenshot of the catalog Azure Purview studio window with the browse assets button highlighted." border="true":::
1. On the Browse assets page, select the **By collection** pivot. Collections are listed in a hierarchical table view. To further explore assets in each collection, select the corresponding collection name.
- :::image type="content" source="./media/how-to-create-and-manage-collections/by-collection-view.png" alt-text="Screenshot of the asset Purview studio window with the by collection tab selected."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/by-collection-view.png" alt-text="Screenshot of the asset Azure Purview studio window with the by collection tab selected."border="true":::
1. On the next page, the search results for assets under the selected collection show up. You can narrow the results by selecting facet filters, or see the assets under other collections by selecting the sub/related collection names.
- :::image type="content" source="./media/how-to-create-and-manage-collections/search-results-by-collection.png" alt-text="Screenshot of the catalog Purview studio window with the by collection tab selected."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/search-results-by-collection.png" alt-text="Screenshot of the catalog Azure Purview studio window with the by collection tab selected."border="true":::
1. To view the details of an asset, select the asset name in the search results. Or you can select the asset check boxes and bulk edit them.
- :::image type="content" source="./media/how-to-create-and-manage-collections/view-asset-details.png" alt-text="Screenshot of the catalog Purview studio window with the by collection tab selected and asset check boxes highlighted."border="true":::
+ :::image type="content" source="./media/how-to-create-and-manage-collections/view-asset-details.png" alt-text="Screenshot of the catalog Azure Purview studio window with the by collection tab selected and asset check boxes highlighted."border="true":::
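For programmatic access, the catalog's search endpoint can apply the same collection scoping. The call below is a sketch assuming the discovery query API; the `collectionId` filter syntax and the api-version are assumptions to verify against the REST reference.

```bash
# Sketch: search the catalog, restricted to one collection.
# The collectionId filter and api-version are assumptions.
ACCOUNT="contoso-purview"
TOKEN=$(az account get-access-token --resource https://purview.azure.net \
        --query accessToken -o tsv)

curl -X POST "https://${ACCOUNT}.purview.azure.com/catalog/api/search/query?api-version=2021-05-01-preview" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "keywords": "*",
        "limit": 10,
        "filter": { "collectionId": "mySubCollection" }
      }'
```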
## Next steps
purview How To Lineage Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-azure-synapse-analytics.md
Currently, Azure Purview captures runtime lineage from the following Azure Synapse activities:
## Access secured Azure Purview account
-If your Purview account is protected by firewall, learn how to let Azure Synapse [access a secured Purview account](../synapse-analytics/catalog-and-governance/how-to-access-secured-purview-account.md) through Purview private endpoints.
+If your Azure Purview account is protected by a firewall, learn how to let Azure Synapse [access a secured Azure Purview account](../synapse-analytics/catalog-and-governance/how-to-access-secured-purview-account.md) through Azure Purview private endpoints.
-## Bring Azure Synapse lineage into Purview
+## Bring Azure Synapse lineage into Azure Purview
-### Step 1: Connect Azure Synapse workspace to your Purview account
+### Step 1: Connect Azure Synapse workspace to your Azure Purview account
-You can connect an Azure Synapse workspace to Purview, and the connection enables Azure Synapse to push lineage information to Purview. Follow the steps in [Connect Synapse workspace to Azure Purview](../synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md). Multiple Azure Synapse workspaces can connect to a single Azure Purview account for holistic lineage tracking.
+You can connect an Azure Synapse workspace to Azure Purview, and the connection enables Azure Synapse to push lineage information to Azure Purview. Follow the steps in [Connect Synapse workspace to Azure Purview](../synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md). Multiple Azure Synapse workspaces can connect to a single Azure Purview account for holistic lineage tracking.
### Step 2: Run pipeline in Azure Synapse workspace
After you run the Azure Synapse pipeline, you can check the lineage reporting status in the Synapse pipeline monitoring view.
:::image type="content" source="../data-factory/media/data-factory-purview/monitor-lineage-reporting-status.png" alt-text="Monitor the lineage reporting status in pipeline monitoring view.":::
-### Step 4: View lineage information in your Purview account
+### Step 4: View lineage information in your Azure Purview account
-In your Purview account, you can browse assets and choose type "Azure Synapse Analytics". You can also search the Data Catalog using keywords.
+In your Azure Purview account, you can browse assets and choose type "Azure Synapse Analytics". You can also search the Data Catalog using keywords.
Select the Synapse account -> pipeline -> activity to view the lineage information.
## Next steps
purview How To Lineage Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-cassandra.md
Last updated 08/12/2021
# How to get lineage from Cassandra into Azure Purview
-This article elaborates on the data lineage aspects of Cassandra source in Azure Purview. The prerequisite to see data lineage in Purview for Cassandra is to [scan your Cassandra server.](../purview/register-scan-cassandra-source.md)
+This article elaborates on the data lineage aspects of the Cassandra source in Azure Purview. The prerequisite to see data lineage in Azure Purview for Cassandra is to [scan your Cassandra server](../purview/register-scan-cassandra-source.md).
## Lineage of Cassandra artifacts in Azure Purview
purview How To Lineage Erwin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-erwin.md
Last updated 08/11/2021
# How to get lineage from Erwin into Azure Purview
-This article elaborates on the data lineage aspects of Erwin source in Azure Purview. The prerequisite to see data lineage in Purview for Erwin is to [scan your Erwin.](../purview/register-scan-erwin-source.md)
+This article elaborates on the data lineage aspects of the Erwin source in Azure Purview. The prerequisite to see data lineage in Azure Purview for Erwin is to [scan your Erwin source](../purview/register-scan-erwin-source.md).
## Lineage of Erwin artifacts in Azure Purview
purview How To Lineage Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-google-bigquery.md
Last updated 08/12/2021
# How to get lineage from BigQuery into Azure Purview
-This article elaborates on the data lineage aspects of BigQuery source in Azure Purview. The prerequisite to see data lineage in Purview for BigQuery is to [scan your BigQuery project.](../purview/register-scan-google-bigquery-source.md)
+This article elaborates on the data lineage aspects of the BigQuery source in Azure Purview. The prerequisite to see data lineage in Azure Purview for BigQuery is to [scan your BigQuery project](../purview/register-scan-google-bigquery-source.md).
## Lineage of BigQuery artifacts in Azure Purview
purview How To Lineage Looker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-looker.md
Last updated 08/12/2021
# How to get lineage from Looker into Azure Purview
-This article elaborates on the data lineage aspects of Looker source in Azure Purview. The prerequisite to see data lineage in Purview for Looker is to [scan your Looker.](../purview/register-scan-looker-source.md)
+This article elaborates on the data lineage aspects of the Looker source in Azure Purview. The prerequisite to see data lineage in Azure Purview for Looker is to [scan your Looker source](../purview/register-scan-looker-source.md).
## Lineage of Looker artifacts in Azure Purview
purview How To Lineage Oracle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-oracle.md
Last updated 08/11/2021
# How to get lineage from Oracle into Azure Purview
-This article elaborates on the data lineage aspects of Oracle source in Azure Purview. The prerequisite to see data lineage in Purview for Oracle is to [scan your Oracle.](../purview/register-scan-oracle-source.md)
+This article elaborates on the data lineage aspects of the Oracle source in Azure Purview. The prerequisite to see data lineage in Azure Purview for Oracle is to [scan your Oracle source](../purview/register-scan-oracle-source.md).
## Lineage of Oracle artifacts in Azure Purview
purview How To Lineage Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-powerbi.md
Last updated 03/30/2021
# How to get lineage from Power BI into Azure Purview
-This article elaborates on the data lineage aspects of Power BI source in Azure Purview. The prerequisite to see data lineage in Purview for Power BI is to [scan your Power BI.](../purview/register-scan-power-bi-tenant.md)
+This article elaborates on the data lineage aspects of the Power BI source in Azure Purview. The prerequisite to see data lineage in Azure Purview for Power BI is to [scan your Power BI tenant](../purview/register-scan-power-bi-tenant.md).
## Common scenarios
-1. After the Power BI source is scanned, data consumers can perform root cause analysis of a report or dashboard from Purview. For any data discrepancy in a report, users can easily identify the upstream datasets and contact their owners if necessary.
+1. After the Power BI source is scanned, data consumers can perform root cause analysis of a report or dashboard from Azure Purview. For any data discrepancy in a report, users can easily identify the upstream datasets and contact their owners if necessary.
2. Data producers can see the downstream reports or dashboards consuming their dataset. Before making any changes to their datasets, the data owners can make informed decisions.
This article elaborates on the data lineage aspects of Power BI source in Azure
## Power BI artifacts in Azure Purview
-Once the [scan of your Power BI](../purview/register-scan-power-bi-tenant.md) is complete, following Power BI artifacts will be inventoried in Purview
+Once the [scan of your Power BI](../purview/register-scan-power-bi-tenant.md) is complete, the following Power BI artifacts will be inventoried in Azure Purview:
* Capacity
* Workspaces
purview How To Lineage Sapecc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-sapecc.md
Last updated 08/12/2021
# How to get lineage from SAP ECC into Azure Purview
-This article elaborates on the data lineage aspects of SAP ECC source in Azure Purview. The prerequisite to see data lineage in Purview for SAP ECC is to [scan your SAP ECC.](../purview/register-scan-sapecc-source.md)
+This article elaborates on the data lineage aspects of the SAP ECC source in Azure Purview. The prerequisite to see data lineage in Azure Purview for SAP ECC is to [scan your SAP ECC source](../purview/register-scan-sapecc-source.md).
## Lineage of SAP ECC artifacts in Azure Purview
purview How To Lineage Saps4hana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-saps4hana.md
Last updated 08/12/2021
# How to get lineage from SAP S/4HANA into Azure Purview
-This article elaborates on the data lineage aspects of SAP S/4HANA source in Azure Purview. The prerequisite to see data lineage in Purview for SAP S/4HANA is to [scan your SAP S/4HANA.](../purview/register-scan-saps4hana-source.md)
+This article elaborates on the data lineage aspects of the SAP S/4HANA source in Azure Purview. The prerequisite to see data lineage in Azure Purview for SAP S/4HANA is to [scan your SAP S/4HANA source](../purview/register-scan-saps4hana-source.md).
## Lineage of SAP S/4HANA artifacts in Azure Purview
purview How To Lineage Spark Atlas Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-spark-atlas-connector.md
Last updated 04/28/2021
# How to use Apache Atlas connector to collect Spark lineage
-Apache Atlas Spark Connector is a hook to track Spark SQL/DataFrame data movements and push metadata changes to Purview Atlas endpoint.
+The Apache Atlas Spark Connector is a hook to track Spark SQL/DataFrame data movements and push metadata changes to the Azure Purview Atlas endpoint.
## Supported scenarios
This connector supports the following tracking:
3. DataFrame movements that have inputs and outputs. This connector relies on a query listener to retrieve the query and examine its impacts. It correlates with other systems like Hive and HDFS to track the life cycle of data in Atlas.
-Since Purview supports Atlas API and Atlas native hook, the connector can report lineage to Purview after configured with Spark. The connector could be configured per job or configured as the cluster default setting.
+Since Azure Purview supports the Atlas API and Atlas native hook, the connector can report lineage to Azure Purview after it's configured with Spark. The connector can be configured per job or as the cluster default setting.
## Configuration requirement
The following steps are documented using Databricks as an example:
   e. Put the package where the Spark cluster can access it. For a Databricks cluster, the package can be uploaded to a DBFS folder, such as /FileStore/jars.
2. Prepare Connector config
- 1. Get Kafka Endpoint and credential in Azure portal of the Purview Account
- 1. Provide your account with *"Purview Data Curator"* permission
+ 1. Get Kafka Endpoint and credential in Azure portal of the Azure Purview Account
+ 1. Provide your account with *"Azure Purview Data Curator"* permission
:::image type="content" source="./media/how-to-lineage-spark-atlas-connector/assign-purview-data-curator-role.png" alt-text="Screenshot showing data curator role assignment" lightbox="./media/how-to-lineage-spark-atlas-connector/assign-purview-data-curator-role.png":::
The following steps are documented based on DataBricks as an example:
atlas.kafka.sasl.mechanism=PLAIN
atlas.kafka.security.protocol=SASL_SSL
atlas.kafka.bootstrap.servers=atlas-46c097e6-899a-44aa-9a30-6ccd0b2a2a91.servicebus.windows.net:9093
- atlas.kafka.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<connection string got from your Purview account>";
+ atlas.kafka.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<connection string from your Azure Purview account>";
```
   c. Make sure the atlas configuration file is in the Driver's classpath generated in [step 1 Generate package section above](../purview/how-to-lineage-spark-atlas-connector.md#step-1-prepare-spark-atlas-connector-package). In cluster mode, ship this config file to the remote Driver with *--files atlas-application.properties*.
-### Step 2. Prepare your Purview account
+### Step 2. Prepare your Azure Purview account
After the Atlas Spark model definition is successfully created, follow the steps below:
1. Get the Spark type definition from GitHub: https://github.com/apache/atlas/blob/release-2.1.0-rc3/addons/models/1000-Hadoop/1100-spark_model.json
2. Assign role:
- 1. Navigate to your Purview account and select Access control (IAM)
- 1. Add Users and grant your service principal *Purview Data source administrator* role
+ 1. Navigate to your Azure Purview account and select Access control (IAM)
+ 1. Add Users and grant your service principal *Azure Purview Data source administrator* role
3. Get auth token:
    1. Open "postman" or similar tools.
    1. Use the service principal used in the previous step to get the bearer token:
After the Atlas Spark model definition is successfully created, follow below ste
:::image type="content" source="./media/how-to-lineage-spark-atlas-connector/postman-examples.png" alt-text="Screenshot showing postman example" lightbox="./media/how-to-lineage-spark-atlas-connector/postman-examples.png":::
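If you'd rather not use Postman, the same client-credentials request can be made with curl. This is a sketch; the tenant ID, client ID, and secret are placeholders for the service principal created above.

```bash
# Sketch: get a bearer token for the Azure Purview (Atlas) endpoint with a
# service principal. TENANT_ID, CLIENT_ID, and CLIENT_SECRET are placeholders.
curl -X POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=${CLIENT_ID}" \
  -d "client_secret=${CLIENT_SECRET}" \
  -d "resource=https://purview.azure.net"
# The access_token field of the JSON response is the bearer token.
```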
-4. Post Spark Atlas model definition to Purview Account:
- 1. Get Atlas Endpoint of the Purview account from properties section of Azure portal.
- 1. Post Spark type definition into the Purview account:
+4. Post Spark Atlas model definition to Azure Purview Account:
+ 1. Get the Atlas Endpoint of the Azure Purview account from the properties section of the Azure portal.
+ 1. Post Spark type definition into the Azure Purview account:
    * Post: {{endpoint}}/api/atlas/v2/types/typedefs
    * Use the generated access token
    * Body: choose raw and copy all content from GitHub https://github.com/apache/atlas/blob/release-2.1.0-rc3/addons/models/1000-Hadoop/1100-spark_model.json
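As a sketch of the same request outside Postman, assuming you saved the GitHub JSON locally as `1100-spark_model.json` and exported the Atlas endpoint and token from the previous steps:

```bash
# Post the Spark type definitions to the Azure Purview Atlas endpoint.
# ENDPOINT and ACCESS_TOKEN come from the earlier steps; the file name is
# whatever you saved the GitHub model JSON as.
curl -X POST "${ENDPOINT}/api/atlas/v2/types/typedefs" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d @1100-spark_model.json
```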
spark-submit --class com.microsoft.SparkAtlasTest --master yarn --deploy-mode --
2. The instructions below are for the cluster setting: the connector jar and listener's settings should be put in the Spark cluster's *conf/spark-defaults.conf*. Spark-submit will read the options in *conf/spark-defaults.conf* and pass them to your application.
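As an illustration of the two options, the per-job sketch below passes the connector settings on spark-submit; for the cluster default, the same three listener properties would go into *conf/spark-defaults.conf* instead. The listener class names are the spark-atlas-connector defaults; the jar and application names are placeholders.

```bash
# Sketch: per-job configuration of the Spark Atlas connector.
# The listener classes are the spark-atlas-connector defaults; the jar
# name and application class are placeholders.
spark-submit --class com.microsoft.SparkAtlasTest \
  --master yarn --deploy-mode cluster \
  --files atlas-application.properties \
  --jars spark-atlas-connector-assembly.jar \
  --conf spark.extraListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker \
  --conf spark.sql.queryExecutionListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker \
  --conf spark.sql.streaming.streamingQueryListeners=com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker \
  your-application.jar
```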
-### Step 5. Run and Check lineage in Purview account
-Kick off The Spark job and check the lineage info in your Purview account.
+### Step 5. Run and check lineage in Azure Purview account
+Kick off the Spark job and check the lineage info in your Azure Purview account.
:::image type="content" source="./media/how-to-lineage-spark-atlas-connector/purview-with-spark-lineage.png" alt-text="Screenshot showing purview with spark lineage" lightbox="./media/how-to-lineage-spark-atlas-connector/purview-with-spark-lineage.png":::
purview How To Lineage Sql Server Integration Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-sql-server-integration-services.md
On-premises SSIS lineage extraction is not supported yet.
*\* Azure Purview currently doesn't support query or stored procedure for lineage or scanning. Lineage is limited to table and view sources only.*
-## How to bring SSIS lineage into Purview
+## How to bring SSIS lineage into Azure Purview
### Step 1. [Connect a Data Factory to Azure Purview](how-to-link-azure-data-factory.md)
purview How To Lineage Teradata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-teradata.md
Last updated 08/12/2021
# How to get lineage from Teradata into Azure Purview
-This article elaborates on the data lineage aspects of Teradata source in Azure Purview. The prerequisite to see data lineage in Purview for Teradata is to [scan your Teradata.](../purview/register-scan-teradata-source.md)
+This article elaborates on the data lineage aspects of the Teradata source in Azure Purview. The prerequisite to see data lineage in Azure Purview for Teradata is to [scan your Teradata source](../purview/register-scan-teradata-source.md).
## Lineage of Teradata artifacts in Azure Purview
purview How To Link Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-link-azure-data-factory.md
This document explains the steps required for connecting an Azure Data Factory account with an Azure Purview account to track data lineage.
## View existing Data Factory connections
-Multiple Azure Data Factories can connect to a single Azure Purview to push lineage information. The current limit allows you to connect up 10 Data Factory accounts at a time from the Purview management center. To show the list of Data Factory accounts connected to your Purview account, do the following:
+Multiple Azure Data Factories can connect to a single Azure Purview account to push lineage information. The current limit allows you to connect up to 10 Data Factory accounts at a time from the Azure Purview management center. To show the list of Data Factory accounts connected to your Azure Purview account, do the following:
1. Select **Management** on the left navigation pane.
2. Under **Lineage connections**, select **Data Factory**.
Multiple Azure Data Factories can connect to a single Azure Purview to push line
4. Notice the various values for connection **Status**:
- - **Connected**: The data factory is connected to the Purview account.
+ - **Connected**: The data factory is connected to the Azure Purview account.
- **Disconnected**: The data factory has access to the catalog, but it's connected to another catalog. As a result, data lineage won't be reported to the catalog automatically.
- **CannotAccess**: The current user doesn't have access to the data factory, so the connection status is unknown.
Multiple Azure Data Factories can connect to a single Azure Purview to push line
>
> Also, it requires the users to be the data factory's "Owner" or "Contributor".
-Follow the steps below to connect an existing data factory to your Purview account. You can also [connect Data Factory to Purview account from ADF](../data-factory/connect-data-factory-to-azure-purview.md).
+Follow the steps below to connect an existing data factory to your Azure Purview account. You can also [connect Data Factory to Azure Purview account from ADF](../data-factory/connect-data-factory-to-azure-purview.md).
1. Select **Management** on the left navigation pane.
2. Under **Lineage connections**, select **Data Factory**.
Follow the steps below to connect an existing data factory to your Purview accou
:::image type="content" source="./media/how-to-link-azure-data-factory/connect-data-factory.png" alt-text="Screenshot showing how to connect Azure Data Factory." lightbox="./media/how-to-link-azure-data-factory/connect-data-factory.png":::
- Some Data Factory instances might be disabled if the data factory is already connected to the current Purview account, or the data factory doesn't have a managed identity.
+ Some Data Factory instances might be disabled if the data factory is already connected to the current Azure Purview account, or the data factory doesn't have a managed identity.
- A warning message will be displayed if any of the selected Data Factories are already connected to other Purview account. By selecting OK, the Data Factory connection with the other Purview account will be disconnected. No additional confirmations are required.
+ A warning message will be displayed if any of the selected Data Factories are already connected to another Azure Purview account. By selecting OK, the Data Factory connection with the other Azure Purview account will be disconnected. No additional confirmations are required.
:::image type="content" source="./media/how-to-link-azure-data-factory/warning-for-disconnect-factory.png" alt-text="Screenshot showing warning to disconnect Azure Data Factory.":::
Follow the steps below to connect an existing data factory to your Purview accou
### How authentication works
-Data factory's managed identity is used to authenticate lineage push operations from data factory to Purview. When connecting data factory to Purview on UI, it adds the role assignment automatically.
+The data factory's managed identity is used to authenticate lineage push operations from Data Factory to Azure Purview. When you connect a data factory to Azure Purview in the UI, the role assignment is added automatically.
-Grant the data factory's managed identity **Data Curator** role on Purview **root collection**. Learn more about [Access control in Azure Purview](../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
+Grant the data factory's managed identity **Data Curator** role on Azure Purview **root collection**. Learn more about [Access control in Azure Purview](../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
### Remove data factory connections
Azure Purview captures runtime lineage from the following Azure Data Factory activities:
> [!IMPORTANT]
> Azure Purview drops lineage if the source or destination uses an unsupported data storage system.
-The integration between Data Factory and Purview supports only a subset of the data systems that Data Factory supports, as described in the following sections.
+The integration between Data Factory and Azure Purview supports only a subset of the data systems that Data Factory supports, as described in the following sections.
[!INCLUDE[data-factory-supported-lineage-capabilities](includes/data-factory-common-supported-capabilities.md)]
Refer to [supported data stores](how-to-lineage-sql-server-integration-services.
## Access secured Azure Purview account
-If your Purview account is protected by firewall, learn how to let Data Factory [access a secured Purview account](../data-factory/how-to-access-secured-purview-account.md) through Purview private endpoints.
+If your Azure Purview account is protected by a firewall, learn how to let Data Factory [access a secured Azure Purview account](../data-factory/how-to-access-secured-purview-account.md) through Azure Purview private endpoints.
-## Bring Data Factory lineage into Purview
+## Bring Data Factory lineage into Azure Purview
For an end to end walkthrough, follow the [Tutorial: Push Data Factory lineage data to Azure Purview](../data-factory/turorial-push-lineage-to-purview.md).
purview How To Link Azure Data Share https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-link-azure-data-share.md
A report has incorrect information because of upstream data issues from an exter
Data producers want to know who will be impacted upon making a change to their dataset. Using lineage, a data producer can easily understand the impact of the downstream internal or external partners who are consuming data using Azure Data Share.
-## Azure Data Share and Purview connected experience
+## Azure Data Share and Azure Purview connected experience
To connect your Azure Data Share and Azure Purview account, do the following:
-1. Create a Purview account. All the Data Share lineage information will be collected by a Purview account. You can use an existing one or create a new Purview account.
+1. Create an Azure Purview account. All the Data Share lineage information will be collected by an Azure Purview account. You can use an existing one or create a new Azure Purview account.
-1. Connect your Azure Data Share to your Purview account.
+1. Connect your Azure Data Share to your Azure Purview account.
- 1. In the Purview portal, you can go to **Management Center** and connect your Azure Data Share under the **External connections** section.
- 1. Select **+ New** on the top bar, find your Azure Data Share in the pop-up side bar and add the Data Share account. Run a snapshot job after connecting your Data Share to Purview account, so that the Data Share assets and lineage information is visible in Purview.
+ 1. In the Azure Purview portal, you can go to **Management Center** and connect your Azure Data Share under the **External connections** section.
+ 1. Select **+ New** on the top bar, find your Azure Data Share in the pop-up side bar and add the Data Share account. Run a snapshot job after connecting your Data Share to your Azure Purview account, so that the Data Share assets and lineage information are visible in Azure Purview.
:::image type="content" source="media/how-to-link-azure-data-share/connect-to-data-share.png" alt-text="Management center to link Azure Data Share":::
To connect your Azure Data Share and Azure Purview account, do the following:
- Once the Azure Data Share connection is established with Azure Purview, you can execute a snapshot for your existing shares.
- If you don't have any existing shares, go to the Azure Data Share portal to [share your data](../data-share/share-your-data.md) [and subscribe to a data share](../data-share/subscribe-to-data-share.md).
- - Once the share snapshot is complete, you can view associated Data Share assets and lineage in Purview.
+ - Once the share snapshot is complete, you can view associated Data Share assets and lineage in Azure Purview.
-1. Discover Data Share accounts and share information in your Purview account.
+1. Discover Data Share accounts and share information in your Azure Purview account.
- - In the home page of Purview account, select **Browse by asset type** and select the **Azure Data Share** tile. You can search for an account name, share name, share snapshot, or partner organization. Otherwise apply filters on the Search result page for account name, share type (sent vs received shares).
+ - In the home page of your Azure Purview account, select **Browse by asset type** and select the **Azure Data Share** tile. You can search for an account name, share name, share snapshot, or partner organization. Otherwise, apply filters on the search result page for account name and share type (sent vs. received shares).
:::image type="content" source="media/how-to-link-azure-data-share/azure-data-share-search-result-page.png" alt-text="Azure Data share in Search result page":::

>[!Important]
- >For Data Share assets to show in Purview, a snapshot job must be run after you connect your Data Share to Purview.
+ >For Data Share assets to show in Azure Purview, a snapshot job must be run after you connect your Data Share to Azure Purview.
1. Track lineage of datasets shared with Azure Data Share.
- - From the Purview search result page, choose the Data share snapshot (received/sent) and select the **Lineage** tab, to see a lineage graph with upstream and downstream dependencies.
+ - From the Azure Purview search result page, choose the Data Share snapshot (received/sent) and select the **Lineage** tab to see a lineage graph with upstream and downstream dependencies.
:::image type="content" source="media/how-to-link-azure-data-share/azure-data-share-lineage.png" alt-text="Lineage of Datasets shared using Azure Data Share":::
purview How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-manage-quotas.md
Azure Purview is a cloud service for use by data users. You use Azure Purview to
|**Resource**| **Default Limit** |**Maximum Limit**|
||||
-|Purview accounts per region, per tenant (all subscriptions combined)|3|Contact Support|
+|Azure Purview accounts per region, per tenant (all subscriptions combined)|3|Contact Support|
|vCores available for scanning, per account*|160|160|
|Concurrent scans, per account at a given point. The limit is based on the type of data sources scanned*|5 | 10 |
|Maximum time that a scan can run for|7 days|7 days|
purview How To Manage Term Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-manage-term-templates.md
Last updated 11/04/2020
# How to manage term templates for business glossary
-Azure Purview allows you to create a glossary of terms that are important for enriching your data. Each new term added to your Purview Data Catalog Glossary is based on a term template that determines the fields for the term. This article describes how to create a term template and custom attributes that can be associated to glossary terms.
+Azure Purview allows you to create a glossary of terms that are important for enriching your data. Each new term added to your Azure Purview Data Catalog Glossary is based on a term template that determines the fields for the term. This article describes how to create a term template and custom attributes that can be associated with glossary terms.
## Manage term templates and custom attributes
purview How To Monitor With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-monitor-with-azure-monitor.md
This article describes how to configure metrics, alerts, and diagnostic settings
## Monitor Azure Purview
-Azure Purview admins can use Azure Monitor to track the operational state of Purview account. Metrics are collected to provide data points for you to track potential problems, troubleshoot, and improve the reliability of the Purview account. The metrics are sent to Azure monitor for events occurring in Azure Purview.
+Azure Purview admins can use Azure Monitor to track the operational state of an Azure Purview account. Metrics are collected to provide data points for you to track potential problems, troubleshoot, and improve the reliability of the Azure Purview account. The metrics are sent to Azure Monitor for events occurring in Azure Purview.
## Aggregated metrics
-The metrics can be accessed from the Azure portal for a Purview account. Access to the metrics are controlled by the role assignment of Purview account. Users need to be part of the "Monitoring Reader" role in Azure Purview to see the metrics. Check out [Monitoring Reader Role permissions](../azure-monitor/roles-permissions-security.md#built-in-monitoring-roles) to learn more about the roles access levels.
+The metrics can be accessed from the Azure portal for an Azure Purview account. Access to the metrics is controlled by the role assignments of the Azure Purview account. Users need to be part of the "Monitoring Reader" role in Azure Purview to see the metrics. Check out [Monitoring Reader Role permissions](../azure-monitor/roles-permissions-security.md#built-in-monitoring-roles) to learn more about the role's access levels.
-The person who created the Purview account automatically gets permissions to view metrics. If anyone else wants to see metrics, add them to the **Monitoring Reader** role, by following these steps:
+The person who created the Azure Purview account automatically gets permissions to view metrics. If anyone else wants to see metrics, add them to the **Monitoring Reader** role, by following these steps:
### Add a user to the Monitoring Reader role
-To add a user to the **Monitoring Reader** role, the owner of Purview account or the Subscription owner can follow these steps:
+To add a user to the **Monitoring Reader** role, the owner of the Azure Purview account or the subscription owner can follow these steps:
1. Go to the [Azure portal](https://portal.azure.com) and search for the Azure Purview account name.
To add a user to the **Monitoring Reader** role, the owner of Purview account or
## Metrics visualization
-Users in the **Monitoring Reader** role can see the aggregated metrics and diagnostic logs sent to Azure Monitor. The metrics are listed in the Azure portal for the corresponding Purview account. In the Azure portal, select the Metrics section to see the list of all available metrics.
+Users in the **Monitoring Reader** role can see the aggregated metrics and diagnostic logs sent to Azure Monitor. The metrics are listed in the Azure portal for the corresponding Azure Purview account. In the Azure portal, select the Metrics section to see the list of all available metrics.
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/purview-metrics.png" alt-text="Screenshot showing available Purview metrics section." lightbox="./media/how-to-monitor-with-azure-monitor/purview-metrics.png":::
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/purview-metrics.png" alt-text="Screenshot showing available Azure Purview metrics section." lightbox="./media/how-to-monitor-with-azure-monitor/purview-metrics.png":::
-Azure Purview users can also access the metrics page directly from the management center of the Azure Purview account. Select Azure Monitor in the main page of Purview management center to launch Azure portal.
+Azure Purview users can also access the metrics page directly from the management center of the Azure Purview account. Select Azure Monitor on the main page of the Azure Purview management center to launch the Azure portal.
- :::image type="content" source="./media/how-to-monitor-with-azure-monitor/launch-metrics-from-management.png" alt-text="Screenshot to launch Purview metrics from management center." lightbox="./media/how-to-monitor-with-azure-monitor/launch-metrics-from-management.png":::
+ :::image type="content" source="./media/how-to-monitor-with-azure-monitor/launch-metrics-from-management.png" alt-text="Screenshot to launch Azure Purview metrics from management center." lightbox="./media/how-to-monitor-with-azure-monitor/launch-metrics-from-management.png":::
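For teams that want to pull the same numbers programmatically, the Azure Monitor query library can read metrics from the account's resource ID. The sketch below is illustrative only: the resource ID is a placeholder and the `ScanCompleted` metric name is an assumption, not a value confirmed by this article. The caller still needs the Monitoring Reader role described above.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;

class PurviewMetricsSample
{
    static async Task Main()
    {
        // Hypothetical resource ID; replace with your Azure Purview account's ID.
        const string resourceId =
            "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
            "/providers/Microsoft.Purview/accounts/<account-name>";

        // Authenticates as the signed-in identity, which must hold Monitoring Reader.
        var client = new MetricsQueryClient(new DefaultAzureCredential());

        // "ScanCompleted" is an assumed metric name used purely for illustration.
        var result = await client.QueryResourceAsync(resourceId, new[] { "ScanCompleted" });

        foreach (var metric in result.Value.Metrics)
            foreach (var series in metric.TimeSeries)
                foreach (var point in series.Values)
                    Console.WriteLine($"{metric.Name} @ {point.TimeStamp}: {point.Total}");
    }
}
```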
### Available metrics
The following table contains the list of metrics available to explore in the Azu
## Diagnostic Logs to Azure Storage account
-Raw telemetry events are emitted to Azure Monitor. Events can be logged to a customer storage account of choice for further analysis. Exporting of logs is done via the Diagnostic settings for the Purview account on the Azure portal.
+Raw telemetry events are emitted to Azure Monitor. Events can be logged to a customer storage account of choice for further analysis. Exporting of logs is done via the Diagnostic settings for the Azure Purview account on the Azure portal.
Follow the steps to create a Diagnostic setting for your Azure Purview account.
purview How To Search Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-search-catalog.md
The goal of search in Azure Purview is to speed up the process of data discovery
## Search the catalog for assets
-The search bar can be quickly accessed from the top bar of the Purview Studio UX. In the data catalog home page, the search bar is in the center of the screen.
+The search bar can be quickly accessed from the top bar of the Azure Purview Studio UX. On the data catalog home page, the search bar is in the center of the screen.
:::image type="content" source="./media/how-to-search-catalog/purview-search-bar.png" alt-text="Screenshot showing the location of the Azure Purview search bar" border="true":::
Once you click on the search bar, you will be presented with your search history
:::image type="content" source="./media/how-to-search-catalog/search-no-keywords.png" alt-text="Screenshot showing the search bar before any keywords have been entered" border="true":::
-Enter in keywords that help identify your asset such as its name, data type, classifications, and glossary terms. As you enter in search keywords, Purview dynamically suggests assets and searches that may fit your needs. To complete your search, click on "View search results" or press "Enter".
+Enter keywords that help identify your asset, such as its name, data type, classifications, and glossary terms. As you enter search keywords, Azure Purview dynamically suggests assets and searches that may fit your needs. To complete your search, select "View search results" or press "Enter".
:::image type="content" source="./media/how-to-search-catalog/search-keywords.png" alt-text="Screenshot showing the search bar as a user enters in keywords" border="true":::
-Once you enter in your search, Purview returns a list of data assets a user is a data reader for to that matched to the keywords entered in.
+Once you enter your search, Azure Purview returns a list of data assets that the user has data reader access to and that match the keywords entered.
-The Purview relevance engine sorts through all the matches and ranks them based on what it believes their usefulness is to a user. For example, a table that matches on multiple keywords that a data steward has assigned glossary terms and given a description is likely going to be more interesting to a data consumer than a folder which has been unannotated. A large set of factors determine an asset's relevance score and the Purview search team is constantly tuning the relevance engine to ensure the top search results have value to you.
+The Azure Purview relevance engine sorts through all the matches and ranks them based on its assessment of their usefulness to a user. For example, a table that matches on multiple keywords, that a data steward has assigned glossary terms to, and that has been given a description is likely going to be more interesting to a data consumer than an unannotated folder. A large set of factors determine an asset's relevance score, and the Azure Purview search team is constantly tuning the relevance engine to ensure the top search results have value to you.
If the top results don't include the assets you are looking for, you can use the facets on the left-hand side to filter down by business metadata such as glossary terms, classifications, and the containing collection. If you are interested in a particular data source type, such as Azure Data Lake Storage Gen2 or Azure SQL Database, you can use the source type pill filter to narrow down your search.
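The same keyword search is available programmatically. The sketch below assumes the Azure.Analytics.Purview.Catalog preview package; the endpoint, keywords, and request shape are placeholders meant to mirror the studio search described above, not a definitive API reference.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;
using Azure.Analytics.Purview.Catalog;

class CatalogSearchSample
{
    static async Task Main()
    {
        // Placeholder endpoint for your Azure Purview account.
        var endpoint = new Uri("https://<account-name>.purview.azure.com");
        var client = new PurviewCatalogClient(endpoint, new DefaultAzureCredential());

        // Keyword search; results are limited to assets the caller can read,
        // mirroring the data reader scoping described above.
        var body = RequestContent.Create(new { keywords = "customer", limit = 10 });
        var response = await client.SearchAsync(body);

        Console.WriteLine(response.Content.ToString());
    }
}
```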
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-credentials.md
A credential is authentication information that Azure Purview can use to authent
In Azure Purview, there are a few options you can use as the authentication method to scan data sources:
-- [Azure Purview system-assigned managed identity](#use-purview-system-assigned-managed-identity-to-set-up-scans)
+- [Azure Purview system-assigned managed identity](#use-azure-purview-system-assigned-managed-identity-to-set-up-scans)
- [User-assigned managed identity](#create-a-user-assigned-managed-identity) (preview)
- Account Key (using [Key Vault](#create-azure-key-vaults-connections-in-your-azure-purview-account))
- SQL Authentication (using [Key Vault](#create-azure-key-vaults-connections-in-your-azure-purview-account))
Before creating any credentials, consider your data source types and networking
:::image type="content" source="media/manage-credentials/manage-credentials-decision-tree-small.png" alt-text="Manage credentials decision tree" lightbox="media/manage-credentials/manage-credentials-decision-tree.png":::
-## Use Purview system-assigned managed identity to set up scans
+## Use Azure Purview system-assigned managed identity to set up scans
-If you are using the Purview system-assigned managed identity (SAMI) to set up scans, you will not have to explicitly create a credential and link your key vault to Purview to store them. For detailed instructions on adding the Purview SAMI to have access to scan your data sources, refer to the data source-specific authentication sections below:
+If you are using the Azure Purview system-assigned managed identity (SAMI) to set up scans, you will not have to explicitly create a credential and link your key vault to Azure Purview to store them. For detailed instructions on adding the Azure Purview SAMI to have access to scan your data sources, refer to the data source-specific authentication sections below:
- [Azure Blob Storage](register-scan-azure-blob-storage-source.md#authentication-for-a-scan)
- [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md#authentication-for-a-scan)
Currently Azure Key Vault supports two permission models:
- Option 1 - Access Policies
- Option 2 - Role-based Access Control
-Before assigning access to the Purview system-assigned managed identity (SAMI), first identify your Azure Key Vault permission model from Key Vault resource **Access Policies** in the menu. Follow steps below based on relevant the permission model.
+Before assigning access to the Azure Purview system-assigned managed identity (SAMI), first identify your Azure Key Vault permission model from the Key Vault resource **Access Policies** in the menu. Follow the steps below based on the relevant permission model.
:::image type="content" source="media/manage-credentials/akv-permission-model.png" alt-text="Azure Key Vault Permission Model":::
Follow these steps only if permission model in your Azure Key Vault resource is
3. Select **Add Access Policy**.
- :::image type="content" source="media/manage-credentials/add-msi-to-akv-2.png" alt-text="Add Purview managed identity to AKV":::
+ :::image type="content" source="media/manage-credentials/add-msi-to-akv-2.png" alt-text="Add Azure Purview managed identity to AKV":::
4. In the **Secrets permissions** dropdown, select **Get** and **List** permissions.
-5. For **Select principal**, choose the Purview system managed identity. You can search for the Purview SAMI using either the Purview instance name **or** the managed identity application ID. We do not currently support compound identities (managed identity name + application ID).
+5. For **Select principal**, choose the Azure Purview system managed identity. You can search for the Azure Purview SAMI using either the Azure Purview instance name **or** the managed identity application ID. We do not currently support compound identities (managed identity name + application ID).
:::image type="content" source="media/manage-credentials/add-access-policy.png" alt-text="Add access policy":::
Follow these steps only if permission model in your Azure Key Vault resource is
3. Select **+ Add**.
-4. Set the **Role** to **Key Vault Secrets User** and enter your Azure Purview account name under **Select** input box. Then, select Save to give this role assignment to your Purview account.
+4. Set the **Role** to **Key Vault Secrets User** and enter your Azure Purview account name in the **Select** input box. Then, select **Save** to give this role assignment to your Azure Purview account.
:::image type="content" source="media/manage-credentials/akv-add-rbac.png" alt-text="Azure Key Vault RBAC":::
Follow these steps only if permission model in your Azure Key Vault resource is
Before you can create a Credential, first associate one or more of your existing Azure Key Vault instances with your Azure Purview account.
-1. From the [Azure portal](https://portal.azure.com), select your Azure Purview account and open the [Purview Studio](https://web.purview.azure.com/resource/). Navigate to the **Management Center** in the studio and then navigate to **credentials**.
+1. From the [Azure portal](https://portal.azure.com), select your Azure Purview account and open the [Azure Purview Studio](https://web.purview.azure.com/resource/). Navigate to the **Management Center** in the studio and then navigate to **credentials**.
2. From the **Credentials** page, select **Manage Key Vault connections**.
Before you can create a Credential, first associate one or more of your existing
## Create a new credential
-These credential types are supported in Purview:
+These credential types are supported in Azure Purview:
- Basic authentication: You add the **password** as a secret in key vault.
- Service Principal: You add the **service principal key** as a secret in key vault.
These credential types are supported in Purview:
- Consumer Key: For Salesforce data sources, you can add the **password** and the **consumer secret** in key vault.
- User-assigned managed identity (preview): You can add user-assigned managed identity credentials. For more information, see the [create a user-assigned managed identity section](#create-a-user-assigned-managed-identity) below.
-For more information, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault) and [Create a new AWS role for Purview](register-scan-amazon-s3.md#create-a-new-aws-role-for-purview).
+For more information, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault) and [Create a new AWS role for Azure Purview](register-scan-amazon-s3.md#create-a-new-aws-role-for-azure-purview).
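As a concrete example of the Key Vault step referenced above, here is a minimal sketch that stores a password as a secret with the Azure.Security.KeyVault.Secrets client. The vault URI and the secret name `sql-scan-password` are placeholders, not values from this article.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class StoreScanSecretSample
{
    static async Task Main()
    {
        // Placeholder vault URI; use a Key Vault connected to your Azure Purview account.
        var client = new SecretClient(
            new Uri("https://<vault-name>.vault.azure.net"),
            new DefaultAzureCredential());

        // Store the data source password so an Azure Purview credential can
        // later reference it by secret name ("sql-scan-password" is made up).
        await client.SetSecretAsync("sql-scan-password", "<data-source-password>");
    }
}
```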
After storing your secrets in the key vault:
After storing your secrets in the key vault:
User-assigned managed identities (UAMI) enable Azure resources to authenticate directly with other resources using Azure Active Directory (Azure AD) authentication, without the need to manage those credentials. They allow you to authenticate and assign access just like you would with a system assigned managed identity, Azure AD user, Azure AD group, or service principal. User-assigned managed identities are created as their own resource (rather than being connected to a pre-existing resource). For more information about managed identities, see the [managed identities for Azure resources documentation](../active-directory/managed-identities-azure-resources/overview.md).
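To make the idea concrete before the portal steps, the sketch below shows how application code running on an Azure resource might authenticate with a UAMI via the Azure.Identity library. The client ID and the Key Vault target are hypothetical examples, not part of this article's procedure.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class UamiAuthSample
{
    static async Task Main()
    {
        // Client ID of the user-assigned managed identity (placeholder).
        var credential = new ManagedIdentityCredential("<uami-client-id>");

        // Any Azure AD-protected service could be the target; Key Vault is just an example.
        var secrets = new SecretClient(new Uri("https://<vault-name>.vault.azure.net"), credential);
        KeyVaultSecret secret = await secrets.GetSecretAsync("sql-scan-password");

        Console.WriteLine($"Retrieved '{secret.Name}' without storing any credentials.");
    }
}
```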
-The following steps will show you how to create a UAMI for Purview to use.
+The following steps will show you how to create a UAMI for Azure Purview to use.
### Supported data sources for UAMI
The following steps will show you how to create a UAMI for Purview to use.
:::image type="content" source="media/manage-credentials/status-successful.png" alt-text="Screenshot the Azure Purview account in the Azure Portal with Status highlighted under the overview tab and essentials menu.":::
-1. Once the managed identity is successfully deployed, navigate to the [Purview Studio](https://web.purview.azure.com/), by selecting the **Open Purview Studio** button.
+1. Once the managed identity is successfully deployed, navigate to the [Azure Purview Studio](https://web.purview.azure.com/), by selecting the **Open Azure Purview Studio** button.
-1. In the [Purview Studio](https://web.purview.azure.com/), navigate to the Management Center in the studio and then navigate to the Credentials section.
+1. In the [Azure Purview Studio](https://web.purview.azure.com/), navigate to the Management Center in the studio and then navigate to the Credentials section.
1. Create a user-assigned managed identity by selecting **+New**.
1. Select the Managed identity authentication method, and select your user-assigned managed identity from the dropdown menu.
The following steps will show you how to create a UAMI for Purview to use.
:::image type="content" source="media/manage-credentials/new-user-assigned-managed-identity-credential.png" alt-text="Screenshot showing the new managed identity creation tile, with the Learn More link highlighted."::: >[!NOTE]
- > If the portal was open during creation of your user assigned managed identity, you'll need to refresh the Purview web portal to load the settings finished in the Azure portal.
+ > If the portal was open during creation of your user-assigned managed identity, you'll need to refresh the Azure Purview web portal to load the settings configured in the Azure portal.
1. After all the information is filled in, select **Create**.
purview Manage Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-data-sources.md
In this article, you learn how to register new data sources, manage collections
Use the following steps to register a new source.
-1. Open [Purview Studio](https://web.purview.azure.com/resource/), navigate to the **Data Map**, **Sources**, and select **Register**.
+1. Open [Azure Purview Studio](https://web.purview.azure.com/resource/), navigate to the **Data Map**, **Sources**, and select **Register**.
:::image type="content" source="media/manage-data-sources/purview-studio.png" alt-text="Azure Purview Studio":::
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-integration-runtimes.md
Last updated 10/22/2021
This article describes how to create and manage a self-hosted integration runtime (SHIR) that lets you scan data sources in Azure Purview.
> [!NOTE]
-> The Purview Integration Runtime cannot be shared with an Azure Synapse Analytics or Azure Data Factory Integration Runtime on the same machine. It needs to be installed on a separated machine.
+> The Azure Purview Integration Runtime cannot be shared with an Azure Synapse Analytics or Azure Data Factory Integration Runtime on the same machine. It needs to be installed on a separate machine.
## Prerequisites
To create and set up a self-hosted integration runtime, use the following proced
## Create a self-hosted integration runtime
-1. On the home page of the [Purview Studio](https://web.purview.azure.com/resource/), select **Data Map** from the left navigation pane.
+1. On the home page of the [Azure Purview Studio](https://web.purview.azure.com/resource/), select **Data Map** from the left navigation pane.
2. Under **Sources and scanning** on the left pane, select **Integration runtimes**, and then select **+ New**.
If you see error messages like the following ones, the likely reason is improper
Your self-hosted integration runtime machine will need to connect to several resources to work correctly:
* The sources you want to scan using the self-hosted integration runtime.
-* Any Azure Key Vault used to store credentials for the Purview resource.
-* The managed Storage account and Event Hub resources created by Purview.
+* Any Azure Key Vault used to store credentials for the Azure Purview resource.
+* The managed Storage account and Event Hub resources created by Azure Purview.
-The managed Storage and Event Hub resources can be found in your subscription under a resource group containing the name of your Purview resource. Azure Purview uses these resources to ingest the results of the scan, among many other things, so the self-hosted integration runtime will need to be able to connect directly with these resources.
+The managed Storage and Event Hub resources can be found in your subscription under a resource group containing the name of your Azure Purview resource. Azure Purview uses these resources to ingest the results of the scan, among many other things, so the self-hosted integration runtime will need to be able to connect directly with these resources.
Here are the domains and ports that will need to be allowed through corporate and machine firewalls.
> [!NOTE]
-> For domains listed with '\<managed Purview storage account>', you will add the name of the managed storage account associated with your Purview resource. You can find this resource in the Portal. Search your Resource Groups for a group named: managed-rg-\<your Purview Resource name>. For example: managed-rg-contosoPurview. You will use the name of the storage account in this resource group.
+> For domains listed with '\<managed Azure Purview storage account>', you will add the name of the managed storage account associated with your Azure Purview resource. You can find this resource in the Portal. Search your Resource Groups for a group named: managed-rg-\<your Azure Purview Resource name>. For example: managed-rg-contosoPurview. You will use the name of the storage account in this resource group.
>
-> For domains listed with '\<managed Event Hub resource>', you will add the name of the managed Event Hub associated with your Purview resource. You can find this in the same Resource Group as the managed storage account.
+> For domains listed with '\<managed Event Hub resource>', you will add the name of the managed Event Hub associated with your Azure Purview resource. You can find this in the same Resource Group as the managed storage account.
| Domain names | Outbound ports | Description |
| -- | -- | - |
-| `*.servicebus.windows.net` | 443 | Global infrastructure Purview uses to run its scans. Wildcard required as there is no dedicated resource. |
-| `<managed Event Hub resource>.servicebus.windows.net` | 443 | Purview uses this to connect with the associated service bus. It will be covered by allowing the above domain, but if you are using Private Endpoints, you will need to test access to this single domain.|
-| `*.frontend.clouddatahub.net` | 443 | Global infrastructure Purview uses to run its scans. Wildcard required as there is no dedicated resource. |
-| `<managed Purview storage account>.core.windows.net` | 443 | Used by the self-hosted integration runtime to connect to the managed Azure storage account.|
-| `<managed Purview storage account>.queue.core.windows.net` | 443 | Queues used by purview to run the scan process. |
+| `*.servicebus.windows.net` | 443 | Global infrastructure Azure Purview uses to run its scans. Wildcard required as there is no dedicated resource. |
+| `<managed Event Hub resource>.servicebus.windows.net` | 443 | Azure Purview uses this to connect with the associated service bus. It will be covered by allowing the above domain, but if you are using Private Endpoints, you will need to test access to this single domain.|
+| `*.frontend.clouddatahub.net` | 443 | Global infrastructure Azure Purview uses to run its scans. Wildcard required as there is no dedicated resource. |
+| `<managed Azure Purview storage account>.core.windows.net` | 443 | Used by the self-hosted integration runtime to connect to the managed Azure storage account.|
+| `<managed Azure Purview storage account>.queue.core.windows.net` | 443 | Queues used by Azure Purview to run the scan process. |
| `*.login.windows.net` | 443 | Sign in to Azure Active Directory.|
| `*.login.microsoftonline.com` | 443 | Sign in to Azure Active Directory. |
| `download.microsoft.com` | 443 | Optional for SHIR updates. |
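A quick way to validate these rules from the SHIR machine is a simple outbound connectivity check. The sketch below probes a few of the documented endpoints on port 443; the managed-resource placeholder must be filled in first, and wildcard domains cannot be probed directly.

```csharp
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class FirewallProbe
{
    static async Task Main()
    {
        // Fill in the managed-resource placeholder before running; wildcard
        // domains can't be tested as-is, so probe concrete hostnames.
        string[] hosts =
        {
            "download.microsoft.com",
            "login.microsoftonline.com",
            "<managed Azure Purview storage account>.core.windows.net"
        };

        foreach (string host in hosts)
        {
            try
            {
                using var tcp = new TcpClient();
                await tcp.ConnectAsync(host, 443); // all listed endpoints use port 443
                Console.WriteLine($"{host}: reachable");
            }
            catch (SocketException ex)
            {
                Console.WriteLine($"{host}: blocked ({ex.SocketErrorCode})");
            }
        }
    }
}
```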
You can delete a self-hosted integration runtime by navigating to **Integration
## Java Runtime Environment Installation
-If you will be scanning Parquet files using the Self-Hosted Integration runtime with Purview, you will need to install either the Java Runtime Environment or OpenJDK on your self-hosted IR machine.
+If you will be scanning Parquet files using the Self-Hosted Integration runtime with Azure Purview, you will need to install either the Java Runtime Environment or OpenJDK on your self-hosted IR machine.
When scanning Parquet files using the Self-hosted IR, the service locates the Java runtime by first checking the registry *`(SOFTWARE\JavaSoft\Java Runtime Environment\{Current Version}\JavaHome)`* for the JRE and, if not found, then checking the system variable *`JAVA_HOME`* for OpenJDK.
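The lookup order can be pictured with a small Windows-only sketch (it reads the registry, so it needs the Microsoft.Win32.Registry package on .NET Core). This is an illustration of the documented order, not the service's actual code.

```csharp
using System;
using Microsoft.Win32;

class JavaRuntimeLocator
{
    // Mirrors the documented order: JRE registry entry first, then JAVA_HOME (OpenJDK).
    static string FindJavaHome()
    {
        using RegistryKey jre =
            Registry.LocalMachine.OpenSubKey(@"SOFTWARE\JavaSoft\Java Runtime Environment");
        if (jre?.GetValue("CurrentVersion") is string current)
        {
            using RegistryKey versionKey = jre.OpenSubKey(current);
            if (versionKey?.GetValue("JavaHome") is string home)
                return home; // 1) found via the JRE registry entry
        }

        // 2) fall back to the JAVA_HOME environment variable
        return Environment.GetEnvironmentVariable("JAVA_HOME")
            ?? throw new InvalidOperationException("No Java runtime found.");
    }

    static void Main() => Console.WriteLine(FindJavaHome());
}
```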
You can install the self-hosted integration runtime by downloading a Managed Ide
- [How scans detect deleted assets](concept-scans-and-ingestion.md#how-scans-detect-deleted-assets)
-- [Use private endpoints with Purview](catalog-private-link.md)
+- [Use private endpoints with Azure Purview](catalog-private-link.md)
purview Manage Kafka Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-kafka-dotnet.md
Title: Publish messages to and process messages from Azure Purview's Atlas Kafka topics via Event Hubs using .NET
-description: This article provides a walkthrough to create a .NET Core application that sends/receives events to/from Purview's Apache Atlas Kafka topics by using the latest Azure.Messaging.EventHubs package.
+description: This article provides a walkthrough to create a .NET Core application that sends/receives events to/from Azure Purview's Apache Atlas Kafka topics by using the latest Azure.Messaging.EventHubs package.
This quickstart shows how to send events to and receive events from Azure Purview's Atlas Kafka topics via event hub using the **Azure.Messaging.EventHubs** .NET library.
> [!IMPORTANT]
-> A managed event hub is created as part of Purview account creation, see [Purview account creation](create-catalog-portal.md). You can publish messages to the event hub kafka topic ATLAS_HOOK and Purview will consume and process it. Purview will notify entity changes to event hub kafka topic ATLAS_ENTITIES and user can consume and process it.This quickstart uses the new **Azure.Messaging.EventHubs** library.
+> A managed event hub is created as part of Azure Purview account creation, see [Azure Purview account creation](create-catalog-portal.md). You can publish messages to the event hub Kafka topic ATLAS_HOOK and Azure Purview will consume and process them. Azure Purview will notify entity changes to the event hub Kafka topic ATLAS_ENTITIES, which users can consume and process. This quickstart uses the new **Azure.Messaging.EventHubs** library.
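For orientation before the full walkthrough, here is a compact producer sketch using the Azure.Messaging.EventHubs package. The connection string and the Atlas payload are placeholders; the quickstart below covers the real setup.

```csharp
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class AtlasHookProducer
{
    static async Task Main()
    {
        // Atlas Kafka endpoint connection string from the account's properties tab (placeholder).
        const string connectionString = "<ATLAS KAFKA ENDPOINT CONNECTION STRING>";

        await using var producer = new EventHubProducerClient(connectionString, "ATLAS_HOOK");

        // Placeholder payload; real messages follow the Atlas notification format.
        const string message = "{\"message\":{\"type\":\"ENTITY_CREATE_V2\"}}";

        using EventDataBatch batch = await producer.CreateBatchAsync();
        batch.TryAdd(new EventData(Encoding.UTF8.GetBytes(message)));
        await producer.SendAsync(batch);
    }
}
```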
## Prerequisites
To complete this quickstart, you need the following prerequisites:
- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).
- **Microsoft Visual Studio 2019**. The Azure Event Hubs client library makes use of new features that were introduced in C# 8.0. You can still use the library with previous C# language versions, but the new syntax won't be available. To make use of the full syntax, it is recommended that you compile with the [.NET Core SDK](https://dotnet.microsoft.com/download) 3.0 or higher and [language version](/dotnet/csharp/language-reference/configure-language-version#override-a-default) set to `latest`. If you're using Visual Studio, versions before Visual Studio 2019 aren't compatible with the tools needed to build C# 8.0 projects. Visual Studio 2019, including the free Community edition, can be downloaded [here](https://visualstudio.microsoft.com/vs/).
-## Publish messages to Purview
-This section shows you how to create a .NET Core console application to send events to an Purview via event hub kafka topic **ATLAS_HOOK**.
+## Publish messages to Azure Purview
+This section shows you how to create a .NET Core console application to send events to Azure Purview via the event hub Kafka topic **ATLAS_HOOK**.
## Create a Visual Studio project
Next, create a C# .NET console application in Visual Studio:
private const string eventHubName = "<EVENT HUB NAME>";
```
- You can get event hub namespace associated with purview account by looking at Atlas kafka endpoint primary/secondary connection strings in properties tab of Purview account.
+ You can get the event hub namespace associated with your Azure Purview account by looking at the Atlas Kafka endpoint primary/secondary connection strings in the properties tab of the Azure Purview account.
:::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="Event Hub Namespace":::
- The event hub name should be **ATLAS_HOOK** for sending messages to Purview.
+ The event hub name should be **ATLAS_HOOK** for sending messages to Azure Purview.
-3. Replace the `Main` method with the following `async Main` method and add an `async ProduceMessage` to push messages into Purview. See the code comments for details.
+3. Replace the `Main` method with the following `async Main` method and add an async `ProduceMessage` method to push messages into Azure Purview. See the code comments for details.
```csharp
static async Task Main()
Next, create a C# .NET console application in Visual Studio:
```
-## Consume messages from Purview
-This section shows how to write a .NET Core console application that receives messages from an event hub using an event processor. You need to use ATLAS_ENTITIES event hub to receive messages from Purview.The event processor simplifies receiving events from event hubs by managing persistent checkpoints and parallel receptions from those event hubs.
+## Consume messages from Azure Purview
+This section shows how to write a .NET Core console application that receives messages from an event hub using an event processor. You need to use the ATLAS_ENTITIES event hub to receive messages from Azure Purview. The event processor simplifies receiving events from event hubs by managing persistent checkpoints and parallel receptions from those event hubs.
> [!WARNING]
> If you run this code on Azure Stack Hub, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest available Azure Storage API available in Azure that may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of Storage Blob SDK than those typically available on Azure. If you are using Azure Blob Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code.
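Before the step-by-step setup, here is a compact sketch of the processor pattern, assuming the Azure.Messaging.EventHubs.Processor and Azure.Storage.Blobs packages; all connection strings and names are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Messaging.EventHubs.Processor;
using Azure.Storage.Blobs;

class AtlasEntitiesConsumer
{
    static async Task Main()
    {
        // Placeholders: checkpoint container and the account's Atlas Kafka endpoint.
        var checkpointStore = new BlobContainerClient(
            "<STORAGE CONNECTION STRING>", "<BLOB CONTAINER NAME>");
        var processor = new EventProcessorClient(
            checkpointStore,
            EventHubConsumerClient.DefaultConsumerGroupName,
            "<ATLAS KAFKA ENDPOINT CONNECTION STRING>",
            "ATLAS_ENTITIES");

        processor.ProcessEventAsync += async args =>
        {
            Console.WriteLine(args.Data.EventBody.ToString()); // entity-change notification
            await args.UpdateCheckpointAsync();                // persist progress
        };
        processor.ProcessErrorAsync += args =>
        {
            Console.Error.WriteLine(args.Exception.Message);
            return Task.CompletedTask;
        };

        await processor.StartProcessingAsync();
        await Task.Delay(TimeSpan.FromSeconds(30)); // listen briefly, then stop
        await processor.StopProcessingAsync();
    }
}
```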
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
private const string blobContainerName = "<BLOB CONTAINER NAME>";
```
- You can get event hub namespace associated with purview account by looking at Atlas kafka endpoint primary/secondary connection strings in properties tab of Purview account.
+ You can get the event hub namespace associated with your Azure Purview account by looking at the Atlas Kafka endpoint primary/secondary connection strings in the properties tab of the Azure Purview account.
:::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="Event Hub Namespace":::
- The event hub name should be **ATLAS_ENTITIES** for sending messages to Purview.
+ The event hub name should be **ATLAS_ENTITIES** for receiving messages from Azure Purview.
3. Replace the `Main` method with the following `async Main` method. See the code comments for details.
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
> For the complete source code with more informational comments, see [this file on GitHub](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/Sample01_HelloWorld.md).
6. Run the receiver application.
-### Sample Message received from Purview
+### Sample Message received from Azure Purview
```json
{
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
```
> [!IMPORTANT]
-> Atlas currently supports the following operation types: **ENTITY_CREATE_V2**, **ENTITY_PARTIAL_UPDATE_V2**, **ENTITY_FULL_UPDATE_V2**, **ENTITY_DELETE_V2**. Pushing messages to Purview is currently enabled by default. If the scenario involves reading from Purview contact us as it needs to be allow-listed. (provide subscription id and name of Purview account).
+> Atlas currently supports the following operation types: **ENTITY_CREATE_V2**, **ENTITY_PARTIAL_UPDATE_V2**, **ENTITY_FULL_UPDATE_V2**, **ENTITY_DELETE_V2**. Pushing messages to Azure Purview is currently enabled by default. If the scenario involves reading from Azure Purview, contact us, as it needs to be allow-listed (provide the subscription ID and the name of the Azure Purview account).
## Next steps
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/overview.md
Last updated 12/06/2021
Azure Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Enable data curators to manage and secure your data estate. Empower data consumers to find valuable, trustworthy data. Azure Purview automates data discovery by providing data scanning and classification as a service for assets across your data estate. Metadata and descriptions of discovered data assets are integrated into a holistic map of your data estate. Atop this map, there are purpose-built apps that create environments for data discovery, access management, and insights about your data landscape.
Azure Purview automates data discovery by providing data scanning and classifica
## Data Map
-Azure Purview Data Map provides the foundation for data discovery and effective data governance. Purview Data Map is a cloud native PaaS service that captures metadata about enterprise data present in analytics and operation systems on-premises and cloud. Purview Data Map is automatically kept up to date with built-in automated scanning and classification system. Business users can configure and use the Purview Data Map through an intuitive UI and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.0 APIs.
-
-Azure Purview Data Map powers the Purview Data Catalog and Purview data insights as unified experiences within the [Purview Studio](https://web.purview.azure.com/resource/).
-
+Azure Purview Data Map provides the foundation for data discovery and effective data governance. The Azure Purview Data Map is a cloud-native PaaS service that captures metadata about enterprise data present in analytics and operational systems, both on-premises and in the cloud. The Azure Purview Data Map is automatically kept up to date with a built-in, automated scanning and classification system. Business users can configure and use the Azure Purview Data Map through an intuitive UI, and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.0 APIs.
+Azure Purview Data Map powers the Azure Purview Data Catalog and Azure Purview data insights as unified experiences within the [Azure Purview Studio](https://web.purview.azure.com/resource/).
For more information, see our [introduction to Data Map](concept-elastic-data-map.md).
## Data Catalog
-With the Purview Data Catalog, business and technical users alike can quickly & easily find relevant data using a search experience with filters based on various lenses like glossary terms, classifications, sensitivity labels and more. For subject matter experts, data stewards and officers, the Purview Data Catalog provides data curation features like business glossary management and ability to automate tagging of data assets with glossary terms. Data consumers and producers can also visually trace the lineage of data assets starting from the operational systems on-premises, through movement, transformation & enrichment with various data storage & processing systems in the cloud to consumption in an analytics system like Power BI.
-
+With the Azure Purview Data Catalog, business and technical users alike can quickly and easily find relevant data using a search experience with filters based on various lenses like glossary terms, classifications, sensitivity labels, and more. For subject matter experts, data stewards, and officers, the Azure Purview Data Catalog provides data curation features like business glossary management and the ability to automate tagging of data assets with glossary terms. Data consumers and producers can also visually trace the lineage of data assets, starting from the operational systems on-premises, through movement, transformation, and enrichment with various data storage and processing systems in the cloud, to consumption in an analytics system like Power BI.
For more information, see our [introduction to search using Data Catalog](concept-search.md).
## Data Insights
-With the Purview data insights, data officers and security officers can get a bird's eye view and at a glance understand what data is actively scanned, where sensitive data is and how it moves.
+With the Azure Purview data insights, data officers and security officers can get a bird's-eye view and understand at a glance what data is actively scanned, where sensitive data is, and how it moves.
For more information, see our [introduction to Data Insights](concept-insights.md).
purview Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/purview-connector-overview.md
Title: Purview Connector Overview
-description: This article outlines the different data stores and functionalities supported in Purview
+ Title: Azure Purview Connector Overview
+description: This article outlines the different data stores and functionalities supported in Azure Purview
# Supported data stores
-Purview supports the following data stores. Select each data store to learn the supported capabilities and the corresponding configurations in details.
+Azure Purview supports the following data stores. Select each data store to learn the supported capabilities and the corresponding configurations in detail.
-## Purview data sources
+## Azure Purview data sources
|**Category**| **Data Store** |**Technical metadata** |**Classification** |**Lineage** | **Access Policy** |
|---|---|---|---|---|---|
Purview supports the following data stores. Select each data store to learn the
\* Besides the lineage on assets within the data source, lineage is also supported if a dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).
> [!NOTE]
-> Currently, Purview can't scan an asset that has `/`, `\`, or `#` in its name. To scope your scan and avoid scanning assets that have those characters in the asset name, use the example in [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan).
+> Currently, Azure Purview can't scan an asset that has `/`, `\`, or `#` in its name. To scope your scan and avoid scanning assets that have those characters in the asset name, use the example in [Register and scan an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan).
## Scan regions
-The following is a list of all the Azure data source (data center) regions where the Purview scanner runs. If your Azure data source is in a region outside of this list, the scanner will run in the region of your Purview instance.
+The following is a list of all the Azure data source (data center) regions where the Azure Purview scanner runs. If your Azure data source is in a region outside of this list, the scanner will run in the region of your Azure Purview instance.
-### Purview scanner regions
+### Azure Purview scanner regions
- Australia East
- Australia Southeast
purview Quickstart Create Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/quickstart-create-collection.md
Last updated 11/04/2021
-# Quickstart: Create a collection and assign permissions in Purview
+# Quickstart: Create a collection and assign permissions in Azure Purview
-Collections are Azure Purview's tool to manage ownership and access control across assets, sources, and information. They also organize your sources and assets into categories that are customized to match your management experience with your data. This guide will take you through setting up your first collection and collection admin to prepare your Purview environment for your organization.
+Collections are Azure Purview's tool to manage ownership and access control across assets, sources, and information. They also organize your sources and assets into categories that are customized to match your management experience with your data. This guide will take you through setting up your first collection and collection admin to prepare your Azure Purview environment for your organization.
## Prerequisites
Collections are Azure Purview's tool to manage ownership and access control acro
* Your own [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
## Check permissions
-In order to create and manage collections in Purview, you will need to be a **Collection Admin** within Purview. We can check these permissions in the [Purview Studio](use-purview-studio.md). You can find the studio by going to your Purview resource in the [Azure portal](https://portal.azure.com), and selecting the **Open Purview Studio** tile on the overview page.
+In order to create and manage collections in Azure Purview, you will need to be a **Collection Admin** within Azure Purview. You can check these permissions in the [Azure Purview Studio](use-purview-studio.md). You can find the studio by going to your Azure Purview resource in the [Azure portal](https://portal.azure.com), and selecting the **Open Azure Purview Studio** tile on the overview page.
1. Select Data Map > Collections from the left pane to open collection management page.
- :::image type="content" source="./media/quickstart-create-collection/find-collections.png" alt-text="Screenshot of Purview studio opened to the Data Map, with the Collections tab selected." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/find-collections.png" alt-text="Screenshot of Azure Purview studio opened to the Data Map, with the Collections tab selected." border="true":::
-1. Select your root collection. This is the top collection in your collection list and will have the same name as your Purview resource. In our example below, it's called Contoso Purview.
+1. Select your root collection. This is the top collection in your collection list and will have the same name as your Azure Purview resource. In our example below, it's called Contoso Azure Purview.
- :::image type="content" source="./media/quickstart-create-collection/select-root-collection.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/select-root-collection.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
1. Select role assignments in the collection window.
- :::image type="content" source="./media/quickstart-create-collection/role-assignments.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/role-assignments.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
-1. To create a collection, you will need to be in the collection admin list under role assignments. If you created the Purview resource, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
+1. To create a collection, you will need to be in the collection admin list under role assignments. If you created the Azure Purview resource, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
- :::image type="content" source="./media/quickstart-create-collection/collection-admins.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/collection-admins.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
## Create a collection in the portal
-To create your collection, we'll start in the [Purview Studio](use-purview-studio.md). You can find the studio by going to your Purview resource in the Azure portal and selecting the **Open Purview Studio** tile on the overview page.
+To create your collection, we'll start in the [Azure Purview Studio](use-purview-studio.md). You can find the studio by going to your Azure Purview resource in the Azure portal and selecting the **Open Azure Purview Studio** tile on the overview page.
1. Select Data Map > Collections from the left pane to open collection management page.
- :::image type="content" source="./media/quickstart-create-collection/find-collections-2.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/find-collections-2.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
1. Select **+ Add a collection**.
- :::image type="content" source="./media/quickstart-create-collection/select-add-collection.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the Collections tab selected and Add a Collection highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/select-add-collection.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the Collections tab selected and Add a Collection highlighted." border="true":::
1. In the right panel, enter the collection name, description, and search for users to add them as collection admins.
- :::image type="content" source="./media/quickstart-create-collection/create-collection.png" alt-text="Screenshot of Purview studio window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/create-collection.png" alt-text="Screenshot of Azure Purview studio window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
1. Select **Create**. The collection information will be reflected on the page.
- :::image type="content" source="./media/quickstart-create-collection/created-collection.png" alt-text="Screenshot of Purview studio window, showing the newly created collection window." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/created-collection.png" alt-text="Screenshot of Azure Purview studio window, showing the newly created collection window." border="true":::
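Collections can also be created outside the portal. The sketch below calls the account data-plane REST API directly with HttpClient; the endpoint shape, `api-version`, and the `finance` collection name are assumptions based on the collections API, so treat it as illustrative rather than definitive.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

class CreateCollectionSample
{
    static async Task Main()
    {
        const string account = "<account-name>";   // placeholder
        const string collection = "finance";       // placeholder collection name

        // Token for the Purview data plane.
        var credential = new DefaultAzureCredential();
        AccessToken token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://purview.azure.net/.default" }), default);

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token.Token);

        // Assumed endpoint and api-version; the root collection shares the account name.
        var url = $"https://{account}.purview.azure.com/account/collections/{collection}" +
                  "?api-version=2019-11-01-preview";
        var body = new StringContent(
            $"{{\"friendlyName\":\"Finance\",\"parentCollection\":{{\"referenceName\":\"{account}\"}}}}",
            Encoding.UTF8, "application/json");

        HttpResponseMessage response = await http.PutAsync(url, body);
        Console.WriteLine(response.StatusCode);
    }
}
```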
## Assign permissions to collection
-Now that you have a collection, you can assign permissions to this collection to manage your users access to Purview.
+Now that you have a collection, you can assign permissions to this collection to manage your users' access to Azure Purview.
### Roles
All assigned roles apply to sources, assets, and other objects within the collec
1. Select the **Role assignments** tab to see all the roles in a collection.
- :::image type="content" source="./media/quickstart-create-collection/select-role-assignments.png" alt-text="Screenshot of Purview studio collection window, with the role assignments tab highlighted." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/select-role-assignments.png" alt-text="Screenshot of Azure Purview studio collection window, with the role assignments tab highlighted." border="true":::
1. Select **Edit role assignments** or the person icon to edit each role member.
- :::image type="content" source="./media/quickstart-create-collection/edit-role-assignments.png" alt-text="Screenshot of Purview studio collection window, with the edit role assignments dropdown list selected." border="true":::
+ :::image type="content" source="./media/quickstart-create-collection/edit-role-assignments.png" alt-text="Screenshot of Azure Purview studio collection window, with the edit role assignments dropdown list selected." border="true":::
1. Type in the textbox to search for users you want to add as role members. Select **OK** to save the change.
purview Reference Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/reference-purview-glossary.md
Title: Purview product glossary
+ Title: Azure Purview product glossary
description: A glossary defining the terminology used throughout Azure Purview
Last updated 08/16/2021
Below is a glossary of terminology used throughout Azure Purview.
## Annotation
-Information that is associated with data assets in Azure Purview, for example, glossary terms and classifications. After they are applied, annotations can be used within Search to aid in the discovery of the data assets. 
+Information that is associated with data assets in Azure Purview, for example, glossary terms and classifications. After they are applied, annotations can be used within Search to aid in the discovery of the data assets.
## Approved
The state given to any request that has been accepted as satisfactory by the designated individual or group who has authority to change the state of the request.
## Asset
-Any single object that is stored within an Azure Purview data catalog.
+Any single object that is stored within an Azure Purview data catalog.
> [!NOTE]
> A single object in the catalog could potentially represent many objects in storage, for example, a resource set is an asset but it's made up of many partition files in storage.
## Azure Information Protection
Data that is in a data center controlled by a customer, for example, not in
An individual or group in charge of managing a data asset.
## Pattern rule
A configuration that overrides how Azure Purview groups assets as resource sets and displays them within the catalog.
-## Purview instance
-A single Azure Purview resource. 
+## Azure Purview instance
+A single Azure Purview resource.
## Registered source
A source that has been added to an Azure Purview instance and is now managed as a part of the Data catalog.
## Related terms
Glossary terms that are linked to other terms within the organization.
## Resource set
A single asset that represents many partitioned files or objects in storage. For example, Azure Purview stores partitioned Apache Spark output as a single resource set instead of unique assets for each individual file.
## Role
-Permissions assigned to a user within an Azure Purview instance. Roles, such as Purview Data Curator or Purview Data Reader, determine what can be done within the product.
+Permissions assigned to a user within an Azure Purview instance. Roles, such as Azure Purview Data Curator or Azure Purview Data Reader, determine what can be done within the product.
## Scan
An Azure Purview process that examines a source or set of sources and ingests its metadata into the data catalog. Scans can be run manually or on a schedule using a scan trigger.
## Scan ruleset
purview Register Scan Adls Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-adls-gen1.md
This article outlines the process to register an Azure Data Lake Storage Gen1 da
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
This section will enable you to register the ADLS Gen1 data source and set up an
It is important to register the data source in Azure Purview prior to setting up a scan for the data source.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Purview accounts** page and select your _Purview account_
+1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Azure Purview accounts** page and select your _Azure Purview account_
- :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-purview-acct.png" alt-text="Screenshot that shows the Purview account used to register the data source":::
+ :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-purview-acct.png" alt-text="Screenshot that shows the Azure Purview account used to register the data source":::
-1. **Open Purview Studio** and navigate to the **Data Map --> Sources**
+1. **Open Azure Purview Studio** and navigate to the **Data Map --> Sources**
- :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-open-purview-studio.png" alt-text="Screenshot that shows the link to open Purview Studio":::
+ :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-open-purview-studio.png" alt-text="Screenshot that shows the link to open Azure Purview Studio":::
:::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-sources.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
The following options are supported:
#### Using system or user-assigned managed identity for scanning
-It is important to give your Purview account the permission to scan the ADLS Gen1 data source. You can add the system managed identity, or user-assigned managed identity at the Subscription, Resource Group, or Resource level, depending on what you want it to have scan permissions on.
+It is important to give your Azure Purview account the permission to scan the ADLS Gen1 data source. You can add the system managed identity, or user-assigned managed identity at the Subscription, Resource Group, or Resource level, depending on what you want it to have scan permissions on.
> [!Note] > You need to be an owner of the subscription to be able to add a managed identity on an Azure resource.
It is important to give your Purview account the permission to scan the ADLS Gen
:::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-storage-access.png" alt-text="Screenshot that shows the Data explorer for the storage account":::
-1. Choose **Select** and add the _Azure Purview Name_ (which is the system managed identity) or the _[user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity)_(preview), that has already been registered in Purview, in the **Select user or group** menu.
+1. Choose **Select** and add the _Azure Purview Name_ (which is the system managed identity) or the _[user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity)_ (preview) that has already been registered in Azure Purview, in the **Select user or group** menu.
1. Select **Read** and **Execute** permissions. Make sure to choose **This folder and all children**, and **An access permission entry and a default permission entry** in the Add options as shown in the below screenshot. Select **OK**
- :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-assign-permissions.png" alt-text="Screenshot that shows the details to assign permissions for the Purview account":::
+ :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-assign-permissions.png" alt-text="Screenshot that shows the details to assign permissions for the Azure Purview account":::
> [!Tip] > An **access permission entry** is a permission entry on _current_ files and folders. A **default permission entry** is a permission entry that will be _inherited_ by new files and folders.
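If you prefer to script the ACL assignment described above, a minimal Python sketch is shown below. It is an illustration only, assuming the `azure-datalake-store` package; the store name and the managed identity's object ID are placeholders you would replace with your own values.

```python
# Minimal sketch (not from the original article): grant Read + Execute to the
# Purview managed identity on an ADLS Gen1 folder, adding both an access entry
# (current files and folders) and a default entry (inherited by new ones).
from azure.datalake.store import core, lib

token = lib.auth()  # interactive sign-in; service principal auth also works
adls = core.AzureDLFileSystem(token, store_name="contosoadlsgen1")  # placeholder

purview_object_id = "00000000-0000-0000-0000-000000000000"  # placeholder object ID
acl_spec = (
    f"user:{purview_object_id}:r-x,"          # access permission entry
    f"default:user:{purview_object_id}:r-x"   # default permission entry
)
adls.modify_acl_entries("/", acl_spec, recursive=True)  # this folder and all children
```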
It is important to give your service principal the permission to scan the ADLS G
### Creating the scan
-1. Open your **Purview account** and select the **Open Purview Studio**
+1. Open your **Azure Purview account** and select **Open Azure Purview Studio**
- :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-purview-acct.png" alt-text="Screenshot that shows the Open Purview Studio":::
+ :::image type="content" source="media/register-scan-adls-gen1/register-adls-gen1-purview-acct.png" alt-text="Screenshot that shows the Open Azure Purview Studio":::
1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy
Scans can be managed or run again on completion.
> [!NOTE] > * Deleting your scan does not delete catalog assets created from previous scans.
- > * The asset will no longer be updated with schema changes if your source table has changed and you re-scan the source table after editing the description in the schema tab of Purview.
+ > * The asset will no longer be updated with schema changes if your source table has changed and you re-scan the source table after editing the description in the schema tab of Azure Purview.
1. You can _run an incremental scan_ or a _full scan_ again.
Scans can be managed or run again on completion.
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-adls-gen2.md
This article outlines the process to register an Azure Data Lake Storage Gen2 da
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
This section will enable you to register the ADLS Gen2 data source and set up an
It is important to register the data source in Azure Purview prior to setting up a scan for the data source.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Purview accounts** page and select your _Purview account_
+1. Go to the [Azure portal](https://portal.azure.com), navigate to the **Azure Purview accounts** page, and select your _Azure Purview account_
- :::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-purview-acct.png" alt-text="Screenshot that shows the Purview account used to register the data source":::
+ :::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-purview-acct.png" alt-text="Screenshot that shows the Azure Purview account used to register the data source":::
-1. **Open Purview Studio** and navigate to the **Data Map --> Sources**
+1. **Open Azure Purview Studio** and navigate to the **Data Map --> Sources**
- :::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-open-purview-studio.png" alt-text="Screenshot that shows the link to open Purview Studio":::
+ :::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-open-purview-studio.png" alt-text="Screenshot that shows the link to open Azure Purview Studio":::
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-sources.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
The following options are supported:
#### Using a system or user assigned managed identity for scanning
-It is important to give your Purview account or user-assigned managed identity (UAMI) the permission to scan the ADLS Gen2 data source. You can add your Purview account's system-assigned managed identity (which has the same name as your Purview account) or UAMI at the Subscription, Resource Group, or Resource level, depending on what level scan permissions are needed.
+It is important to give your Azure Purview account or user-assigned managed identity (UAMI) the permission to scan the ADLS Gen2 data source. You can add your Azure Purview account's system-assigned managed identity (which has the same name as your Azure Purview account) or UAMI at the Subscription, Resource Group, or Resource level, depending on what level scan permissions are needed.
> [!Note] > You need to be an owner of the subscription to be able to add a managed identity on an Azure resource.
It is important to give your Purview account or user-assigned managed identity (
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-access-control.png" alt-text="Screenshot that shows the access control for the storage account":::
-1. Set the **Role** to **Storage Blob Data Reader** and enter your _Azure Purview account name_ or _[user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity)_ under the **Select** input box. Then, select **Save** to give this role assignment to your Purview account.
+1. Set the **Role** to **Storage Blob Data Reader** and enter your _Azure Purview account name_ or _[user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity)_ under the **Select** input box. Then, select **Save** to give this role assignment to your Azure Purview account.
- :::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-assign-permissions.png" alt-text="Screenshot that shows the details to assign permissions for the Purview account":::
+ :::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-assign-permissions.png" alt-text="Screenshot that shows the details to assign permissions for the Azure Purview account":::
> [!Note] > For more details, please see steps in [Authorize access to blobs and queues using Azure Active Directory](../storage/blobs/authorize-access-azure-active-directory.md)
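The role assignment above can also be made programmatically. The sketch below is an illustration only, assuming the `azure-identity` and `azure-mgmt-authorization` Python packages; the subscription, resource names, and principal object ID are placeholders, and a plain dict stands in for the SDK's role assignment parameters model.

```python
# Minimal sketch (not from the original article): assign Storage Blob Data Reader
# on a storage account to the Purview managed identity.
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)
# 2a2b9908-... is the built-in role definition ID for Storage Blob Data Reader.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names must be unique GUIDs
    {
        "role_definition_id": role_definition_id,
        "principal_id": "<purview-managed-identity-object-id>",  # placeholder
    },
)
```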
When authentication method selected is **Account Key**, you need to get your acc
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-secret.png" alt-text="Screenshot that shows the key vault option to create a secret":::
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan #### Using Service Principal for scanning
It is important to give your service principal the permission to scan the ADLS G
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-access-control.png" alt-text="Screenshot that shows the access control for the storage account":::
-1. Set the **Role** to **Storage Blob Data Reader** and enter your _service principal_ under **Select** input box. Then, select **Save** to give this role assignment to your Purview account.
+1. Set the **Role** to **Storage Blob Data Reader** and enter your _service principal_ under the **Select** input box. Then, select **Save** to give this role assignment to your Azure Purview account.
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-sp-permission.png" alt-text="Screenshot that shows the details to provide storage account permissions to the service principal"::: ### Create the scan
-1. Open your **Purview account** and select the **Open Purview Studio**
+1. Open your **Azure Purview account** and select **Open Azure Purview Studio**
1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy 1. Select the **New Scan** icon under the **ADLS Gen2 data source** registered earlier
Follow this configuration guide to [enable access policies on an Azure Storage a
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Amazon Rds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-amazon-rds.md
The Multi-Cloud Scanning Connector for Azure Purview allows you to explore your
This article describes how to use Azure Purview to scan your structured data currently stored in Amazon RDS, including both Microsoft SQL and PostgreSQL databases, and discover what types of sensitive information exists in your data. You'll also learn how to identify the Amazon RDS databases where the data is currently stored for easy information protection and data compliance.
-For this service, use Purview to provide a Microsoft account with secure access to AWS, where the Multi-Cloud Scanning Connectors for Azure Purview will run. The Multi-Cloud Scanning Connectors for Azure Purview use this access to your Amazon RDS databases to read your data, and then reports the scanning results, including only the metadata and classification, back to Azure. Use the Purview classification and labeling reports to analyze and review your data scan results.
+For this service, use Azure Purview to provide a Microsoft account with secure access to AWS, where the Multi-Cloud Scanning Connectors for Azure Purview will run. The Multi-Cloud Scanning Connectors for Azure Purview use this access to your Amazon RDS databases to read your data, and then reports the scanning results, including only the metadata and classification, back to Azure. Use the Azure Purview classification and labeling reports to analyze and review your data scan results.
> [!IMPORTANT] > The Multi-Cloud Scanning Connectors for Azure Purview are separate add-ons to Azure Purview. The terms and conditions for the Multi-Cloud Scanning Connectors for Azure Purview are contained in the agreement under which you obtained Microsoft Azure Services. For more information, see Microsoft Azure Legal Information at https://azure.microsoft.com/support/legal/. > > [!IMPORTANT]
-> Purview support for Amazon RDS is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure Purview support for Amazon RDS is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-## Purview scope for Amazon RDS
+## Azure Purview scope for Amazon RDS
- **Supported database engines**: Amazon RDS structured data storage supports multiple database engines. Azure Purview supports Amazon RDS with/based on Microsoft SQL and PostgreSQL. - **Maximum columns supported**: Scanning RDS tables with more than 300 columns is not supported. -- **Public access support**: Purview supports scanning only with VPC Private Link in AWS, and does not include public access scanning.
+- **Public access support**: Azure Purview supports scanning only with VPC Private Link in AWS, and does not include public access scanning.
-- **Supported regions**: Purview only supports Amazon RDS databases that are located in the following AWS regions:
+- **Supported regions**: Azure Purview only supports Amazon RDS databases that are located in the following AWS regions:
- US East (Ohio) - US East (N. Virginia)
For more information, see:
- [Manage and increase quotas for resources with Azure Purview](how-to-manage-quotas.md) - [Supported data sources and file types in Azure Purview](sources-and-scans.md)-- [Use private endpoints for your Purview account](catalog-private-link.md)
+- [Use private endpoints for your Azure Purview account](catalog-private-link.md)
## Prerequisites
-Ensure that you've performed the following prerequisites before adding your Amazon RDS database as Purview data sources and scanning your RDS data.
+Ensure that you've performed the following prerequisites before adding your Amazon RDS database as Azure Purview data sources and scanning your RDS data.
> [!div class="checklist"] > * You need to be an Azure Purview Data Source Admin.
-> * You need a Purview account. [Create an Azure Purview account instance](create-catalog-portal.md), if you don't yet have one.
+> * You need an Azure Purview account. [Create an Azure Purview account instance](create-catalog-portal.md), if you don't yet have one.
> * You need an Amazon RDS PostgreSQL or Microsoft SQL database, with data.
-## Configure AWS to allow Purview to connect to your RDS VPC
+## Configure AWS to allow Azure Purview to connect to your RDS VPC
Azure Purview supports scanning only when your database is hosted in a virtual private cloud (VPC), where your RDS database can only be accessed from within the same VPC.
The following diagram shows the components in both your customer account and Mic
### Configure AWS PrivateLink using a CloudFormation template
-The following procedure describes how to use an AWS CloudFormation template to configure AWS PrivateLink, allowing Purview to connect to your RDS VPC. This procedure is performed in AWS and is intended for an AWS admin.
+The following procedure describes how to use an AWS CloudFormation template to configure AWS PrivateLink, allowing Azure Purview to connect to your RDS VPC. This procedure is performed in AWS and is intended for an AWS admin.
This CloudFormation template is available for download from the [Azure GitHub repository](https://github.com/Azure/Azure-Purview-Starter-Kit/tree/main/Amazon/AWS/RDS), and will help you create a target group, load balancer, and endpoint service. - **If you have multiple RDS servers in the same VPC**, perform this procedure once, [specifying all RDS server IP addresses and ports](#parameters). In this case, the CloudFormation output will include different ports for each RDS server.
- When [registering these RDS servers as data sources in Purview](#register-an-amazon-rds-data-source), use the ports included in the output instead of the real RDS server ports.
+ When [registering these RDS servers as data sources in Azure Purview](#register-an-amazon-rds-data-source), use the ports included in the output instead of the real RDS server ports.
- **If you have RDS servers in multiple VPCs**, perform this procedure for each of the VPCs.
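If you want to drive the CloudFormation deployment from code rather than the AWS console, a boto3 sketch of the call follows. The stack name, template file, and parameter keys are placeholders; the real parameter keys are defined by the template you downloaded.

```python
# Minimal sketch (not from the original article): deploy the downloaded
# PrivateLink template with boto3. Parameter keys here are illustrative only.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
with open("purview-rds-privatelink.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="purview-rds-privatelink",  # placeholder
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "EndpointAndPort", "ParameterValue": "192.168.1.1:5432"},
        {"ParameterKey": "VpcId", "ParameterValue": "vpc-0abc123"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
)
```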
This CloudFormation template is available for download from the [Azure GitHub re
|Name |Description | |||
- |**Endpoint & port** | Enter the resolved IP address of the RDS endpoint URL and port. For example: `192.168.1.1:5432` <br><br>- **If an RDS proxy is configured**, use the IP address of the read/write endpoint of the proxy for the relevant database. We recommend using an RDS proxy when working with Purview, as the IP address is static.<br><br>- **If you have multiple endpoints behind the same VPC**, enter up to 10, comma-separated endpoints. In this case, a single load balancer is created to the VPC, allowing a connection from the Amazon RDS Multi-Cloud Scanning Connector for Azure Purview in AWS to all RDS endpoints in the VPC. |
+ |**Endpoint & port** | Enter the resolved IP address of the RDS endpoint URL and port. For example: `192.168.1.1:5432` <br><br>- **If an RDS proxy is configured**, use the IP address of the read/write endpoint of the proxy for the relevant database. We recommend using an RDS proxy when working with Azure Purview, as the IP address is static.<br><br>- **If you have multiple endpoints behind the same VPC**, enter up to 10, comma-separated endpoints. In this case, a single load balancer is created to the VPC, allowing a connection from the Amazon RDS Multi-Cloud Scanning Connector for Azure Purview in AWS to all RDS endpoints in the VPC. |
|**Networking** | Enter your VPC ID | |**VPC IPv4 CIDR** | Enter your VPC's CIDR value. You can find this value by selecting the VPC link on your RDS database page. For example: `192.168.0.0/16` | |**Subnets** |Select all the subnets that are associated with your VPC. |
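The resolved IP address asked for in the **Endpoint & port** row can be looked up with a few lines of Python; the hostname below is a hypothetical RDS endpoint. Keep in mind that a plain RDS endpoint's IP can change over time, which is why the article recommends an RDS proxy.

```python
# Minimal sketch: resolve an RDS endpoint URL to the IP:port string the
# template expects. The endpoint below is a placeholder.
import socket

endpoint = "mydb.abc123xyz0.us-east-1.rds.amazonaws.com"
port = 5432
print(f"{socket.gethostbyname(endpoint)}:{port}")  # e.g. 192.168.1.1:5432
```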
This CloudFormation template is available for download from the [Azure GitHub re
- **Resources**: Shows the newly created target group, load balancer, and endpoint service - **Outputs**: Displays the **ServiceName** value, and the IP address and port of the RDS servers
- If you have multiple RDS servers configured, a different port is displayed. In this case, use the port shown here instead of the actual RDS server port when [registering your RDS database](#register-an-amazon-rds-data-source) as Purview data source.
+ If you have multiple RDS servers configured, a different port is displayed. In this case, use the port shown here instead of the actual RDS server port when [registering your RDS database](#register-an-amazon-rds-data-source) as an Azure Purview data source.
1. In the **Outputs** tab, copy the **ServiceName** key value to the clipboard.
- You'll use the value of the **ServiceName** key in the Azure Purview portal, when [registering your RDS database](#register-an-amazon-rds-data-source) as Purview data source. There, enter the **ServiceName** key in the **Connect to private network via endpoint service** field.
+ You'll use the value of the **ServiceName** key in the Azure Purview portal when [registering your RDS database](#register-an-amazon-rds-data-source) as an Azure Purview data source. There, enter the **ServiceName** key in the **Connect to private network via endpoint service** field.
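If you scripted the deployment, the **ServiceName** output can also be read back with boto3 instead of the console, as in this sketch (the stack name is the placeholder used earlier):

```python
# Minimal sketch: read the ServiceName stack output to paste into the
# "Connect to private network via endpoint service" field.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")
stack = cfn.describe_stacks(StackName="purview-rds-privatelink")["Stacks"][0]
service_name = next(
    o["OutputValue"] for o in stack["Outputs"] if o["OutputKey"] == "ServiceName"
)
print(service_name)
```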
## Register an Amazon RDS data source
Your RDS data source appears in the Sources map or list. For example:
:::image type="content" source="media/register-scan-amazon-rds/amazon-rds-in-sources.png" alt-text="Screenshot of an Amazon RDS data source on the Sources page.":::
-## Create Purview credentials for your RDS scan
+## Create Azure Purview credentials for your RDS scan
Credentials supported for Amazon RDS data sources include username/password authentication only, with a password stored in an Azure KeyVault secret.
-### Create a secret for your RDS credentials to use in Purview
+### Create a secret for your RDS credentials to use in Azure Purview
1. Add your password to an Azure KeyVault as a secret. For more information, see [Set and retrieve a secret from Key Vault using Azure portal](../key-vault/secrets/quick-create-portal.md). 1. Add an access policy to your KeyVault with **Get** and **List** permissions. For example:
- :::image type="content" source="media/register-scan-amazon-rds/keyvault-for-rds.png" alt-text="Screenshot of an access policy for RDS in Purview.":::
+ :::image type="content" source="media/register-scan-amazon-rds/keyvault-for-rds.png" alt-text="Screenshot of an access policy for RDS in Azure Purview.":::
- When defining the principal for the policy, select your Purview account. For example:
+ When defining the principal for the policy, select your Azure Purview account. For example:
- :::image type="content" source="media/register-scan-amazon-rds/select-purview-as-principal.png" alt-text="Screenshot of selecting your Purview account as Principal.":::
+ :::image type="content" source="media/register-scan-amazon-rds/select-purview-as-principal.png" alt-text="Screenshot of selecting your Azure Purview account as Principal.":::
Select **Save** to save your Access Policy update. For more information, see [Assign an Azure Key Vault access policy](/azure/key-vault/general/assign-access-policy-portal).
-1. In Azure Purview, add a KeyVault connection to connect the KeyVault with your RDS secret to Purview. For more information, see [Credentials for source authentication in Azure Purview](manage-credentials.md).
+1. In Azure Purview, add a KeyVault connection to connect the KeyVault with your RDS secret to Azure Purview. For more information, see [Credentials for source authentication in Azure Purview](manage-credentials.md).
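The first step above can also be done with the Azure SDK. Below is a minimal sketch, assuming the `azure-keyvault-secrets` and `azure-identity` packages, with placeholder vault and secret names.

```python
# Minimal sketch: store the RDS password as a Key Vault secret that the
# Purview credential will reference.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://<your-vault-name>.vault.azure.net",  # placeholder
    credential=DefaultAzureCredential(),
)
client.set_secret("rds-scan-password", "<rds-database-password>")  # placeholders
```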
-### Create your Purview credential object for RDS
+### Create your Azure Purview credential object for RDS
In Azure Purview, create a credentials object to use when scanning your Amazon RDS account.
-1. In the Purview **Management** area, select **Security and access** > **Credentials** > **New**.
+1. In the Azure Purview **Management** area, select **Security and access** > **Credentials** > **New**.
1. Select **SQL authentication** as the authentication method. Then, enter details for the Key Vault where your RDS credentials are stored, including the names of your Key Vault and secret.
For more information, see [Credentials for source authentication in Azure Purvie
To configure an Azure Purview scan for your RDS database:
-1. From the Purview **Sources** page, select the Amazon RDS data source to scan.
+1. From the Azure Purview **Sources** page, select the Amazon RDS data source to scan.
1. Select :::image type="icon" source="media/register-scan-amazon-s3/new-scan-button.png" border="false"::: **New scan** to start defining your scan. In the pane that opens on the right, enter the following details, and then select **Continue**. - **Name**: Enter a meaningful name for your scan.
- - **Database name**: Enter the name of the database you want to scan. You'll need to find the names available from outside Purview, and create a separate scan for each database in the registered RDS server.
+ - **Database name**: Enter the name of the database you want to scan. You'll need to find the names available from outside Azure Purview, and create a separate scan for each database in the registered RDS server.
- **Credential**: Select the credential you created earlier for the Multi-Cloud Scanning Connectors for Azure Purview to access the RDS database. 1. On the **Select a scan rule set** pane, select the scan rule set you want to use, or create a new one. For more information, see [Create a scan rule set](create-a-scan-rule-set.md).
While you run your scan, select **Refresh** to monitor the scan progress.
## Explore scanning results
-After a Purview scan is complete on your Amazon RDS databases, drill down in the Purview **Data Map** area to view the scan history. Select a data source to view its details, and then select the **Scans** tab to view any currently running or completed scans.
+After an Azure Purview scan is complete on your Amazon RDS databases, drill down in the Azure Purview **Data Map** area to view the scan history. Select a data source to view its details, and then select the **Scans** tab to view any currently running or completed scans.
-Use the other areas of Purview to find out details about the content in your data estate, including your Amazon RDS databases:
+Use the other areas of Azure Purview to find out details about the content in your data estate, including your Amazon RDS databases:
-- **Explore RDS data in the catalog**. The Purview catalog shows a unified view across all source types, and RDS scanning results are displayed in a similar way to Azure SQL. You can browse the catalog using filters or browse the assets and navigate through the hierarchy. For more information, see:
+- **Explore RDS data in the catalog**. The Azure Purview catalog shows a unified view across all source types, and RDS scanning results are displayed in a similar way to Azure SQL. You can browse the catalog using filters or browse the assets and navigate through the hierarchy. For more information, see:
- [Tutorial: Browse assets in Azure Purview (preview) and view their lineage](tutorial-browse-and-view-lineage.md) - [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
Use the other areas of Purview to find out details about the content in your dat
- **View Insight reports** to view statistics for the classification, sensitivity labels, file types, and more details about your content.
- All Purview Insight reports include the Amazon RDS scanning results, along with the rest of the results from your Azure data sources. When relevant, an **Amazon RDS** asset type is added to the report filtering options.
+ All Azure Purview Insight reports include the Amazon RDS scanning results, along with the rest of the results from your Azure data sources. When relevant, an **Amazon RDS** asset type is added to the report filtering options.
For more information, see the [Understand Insights in Azure Purview](concept-insights.md). -- **View RDS data in other Purview features**, such as the **Scans** and **Glossary** areas. For more information, see:
+- **View RDS data in other Azure Purview features**, such as the **Scans** and **Glossary** areas. For more information, see:
- [Create a scan rule set](create-a-scan-rule-set.md) - [Tutorial: Create and import glossary terms in Azure Purview (preview)](tutorial-import-create-glossary-terms.md)
After the [Load Balancer is created](#step-4-create-a-load-balancer) and its Sta
<a name="service-name"></a>**To copy the service name for use in Azure Purview**:
-After you've created your endpoint service, you can copy the **Service name** value in the Azure Purview portal, when [registering your RDS database](#register-an-amazon-rds-data-source) as Purview data source.
+After you've created your endpoint service, you can copy the **Service name** value into the Azure Purview portal when [registering your RDS database](#register-an-amazon-rds-data-source) as an Azure Purview data source.
Locate the **Service name** on the **Details** tab for your selected endpoint service.
If an error of `Invalid VPC service name` or `Invalid endpoint service` appears
For more information, see [Step 5: Create an endpoint service](#step-5-create-an-endpoint-service).
-1. Make sure that your RDS database is listed in one of the supported regions. For more information, see [Purview scope for Amazon RDS](#purview-scope-for-amazon-rds).
+1. Make sure that your RDS database is listed in one of the supported regions. For more information, see [Azure Purview scope for Amazon RDS](#azure-purview-scope-for-amazon-rds).
### Invalid availability zone
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-amazon-s3.md
The Multi-Cloud Scanning Connector for Azure Purview allows you to explore your
This article describes how to use Azure Purview to scan your unstructured data currently stored in Amazon S3 standard buckets, and discover what types of sensitive information exists in your data. This how-to guide also describes how to identify the Amazon S3 Buckets where the data is currently stored for easy information protection and data compliance.
-For this service, use Purview to provide a Microsoft account with secure access to AWS, where the Multi-Cloud Scanning Connector for Azure Purview will run. The Multi-Cloud Scanning Connector for Azure Purview uses this access to your Amazon S3 buckets to read your data, and then reports the scanning results, including only the metadata and classification, back to Azure. Use the Purview classification and labeling reports to analyze and review your data scan results.
+For this service, use Azure Purview to provide a Microsoft account with secure access to AWS, where the Multi-Cloud Scanning Connector for Azure Purview will run. The Multi-Cloud Scanning Connector for Azure Purview uses this access to your Amazon S3 buckets to read your data, and then reports the scanning results, including only the metadata and classification, back to Azure. Use the Azure Purview classification and labeling reports to analyze and review your data scan results.
> [!IMPORTANT] > The Multi-Cloud Scanning Connector for Azure Purview is a separate add-on to Azure Purview. The terms and conditions for the Multi-Cloud Scanning Connector for Azure Purview are contained in the agreement under which you obtained Microsoft Azure Services. For more information, see Microsoft Azure Legal Information at https://azure.microsoft.com/support/legal/.
For this service, use Purview to provide a Microsoft account with secure access
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
-## Purview scope for Amazon S3
+## Azure Purview scope for Amazon S3
We currently do not support ingestion private endpoints that work with your AWS sources.
-For more information about Purview limits, see:
+For more information about Azure Purview limits, see:
- [Manage and increase quotas for resources with Azure Purview](how-to-manage-quotas.md) - [Supported data sources and file types in Azure Purview](sources-and-scans.md) ### Storage and scanning regions
-The Purview connector for the Amazon S3 service is currently deployed in specific regions only. The following table maps the regions where you data is stored to the region where it would be scanned by Azure Purview.
+The Azure Purview connector for the Amazon S3 service is currently deployed in specific regions only. The following table maps the regions where your data is stored to the region where it would be scanned by Azure Purview.
> [!IMPORTANT] > Customers will be charged for all related data transfer charges according to the region of their bucket.
The Purview connector for the Amazon S3 service is currently deployed in specifi
## Prerequisites
-Ensure that you've performed the following prerequisites before adding your Amazon S3 buckets as Purview data sources and scanning your S3 data.
+Ensure that you've performed the following prerequisites before adding your Amazon S3 buckets as Azure Purview data sources and scanning your S3 data.
> [!div class="checklist"] > * You need to be an Azure Purview Data Source Admin.
-> * [Create a Purview account](#create-a-purview-account) if you don't yet have one
-> * [Create a new AWS role for use with Purview](#create-a-new-aws-role-for-purview)
-> * [Create a Purview credential for your AWS bucket scan](#create-a-purview-credential-for-your-aws-s3-scan)
+> * [Create an Azure Purview account](#create-an-azure-purview-account) if you don't yet have one
+> * [Create a new AWS role for use with Azure Purview](#create-a-new-aws-role-for-azure-purview)
+> * [Create an Azure Purview credential for your AWS bucket scan](#create-an-azure-purview-credential-for-your-aws-s3-scan)
> * [Configure scanning for encrypted Amazon S3 buckets](#configure-scanning-for-encrypted-amazon-s3-buckets), if relevant > * Make sure that your bucket policy does not block the connection. For more information, see [Bucket policy requirements](#confirm-your-bucket-policy-access) and [SCP policy requirements](#confirm-your-scp-policy-access). For these items, you may need to consult with an AWS expert to ensure that your policies allow required access.
-> * When adding your buckets as Purview resources, you'll need the values of your [AWS ARN](#retrieve-your-new-role-arn), [bucket name](#retrieve-your-amazon-s3-bucket-name), and sometimes your [AWS account ID](#locate-your-aws-account-id).
+> * When adding your buckets as Azure Purview resources, you'll need the values of your [AWS ARN](#retrieve-your-new-role-arn), [bucket name](#retrieve-your-amazon-s3-bucket-name), and sometimes your [AWS account ID](#locate-your-aws-account-id).
-### Create a Purview account
+### Create an Azure Purview account
-- **If you already have a Purview account,** you can continue with the configurations required for AWS S3 support. Start with [Create a Purview credential for your AWS bucket scan](#create-a-purview-credential-for-your-aws-s3-scan).
+- **If you already have an Azure Purview account,** you can continue with the configurations required for AWS S3 support. Start with [Create an Azure Purview credential for your AWS bucket scan](#create-an-azure-purview-credential-for-your-aws-s3-scan).
-- **If you need to create a Purview account,** follow the instructions in [Create an Azure Purview account instance](create-catalog-portal.md). After creating your account, return here to complete configuration and begin using Purview connector for Amazon S3.
+- **If you need to create an Azure Purview account,** follow the instructions in [Create an Azure Purview account instance](create-catalog-portal.md). After creating your account, return here to complete configuration and begin using Azure Purview connector for Amazon S3.
-### Create a new AWS role for Purview
+### Create a new AWS role for Azure Purview
-The Purview scanner is deployed in a Microsoft account in AWS. To allow the Purview scanner to read your S3 data, you must create a dedicated role in the AWS portal, in the IAM area, to be used by the scanner.
+The Azure Purview scanner is deployed in a Microsoft account in AWS. To allow the Azure Purview scanner to read your S3 data, you must create a dedicated role in the AWS portal, in the IAM area, to be used by the scanner.
-This procedure describes how to create the AWS role, with the required Microsoft Account ID and External ID from Purview, and then enter the Role ARN value in Purview.
+This procedure describes how to create the AWS role, with the required Microsoft Account ID and External ID from Azure Purview, and then enter the Role ARN value in Azure Purview.
**To locate your Microsoft Account ID and External ID**:
-1. In Purview, go to the **Management Center** > **Security and access** > **Credentials**.
+1. In Azure Purview, go to the **Management Center** > **Security and access** > **Credentials**.
1. Select **New** to create a new credential.
This procedure describes how to create the AWS role, with the required Microsoft
[ ![Locate your Microsoft account ID and External ID values.](./media/register-scan-amazon-s3/locate-account-id-external-id.png) ](./media/register-scan-amazon-s3/locate-account-id-external-id.png#lightbox)
-**To create your AWS role for Purview**:
+**To create your AWS role for Azure Purview**:
1. Open your **Amazon Web Services** console, and under **Security, Identity, and Compliance**, select **IAM**.
This procedure describes how to create the AWS role, with the required Microsoft
- [Confirm your bucket policy access](#confirm-your-bucket-policy-access) - [Confirm your SCP policy access](#confirm-your-scp-policy-access)
-### Create a Purview credential for your AWS S3 scan
+### Create an Azure Purview credential for your AWS S3 scan
-This procedure describes how to create a new Purview credential to use when scanning your AWS buckets.
+This procedure describes how to create a new Azure Purview credential to use when scanning your AWS buckets.
> [!TIP]
-> If you're continuing directly on from [Create a new AWS role for Purview](#create-a-new-aws-role-for-purview), you may already have the **New credential** pane open in Purview.
+> If you're continuing directly on from [Create a new AWS role for Azure Purview](#create-a-new-aws-role-for-azure-purview), you may already have the **New credential** pane open in Azure Purview.
> > You can also create a new credential in the middle of the process, while [configuring your scan](#create-a-scan-for-one-or-more-amazon-s3-buckets). In that case, in the **Credential** field, select **New**. >
-1. In Purview, go to the **Management Center**, and under **Security and access**, select **Credentials**.
+1. In Azure Purview, go to the **Management Center**, and under **Security and access**, select **Credentials**.
-1. Select **New**, and in the **New credential** pane that appears on the right, use the following fields to create your Purview credential:
+1. Select **New**, and in the **New credential** pane that appears on the right, use the following fields to create your Azure Purview credential:
|Field |Description | ||| |**Name** |Enter a meaningful name for this credential. | |**Description** |Enter an optional description for this credential, such as `Used to scan the tutorial S3 buckets` | |**Authentication method** |Select **Role ARN**, since you're using a role ARN to access your bucket. |
- |**Role ARN** | Once you've [created your Amazon IAM role](#create-a-new-aws-role-for-purview), navigate to your role in the AWS IAM area, copy the **Role ARN** value, and enter it here. For example: `arn:aws:iam::181328463391:role/S3Role`. <br><br>For more information, see [Retrieve your new Role ARN](#retrieve-your-new-role-arn). |
+ |**Role ARN** | Once you've [created your Amazon IAM role](#create-a-new-aws-role-for-azure-purview), navigate to your role in the AWS IAM area, copy the **Role ARN** value, and enter it here. For example: `arn:aws:iam::181328463391:role/S3Role`. <br><br>For more information, see [Retrieve your new Role ARN](#retrieve-your-new-role-arn). |
| | |
- The **Microsoft account ID** and the **External ID** values are used when [creating your Role ARN in AWS.](#create-a-new-aws-role-for-purview).
    The **Microsoft account ID** and the **External ID** values are used when [creating your Role ARN in AWS](#create-a-new-aws-role-for-azure-purview).
1. Select **Create** when you're done to finish creating the credential.
-For more information about Purview credentials, see [Credentials for source authentication in Azure Purview](manage-credentials.md).
+For more information about Azure Purview credentials, see [Credentials for source authentication in Azure Purview](manage-credentials.md).
### Configure scanning for encrypted Amazon S3 buckets
AWS buckets support multiple encryption types. For buckets that use **AWS-KMS**
1. Attach your new policy to the role you added for scanning.
- 1. Navigate back to the **IAM** > **Roles** page, and select the role you added [earlier](#create-a-new-aws-role-for-purview).
+ 1. Navigate back to the **IAM** > **Roles** page, and select the role you added [earlier](#create-a-new-aws-role-for-azure-purview).
1. On the **Permissions** tab, select **Attach policies**.
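As a scripted alternative to the console steps above, this boto3 sketch creates a `kms:Decrypt` policy and attaches it to the scanning role. Role and policy names are placeholders; where possible, scope the `Resource` to your own KMS keys rather than `*`.

```python
# Minimal sketch: create a KMS decrypt policy and attach it to the scanner role
# so AWS-KMS-encrypted buckets can be read.
import json
import boto3

iam = boto3.client("iam")
policy = iam.create_policy(
    PolicyName="PurviewKmsDecrypt",  # placeholder
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "kms:Decrypt", "Resource": "*"}],
    }),
)
iam.attach_role_policy(
    RoleName="PurviewS3ScanRole",  # the role created for scanning (placeholder)
    PolicyArn=policy["Policy"]["Arn"],
)
```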
AWS buckets support multiple encryption types. For buckets that use **AWS-KMS**
Make sure that the S3 bucket [policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html) does not block the connection: 1. In AWS, navigate to your S3 bucket, and then select the **Permissions** tab > **Bucket policy**.
-1. Check the policy details to make sure that it doesn't block the connection from the Purview scanner service.
+1. Check the policy details to make sure that it doesn't block the connection from the Azure Purview scanner service.
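You can also pull the policy down for review with boto3; the bucket name below is the tutorial placeholder. A `NoSuchBucketPolicy` error simply means no bucket policy is set.

```python
# Minimal sketch: fetch and print the bucket policy statements for review.
import json
import boto3

s3 = boto3.client("s3")
policy = json.loads(s3.get_bucket_policy(Bucket="purview-tutorial-bucket")["Policy"])
for statement in policy["Statement"]:
    print(statement.get("Effect"), statement.get("Action"), statement.get("Principal"))
```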
### Confirm your SCP policy access
For example, your SCP policy might block read API calls to the [AWS Region](#sto
- Required API calls, which must be allowed by your SCP policy, include: `AssumeRole`, `GetBucketLocation`, `GetObject`, `ListBucket`, `GetBucketPublicAccessBlock`. - Your SCP policy must also allow calls to the **us-east-1** AWS Region, which is the default Region for API calls. For more information, see the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html).
-Follow the [SCP documentation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html), review your organization's SCP policies, and make sure all the [permissions required for the Purview scanner](#create-a-new-aws-role-for-purview) are available.
+Follow the [SCP documentation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html), review your organization's SCP policies, and make sure all the [permissions required for the Azure Purview scanner](#create-a-new-aws-role-for-azure-purview) are available.
### Retrieve your new Role ARN
-You'll need to record your AWS Role ARN and copy it in to Purview when [creating a scan for your Amazon S3 bucket](#create-a-scan-for-one-or-more-amazon-s3-buckets).
+You'll need to record your AWS Role ARN and copy it into Azure Purview when [creating a scan for your Amazon S3 bucket](#create-a-scan-for-one-or-more-amazon-s3-buckets).
**To retrieve your role ARN:**
-1. In the AWS **Identity and Access Management (IAM)** > **Roles** area, search for and select the new role you [created for Purview](#create-a-purview-credential-for-your-aws-s3-scan).
+1. In the AWS **Identity and Access Management (IAM)** > **Roles** area, search for and select the new role you [created for Azure Purview](#create-a-new-aws-role-for-azure-purview).
1. On the role's **Summary** page, select the **Copy to clipboard** button to the right of the **Role ARN** value. ![Copy the role ARN value to the clipboard.](./media/register-scan-amazon-s3/aws-copy-role-purview.png)
-In Purview, you can edit your credential for AWS S3, and paste the retrieved role in the **Role ARN** field. For more information, see [Create a scan for one or more Amazon S3 buckets](#create-a-scan-for-one-or-more-amazon-s3-buckets).
+In Azure Purview, you can edit your credential for AWS S3, and paste the retrieved role in the **Role ARN** field. For more information, see [Create a scan for one or more Amazon S3 buckets](#create-a-scan-for-one-or-more-amazon-s3-buckets).
### Retrieve your Amazon S3 bucket name
-You'll need the name of your Amazon S3 bucket to copy it in to Purview when [creating a scan for your Amazon S3 bucket](#create-a-scan-for-one-or-more-amazon-s3-buckets)
+You'll need the name of your Amazon S3 bucket to copy it into Azure Purview when [creating a scan for your Amazon S3 bucket](#create-a-scan-for-one-or-more-amazon-s3-buckets).
**To retrieve your bucket name:**
You'll need the name of your Amazon S3 bucket to copy it in to Purview when [cre
![Retrieve and copy the S3 bucket URL.](./media/register-scan-amazon-s3/retrieve-bucket-url-amazon.png)
- Paste your bucket name in a secure file, and add an `s3://` prefix to it to create the value you'll need to enter when configuring your bucket as a Purview resource.
+ Paste your bucket name in a secure file, and add an `s3://` prefix to it to create the value you'll need to enter when configuring your bucket as an Azure Purview resource.
For example: `s3://purview-tutorial-bucket` > [!TIP]
-> Only the root level of your bucket is supported as a Purview data source. For example, the following URL, which includes a sub-folder is *not* supported: `s3://purview-tutorial-bucket/view-data`
+> Only the root level of your bucket is supported as an Azure Purview data source. For example, the following URL, which includes a sub-folder, is *not* supported: `s3://purview-tutorial-bucket/view-data`
> > However, if you configure a scan for a specific S3 bucket, you can select one or more specific folders for your scan. For more information, see the step to [scope your scan](#create-a-scan-for-one-or-more-amazon-s3-buckets). > ### Locate your AWS account ID
-You'll need your AWS account ID to register your AWS account as a Purview data source, together with all of its buckets.
+You'll need your AWS account ID to register your AWS account as an Azure Purview data source, together with all of its buckets.
Your AWS account ID is the ID you use to log in to the AWS console. You can also find it once you're logged in on the IAM dashboard, on the left under the navigation options, and at the top, as the numerical part of your sign-in URL:
For example:
![Retrieve your AWS account ID.](./media/register-scan-amazon-s3/aws-locate-account-id.png)
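The account ID can also be fetched programmatically, which avoids reading it off the console URL. A minimal boto3 sketch:

```python
# Minimal sketch: look up the AWS account ID for the current credentials.
import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]
print(account_id)  # numerical account ID used when registering the AWS account
```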
-## Add a single Amazon S3 bucket as a Purview resource
+## Add a single Amazon S3 bucket as an Azure Purview resource
-Use this procedure if you only have a single S3 bucket that you want to register to Purview as a data source, or if you have multiple buckets in your AWS account, but do not want to register all of them to Purview.
+Use this procedure if you only have a single S3 bucket that you want to register to Azure Purview as a data source, or if you have multiple buckets in your AWS account, but do not want to register all of them to Azure Purview.
**To add your bucket**: 1. In Azure Purview, go to the **Data Map** page, and select **Register** ![Register icon.](./media/register-scan-amazon-s3/register-button.png) > **Amazon S3** > **Continue**.
- ![Add an Amazon AWS bucket as a Purview data source.](./media/register-scan-amazon-s3/add-s3-datasource-to-purview.png)
+ ![Add an Amazon AWS bucket as an Azure Purview data source.](./media/register-scan-amazon-s3/add-s3-datasource-to-purview.png)
> [!TIP] > If you have multiple [collections](manage-data-sources.md#manage-collections) and want to add your Amazon S3 to a specific collection, select the **Map view** at the top right, and then select the **Register** ![Register icon.](./media/register-scan-amazon-s3/register-button.png) button inside your collection.
Use this procedure if you only have a single S3 bucket that you want to register
||| |**Name** |Enter a meaningful name, or use the default provided. | |**Bucket URL** | Enter your AWS bucket URL, using the following syntax: `s3://<bucketName>` <br><br>**Note**: Make sure to use only the root level of your bucket. For more information, see [Retrieve your Amazon S3 bucket name](#retrieve-your-amazon-s3-bucket-name). |
- |**Select a collection** |If you selected to register a data source from within a collection, that collection already listed. <br><br>Select a different collection as needed, **None** to assign no collection, or **New** to create a new collection now. <br><br>For more information about Purview collections, see [Manage data sources in Azure Purview](manage-data-sources.md#manage-collections).|
+ |**Select a collection** |If you chose to register a data source from within a collection, that collection is already listed. <br><br>Select a different collection as needed, **None** to assign no collection, or **New** to create a new collection now. <br><br>For more information about Azure Purview collections, see [Manage data sources in Azure Purview](manage-data-sources.md#manage-collections).|
| | | When you're done, select **Finish** to complete the registration. Continue with [Create a scan for one or more Amazon S3 buckets.](#create-a-scan-for-one-or-more-amazon-s3-buckets).
-## Add an AWS account as a Purview resource
+## Add an AWS account as an Azure Purview resource
-Use this procedure if you have multiple S3 buckets in your Amazon account, and you want to register all of them as Purview data sources.
+Use this procedure if you have multiple S3 buckets in your Amazon account, and you want to register all of them as Azure Purview data sources.
When [configuring your scan](#create-a-scan-for-one-or-more-amazon-s3-buckets), you'll be able to select the specific buckets you want to scan, if you don't want to scan all of them together.
When [configuring your scan](#create-a-scan-for-one-or-more-amazon-s3-buckets),
1. In Azure Purview, go to the **Data Map** page, and select **Register** ![Register icon.](./media/register-scan-amazon-s3/register-button.png) > **Amazon accounts** > **Continue**.
- ![Add an Amazon account as a Purview data source.](./media/register-scan-amazon-s3/add-s3-account-to-purview.png)
+ ![Add an Amazon account as an Azure Purview data source.](./media/register-scan-amazon-s3/add-s3-account-to-purview.png)
> [!TIP] > If you have multiple [collections](manage-data-sources.md#manage-collections) and want to add your Amazon S3 to a specific collection, select the **Map view** at the top right, and then select the **Register** ![Register icon.](./media/register-scan-amazon-s3/register-button.png) button inside your collection.
When [configuring your scan](#create-a-scan-for-one-or-more-amazon-s3-buckets),
||| |**Name** |Enter a meaningful name, or use the default provided. | |**AWS account ID** | Enter your AWS account ID. For more information, see [Locate your AWS account ID](#locate-your-aws-account-id)|
- |**Select a collection** |If you selected to register a data source from within a collection, that collection already listed. <br><br>Select a different collection as needed, **None** to assign no collection, or **New** to create a new collection now. <br><br>For more information about Purview collections, see [Manage data sources in Azure Purview](manage-data-sources.md#manage-collections).|
+ |**Select a collection** |If you chose to register a data source from within a collection, that collection is already listed. <br><br>Select a different collection as needed, **None** to assign no collection, or **New** to create a new collection now. <br><br>For more information about Azure Purview collections, see [Manage data sources in Azure Purview](manage-data-sources.md#manage-collections).|
| | | When you're done, select **Finish** to complete the registration.
Continue with [Create a scan for one or more Amazon S3 buckets](#create-a-scan-f
## Create a scan for one or more Amazon S3 buckets
-Once you've added your buckets as Purview data sources, you can configure a scan to run at scheduled intervals or immediately.
+Once you've added your buckets as Azure Purview data sources, you can configure a scan to run at scheduled intervals or immediately.
-1. Select the **Data Map** tab on the left pane in the [Purview Studio](https://web.purview.azure.com/resource/), and then do one of the following:
+1. Select the **Data Map** tab on the left pane in the [Azure Purview Studio](https://web.purview.azure.com/resource/), and then do one of the following:
- In the **Map view**, select **New scan** ![New scan icon.](./media/register-scan-amazon-s3/new-scan-button.png) in your data source box. - In the **List view**, hover over the row for your data source, and select **New scan** ![New scan icon.](./media/register-scan-amazon-s3/new-scan-button.png).
Once you've added your buckets as Purview data sources, you can configure a scan
|Field |Description | ||| |**Name** | Enter a meaningful name for your scan or use the default. |
- |**Type** |Displayed only if you've added your AWS account, with all buckets included. <br><br>Current options include only **All** > **Amazon S3**. Stay tuned for more options to select as Purview's support matrix expands. |
- |**Credential** | Select a Purview credential with your role ARN. <br><br>**Tip**: If you want to create a new credential at this time, select **New**. For more information, see [Create a Purview credential for your AWS bucket scan](#create-a-purview-credential-for-your-aws-s3-scan). |
+ |**Type** |Displayed only if you've added your AWS account, with all buckets included. <br><br>Current options include only **All** > **Amazon S3**. Stay tuned for more options to select as Azure Purview's support matrix expands. |
+ |**Credential** | Select an Azure Purview credential with your role ARN. <br><br>**Tip**: If you want to create a new credential at this time, select **New**. For more information, see [Create an Azure Purview credential for your AWS bucket scan](#create-an-azure-purview-credential-for-your-aws-s3-scan). |
| **Amazon S3** | Displayed only if you've added your AWS account, with all buckets included. <br><br>Select one or more buckets to scan, or **Select all** to scan all the buckets in your account. | | | |
- Purview automatically checks that the role ARN is valid, and that the buckets and objects within the buckets are accessible, and then continues if the connection succeeds.
+ Azure Purview automatically checks that the role ARN is valid, and that the buckets and objects within the buckets are accessible, and then continues if the connection succeeds.
> [!TIP] > To enter different values and test the connection yourself before continuing, select **Test connection** at the bottom right before selecting **Continue**.
Once you've added your buckets as Purview data sources, you can configure a scan
> Once started, scanning can take up to 24 hours to complete. You'll be able to review your **Insight Reports** and search the catalog 24 hours after you started each scan. >
-For more information, see [Explore Purview scanning results](#explore-purview-scanning-results).
+For more information, see [Explore Azure Purview scanning results](#explore-azure-purview-scanning-results).
-## Explore Purview scanning results
+## Explore Azure Purview scanning results
-Once a Purview scan is complete on your Amazon S3 buckets, drill down in the Purview **Data Map** area to view the scan history.
+Once an Azure Purview scan is complete on your Amazon S3 buckets, drill down in the Azure Purview **Data Map** area to view the scan history.
Select a data source to view its details, and then select the **Scans** tab to view any currently running or completed scans. If you've added an AWS account with multiple buckets, the scan history for each bucket is shown under the account.
For example:
![Show the AWS S3 bucket scans under your AWS account source.](./media/register-scan-amazon-s3/account-scan-history.png)
-Use the other areas of Purview to find out details about the content in your data estate, including your Amazon S3 buckets:
+Use the other areas of Azure Purview to find out details about the content in your data estate, including your Amazon S3 buckets:
-- **Search the Purview data catalog,** and filter for a specific bucket. For example:
+- **Search the Azure Purview data catalog,** and filter for a specific bucket. For example:
![Search the catalog for AWS S3 assets.](./media/register-scan-amazon-s3/search-catalog-screen-aws.png) - **View Insight reports** to view statistics for the classification, sensitivity labels, file types, and more details about your content.
- All Purview Insight reports include the Amazon S3 scanning results, along with the rest of the results from your Azure data sources. When relevant, an additional **Amazon S3** asset type was added to the report filtering options.
+ All Azure Purview Insight reports include the Amazon S3 scanning results, along with the rest of the results from your Azure data sources. When relevant, an additional **Amazon S3** asset type was added to the report filtering options.
For more information, see [Understand Insights in Azure Purview](concept-insights.md). ## Minimum permissions for your AWS policy
-The default procedure for [creating an AWS role for Purview](#create-a-new-aws-role-for-purview) to use when scanning your S3 buckets uses the **AmazonS3ReadOnlyAccess** policy.
+The default procedure for [creating an AWS role for Azure Purview](#create-a-new-aws-role-for-azure-purview) to use when scanning your S3 buckets uses the **AmazonS3ReadOnlyAccess** policy.
The **AmazonS3ReadOnlyAccess** policy includes the minimum permissions required for scanning your S3 buckets, and may grant broader permissions as well.
Make sure to define your resource with a wildcard. For example:
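The policy JSON itself isn't reproduced in this digest. As a rough sketch of what a minimal, wildcard-scoped read-only policy could look like, the following Python snippet builds one and attaches it to the scanning role with boto3 (the bucket and role names are placeholders, not values from the original article):

```python
import json
import boto3

# Minimal read-only S3 policy: the bare bucket ARN covers bucket-level
# List/Get calls, and the trailing "/*" wildcard covers every object in
# the bucket. "my-bucket" and "PurviewScanRole" are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*"],
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*",
            ],
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="PurviewScanRole",
    PolicyName="PurviewS3ReadOnly",
    PolicyDocument=json.dumps(policy),
)
```

Scoping `Resource` to the bucket ARN plus its `/*` wildcard keeps the role readable for scanning without granting account-wide access.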
## Troubleshooting
-Scanning Amazon S3 resources requires [creating a role in AWS IAM](#create-a-new-aws-role-for-purview) to allow the Purview scanner service running in a Microsoft account in AWS to read the data.
+Scanning Amazon S3 resources requires [creating a role in AWS IAM](#create-a-new-aws-role-for-azure-purview) to allow the Azure Purview scanner service running in a Microsoft account in AWS to read the data.
Configuration errors in the role can lead to connection failure. This section describes some examples of connection failures that may occur while setting up the scan, and the troubleshooting guidelines for each case. If all of the items described in the following sections are properly configured, and scanning S3 buckets still fails with errors, contact Microsoft support. > [!NOTE]
-> For policy access issues, make sure that neither your bucket policy, nor your SCP policy are blocking access to your S3 bucket from Purview.
+> For policy access issues, make sure that neither your bucket policy nor your SCP policy is blocking access to your S3 bucket from Azure Purview.
> >For more information, see [Confirm your bucket policy access](#confirm-your-bucket-policy-access) and [Confirm your SCP policy access](#confirm-your-scp-policy-access). >
Make sure that the AWS role has **KMS Decrypt** permissions. For more informatio
Make sure that the AWS role has the correct external ID: 1. In the AWS IAM area, select the **Role > Trust relationships** tab.
-1. Follow the steps in [Create a new AWS role for Purview](#create-a-new-aws-role-for-purview) again to verify your details.
+1. Follow the steps in [Create a new AWS role for Azure Purview](#create-a-new-aws-role-for-azure-purview) again to verify your details.
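If you prefer to check this programmatically, here's a hedged sketch using boto3 to print the external IDs configured in the role's trust policy, so you can compare them against the value Azure Purview expects (the role name is a placeholder, not one from the article):

```python
import boto3

# Print each trust-policy statement's principal and its sts:ExternalId
# condition. "PurviewScanRole" is a placeholder role name.
iam = boto3.client("iam")
trust = iam.get_role(RoleName="PurviewScanRole")["Role"]["AssumeRolePolicyDocument"]
for stmt in trust.get("Statement", []):
    condition = stmt.get("Condition", {}).get("StringEquals", {})
    print(stmt.get("Principal"), condition.get("sts:ExternalId"))
```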
### Error found with the role ARN
This is a general error that indicates an issue when using the Role ARN. For exa
- Make sure that the AWS role has the required permissions to read the selected S3 bucket. Required permissions include `AmazonS3ReadOnlyAccess` or the [minimum read permissions](#minimum-permissions-for-your-aws-policy), and `KMS Decrypt` for encrypted buckets. -- Make sure that the AWS role has the correct Microsoft account ID. In the AWS IAM area, select the **Role > Trust relationships** tab and then follow the steps in [Create a new AWS role for Purview](#create-a-new-aws-role-for-purview) again to verify your details.
+- Make sure that the AWS role has the correct Microsoft account ID. In the AWS IAM area, select the **Role > Trust relationships** tab and then follow the steps in [Create a new AWS role for Azure Purview](#create-a-new-aws-role-for-azure-purview) again to verify your details.
For more information, see [Cannot find the specified bucket](#cannot-find-the-specified-bucket),
For more information, see [Cannot find the specified bucket](#cannot-find-the-sp
Make sure that the S3 bucket URL is properly defined: 1. In AWS, navigate to your S3 bucket, and copy the bucket name.
-1. In Purview, edit the Amazon S3 data source, and update the bucket URL to include your copied bucket name, using the following syntax: `s3://<BucketName>`
+1. In Azure Purview, edit the Amazon S3 data source, and update the bucket URL to include your copied bucket name, using the following syntax: `s3://<BucketName>`
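If you script the data source update, a quick sanity check of the URL shape can catch copy/paste mistakes. This is an illustrative helper only, not part of the article:

```python
import re

def looks_like_purview_s3_url(url: str) -> bool:
    # Expects the s3://<BucketName> form the scan requires, plus the
    # basic S3 naming rules (lowercase, 3-63 chars, dots and hyphens).
    return re.fullmatch(r"s3://[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", url) is not None

assert looks_like_purview_s3_url("s3://my-bucket")
assert not looks_like_purview_s3_url("https://my-bucket.s3.amazonaws.com")
```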
## Next steps
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-blob-storage-source.md
For file types such as csv, tsv, psv, ssv, the schema is extracted when the foll
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
This section will enable you to register the Azure Blob storage account and set
It is important to register the data source in Azure Purview prior to setting up a scan for the data source.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Purview accounts** page and select your _Purview account_
+1. Go to the [Azure portal](https://portal.azure.com), navigate to the **Azure Purview accounts** page, and select your _Purview account_
- :::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-purview-acct.png" alt-text="Screenshot that shows the Purview account used to register the data source":::
+ :::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-purview-acct.png" alt-text="Screenshot that shows the Azure Purview account used to register the data source":::
-1. **Open Purview Studio** and navigate to the **Data Map --> Sources**
+1. **Open Azure Purview Studio** and navigate to the **Data Map --> Sources**
- :::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-open-purview-studio.png" alt-text="Screenshot that shows the link to open Purview Studio":::
+ :::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-open-purview-studio.png" alt-text="Screenshot that shows the link to open Azure Purview Studio":::
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-sources.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
The following options are supported:
#### Using a system or user assigned managed identity for scanning
-It is important to give your Purview account the permission to scan the Azure Blob data source. You can add access for the SAMI or UAMI at the Subscription, Resource Group, or Resource level, depending on what level scan permission is needed.
+It is important to give your Azure Purview account the permission to scan the Azure Blob data source. You can add access for the SAMI or UAMI at the Subscription, Resource Group, or Resource level, depending on what level scan permission is needed.
> [!NOTE] > If you have a firewall enabled for the storage account, you must use the **managed identity** authentication method when setting up a scan.
It is important to give your Purview account the permission to scan the Azure Bl
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-access-control.png" alt-text="Screenshot that shows the access control for the storage account":::
-1. Set the **Role** to **Storage Blob Data Reader** and enter your _Azure Purview account name_ or _[user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity)_ under **Select** input box. Then, select **Save** to give this role assignment to your Purview account.
+1. Set the **Role** to **Storage Blob Data Reader** and enter your _Azure Purview account name_ or _[user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity)_ under **Select** input box. Then, select **Save** to give this role assignment to your Azure Purview account.
- :::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-assign-permissions.png" alt-text="Screenshot that shows the details to assign permissions for the Purview account":::
+ :::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-assign-permissions.png" alt-text="Screenshot that shows the details to assign permissions for the Azure Purview account":::
1. Go into your Azure Blob storage account in [Azure portal](https://portal.azure.com) 1. Navigate to **Security + networking > Networking**
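Once the **Storage Blob Data Reader** assignment above has propagated (and any firewall exceptions are in place), a quick check with the Azure Storage SDK confirms the identity can read the account. This is a hedged sketch; the account URL is a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# With Storage Blob Data Reader granted, listing containers should
# succeed for the identity running this code.
service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
print([container.name for container in service.list_containers()])
```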
When authentication method selected is **Account Key**, you need to get your acc
1. Select **Create** to complete
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan #### Using Service Principal for scanning
It is important to give your service principal the permission to scan the Azure
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-access-control.png" alt-text="Screenshot that shows the access control for the storage account":::
-1. Set the **Role** to **Storage Blob Data Reader** and enter your _service principal_ under **Select** input box. Then, select **Save** to give this role assignment to your Purview account.
+1. Set the **Role** to **Storage Blob Data Reader** and enter your _service principal_ under **Select** input box. Then, select **Save** to give this role assignment to your Azure Purview account.
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-sp-permission.png" alt-text="Screenshot that shows the details to provide storage account permissions to the service principal"::: ### Creating the scan
-1. Open your **Purview account** and select the **Open Purview Studio**
+1. Open your **Azure Purview account** and select the **Open Azure Purview Studio**
1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy 1. Select the **New Scan** icon under the **Azure Blob data source** registered earlier
It is important to give your service principal the permission to scan the Azure
#### If using a system or user assigned managed identity
-Provide a **Name** for the scan, select the Purview accounts SAMI or UAMI under **Credential**, choose the appropriate collection for the scan, and select **Test connection**. On a successful connection, select **Continue**
+Provide a **Name** for the scan, select the Azure Purview accounts SAMI or UAMI under **Credential**, choose the appropriate collection for the scan, and select **Test connection**. On a successful connection, select **Continue**
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-managed-identity.png" alt-text="Screenshot that shows the managed identity option to run the scan":::
Follow this configuration guide to [enable access policies on an Azure Storage a
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-cosmos-database.md
This article outlines the process to register an Azure Cosmos database (SQL API)
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
This section will enable you to register the Azure Cosmos database (SQL API) and
It is important to register the data source in Azure Purview prior to setting up a scan for the data source.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Purview accounts** page and select your _Purview account_
+1. Go to the [Azure portal](https://portal.azure.com), navigate to the **Azure Purview accounts** page, and select your _Purview account_
- :::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-purview-acct.png" alt-text="Screenshot that shows the Purview account used to register the data source":::
+ :::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-purview-acct.png" alt-text="Screenshot that shows the Azure Purview account used to register the data source":::
-1. **Open Purview Studio** and navigate to the **Data Map --> Collections**
+1. **Open Azure Purview Studio** and navigate to the **Data Map --> Collections**
:::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-open-purview-studio.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
You need to get your access key and store in the key vault:
:::image type="content" source="media/register-scan-azure-cosmos-database/register-cosmos-db-key-vault-options.png" alt-text="Screenshot that shows the key vault option to enter the secret values":::
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan. ### Creating the scan
-1. Open your **Purview account** and select the **Open Purview Studio**
+1. Open your **Azure Purview account** and select the **Open Azure Purview Studio**
1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy 1. Select the **New Scan** icon under the **Azure Cosmos database** registered earlier
Scans can be managed or run again on completion.
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-data-explorer.md
Title: 'Connect to and manage Azure Data Explorer'
-description: This guide describes how to connect to Azure Data Explorer in Azure Purview, and use Purview's features to scan and manage your Azure Data Explorer source.
+description: This guide describes how to connect to Azure Data Explorer in Azure Purview, and use Azure Purview's features to scan and manage your Azure Data Explorer source.
This article outlines how to register Azure Data Explorer, and how to authentica
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register Azure Data Explorer in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Azure Data Explorer in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
It is required to get the Service Principal's application ID and secret:
1. Select **Settings > Secrets** 1. Select **+ Generate/Import** and enter the **Name** of your choice and **Value** as the **Client secret** from your Service Principal 1. Select **Create** to complete
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the Service Principal to set up your scan #### Granting the Service Principal access to your Azure data explorer instance
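The detailed steps for this grant aren't reproduced in this digest. As a hedged sketch, one way to give an AAD application viewer rights on a database is a Kusto control command, shown here via the Python SDK (the cluster, database, application, and tenant identifiers are all placeholders):

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Connect as a privileged principal and add the scanning service
# principal as a database viewer. All identifiers are placeholders.
kcsb = KustoConnectionStringBuilder.with_aad_application_key_authentication(
    "https://mycluster.westus.kusto.windows.net",
    "<admin-app-id>",
    "<admin-app-secret>",
    "<tenant-id>",
)
client = KustoClient(kcsb)
client.execute_mgmt(
    "MyDatabase",
    ".add database MyDatabase viewers ('aadapp=<scan-app-id>;<tenant-id>')",
)
```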
To register using either of these managed identities, follow these steps:
To register a new Azure Data Explorer (Kusto) account in your data catalog, follow these steps:
-1. Navigate to your Purview account
+1. Navigate to your Azure Purview account
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On **Register sources**, select **Azure Data Explorer**
Follow the steps below to scan Azure Data Explorer to automatically identify ass
To create and run a new scan, follow these steps:
-1. Select the **Data Map** tab on the left pane in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select the Azure Data Explorer source that you registered.
To create and run a new scan, follow these steps:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Files Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-files-storage-source.md
Title: Connect to and manage Azure Files
-description: This guide describes how to connect to Azure Files in Azure Purview, and use Purview's features to scan and manage your Azure Files source.
+description: This guide describes how to connect to Azure Files in Azure Purview, and use Azure Purview's features to scan and manage your Azure Files source.
For file types such as csv, tsv, psv, ssv, the schema is extracted when the foll
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register Azure Files in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Azure Files in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
When authentication method selected is **Account Key**, you need to get your acc
1. Select **Settings > Secrets** 1. Select **+ Generate/Import** and enter the **Name** and **Value** as the *key* from your storage account 1. Select **Create** to complete
-1. If your key vault isn't connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault isn't connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan ### Steps to register To register a new Azure Files account in your data catalog, follow these steps:
-1. Navigate to your Purview Data Studio.
+1. Navigate to your Azure Purview Data Studio.
1. Select **Data Map** on the left navigation. 1. Select **Register** 1. On **Register sources**, select **Azure Files**
Follow the steps below to scan Azure Files to automatically identify assets and
To create and run a new scan, follow these steps:
-1. Select the **Data Map** tab on the left pane in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select the Azure Files source that you registered.
To create and run a new scan, follow these steps:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-multiple-sources.md
Title: Connect to and manage multiple Azure sources
-description: This guide describes how to connect to multiple Azure sources in Azure Purview at once, and use Purview's features to scan and manage your sources.
+description: This guide describes how to connect to multiple Azure sources in Azure Purview at once, and use Azure Purview's features to scan and manage your sources.
This article outlines how to register multiple Azure sources and how to authenti
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register multiple Azure sources in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register multiple Azure sources in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Prerequisites for registration
Follow the steps below to scan multiple Azure sources to automatically identify
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in the Purview Studio.
+1. Select the **Data Map** tab on the left pane in the Azure Purview Studio.
1. Select the data source that you registered. 1. Select **View details** > **+ New scan**, or use the **Scan** quick-action icon on the source tile. 1. For **Name**, fill in the name.
To manage a scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Mysql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-mysql-database.md
Title: 'Connect to and manage Azure Database for MySQL'
-description: This guide describes how to connect to Azure Database for MySQL in Azure Purview, and use Purview's features to scan and manage your Azure Database for MySQL source.
+description: This guide describes how to connect to Azure Database for MySQL in Azure Purview, and use Azure Purview's features to scan and manage your Azure Database for MySQL source.
This article outlines how to register a database in Azure Database for MySQL, an
|||||||| | [Yes](#register) | [Yes](#scan)| [Yes*](#scan) | [Yes](#scan) | [Yes](#scan) | No | No** |
-\* Purview relies on UPDATE_TIME metadata from Azure Database for MySQL for incremental scans. In some cases, this field might not persist in the database and a full scan is performed. For more information, see [The INFORMATION_SCHEMA TABLES Table](https://dev.mysql.com/doc/refman/5.7/en/information-schema-tables-table.html) for MySQL.
+\* Azure Purview relies on UPDATE_TIME metadata from Azure Database for MySQL for incremental scans. In some cases, this field might not persist in the database and a full scan is performed. For more information, see [The INFORMATION_SCHEMA TABLES Table](https://dev.mysql.com/doc/refman/5.7/en/information-schema-tables-table.html) for MySQL.
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md) > [!Important]
-> Purview only supports single server deployment option for Azure Database for MySQL.
+> Azure Purview supports only the single server deployment option for Azure Database for MySQL.
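As the footnote above notes, incremental scans hinge on `UPDATE_TIME` being populated. You can check this yourself before relying on incremental scans; here's a hedged sketch with mysql-connector-python (all connection values are placeholders):

```python
import mysql.connector

# If UPDATE_TIME is NULL for a table, Azure Purview falls back to a
# full scan of it instead of an incremental one.
conn = mysql.connector.connect(
    host="my-server.mysql.database.azure.com",
    user="scan_user@my-server",  # single server username format
    password="<password>",
    database="mydb",
)
cursor = conn.cursor()
cursor.execute(
    "SELECT TABLE_NAME, UPDATE_TIME FROM INFORMATION_SCHEMA.TABLES "
    "WHERE TABLE_SCHEMA = %s",
    ("mydb",),
)
for table_name, update_time in cursor.fetchall():
    print(table_name, update_time)
```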
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register an Azure Database for MySQL in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register an Azure Database for MySQL in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
Follow the instructions in [CREATE DATABASES AND USERS](../mysql/howto-create-us
1. Select **Settings > Secrets** 1. Select **+ Generate/Import** and enter the **Name** and **Value** as the *password* from your Azure SQL Database 1. Select **Create** to complete
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) of type SQL authentication using the **username** and **password** to set up your scan. ### Steps to register To register a new Azure Database for MySQL in your data catalog, do the following:
-1. Navigate to your Purview account.
+1. Navigate to your Azure Purview account.
1. Select **Data Map** on the left navigation.
Follow the steps below to scan Azure Database for MySQL to automatically identif
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select the Azure Database for MySQL source that you registered.
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-postgresql.md
Title: 'Connect to and manage an Azure Database for PostgreSQL'
-description: This guide describes how to connect to an Azure Database for PostgreSQL single server in Azure Purview, and use Purview's features to scan and manage your Azure Database for PostgreSQL source.
+description: This guide describes how to connect to an Azure Database for PostgreSQL single server in Azure Purview, and use Azure Purview's features to scan and manage your Azure Database for PostgreSQL source.
This article outlines how to register an Azure Database for PostgreSQL deployed
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md) > [!Important]
-> Purview only supports single server deployment option for Azure Database for PostgreSQL.
+> Azure Purview supports only the single server deployment option for Azure Database for PostgreSQL.
> Versions 8.x to 12.x ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register an Azure Database for PostgreSQL in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register an Azure Database for PostgreSQL in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
Connecting to an Azure Database for PostgreSQL database requires the fully quali
1. Select **Settings > Secrets** 1. Select **+ Generate/Import** and enter the **Name** and **Value** as the *password* from your Azure PostgreSQL Database 1. Select **Create** to complete
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) of type SQL authentication using the **username** and **password** to set up your scan ### Steps to register To register a new Azure Database for PostgreSQL in your data catalog, do the following:
-1. Navigate to your Purview account.
+1. Navigate to your Azure Purview account.
1. Select **Data Map** on the left navigation.
Follow the steps below to scan an Azure Database for PostgreSQL database to auto
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select the Azure Database for PostgreSQL source that you registered.
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Sql Database Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-sql-database-managed-instance.md
Title: 'Connect to and manage Azure SQL Database Managed Instance'
-description: This guide describes how to connect to Azure SQL Database Managed Instance in Azure Purview, and use Purview's features to scan and manage your Azure SQL Database Managed Instance source.
+description: This guide describes how to connect to Azure SQL Database Managed Instance in Azure Purview, and use Azure Purview's features to scan and manage your Azure SQL Database Managed Instance source.
This article outlines how to register an Azure SQL Database Managed Instance, a
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* [Configure public endpoint in Azure SQL Managed Instance](../azure-sql/managed-instance/public-endpoint-configure.md)
This article outlines how to register and Azure SQL Database Managed Instance, a
## Register
-This section describes how to register an Azure SQL Database Managed Instance in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register an Azure SQL Database Managed Instance in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
-If you need to create new authentication, you need to [authorize database access to SQL Database Managed Instance](../azure-sql/database/logins-create-manage.md). There are three authentication methods that Purview supports today:
+If you need to create new authentication, you need to [authorize database access to SQL Database Managed Instance](../azure-sql/database/logins-create-manage.md). There are three authentication methods that Azure Purview supports today:
- [System or user assigned managed identity](#system-or-user-assigned-managed-identity-to-register) - [Service Principal](#service-principal-to-register)
If you need to create new authentication, you need to [authorize database access
#### System or user assigned managed identity to register
-You can use either your Purview system-assigned managed identity (SAMI), or a [user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity) (UAMI) to authenticate. Both options allow you to assign authentication directly to Purview, like you would for any other user, group, or service principal. The Purview system-assigned managed identity is created automatically when the account is created and has the same name as your Azure Purview account. A user-assigned managed identity is a resource that can be created independently. To create one you can follow our [user-assigned managed identity guide](manage-credentials.md#create-a-user-assigned-managed-identity).
+You can use either your Azure Purview system-assigned managed identity (SAMI), or a [user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity) (UAMI) to authenticate. Both options allow you to assign authentication directly to Azure Purview, like you would for any other user, group, or service principal. The Azure Purview system-assigned managed identity is created automatically when the account is created and has the same name as your Azure Purview account. A user-assigned managed identity is a resource that can be created independently. To create one you can follow our [user-assigned managed identity guide](manage-credentials.md#create-a-user-assigned-managed-identity).
You can find your managed identity Object ID in the Azure portal by following these steps:
-For Purview account's system-assigned managed identity:
-1. Open the Azure portal, and navigate to your Purview account.
+For Azure Purview account's system-assigned managed identity:
+1. Open the Azure portal, and navigate to your Azure Purview account.
1. Select the **Properties** tab on the left side menu. 1. Select the **Managed identity object ID** value and copy it. For user-assigned managed identity (preview):
-1. Open the Azure portal, and navigate to your Purview account.
+1. Open the Azure portal, and navigate to your Azure Purview account.
1. Select the Managed identities tab on the left side menu 1. Select the user assigned managed identities, select the intended identity to view the details. 1. The object (principal) ID is displayed in the overview essential section.
Either managed identity will need permission to get metadata for the database, s
#### Service Principal to register
-There are several steps to allow Purview to use service principal to scan your Azure SQL Database Managed Instance.
+There are several steps to allow Azure Purview to use a service principal to scan your Azure SQL Database Managed Instance.
#### Create or use an existing service principal
The service principal must have permission to get metadata for the database, sch
- Create an Azure AD user in Azure SQL Database Managed Instance by following the prerequisites and tutorial on [Create contained users mapped to Azure AD identities](../azure-sql/database/authentication-aad-configure.md?tabs=azure-powershell#create-contained-users-mapped-to-azure-ad-identities) - Assign `db_datareader` permission to the identity.
-#### Add service principal to key vault and Purview's credential
+#### Add service principal to key vault and Azure Purview's credential
It is required to get the service principal's application ID and secret:
It is required to get the service principal's application ID and secret:
1. Select **Settings > Secrets** 1. Select **+ Generate/Import** and enter the **Name** of your choice and **Value** as the **Client secret** from your Service Principal 1. Select **Create** to complete
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the Service Principal to set up your scan. #### SQL authentication to register > [!Note]
-> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. It takes about **15 minutes** after granting permission, the Purview account should have the appropriate permissions to be able to scan the resource(s).
+> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. After granting permission, it takes about **15 minutes** before the Azure Purview account has the appropriate permissions to scan the resource(s).
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a login for Azure SQL Database Managed Instance if you don't have this login available. You will need a **username** and **password** for the next steps.
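If you'd rather script the login creation, here's a minimal sketch with pyodbc, assuming you connect over the managed instance's public endpoint as the server-level principal (the server name, port, and credentials are placeholders):

```python
import pyodbc

# Connect to master as the server-level principal and create the SQL
# login the Azure Purview scan credential will use. Port 3342 is the
# managed instance public endpoint; all values are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=my-mi.public.abc123.database.windows.net,3342;"
    "DATABASE=master;UID=sqladmin;PWD=<admin-password>;"
    "Encrypt=yes"
)
conn.autocommit = True  # avoid wrapping the DDL in an implicit transaction
conn.cursor().execute(
    "CREATE LOGIN purview_scan WITH PASSWORD = '<strong-password>'"
)
```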
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
1. Select **Settings > Secrets** 1. Select **+ Generate/Import** and enter the **Name** and **Value** as the *password* from your Azure SQL Database Managed Instance 1. Select **Create** to complete
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the **username** and **password** to set up your scan. ### Steps to register
-1. Navigate to your [Purview Studio](https://web.purview.azure.com/resource/)
+1. Navigate to your [Azure Purview Studio](https://web.purview.azure.com/resource/)
1. Select **Data Map** on the left navigation.
Follow the steps below to scan an Azure SQL Database Managed Instance to automat
To create and run a new scan, complete the following steps:
-1. Select the **Data Map** tab on the left pane in the Purview Studio.
+1. Select the **Data Map** tab on the left pane in the Azure Purview Studio.
1. Select the Azure SQL Database Managed Instance source that you registered.
To create and run a new scan, complete the following steps:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-sql-database.md
This article outlines the process to register an Azure SQL data source in Azure
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
This section will enable you to register the Azure SQL DB data source and set up
It's important to register the data source in Azure Purview before setting up a scan.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Purview accounts** page and select your _Purview account_
+1. Go to the [Azure portal](https://portal.azure.com), navigate to the **Azure Purview accounts** page, and select your _Purview account_
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-purview-acct.png" alt-text="Screenshot that shows the Purview account used to register the data source":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-purview-acct.png" alt-text="Screenshot that shows the Azure Purview account used to register the data source":::
-1. **Open Purview Studio** and navigate to the **Data Map**
+1. **Open Azure Purview Studio** and navigate to the **Data Map**
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-open-purview-studio.png" alt-text="Screenshot that navigates to the Sources link in the Data Map":::
The following options are supported:
#### Using SQL Authentication for scanning > [!Note]
-> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. It takes about **15 minutes** after granting permission, the Purview account should have the appropriate permissions to be able to scan the resource(s).
+> Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. After granting permission, it takes about **15 minutes** before the Azure Purview account has the appropriate permissions to scan the resource(s).
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a login for Azure SQL Database. You'll need a **username** and **password** for the next steps.
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
1. Select **Create** to complete
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan
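The same secret can be stored with the Azure SDK for Python instead of the portal. A minimal sketch, assuming the vault and secret names below (both placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Store the SQL login's password as the secret an Azure Purview
# credential will reference. Vault and secret names are placeholders.
client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)
client.set_secret("purview-sql-scan-password", "<password>")
```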
You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-l
The managed identity needs permission to get metadata for the database, schemas, and tables. It must also be authorized to query the tables to sample for classification. - If you haven't already, [configure Azure AD authentication with Azure SQL](../azure-sql/database/authentication-aad-configure.md)-- Create Azure AD user in Azure SQL Database with the exact Purview's managed identity by following tutorial on [create the user in Azure SQL Database](../azure-sql/database/authentication-aad-service-principal-tutorial.md#create-the-service-principal-user-in-azure-sql-database). Assign proper permission (for example: `db_datareader`) to the identity. Example SQL syntax to create user and grant permission:
+- Create an Azure AD user in Azure SQL Database that matches Azure Purview's managed identity by following the tutorial on [create the user in Azure SQL Database](../azure-sql/database/authentication-aad-service-principal-tutorial.md#create-the-service-principal-user-in-azure-sql-database). Assign the proper permission (for example: `db_datareader`) to the identity. Example SQL syntax to create the user and grant permission:
```sql CREATE USER [Username] FROM EXTERNAL PROVIDER
The managed identity needs permission to get metadata for the database, schemas,
``` > [!Note]
- > The `Username` is your Purview's managed identity name. You can read more about [fixed-database roles and their capabilities](/sql/relational-databases/security/authentication-access/database-level-roles#fixed-database-roles).
+ > The `Username` is your Azure Purview's managed identity name. You can read more about [fixed-database roles and their capabilities](/sql/relational-databases/security/authentication-access/database-level-roles#fixed-database-roles).
##### Configure Portal Authentication
-It is important to give your Purview account's system-managed identity or [user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity) the permission to scan the Azure SQL DB. You can add the SAMI or UAMI at the Subscription, Resource Group, or Resource level, depending on the breadth of the scan.
+It is important to give your Azure Purview account's system-managed identity or [user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity) the permission to scan the Azure SQL DB. You can add the SAMI or UAMI at the Subscription, Resource Group, or Resource level, depending on the breadth of the scan.
> [!Note] > You need to be an owner of the subscription to be able to add a managed identity on an Azure resource.
It is important to give your Purview account's system-managed identity or [user-
:::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-sql-ds.png" alt-text="Screenshot that shows the Azure SQL database":::
-1. Set the **Role** to **Reader** and enter your _Azure Purview account name_ or _[user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity)_ under **Select** input box. Then, select **Save** to give this role assignment to your Purview account.
+1. Set the **Role** to **Reader** and enter your _Azure Purview account name_ or _[user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity)_ under **Select** input box. Then, select **Save** to give this role assignment to your Azure Purview account.
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-access-managed-identity.png" alt-text="Screenshot that shows the details to assign permissions for the Purview account":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-access-managed-identity.png" alt-text="Screenshot that shows the details to assign permissions for the Azure Purview account":::
#### Using Service Principal for scanning
The service principal needs permission to get metadata for the database, schemas
:::image type="content" source="media/register-scan-azure-sql-database/select-create.png" alt-text="Screenshot that shows the Key Vault Create a secret menu, with the Create button highlighted.":::
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Then, [create a new credential](manage-credentials.md#create-a-new-credential).
A self-hosted integration runtime (SHIR) can be installed on a machine to connec
### Creating the scan
-1. Open your **Purview account** and select the **Open Purview Studio**
+1. Open your **Azure Purview account** and select the **Open Azure Purview Studio**
1. Navigate to the **Data map** --> **Sources** to view the collection hierarchy 1. Select the **New Scan** icon under the **Azure SQL DB** registered earlier
Scans can be managed or run again on completion
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-azure-synapse-analytics.md
Title: 'Connect to and manage dedicated SQL pools (formerly SQL DW)'
-description: This guide describes how to connect to dedicated SQL pools (formerly SQL DW) in Azure Purview, and use Purview's features to scan and manage your dedicated SQL pools source.
+description: This guide describes how to connect to dedicated SQL pools (formerly SQL DW) in Azure Purview, and use Azure Purview's features to scan and manage your dedicated SQL pools source.
This article outlines how to register dedicated SQL pools (formerly SQL DW), and
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register dedicated SQL pools in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register dedicated SQL pools in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
There are three ways to set up authentication:
- [SQL authentication](#sql-authentication-to-register) > [!Note]
- > Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. It takes about 15 minutes after granting permission, the Purview account should have the appropriate permissions to be able to scan the resource(s).
+ > Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. After granting permission, it takes about 15 minutes before the Azure Purview account has the appropriate permissions to scan the resource(s).
#### System or user assigned managed identity to register
It is required to get the Service Principal's application ID and secret:
1. Select **Settings > Secrets** 1. Select **+ Generate/Import** and enter the **Name** of your choice and **Value** as the **Client secret** from your Service Principal 1. Select **Create** to complete
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the Service Principal to set up your scan.
##### Granting the Service Principal access
GO
```
> [!Note]
-> Purview will need the **Application (client) ID** and the **client secret** in order to scan.
+> Azure Purview will need the **Application (client) ID** and the **client secret** in order to scan.
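As a hedged illustration of that grant, the T-SQL run against the dedicated SQL pool typically looks like the following, assuming a hypothetical app registration display name `purview-sp` (your names will differ):

```sql
-- Run in the dedicated SQL pool (formerly SQL DW) to be scanned.
-- Creates a database user for the service principal's app registration
-- (hypothetical display name: purview-sp).
CREATE USER [purview-sp] FROM EXTERNAL PROVIDER;
-- Read access is sufficient for scanning.
EXEC sp_addrolemember 'db_datareader', 'purview-sp';
GO
```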
#### SQL authentication to register
When the authentication method selected is **SQL Authentication**, you need to get y
1. Select **Settings > Secrets**
1. Select **+ Generate/Import** and enter the **Name** and **Value** as the *password* for your SQL login
1. Select **Create** to complete
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan.
### Steps to register
-To register a new SQL dedicated pool in Purview, complete the following steps:
+To register a new SQL dedicated pool in Azure Purview, complete the following steps:
-1. Navigate to your Purview account.
+1. Navigate to your Azure Purview account.
1. Select **Data Map** on the left navigation.
1. Select **Register**
1. On **Register sources**, select **Azure Dedicated SQL Pool (formerly SQL DW)**.
Follow the steps below to scan dedicated SQL pools to automatically identify ass
To create and run a new scan, complete the following steps:
-1. Select the **Data Map** tab on the left pane in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select the SQL dedicated pool source that you registered.
To create and run a new scan, complete the following steps:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-cassandra-source.md
Title: Connect to and manage Cassandra
-description: This guide describes how to connect to Cassandra in Azure Purview, and use Purview's features to scan and manage your Cassandra source.
+description: This guide describes how to connect to Cassandra in Azure Purview, and use Azure Purview's features to scan and manage your Cassandra source.
This article outlines how to register Cassandra, and how to authenticate and int
The supported Cassandra server versions are 3.*x* or 4.*x*.
-When scanning Cassandra source, Purview supports:
+When scanning Cassandra source, Azure Purview supports:
- Extracting technical metadata including:
When scanning Cassandra source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When scanning Cassandra source, Purview supports:
## Register
-This section describes how to register Cassandra in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Cassandra in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Steps to register
To register a new Cassandra server in your data catalog:
-1. Go to your Purview account.
+1. Go to your Azure Purview account.
1. Select **Data Map** on the left pane.
1. Select **Register**.
1. On the **Register sources** screen, select **Cassandra**, and then select **Continue**:
To create and run a new scan:
* In the **User name** box, provide the name of the user you're making the connection for.
* In the key vault's secret, save the password of the Cassandra user you're making the connection for.
- For more information, see [Credentials for source authentication in Purview](manage-credentials.md).
+ For more information, see [Credentials for source authentication in Azure Purview](manage-credentials.md).
1. **Keyspaces**: Specify a list of Cassandra keyspaces to import. Multiple keyspaces must be separated with semicolons. For example, keyspace1; keyspace2. When the list is empty, all available keyspaces are imported.
To create and run a new scan:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-db2.md
Title: Connect to and manage DB2
-description: This guide describes how to connect to DB2 in Azure Purview, and use Purview's features to scan and manage your DB2 source.
+description: This guide describes how to connect to DB2 in Azure Purview, and use Azure Purview's features to scan and manage your DB2 source.
This article outlines how to register DB2, and how to authenticate and interact
The supported IBM DB2 versions are DB2 for LUW 9.7 to 11.x. DB2 for z/OS (mainframe) and iSeries (AS/400) are not currently supported.
-When scanning IBM DB2 source, Purview supports:
+When scanning IBM DB2 source, Azure Purview supports:
- Extracting technical metadata including:
When scanning IBM DB2 source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimum supported Self-hosted Integration Runtime version is 5.12.7984.1.
When scanning IBM DB2 source, Purview supports:
## Register
-This section describes how to register DB2 in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register DB2 in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Steps to register
To register a new DB2 source in your data catalog, do the following:
-1. Navigate to your Purview account in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Azure Purview account in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation.
1. Select **Register**
1. On Register sources, select **DB2**. Select **Continue**.
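Before running a scan, it can help to confirm that the account stored in the credential can connect and read the catalog views the scanner relies on. A minimal DB2 SQL sketch, assuming a hypothetical scan user `PURVIEW_SCAN`:

```sql
-- Run as an administrator: let the scan user connect
-- (hypothetical user name: PURVIEW_SCAN).
GRANT CONNECT ON DATABASE TO USER PURVIEW_SCAN;

-- Run as the scan user: the system catalog should be readable.
SELECT TABSCHEMA, TABNAME
FROM SYSCAT.TABLES
FETCH FIRST 5 ROWS ONLY;
```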
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Erwin Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-erwin-source.md
Title: Connect to and manage erwin Mart servers
-description: This guide describes how to connect to erwin Mart servers in Azure Purview, and use Purview's features to scan and manage your erwin Mart server source.
+description: This guide describes how to connect to erwin Mart servers in Azure Purview, and use Azure Purview's features to scan and manage your erwin Mart server source.
This article outlines how to register erwin Mart servers, and how to authenticat
The supported erwin Mart versions are 9.x to 2021.
-When scanning erwin Mart source, Purview supports:
+When scanning erwin Mart source, Azure Purview supports:
- Extracting technical metadata including:
When scanning erwin Mart source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When scanning erwin Mart source, Purview supports:
## Register
-This section describes how to register erwin Mart servers in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register erwin Mart servers in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
The only supported authentication for an erwin Mart source is **Server Authentication** in the form of username and password.
### Steps to register
-1. Navigate to your Purview account in the [Purview Studio](https://web.purview.azure.com/).
+1. Navigate to your Azure Purview account in the [Azure Purview Studio](https://web.purview.azure.com/).
1. Select **Data Map** on the left navigation.
1. Select **Register**
1. On Register sources, select **erwin**. Select **Continue**.
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-google-bigquery-source.md
Title: Connect to and manage Google BigQuery projects
-description: This guide describes how to connect to Google BigQuery projects in Azure Purview, and use Purview's features to scan and manage your Google BigQuery source.
+description: This guide describes how to connect to Google BigQuery projects in Azure Purview, and use Azure Purview's features to scan and manage your Google BigQuery source.
This article outlines how to register Google BigQuery projects, and how to authe
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
|---|---|---|---|---|---|---|
| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](how-to-lineage-google-bigquery.md)|
-When scanning Google BigQuery source, Purview supports:
+When scanning Google BigQuery source, Azure Purview supports:
- Extracting technical metadata including:
When scanning Google BigQuery source, Purview supports:
- Fetching static lineage on asset relationships among tables and views.
>[!NOTE]
-> Currently, Purview only supports scanning Google BigQuery datasets in US multi-regional location. If the specified dataset is in other location e.g. us-east1 or EU, you will observe scan completes but no assets shown up in Purview.
+> Currently, Azure Purview only supports scanning Google BigQuery datasets in the US multi-regional location. If the specified dataset is in another location, for example us-east1 or EU, the scan will complete but no assets will show up in Azure Purview.
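Before setting up the scan, you can confirm where your datasets live by querying BigQuery's `INFORMATION_SCHEMA` from the console. A hedged standard SQL sketch; per the note above, only datasets whose location is the `US` multi-region will surface assets:

```sql
-- Lists datasets in the US multi-region together with their locations.
-- Datasets in other locations (e.g. us-east1, EU) scan without producing assets.
SELECT schema_name, location
FROM `region-us`.INFORMATION_SCHEMA.SCHEMATA;
```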
## Prerequisites
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When scanning Google BigQuery source, Purview supports:
## Register
-This section describes how to register a Google BigQuery project in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register a Google BigQuery project in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Steps to register
-1. Navigate to your Purview account.
+1. Navigate to your Azure Purview account.
1. Select **Data Map** on the left navigation.
1. Select **Register**.
1. On Register sources, select **Google BigQuery**. Select **Continue**.
Follow the steps below to scan a Google BigQuery project to automatically identi
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-hive-metastore-source.md
This article outlines how to register Hive Metastore databases, and how to authe
The supported Hive versions are 2.x to 3.x. The supported platforms are Apache Hadoop, Cloudera, Hortonworks, and Azure Databricks (versions 8.0 and later).
-When scanning Hive metastore source, Purview supports:
+When scanning Hive metastore source, Azure Purview supports:
- Extracting technical metadata including:
Use the following steps to scan Hive Metastore databases to automatically identi
> [!NOTE]
> When you copy the URL from *hive-site.xml*, remove `amp;` from the string or the scan will fail. Then append the path to your SSL certificate to the URL. This will be the path to the SSL certificate's location on your machine. [Download the SSL certificate](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem).
>
- > When you enter local file system paths in the Purview Studio scan configuration, remember to change the Windows path separator character from a backslash (`\`) to a forward slash (`/`). For example, if your MariaDB JAR file is *C:\mariadb-jdbc.jar*, change it to *C:/mariadb-jdbc.jar*. Make the same change to the Metastore JDBC URL `sslCA` parameter. For example, if it's placed at local file system path *D:\Drivers\SSLCert\BaltimoreCyberTrustRoot.crt.pem*, change it to *D:/Drivers/SSLCert/BaltimoreCyberTrustRoot.crt.pem*.
+ > When you enter local file system paths in the Azure Purview Studio scan configuration, remember to change the Windows path separator character from a backslash (`\`) to a forward slash (`/`). For example, if your MariaDB JAR file is *C:\mariadb-jdbc.jar*, change it to *C:/mariadb-jdbc.jar*. Make the same change to the Metastore JDBC URL `sslCA` parameter. For example, if it's placed at local file system path *D:\Drivers\SSLCert\BaltimoreCyberTrustRoot.crt.pem*, change it to *D:/Drivers/SSLCert/BaltimoreCyberTrustRoot.crt.pem*.
The **Metastore JDBC URL** value will look like this example:
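An illustrative value, assuming a hypothetical MariaDB-hosted metastore (`yourserver.mariadb.database.azure.com` is a placeholder) and the forward-slash `sslCA` path from the note above:

```
jdbc:mariadb://yourserver.mariadb.database.azure.com:3306/metastore?useSSL=true&sslCA=D:/Drivers/SSLCert/BaltimoreCyberTrustRoot.crt.pem
```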
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-looker-source.md
Title: Connect to and manage Looker
-description: This guide describes how to connect to Looker in Azure Purview, and use Purview's features to scan and manage your Looker source.
+description: This guide describes how to connect to Looker in Azure Purview, and use Azure Purview's features to scan and manage your Looker source.
This article outlines how to register Looker, and how to authenticate and intera
The supported Looker server version is 7.2.
-When scanning Looker source, Purview supports:
+When scanning Looker source, Azure Purview supports:
- Extracting technical metadata including:
When scanning Looker source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When scanning Looker source, Purview supports:
## Register
-This section describes how to register Looker in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Looker in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
An API3 key is required to connect to the Looker server. The API3 key consists i
To register a new Looker server in your data catalog, do the following:
-1. Navigate to your Purview account.
+1. Navigate to your Azure Purview account.
1. Select **Data Map** on the left navigation.
1. Select **Register**.
1. On Register sources, select **Looker**. Select **Continue**.
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-mysql.md
Title: Connect to and manage MySQL
-description: This guide describes how to connect to MySQL in Azure Purview, and use Purview's features to scan and manage your MySQL source.
+description: This guide describes how to connect to MySQL in Azure Purview, and use Azure Purview's features to scan and manage your MySQL source.
This article outlines how to register MySQL, and how to authenticate and interac
The supported MySQL server versions are 5.7 to 8.x.
-When scanning MySQL source, Purview supports:
+When scanning MySQL source, Azure Purview supports:
- Extracting technical metadata including:
When scanning MySQL source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimum supported Self-hosted Integration Runtime version is 5.11.7953.1.
When scanning MySQL source, Purview supports:
## Register
-This section describes how to register MySQL in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register MySQL in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Steps to register
To register a new MySQL source in your data catalog, do the following:
-1. Navigate to your Purview account in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Azure Purview account in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation.
1. Select **Register**
1. On Register sources, select **MySQL**. Select **Continue**.
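The scan only reads metadata, so a dedicated read-only account is usually enough. A hedged MySQL sketch with hypothetical names (replace the user name and password with your own):

```sql
-- Hypothetical read-only account for metadata scans.
CREATE USER 'purview_scan'@'%' IDENTIFIED BY 'Str0ng-P@ssw0rd';
-- SELECT and SHOW VIEW let the scanner read table and view definitions.
GRANT SELECT, SHOW VIEW ON *.* TO 'purview_scan'@'%';
```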
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-on-premises-sql-server.md
Title: Connect to and manage on-premises SQL server instances
-description: This guide describes how to connect to on-premises SQL server instances in Azure Purview, and use Purview's features to scan and manage your on-premises SQL server source.
+description: This guide describes how to connect to on-premises SQL server instances in Azure Purview, and use Azure Purview's features to scan and manage your on-premises SQL server source.
This article outlines how to register on-premises SQL server instances, and how
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).

## Register
-This section describes how to register an on-premises SQL server instance in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register an on-premises SQL server instance in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
There is only one way to set up authentication for SQL Server on-premises:
#### SQL Authentication to register
-The SQL account must have access to the **master** database. This is because the `sys.databases` is in the master database. The Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
+The SQL account must have access to the **master** database, because `sys.databases` is in the master database. The Azure Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
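You can verify the account has the access the scanner needs by running the same kind of enumeration yourself:

```sql
-- Run as the scan account: this mirrors what the scanner does
-- to discover the databases on the server.
SELECT name FROM master.sys.databases;
```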
##### Creating a new login and user
If you would like to create a new login and user to be able to scan your SQL ser
:::image type="content" source="media/register-scan-on-premises-sql-server/change-password.png" alt-text="change password.":::
-##### Storing your SQL login password in a key vault and creating a credential in Purview
+##### Storing your SQL login password in a key vault and creating a credential in Azure Purview
1. Navigate to your key vault in the Azure portal
1. Select **Settings > Secrets**
1. Select **+ Generate/Import** and enter the **Name** and **Value** as the *password* from your SQL server login
1. Select **Create** to complete
-1. If your key vault is not connected to Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
+1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the **username** and **password** to set up your scan
### Steps to register
-1. Navigate to your Purview account
+1. Navigate to your Azure Purview account
1. Under Sources and scanning in the left navigation, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it is not set up, follow the steps mentioned [here](manage-integration-runtimes.md) to create a self-hosted integration runtime for scanning on an on-premises or Azure VM that has access to your on-premises network.
Follow the steps below to scan on-premises SQL server instances to automatically
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select the SQL Server source that you registered.
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-oracle-source.md
Title: Connect to and manage Oracle
-description: This guide describes how to connect to Oracle in Azure Purview, and use Purview's features to scan and manage your Oracle source.
+description: This guide describes how to connect to Oracle in Azure Purview, and use Azure Purview's features to scan and manage your Oracle source.
This article outlines how to register Oracle, and how to authenticate and intera
The supported Oracle server versions are 6i to 19c. Proxy server is not supported when scanning Oracle source.
-When scanning Oracle source, Purview supports:
+When scanning Oracle source, Azure Purview supports:
- Extracting technical metadata including:
When scanning Oracle source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When scanning Oracle source, Purview supports:
## Register
-This section describes how to register Oracle in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Oracle in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Prerequisites for registration
The only supported authentication for an Oracle source is **Basic authentication
To register a new Oracle source in your data catalog, do the following:
-1. Navigate to your Purview account in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Azure Purview account in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation.
1. Select **Register**
1. On Register sources, select **Oracle**. Select **Continue**.
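A dedicated read-only account is typically used for the scan's Basic authentication credential. A hedged Oracle SQL sketch, with hypothetical names and an example password (the exact privileges your environment requires may differ):

```sql
-- Hypothetical read-only account for metadata scans.
CREATE USER purview_scan IDENTIFIED BY "Str0ngPassword1";
GRANT CREATE SESSION TO purview_scan;
-- Allows reading the data dictionary views the scanner relies on.
GRANT SELECT_CATALOG_ROLE TO purview_scan;
```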
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-postgresql.md
Title: Connect to and manage PostgreSQL
-description: This guide describes how to connect to PostgreSQL in Azure Purview, and use Purview's features to scan and manage your PostgreSQL source.
+description: This guide describes how to connect to PostgreSQL in Azure Purview, and use Azure Purview's features to scan and manage your PostgreSQL source.
This article outlines how to register PostgreSQL, and how to authenticate and in
The supported PostgreSQL server versions are 8.4 to 12.x.
-When scanning PostgreSQL source, Purview supports:
+When scanning PostgreSQL source, Azure Purview supports:
- Extracting technical metadata including:
When scanning PostgreSQL source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimum supported Self-hosted Integration Runtime version is 5.11.7953.1.
When scanning PostgreSQL source, Purview supports:
## Register
-This section describes how to register PostgreSQL in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register PostgreSQL in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Steps to register
To register a new PostgreSQL source in your data catalog, do the following:
-1. Navigate to your Purview account in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Azure Purview account in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation.
1. Select **Register**
1. On Register sources, select **PostgreSQL**. Select **Continue**.
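A dedicated read-only role keeps the scan credential least-privileged. A hedged PostgreSQL sketch with hypothetical names (`purview_scan` and `your_database` are placeholders):

```sql
-- Hypothetical read-only role for metadata scans.
CREATE ROLE purview_scan LOGIN PASSWORD 'Str0ng-P@ssw0rd';
-- CONNECT plus default catalog visibility is enough to read pg_catalog metadata.
GRANT CONNECT ON DATABASE your_database TO purview_scan;
```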
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-power-bi-tenant.md
Title: Connect to and manage a Power BI tenant
-description: This guide describes how to connect to a Power BI tenant in Azure Purview, and use Purview's features to scan and manage your Power BI tenant source.
+description: This guide describes how to connect to a Power BI tenant in Azure Purview, and use Azure Purview's features to scan and manage your Power BI tenant source.
This article outlines how to register a Power BI tenant, and how to authenticate
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
|---|---|---|---|---|---|---|
| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes](how-to-lineage-powerbi.md)|
> [!Note]
-> If the Purview instance and the Power BI tenant are in the same Azure tenant, you can only use managed identity (MSI) authentication to set up a scan of a Power BI tenant.
+> If the Azure Purview instance and the Power BI tenant are in the same Azure tenant, you can only use managed identity (MSI) authentication to set up a scan of a Power BI tenant.
### Known limitations
- For the cross-tenant scenario, no UX experience is currently available to register and scan a cross-tenant Power BI source.
-- By Editing the Power BI cross tenant registered with PowerShell using Purview Studio will tamper the data source registration with inconsistent scan behavior.
+- Editing a Power BI cross-tenant source that was registered with PowerShell in the Azure Purview Studio will tamper with the data source registration and cause inconsistent scan behavior.
- Review [Power BI Metadata scanning limitations](/power-bi/admin/service-admin-metadata-scanning).
This article outlines how to register a Power BI tenant, and how to authenticate
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
For both same-tenant and cross-tenant scenarios, to set up authentication, creat
:::image type="content" source="./media/setup-power-bi-scan-PowerShell/allow-service-principals-power-bi-admin.png" alt-text="Image showing how to allow service principals to get read-only Power BI admin API permissions.":::
-1. Select **Admin API settings** > **Enhance admin APIs responses with detailed metadata** > Enable the toggle to allow Purview Data Map automatically discover the detailed metadata of Power BI datasets as part of its scans.
+1. Select **Admin API settings** > **Enhance admin APIs responses with detailed metadata** > Enable the toggle to allow the Azure Purview Data Map to automatically discover the detailed metadata of Power BI datasets as part of its scans.
> [!IMPORTANT]
> After you update the Admin API settings on your Power BI tenant, wait around 15 minutes before registering a scan and testing the connection.
For both same-tenant and cross-tenant scenarios, to set up authentication, creat
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-sub-artifacts.png" alt-text="Image showing the Power BI admin portal config to enable subartifact scan.":::
> [!Caution]
- > When you allow the security group you created (that has your Purview managed identity as a member) to use read-only Power BI admin APIs, you also allow it to access the metadata (e.g. dashboard and report names, owners, descriptions, etc.) for all of your Power BI artifacts in this tenant. Once the metadata has been pulled into the Azure Purview, Purview's permissions, not Power BI permissions, determine who can see that metadata.
+ > When you allow the security group you created (that has your Azure Purview managed identity as a member) to use read-only Power BI admin APIs, you also allow it to access the metadata (e.g. dashboard and report names, owners, descriptions, etc.) for all of your Power BI artifacts in this tenant. Once the metadata has been pulled into Azure Purview, Azure Purview's permissions, not Power BI permissions, determine who can see that metadata.
> [!Note]
- > You can remove the security group from your developer settings, but the metadata previously extracted won't be removed from the Purview account. You can delete it separately, if you wish.
+ > You can remove the security group from your developer settings, but the metadata previously extracted won't be removed from the Azure Purview account. You can delete it separately, if you wish.
### Steps to register in the same tenant
This guide covers both [same-tenant](#create-and-run-scan-for-same-tenant-power-
To create and run a new scan, do the following:
-1. In the Purview Studio, navigate to the **Data map** in the left menu.
+1. In the Azure Purview Studio, navigate to the **Data map** in the left menu.
1. Navigate to **Sources**.
To create and run a new scan, do the following:
1. Select **Test Connection** before continuing to the next steps. If **Test Connection** fails, select **View Report** to see the detailed status and troubleshoot the problem.
    1. Access - Failed status means the user authentication failed. Scans using managed identity will always pass because no user authentication is required.
- 1. Assets (+ lineage) - Failed status means the Purview - Power BI authorization has failed. Make sure the Purview-managed identity is added to the security group associated in Power BI admin portal.
+ 1. Assets (+ lineage) - Failed status means the Azure Purview - Power BI authorization has failed. Make sure the Purview-managed identity is added to the security group associated in Power BI admin portal.
    1. Detailed metadata (Enhanced) - Failed status means the following setting is disabled in the Power BI admin portal - **Enhance admin APIs responses with detailed metadata**
    :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-test-connection-status-report.png" alt-text="Screenshot of test connection status report page.":::
1. Set up a scan trigger. Your options are **Once**, **Every 7 days**, and **Every 30 days**.
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Purview scan scheduler.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/scan-trigger.png" alt-text="Screenshot of the Azure Purview scan scheduler.":::
1. On **Review new scan**, select **Save and Run** to launch your scan.
To create and run a new scan inside Azure Purview, execute the following cmdlets
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-salesforce.md
Title: Connect to and manage Salesforce
-description: This guide describes how to connect to Salesforce in Azure Purview, and use Purview's features to scan and manage your Salesforce source.
+description: This guide describes how to connect to Salesforce in Azure Purview, and use Azure Purview's features to scan and manage your Salesforce source.
This article outlines how to register Salesforce, and how to authenticate and in
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
|---|---|---|---|---|---|---|
| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No|
-When scanning Salesforce source, Purview supports extracting technical metadata including:
+When scanning Salesforce source, Azure Purview supports extracting technical metadata including:
- Organization
- Objects including the fields, foreign keys, and unique_constraints
When scanning Salesforce source, Purview supports extracting technical metadata
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimum supported Self-hosted Integration Runtime version is 5.11.7953.1.
When scanning Salesforce source, Purview supports extracting technical metadata
## Register
-This section describes how to register Salesforce in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Salesforce in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Steps to register
To register a new Salesforce source in your data catalog, do the following:
-1. Navigate to your Purview account in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Azure Purview account in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation.
1. Select **Register**
1. On Register sources, select **Salesforce**. Select **Continue**.
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-sap-hana.md
This article outlines how to register SAP HANA, and how to authenticate and inte
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
|---|---|---|---|---|---|---|
| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No |
-When scanning SAP HANA source, Purview supports extracting technical metadata including:
+When scanning SAP HANA source, Azure Purview supports extracting technical metadata including:
- Server
- Databases
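Scans typically run under a least-privileged HANA account. A hedged HANA SQL sketch, with a hypothetical user name and example password; `CATALOG READ` exposes system views without granting access to table data:

```sql
-- Hypothetical read-only user for metadata scans.
CREATE USER PURVIEW_SCAN PASSWORD "Str0ngP@ssw0rd1" NO FORCE_FIRST_PASSWORD_CHANGE;
-- CATALOG READ grants unfiltered access to system views, not to data.
GRANT CATALOG READ TO PURVIEW_SCAN;
```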
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-sapecc-source.md
Title: Connect to and manage an SAP ECC source
-description: This guide describes how to connect to SAP ECC in Azure Purview, and use Purview's features to scan and manage your SAP ECC source.
+description: This guide describes how to connect to SAP ECC in Azure Purview, and use Azure Purview's features to scan and manage your SAP ECC source.
This article outlines how to register SAP ECC, and how to authenticate and inter
\** Lineage is supported if the dataset is used as a source/sink in the [Data Factory Copy activity](how-to-link-azure-data-factory.md)
-When scanning SAP ECC source, Purview supports:
+When scanning SAP ECC source, Azure Purview supports:
- Extracting technical metadata including:
When scanning SAP ECC source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When scanning SAP ECC source, Purview supports:
## Register
-This section describes how to register SAP ECC in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register SAP ECC in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
The only supported authentication for SAP ECC source is **Basic authentication**
### Steps to register
-1. Navigate to your Purview account.
+1. Navigate to your Azure Purview account.
1. Select **Data Map** on the left navigation.
1. Select **Register**
1. On Register sources, select **SAP ECC**. Select **Continue**.
Follow the steps below to scan SAP ECC to automatically identify assets and clas
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-saps4hana-source.md
Title: Connect to and manage an SAP S/4HANA source
-description: This guide describes how to connect to SAP S/4HANA in Azure Purview, and use Purview's features to scan and manage your SAP S/4HANA source.
+description: This guide describes how to connect to SAP S/4HANA in Azure Purview, and use Azure Purview's features to scan and manage your SAP S/4HANA source.
This article outlines how to register SAP S/4HANA, and how to authenticate and i
\** Lineage is supported if the dataset is used as a source/sink in the [Data Factory Copy activity](how-to-link-azure-data-factory.md)
-When scanning SAP S/4HANA source, Purview supports:
+When scanning SAP S/4HANA source, Azure Purview supports:
- Extracting technical metadata including:
When scanning SAP S/4HANA source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When scanning SAP S/4HANA source, Purview supports:
## Register
-This section describes how to register SAP S/4HANA in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register SAP S/4HANA in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
The only supported authentication for SAP S/4HANA source is **Basic authenticati
### Steps to register
-1. Navigate to your Purview account.
+1. Navigate to your Azure Purview account.
1. Select **Data Map** on the left navigation.
1. Select **Register**
1. On Register sources, select **SAP S/4HANA**. Select **Continue**.
Follow the steps below to scan SAP S/4HANA to automatically identify assets and
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md)
- [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-snowflake.md
Title: Connect to and manage Snowflake
-description: This guide describes how to connect to Snowflake in Azure Purview, and use Purview's features to scan and manage your Snowflake source.
+description: This guide describes how to connect to Snowflake in Azure Purview, and use Azure Purview's features to scan and manage your Snowflake source.
This article outlines how to register Snowflake, and how to authenticate and int
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
|---|---|---|---|---|---|---|
| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| Yes|
-When scanning Snowflake source, Purview supports:
+When scanning Snowflake source, Azure Purview supports:
- Extracting technical metadata including:
When scanning Snowflake source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimum supported Self-hosted Integration Runtime version is 5.11.7971.2.
When scanning Snowflake source, Purview supports:
Azure Purview supports basic authentication (username and password) for scanning Snowflake. The default role of the given user will be used to perform the scan. The Snowflake user must have usage rights on a warehouse and the database(s) to be scanned, and read access to system tables in order to access advanced metadata.
-Here is a sample walkthrough to create a user specifically for Purview scan and set up the permissions. If you choose to use an existing user, make sure it has adequate rights to the warehouse and database objects.
+Here is a sample walkthrough to create a user specifically for Azure Purview scan and set up the permissions. If you choose to use an existing user, make sure it has adequate rights to the warehouse and database objects.
1. Set up a `purview_reader` role. You will need _ACCOUNTADMIN_ rights to do this.
```sql
USE ROLE ACCOUNTADMIN;
- --create role to allow read only access - this will later be assigned to the Purview user
+ --create role to allow read only access - this will later be assigned to the Azure Purview user
CREATE OR REPLACE ROLE purview_reader;
--make sysadmin the parent role
GRANT ROLE purview_reader TO ROLE sysadmin;
```
-2. Create a warehouse for Purview to use and grant rights.
+2. Create a warehouse for Azure Purview to use and grant rights.
```sql
--create warehouse - account admin required
Here is a sample walkthrough to create a user specifically for Purview scan and
GRANT USAGE ON WAREHOUSE purview_wh TO ROLE purview_reader;
```
-3. Create a user `purview` for Purview scan.
+3. Create a user `purview` for Azure Purview scan.
```sql
CREATE OR REPLACE USER purview
Here is a sample walkthrough to create a user specifically for Purview scan and
## Register
-This section describes how to register Snowflake in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Snowflake in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Steps to register
To register a new Snowflake source in your data catalog, do the following:
-1. Navigate to your Purview account in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your Azure Purview account in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select **Data Map** on the left navigation.
1. Select **Register**
1. On Register sources, select **Snowflake**. Select **Continue**.
To create and run a new scan, do the following:
- Check your account identifier in the source registration step. Do not include the `https://` part at the front.
- Make sure the warehouse name and database name are in capital case on the scan setup page.
- Check your key vault. Make sure there are no typos in the password.
-- Check the credential you set up in Purview. The user you specify must have a default role with the necessary access rights to both the warehouse and the database you are trying to scan. See [Required permissions for scan](#required-permissions-for-scan). USE `DESCRIBE USER;` to verify the default role of the user you've specified for Purview.
+- Check the credential you set up in Azure Purview. The user you specify must have a default role with the necessary access rights to both the warehouse and the database you are trying to scan. See [Required permissions for scan](#required-permissions-for-scan). Use `DESCRIBE USER <username>;` to verify the default role of the user you've specified for Azure Purview.
- Use Query History in Snowflake to see if any activity is coming across.
  - If there's a problem with the account identifier or password, you won't see any activity.
  - If there's a problem with the default role, you should at least see a `USE WAREHOUSE ...` statement.
- - You can use the [QUERY_HISTORY_BY_USER table function](https://docs.snowflake.com/en/sql-reference/functions/query_history.html) to identify what role is being used by the connection. Setting up a dedicated Purview user will make troubleshooting easier.
+ - You can use the [QUERY_HISTORY_BY_USER table function](https://docs.snowflake.com/en/sql-reference/functions/query_history.html) to identify what role is being used by the connection. Setting up a dedicated Azure Purview user will make troubleshooting easier.
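As a concrete illustration of the two checks above, the following queries could be run in a Snowflake worksheet; the user name `purview` comes from the sample walkthrough and should be replaced with your own:

```sql
--verify the default role of the scan user
DESCRIBE USER purview;

--inspect recent activity from the scan user, including the role each connection used
SELECT query_text, role_name, warehouse_name, start_time
FROM TABLE(information_schema.query_history_by_user(USER_NAME => 'PURVIEW'))
ORDER BY start_time DESC
LIMIT 20;
```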
## Next steps
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
+Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-synapse-workspace.md
Title: Connect to and manage Azure Synapse Analytics workspaces
-description: This guide describes how to connect to Azure Synapse Analytics workspaces in Azure Purview, and use Purview's features to scan and manage your Azure Synapse Analytics workspace source.
+description: This guide describes how to connect to Azure Synapse Analytics workspaces in Azure Purview, and use Azure Purview's features to scan and manage your Azure Synapse Analytics workspace source.
This article outlines how to register Azure Synapse Analytics workspaces and how
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
||||||||
-| [Yes](#register) | [Yes](#scan)| [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| No| [Yes- Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)|
+| [Yes](#register) | [Yes](#scan)| [Yes](#scan) | No| [Yes](#scan)| No| [Yes- Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)|
<!-- 4. Prerequisites
Required. Add any relevant/source-specific prerequisites for connecting with thi
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
## Register
-This section describes how to register Azure Synapse Analytics workspaces in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Azure Synapse Analytics workspaces in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
Only users with at least a *Reader* role on the Azure Synapse workspace who is a
Follow the steps below to scan Azure Synapse Analytics workspaces to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
-You will first need to set up authentication for enumerating for either your [dedicated](#authentication-for-enumerating-dedicated-sql-database-resources) or [serverless](#authentication-for-enumerating-serverless-sql-database-resources) resources. This will allow Purview to enumerate your workspace assets and perform scoped scans.
+You will first need to set up authentication for enumerating either your [dedicated](#authentication-for-enumerating-dedicated-sql-database-resources) or [serverless](#authentication-for-enumerating-serverless-sql-database-resources) resources. This will allow Azure Purview to enumerate your workspace assets and perform scans.
Then, you will need to [apply permissions to scan the contents of the workspace](#apply-permissions-to-scan-the-contents-of-the-workspace).
Then, you will need to [apply permissions to scan the contents of the workspace]
### Authentication for enumerating serverless SQL database resources
-There are three places you will need to set authentication to allow Purview to enumerate your serverless SQL database resources: the Synapse workspace, the associated storage, and on the Serverless databases. The steps below will set permissions for all three.
+There are three places where you will need to set authentication to allow Azure Purview to enumerate your serverless SQL database resources: the Synapse workspace, the associated storage, and the serverless databases. The steps below will set permissions for all three; a hedged T-SQL sketch of the database-side grants follows the portal steps.
1. In the Azure portal, go to the Azure Synapse workspace resource. 1. On the left pane, select **Access Control (IAM)**.
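The remaining portal and database steps are in the full article. For the database-side portion, here is a hedged T-SQL sketch of the kind of grants involved, assuming an Azure Purview account (and therefore managed identity) named `PurviewAccountName`; the exact statements required are listed in the article itself:

```sql
-- In the serverless master database: create a login for the Azure Purview managed identity
CREATE LOGIN [PurviewAccountName] FROM EXTERNAL PROVIDER;

-- In each serverless database to be scanned: map a user to the login and grant read access
CREATE USER [PurviewAccountName] FOR LOGIN [PurviewAccountName];
ALTER ROLE db_datareader ADD MEMBER [PurviewAccountName];
```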
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewA
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in [Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select the data source that you registered.
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-teradata-source.md
Title: Connect to and manage Teradata
-description: This guide describes how to connect to Teradata in Azure Purview, and use Purview's features to scan and manage your Teradata source.
+description: This guide describes how to connect to Teradata in Azure Purview, and use Azure Purview's features to scan and manage your Teradata source.
This article outlines how to register Teradata, and how to authenticate and inte
The supported Teradata database versions are 12.x to 17.x.
-When scanning Teradata source, Purview supports:
+When scanning a Teradata source, Azure Purview supports:
- Extracting technical metadata including:
When scanning Teradata source, Purview supports:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An active [Purview resource](create-catalog-portal.md).
+* An active [Azure Purview resource](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
When scanning Teradata source, Purview supports:
## Register
-This section describes how to register Teradata in Azure Purview using the [Purview Studio](https://web.purview.azure.com/).
+This section describes how to register Teradata in Azure Purview using the [Azure Purview Studio](https://web.purview.azure.com/).
### Authentication for registration
The only supported authentication for a Teradata source is **Basic authenticatio
### Steps to register
-1. Navigate to your Purview account.
+1. Navigate to your Azure Purview account.
1. Select **Data Map** on the left navigation. 1. Select **Register**. 1. On Register sources, select **Teradata**. Select **Continue**.
Follow the steps below to scan Teradata to automatically identify assets and cla
1. In the Management Center, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to set up a self-hosted integration runtime.
-1. Select the **Data Map** tab on the left pane in the [Purview Studio](https://web.purview.azure.com/resource/).
+1. Select the **Data Map** tab on the left pane in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select the registered Teradata source.
Follow the steps below to scan Teradata to automatically identify assets and cla
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Purview and your data.
+Now that you have registered your source, follow the guides below to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Scan Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/scan-insights.md
Title: Scan insights on your data in Azure Purview
-description: This how-to guide describes how to view and use Purview Insights scan reporting on your data.
+description: This how-to guide describes how to view and use Azure Purview Insights scan reporting on your data.
This how-to guide describes how to access, view, and filter Azure Purview scan i
In this how-to guide, you'll learn how to: > [!div class="checklist"]
-> * View insights from your Purview account.
+> * View insights from your Azure Purview account.
> * Get a bird's eye view of your scans. ## Prerequisites
-Before getting started with Purview insights, make sure that you've completed the following steps:
+Before getting started with Azure Purview insights, make sure that you've completed the following steps:
* Set up your Azure resources and populate the account with data. * Set up and complete a scan on the data source. For more information, see [Manage data sources in Azure Purview](manage-data-sources.md).
-## Use Purview Scan Insights
+## Use Azure Purview Scan Insights
In Azure Purview, you can register and scan source types. You can view the scan status over time in Scan Insights. The insights tell you how many scans failed, succeeded, or were canceled within a certain time period. ### View scan insights
-1. Go to the **Azure Purview** instance screen in the Azure portal and select your Purview account.
+1. Go to the **Azure Purview** instance screen in the Azure portal and select your Azure Purview account.
-1. On the **Overview** page, in the **Get Started** section, select the **Open Purview Studio** tile.
+1. On the **Overview** page, in the **Get Started** section, select the **Open Azure Purview Studio** tile.
- :::image type="content" source="./media/scan-insights/portal-access.png" alt-text="Launch Purview from the Azure portal":::
+ :::image type="content" source="./media/scan-insights/portal-access.png" alt-text="Launch Azure Purview from the Azure portal":::
-1. On the Purview **Home** page, select **Insights** on the left menu.
+1. On the Azure Purview **Home** page, select **Insights** on the left menu.
:::image type="content" source="./media/scan-insights/view-insights.png" alt-text="View your insights in the Azure portal":::
-1. In the **Insights** area, select **Scans** to display the Purview **Scan insights** report.
+1. In the **Insights** area, select **Scans** to display the Azure Purview **Scan insights** report.
### View high-level KPIs to show count of scans by status and deep-dive into each scan
purview Sensitivity Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/sensitivity-insights.md
Title: Sensitivity label reporting on your data in Azure Purview using Purview Insights
-description: This how-to guide describes how to view and use Purview Sensitivity label reporting on your data.
+ Title: Sensitivity label reporting on your data in Azure Purview using Azure Purview Insights
+description: This how-to guide describes how to view and use Azure Purview Sensitivity label reporting on your data.
Last updated 09/27/2021
-# Customer intent: As a security officer, I need to understand how to use Purview Insights to learn about sensitive data identified and classified and labeled during scanning.
+# Customer intent: As a security officer, I need to understand how to use Azure Purview Insights to learn about sensitive data identified, classified, and labeled during scanning.
Supported data sources include: Azure Blob Storage, Azure Data Lake Storage (ADL
In this how-to guide, you'll learn how to: > [!div class="checklist"]
-> - Launch your Purview account from Azure.
+> - Launch your Azure Purview account from Azure.
> - View sensitivity labeling insights on your data > - Drill down for more sensitivity labeling details on your data ## Prerequisites
-Before getting started with Purview insights, make sure that you've completed the following steps:
+Before getting started with Azure Purview insights, make sure that you've completed the following steps:
- Set up your Azure resources and populated the relevant accounts with test data
Before getting started with Purview insights, make sure that you've completed th
- Set up and completed a scan on the test data in each data source. For more information, see [Manage data sources in Azure Purview](manage-data-sources.md) and [Create a scan rule set](create-a-scan-rule-set.md). -- Signed in to Purview with account with a [Data Reader or Data Curator role](catalog-permissions.md#roles).
+- Signed in to Azure Purview with an account that has a [Data Reader or Data Curator role](catalog-permissions.md#roles).
For more information, see [Manage data sources in Azure Purview](manage-data-sources.md) and [Automatically label your data in Azure Purview](create-sensitivity-label.md).
-## Use Purview Sensitivity labeling insights
+## Use Azure Purview Sensitivity labeling insights
-In Purview, classifications are similar to subject tags, and are used to mark and identify data of a specific type that's found within your data estate during scanning.
+In Azure Purview, classifications are similar to subject tags, and are used to mark and identify data of a specific type that's found within your data estate during scanning.
Sensitivity labels enable you to state how sensitive certain data is in your organization. For example, a specific project name might be highly confidential within your organization, while that same term is not confidential to other organizations.
Classifications are matched directly, such as a social security number, which ha
In contrast, sensitivity labels are applied when one or more classifications and conditions are found together. In this context, [conditions](/microsoft-365/compliance/apply-sensitivity-label-automatically) refer to all the parameters that you can define for unstructured data, such as **proximity to another classification**, and **% confidence**.
-Purview uses the same classifications, also known as [sensitive information types](/microsoft-365/compliance/sensitive-information-type-entity-definitions), as Microsoft 365. This enables you to extend your existing sensitivity labels across your Azure Purview assets.
+Azure Purview uses the same classifications, also known as [sensitive information types](/microsoft-365/compliance/sensitive-information-type-entity-definitions), as Microsoft 365. This enables you to extend your existing sensitivity labels across your Azure Purview assets.
> [!NOTE] > After you have scanned your source types, give **Sensitivity labeling** Insights a couple of hours to reflect the new assets.
Purview uses the same classifications, also known as [sensitive information type
1. Go to the **Azure Purview** home page.
-1. On the **Overview** page, in the **Get Started** section, select the **Launch Purview account** tile.
+1. On the **Overview** page, in the **Get Started** section, select the **Launch Azure Purview account** tile.
-1. In Purview, select the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: menu item on the left to access your **Insights** area.
+1. In Azure Purview, select the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: menu item on the left to access your **Insights** area.
-1. In the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Sensitivity labels** to display the Purview **Sensitivity labeling insights** report.
+1. In the **Insights** :::image type="icon" source="media/insights/ico-insights.png" border="false"::: area, select **Sensitivity labels** to display the Azure Purview **Sensitivity labeling insights** report.
> [!NOTE] > If this report is empty, you may not have extended your sensitivity labels to Azure Purview. For more information, see [Automatically label your data in Azure Purview](create-sensitivity-label.md).
Purview uses the same classifications, also known as [sensitive information type
|||
|**Overview of sources with sensitivity labels** |Displays tiles that provide: <br>- The number of subscriptions found in your data. <br>- The number of unique sensitivity labels applied on your data <br>- The number of sources with sensitivity labels applied <br>- The number of files and tables found with sensitivity labels applied|
|**Top sources with labeled data (last 30 days)** | Shows the trend, over the past 30 days, of the number of sources with sensitivity labels applied. |
- |**Top labels applied across sources** |Shows the top labels applied across all of your Purview data resources. |
+ |**Top labels applied across sources** |Shows the top labels applied across all of your Azure Purview data resources. |
|**Top labels applied on files** |Shows the top sensitivity labels applied to files in your data. |
|**Top labels applied on tables** | Shows the top sensitivity labels applied to database tables in your data. |
| **Labeling activity** | Displays separate graphs for files and tables, each showing the number of files or tables labeled over the selected time frame. <br>**Default**: 30 days<br>Select the **Time** filter above the graphs to select a different time frame to display. |
Do any of the following to learn more:
## Sensitivity label integration with Microsoft 365 compliance
-Close integration with [Microsoft Information Protection](/microsoft-365/compliance/information-protection) offered in Microsoft 365 means that Purview enables direct ways to extend visibility into your data estate, and classify and label your data.
+Close integration with [Microsoft Information Protection](/microsoft-365/compliance/information-protection) offered in Microsoft 365 means that Azure Purview enables direct ways to extend visibility into your data estate, and classify and label your data.
For your Microsoft 365 sensitivity labels to be extended to your assets in Azure Purview, you must actively turn on Information Protection for Azure Purview, in the Microsoft 365 compliance center.
purview Sources And Scans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/sources-and-scans.md
Title: Supported data sources and file types
-description: This article provides conceptual details about supported data sources and file types in Purview.
+description: This article provides conceptual details about supported data sources and file types in Azure Purview.
# Supported data sources and file types in Azure Purview
-This article discusses supported data sources, file types and scanning concepts in Purview.
+This article discusses supported data sources, file types, and scanning concepts in Azure Purview.
## Supported data sources
-Purview supports all the data sources listed [here](purview-connector-overview.md).
+Azure Purview supports all the data sources listed [here](purview-connector-overview.md).
## File types supported for scanning
The following file types are supported for scanning, for schema extraction and c
- Structured file formats supported by extension: AVRO, ORC, PARQUET, CSV, JSON, PSV, SSV, TSV, TXT, XML, GZIP > [!Note]
- > * Purview scanner only supports schema extraction for the structured file types listed above.
- > * For AVRO, ORC, and PARQUET file types, Purview scanner does not support schema extraction for files that contain complex data types (for example, MAP, LIST, STRUCT).
- > * Purview scanner supports scanning snappy compressed PARQUET types for schema extraction and classification.
+ > * Azure Purview scanner only supports schema extraction for the structured file types listed above.
+ > * For AVRO, ORC, and PARQUET file types, Azure Purview scanner does not support schema extraction for files that contain complex data types (for example, MAP, LIST, STRUCT).
+ > * Azure Purview scanner supports scanning snappy compressed PARQUET types for schema extraction and classification.
> * For GZIP file types, the GZIP must be mapped to a single CSV file within. > GZIP files are subject to System and Custom Classification rules. We currently don't support scanning a GZIP file mapped to multiple files within, or any file type other than CSV. > * For delimited file types (CSV, PSV, SSV, TSV, TXT), we do not support data type detection. The data type will be listed as "string" for all columns. - Document file formats supported by extension: DOC, DOCM, DOCX, DOT, ODP, ODS, ODT, PDF, POT, PPS, PPSX, PPT, PPTM, PPTX, XLC, XLS, XLSB, XLSM, XLSX, XLT-- Purview also supports custom file extensions and custom parsers.
+- Azure Purview also supports custom file extensions and custom parsers.
## Sampling within a file
-In Purview terminology,
+In Azure Purview terminology,
- L1 scan: Extracts basic information and metadata such as file name, size, and fully qualified name - L2 scan: Extracts schema for structured file types and database tables - L3 scan: Extracts schema where applicable and subjects the sampled file to system and custom classification rules
-For all structured file formats, Purview scanner samples files in the following way:
+For all structured file formats, Azure Purview scanner samples files in the following way:
- For structured file types, it samples the top 128 rows in each column or the first 1 MB, whichever is lower. - For document file formats, it samples the first 20 MB of each file.
- - If a document file is larger than 20 MB, then it is not subject to a deep scan (subject to classification). In that case, Purview captures only basic meta data like file name and fully qualified name.
+ - If a document file is larger than 20 MB, then it is not subject to a deep scan (and thus not to classification). In that case, Azure Purview captures only basic metadata such as file name and fully qualified name.
- For **tabular data sources (SQL, Cosmos DB)**, it samples the top 128 rows. ## Resource set file sampling
-A folder or group of partition files is detected as a *resource set* in Purview, if it matches with a system resource set policy or a customer defined resource set policy. If a resource set is detected, then Purview will sample each folder that it contains. Learn more about resource sets [here](concept-resource-sets.md).
+A folder or group of partition files is detected as a *resource set* in Azure Purview if it matches a system resource set policy or a customer-defined resource set policy. If a resource set is detected, then Azure Purview will sample each folder that it contains. Learn more about resource sets [here](concept-resource-sets.md).
File sampling for resource sets by file types:
All 206 system classification rules apply to structured file formats. Only the M
## Next steps -- [Scans and ingestion in Purview](concept-scans-and-ingestion.md)
+- [Scans and ingestion in Azure Purview](concept-scans-and-ingestion.md)
- [Manage data sources in Azure Purview](manage-data-sources.md)
purview Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/supported-browsers.md
Azure Purview supports the following browsers. We recommend that you use the mos
## Chrome Incognito mode
- Chrome Incognito blocking 3rd party cookies must be disabled for Purview Studio to work.
+ The Chrome Incognito setting that blocks third-party cookies must be disabled for Azure Purview Studio to work.
:::image type="content" source="./media/supported-browsers/incognito-chrome.png" alt-text="Screenshot showing chrome."::: ## Chromium Edge InPrivate mode
-Chromium Edge InPrivate using Strict Tracking Prevention must be disabled for Purview Studio to work.
+Strict Tracking Prevention must be disabled in Chromium Edge InPrivate mode for Azure Purview Studio to work.
:::image type="content" source="./media/supported-browsers/incognito-edge.png" alt-text="Screenshot showing edge.":::
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/supported-classifications.md
This article lists the supported system classifications in Azure Purview. To lea
Azure Purview classifies data by [RegEx](https://wikipedia.org/wiki/Regular_expression) and [Bloom Filter](https://wikipedia.org/wiki/Bloom_filter). The following lists describe the format, pattern, and keywords for the Azure Purview defined system classifications. Each classification name is prefixed by *MICROSOFT*. > [!Note]
-> Azure Purview can classify both structured (CSV, TSV, JSON, SQL Table etc.) as well as unstructured data (DOC, PDF, TXT etc.). However, there are certain classifications that are only applicable to structured data. Here is the list of classifications that Purview doesn't apply on unstructured data - City Name, Country Name, Date Of Birth, Email, Ethnic Group, GeoLocation, Person Name, U.S. Phone Number, U.S. States, U.S. ZipCode
+> Azure Purview can classify both structured (CSV, TSV, JSON, SQL table, etc.) and unstructured data (DOC, PDF, TXT, etc.). However, certain classifications are only applicable to structured data. Here is the list of classifications that Azure Purview doesn't apply to unstructured data: City Name, Country Name, Date Of Birth, Email, Ethnic Group, GeoLocation, Person Name, U.S. Phone Number, U.S. States, U.S. ZipCode
## Bloom Filter based classifications
purview Troubleshoot Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/troubleshoot-connections.md
There are specific instructions for each source type:
- [Azure Synapse Analytics](register-scan-azure-synapse-analytics.md#authentication-for-registration) - [SQL Server](register-scan-on-premises-sql-server.md#authentication-for-registration) - [Power BI](register-scan-power-bi-tenant.md)-- [Amazon S3](register-scan-amazon-s3.md#create-a-purview-credential-for-your-aws-s3-scan)
+- [Amazon S3](register-scan-amazon-s3.md#create-an-azure-purview-credential-for-your-aws-s3-scan)
## Verifying Azure Role-based Access Control to enumerate Azure resources in Azure Purview Studio
Verify this by following the steps below:
1. Select the secret you're using to authenticate against your data source for scans. 1. Select the version that you intend to use and verify that the password or account key is correct by selecting **Show Secret Value**.
-## Verify permissions for the Purview managed identity on your Azure Key Vault
+## Verify permissions for the Azure Purview managed identity on your Azure Key Vault
-Verify that the correct permissions have been configured for the Purview managed identity to access your Azure Key Vault.
+Verify that the correct permissions have been configured for the Azure Purview managed identity to access your Azure Key Vault.
To verify this, do the following steps: 1. Navigate to your key vault and to the **Access policies** section
-1. Verify that your Purview managed identity shows under the *Current access policies* section with at least **Get** and **List** permissions on Secrets
+1. Verify that your Azure Purview managed identity shows under the *Current access policies* section with at least **Get** and **List** permissions on Secrets
:::image type="content" source="./media/troubleshoot-connections/verify-minimum-permissions.png" alt-text="Image showing dropdown selection of both Get and List permission options":::
-If you don't see your Purview managed identity listed, then follow the steps in [Create and manage credentials for scans](manage-credentials.md) to add it.
+If you don't see your Azure Purview managed identity listed, then follow the steps in [Create and manage credentials for scans](manage-credentials.md) to add it.
## Next steps
purview Tutorial Metadata Policy Collections Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-metadata-policy-collections-apis.md
For more information about the built-in roles in Azure Purview, see the [Azure P
The following table gives an overview of the [Azure Purview Metadata Policy API Reference](/rest/api/purview/metadatapolicydataplane/Metadata-Policy). > [!NOTE]
-> Replace {pv-acc-name} with the name of your Azure Purview account before running these APIs. For instance, if your Azure Purview account name is *FabrikamPurviewAccount*, your API endpoints will become *FabrikamPurviewAccount.purview.azure.com*. The "api-version" parameter is subject to change. Please refer the [Purview Metadata policy REST API documentation](/rest/api/purview/metadatapolicydataplane/Metadata-Policy) for the latest "api-version" and the API signature.
+> Replace {pv-acc-name} with the name of your Azure Purview account before running these APIs. For instance, if your Azure Purview account name is *FabrikamPurviewAccount*, your API endpoints will become *FabrikamPurviewAccount.purview.azure.com*. The "api-version" parameter is subject to change. Please refer to the [Azure Purview Metadata policy REST API documentation](/rest/api/purview/metadatapolicydataplane/Metadata-Policy) for the latest "api-version" and the API signature.
| API&nbsp;function | REST&nbsp;method | API&nbsp;endpoint | Description | | :- | :- | :- | :- |
The default metadata roles are listed in the following table:
"properties": { "provisioningState": "Provisioned", "roleType": "BuiltIn",
- "friendlyName": "Purview Reader",
+ "friendlyName": "Azure Purview Reader",
"cnfCondition": [ [ {
purview Tutorial Purview Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-purview-tools.md
Last updated 10/10/2021
-# Customer Intent: As an Azure Purview administrator, I want to kickstart and be up and running with Azure Purview service in a matter of minutes; additionally, I want to perform and set up automations, batch-mode API executions and scripts that help me run Purview smoothly and effectively for the long-term on a regular basis.
+# Customer Intent: As an Azure Purview administrator, I want to kickstart and be up and running with Azure Purview service in a matter of minutes; additionally, I want to perform and set up automations, batch-mode API executions and scripts that help me run Azure Purview smoothly and effectively for the long-term on a regular basis.
# Azure Purview open-source tools and utilities
-This article lists several open-source tools and utilities (command-line, python, and PowerShell interfaces) that help you get started quickly on Azure Purview service in a matter of minutes! These tools have been authored & developed by collective effort of the Azure Purview Product Group and the open-source community. The objective of such tools is to make learning, starting up, regular usage, and long-term adoption of Purview breezy and super fast.
+This article lists several open-source tools and utilities (command-line, Python, and PowerShell interfaces) that help you get started quickly with the Azure Purview service in a matter of minutes! These tools have been authored and developed through the collective effort of the Azure Purview Product Group and the open-source community. The objective of such tools is to make learning, starting up, regular usage, and long-term adoption of Azure Purview breezy and super fast.
### Intended audience
This article lists several open-source tools and utilities (command-line, python
- Azure Purview catalog is based on [Apache Atlas](https://atlas.apache.org/) and extends full support for Apache Atlas APIs. We welcome Apache Atlas community, enthusiasts, and developers to wholeheartedly build on and evangelize Azure Purview.
-### Purview customer journey stages
+### Azure Purview customer journey stages
-- *Purview Learners*: Learners who are starting fresh with Azure Purview service and are keen to understand and explore how a multi-cloud unified data governance solution works. A section of learners includes users who want to compare and contrast Purview with other competing solutions in the data governance market and try it before adopting for long-term usage.
+- *Azure Purview Learners*: Learners who are starting fresh with the Azure Purview service and are keen to understand and explore how a multi-cloud unified data governance solution works. A section of learners includes users who want to compare and contrast Azure Purview with other competing solutions in the data governance market and try it before adopting it for long-term usage.
-- *Purview Innovators*: Innovators who are keen to understand existing and latest features, ideate, and conceptualize features upcoming on Purview. They are adept at building and developing solutions for customers, and have futuristic forward-looking ideas for the next-gen cutting-edge data governance product.
+- *Azure Purview Innovators*: Innovators who are keen to understand existing and latest features, ideate, and conceptualize features upcoming on Azure Purview. They are adept at building and developing solutions for customers, and have futuristic forward-looking ideas for the next-gen cutting-edge data governance product.
-- *Purview Enthusiasts/Evangelists*: Enthusiasts who are a combination of Learners and Innovators. They have developed solid understanding and knowledge of Purview, hence, are upbeat about adoption of Purview. They can help evangelize Purview as a service and educate several other Purview users and probable customers across the globe.
+- *Azure Purview Enthusiasts/Evangelists*: Enthusiasts who are a combination of Learners and Innovators. They have developed a solid understanding and knowledge of Azure Purview and are therefore upbeat about its adoption. They can help evangelize Azure Purview as a service and educate other Azure Purview users and prospective customers across the globe.
-- *Purview Adopters*: Adopters who have migrated from starting up and exploring Purview and are smoothly using Purview for more than a few months.
+- *Azure Purview Adopters*: Adopters who have moved past starting up and exploring Azure Purview and have been using it smoothly for more than a few months.
-- *Purview Long-Term Regular Users*: Long-term users who have been using Purview for more than one year and are now confident and comfortable using most advanced Purview use cases on the Azure portal and Purview Studio; furthermore they have near perfect knowledge and awareness of the Purview REST APIs and the additional use cases supported via Purview APIs.
+- *Azure Purview Long-Term Regular Users*: Long-term users who have been using Azure Purview for more than one year and are now confident and comfortable using the most advanced Azure Purview use cases on the Azure portal and Azure Purview Studio; furthermore, they have near-perfect knowledge and awareness of the Azure Purview REST APIs and the additional use cases supported via those APIs.
## Azure Purview open-source tools and utilities list
This article lists several open-source tools and utilities (command-line, python
1. [Purview-API-via-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell/blob/main/README.md) - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
- - **Description**: This utility is based on and covers the entire set of [Azure Purview REST API Reference](/rest/api/purview/) Microsoft Docs. [Download & Install from PowerShell Gallery](https://aka.ms/purview-api-ps). It helps you execute all the documented Purview REST APIs through a breezy fast and easy to use PowerShell interface. Use and automate Purview APIs for regular and long-term usage via command-line and scripted methods. This is an alternative for customers looking to do bulk tasks in automated manner, batch-mode, or scheduled cron jobs; as against the GUI method of using the Azure portal and Purview Studio. Detailed documentation, sample usage guide, self-help, and examples are available on [GitHub:Azure-Purview-API-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell).
+ - **Description**: This utility is based on and covers the entire set of the [Azure Purview REST API Reference](/rest/api/purview/) on Microsoft Docs. [Download & Install from PowerShell Gallery](https://aka.ms/purview-api-ps). It helps you execute all the documented Azure Purview REST APIs through a fast and easy-to-use PowerShell interface. Use and automate Azure Purview APIs for regular and long-term usage via command-line and scripted methods. This is an alternative for customers looking to do bulk tasks in an automated manner, in batch mode, or as scheduled cron jobs, as opposed to the GUI method of using the Azure portal and Azure Purview Studio. Detailed documentation, a sample usage guide, self-help, and examples are available on [GitHub:Azure-Purview-API-PowerShell](https://github.com/Azure/Azure-Purview-API-PowerShell).
1. [Purview-Starter-Kit](https://aka.ms/PurviewKickstart) - **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts*
- - **Description**: PowerShell script to perform initial set up of Purview account. Useful for anyone looking to set up several fresh new Purview account(s) in less than 5 minutes!
+ - **Description**: PowerShell script to perform the initial setup of an Azure Purview account. Useful for anyone looking to set up several fresh Azure Purview accounts in less than 5 minutes!
-1. [Purview Lab](https://aka.ms/purviewlab)
+1. [Azure Purview Lab](https://aka.ms/purviewlab)
- **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts*
- - **Description**: A hands-on-lab introducing the myriad features of Purview and helping you learn the concepts in a practical and hands-on approach where you execute each step on your own by hand to develop the best possible understanding of Purview.
+ - **Description**: A hands-on-lab introducing the myriad features of Azure Purview and helping you learn the concepts in a practical and hands-on approach where you execute each step on your own by hand to develop the best possible understanding of Azure Purview.
-1. [Purview CLI](https://aka.ms/purviewcli)
+1. [Azure Purview CLI](https://aka.ms/purviewcli)
- **Recommended customer journey stages**: *Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
- - **Description**: Python-based tool to execute the Purview APIs similar to [Purview-API-via-PowerShell](https://aka.ms/purview-api-ps) but has limited/lesser functionality than the PowerShell-based framework.
+ - **Description**: Python-based tool to execute the Azure Purview APIs, similar to [Purview-API-via-PowerShell](https://aka.ms/purview-api-ps), but with less functionality than the PowerShell-based framework.
-1. [Purview Demo](https://aka.ms/pvdemo)
+1. [Azure Purview Demo](https://aka.ms/pvdemo)
- **Recommended customer journey stages**: *Learners, Innovators, Enthusiasts* - **Description**: An Azure Resource Manager (ARM) template-based tool to automatically set up and deploy a fresh new Azure Purview account quickly and securely with just one command. It is similar to [Purview-Starter-Kit](https://aka.ms/PurviewKickstart), the extra feature being that it also deploys a few pre-configured data sources: Azure SQL Database, Azure Data Lake Storage Gen2 account, Azure Data Factory, and an Azure Synapse Analytics workspace.
This article lists several open-source tools and utilities (command-line, python
- **Recommended customer journey stages**: *Innovators, Enthusiasts, Adopters, Long-Term Regular Users* - **Description**: A python package to work with Azure Purview and Apache Atlas API. Supports bulk loading, custom lineage, and more from a Pythonic set of classes and Excel templates. The package supports programmatic interaction and an Excel template for low-code uploads.
-1. [Purview EventHub Notifications Reader](https://github.com/Azure/Azure-Purview-API-PowerShell/blob/main/purview_atlas_eventhub_sample.py)
+1. [Azure Purview EventHub Notifications Reader](https://github.com/Azure/Azure-Purview-API-PowerShell/blob/main/purview_atlas_eventhub_sample.py)
- **Recommended customer journey stages**: *Innovators, Enthusiasts, Adopters, Long-Term Regular Users*
- - **Description**: This tool demonstrates how to read Purview's EventHub and catch real-time Kafka notifications from the EventHub in Atlas Notifications (https://atlas.apache.org/2.0.0/Notifications.html) format. Further, it generates an excel sheet CSV of the entities and assets on the fly that are discovered live during a scan, and any other notifications of interest that Purview generates.
+ - **Description**: This tool demonstrates how to read Azure Purview's EventHub and catch real-time Kafka notifications from the EventHub in Atlas Notifications (https://atlas.apache.org/2.0.0/Notifications.html) format. Further, it generates, on the fly, a CSV spreadsheet of the entities and assets discovered live during a scan, and of any other notifications of interest that Azure Purview generates.
## Feedback and disclaimer
purview Tutorial Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-register-scan-on-premises-sql-server.md
Title: 'Tutorial: Register and scan an on-premises SQL Server'
-description: This tutorial describes how to register an on-prem SQL Server to Purview, and scan the server using a self-hosted IR.
+description: This tutorial describes how to register an on-prem SQL Server to Azure Purview, and scan the server using a self-hosted IR.
# Tutorial: Register and scan an on-premises SQL Server
-Azure Purview is designed to connect to data sources to help you manage sensitive data, simplify data discovery, and ensure right use. Purview can connect to sources across your entire landscape, including multi-cloud and on-premises. For this scenario, you'll use a self-hosted integration runtime to connect to data on an on-premises SQL server. Then you'll use Azure Purview to scan and classify that data.
+Azure Purview is designed to connect to data sources to help you manage sensitive data, simplify data discovery, and ensure right use. Azure Purview can connect to sources across your entire landscape, including multi-cloud and on-premises. For this scenario, you'll use a self-hosted integration runtime to connect to data on an on-premises SQL server. Then you'll use Azure Purview to scan and classify that data.
In this tutorial, you'll learn how to: > [!div class="checklist"]
-> * Sign in to the Purview Studio.
+> * Sign in to the Azure Purview Studio.
> * Create a collection in Azure Purview. > * Create a self-hosted integration runtime. > * Store credentials in an Azure Key Vault.
-> * Register an on-premises SQL Server to Purview.
+> * Register an on-premises SQL Server to Azure Purview.
> * Scan the SQL Server. > * Browse your data catalog to view assets in your SQL Server.
In this tutorial, you'll learn how to:
- An Azure Purview account. If you don't already have one, you can [follow our quickstart guide to create one](create-catalog-portal.md). - An [on-premises SQL Server](https://www.microsoft.com/sql-server/sql-server-downloads).
-## Sign in to Purview Studio
+## Sign in to Azure Purview Studio
-To interact with Purview, you'll connect to the [Purview Studio](https://web.purview.azure.com/resource/) through the Azure portal. You can find the studio by going to your Purview resource in the [Azure portal](https://portal.azure.com), and selecting the **Open Purview Studio** tile on the overview page.
+To interact with Azure Purview, you'll connect to the [Azure Purview Studio](https://web.purview.azure.com/resource/) through the Azure portal. You can find the studio by going to your Azure Purview resource in the [Azure portal](https://portal.azure.com), and selecting the **Open Azure Purview Studio** tile on the overview page.
## Create a collection
-Collections in Azure Purview are used to organize assets and sources into a custom hierarchy for organization and discoverability. They're also the tool used to manage access across Purview. In this tutorial, we'll create one collection to house your SQL Server source and all its assets. This tutorial won't cover information about assigning permissions to other users, so for more information you can follow our [Purview permissions guide](catalog-permissions.md).
+Collections in Azure Purview are used to organize assets and sources into a custom hierarchy for organization and discoverability. They're also the tool used to manage access across Azure Purview. In this tutorial, we'll create one collection to house your SQL Server source and all its assets. This tutorial won't cover information about assigning permissions to other users, so for more information you can follow our [Azure Purview permissions guide](catalog-permissions.md).
### Check permissions
-To create and manage collections in Purview, you'll need to be a **Collection Admin** within Purview. We can check these permissions in the [Purview Studio](use-purview-studio.md).
+To create and manage collections in Azure Purview, you'll need to be a **Collection Admin** within Azure Purview. We can check these permissions in the [Azure Purview Studio](use-purview-studio.md).
1. Select **Data Map > Collections** from the left pane to open the collection management page.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/find-collections.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/find-collections.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the Collections tab selected." border="true":::
-1. Select your root collection. The root collection is the top collection in your collection list and will have the same name as your Purview resource. In our example below, it is called Purview Account.
+1. Select your root collection. The root collection is the top collection in your collection list and will have the same name as your Azure Purview resource. In our example below, it is called Azure Purview Account.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/select-root-collection.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/select-root-collection.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the root collection highlighted." border="true":::
1. Select **Role assignments** in the collection window.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/role-assignments.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/role-assignments.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the role assignments tab highlighted." border="true":::
-1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Purview resource, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
+1. To create a collection, you'll need to be in the collection admin list under role assignments. If you created the Azure Purview resource, you should be listed as a collection admin under the root collection already. If not, you'll need to contact the collection admin to grant you permission.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/collection-admins.png" alt-text="Screenshot of Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/collection-admins.png" alt-text="Screenshot of Azure Purview studio window, opened to the Data Map, with the collection admin section highlighted." border="true":::
### Create the collection 1. Select **+ Add a collection**. Again, only [collection admins](#check-permissions) can manage collections.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/select-add-a-collection.png" alt-text="Screenshot of Purview studio window, showing the new collection window, with the 'add a collection' buttons highlighted." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/select-add-a-collection.png" alt-text="Screenshot of Azure Purview studio window, showing the new collection window, with the 'add a collection' buttons highlighted." border="true":::
1. In the right panel, enter the collection name and description. If needed you can also add users or groups as collection admins to the new collection. 1. Select **Create**.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/create-collection.png" alt-text="Screenshot of Purview studio window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/create-collection.png" alt-text="Screenshot of Azure Purview studio window, showing the new collection window, with a display name and collection admins selected, and the create button highlighted." border="true":::
1. The new collection's information will reflect on the page.
- :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/created-collection.png" alt-text="Screenshot of Purview studio window, showing the newly created collection window." border="true":::
+ :::image type="content" source="./media/tutorial-register-scan-on-premises-sql-server/created-collection.png" alt-text="Screenshot of Azure Purview studio window, showing the newly created collection window." border="true":::
## Create a self-hosted integration runtime
-The Self-Hosted Integration Runtime (SHIR) is the compute infrastructure used by Purview to connect to on-premises data sources. The SHIR is downloaded and installed on a machine within the same network as the on-premises data source.
+The Self-Hosted Integration Runtime (SHIR) is the compute infrastructure used by Azure Purview to connect to on-premises data sources. The SHIR is downloaded and installed on a machine within the same network as the on-premises data source.
This tutorial assumes the machine where you'll install your self-hosted integration runtime can make network connections to the internet. This connection allows the SHIR to communicate between your source and Azure Purview. If your machine has a restricted firewall, or if you would like to secure your firewall, look into the [network requirements for the self-hosted integration runtime](manage-integration-runtimes.md#networking-requirements).
-1. On the home page of Purview Studio, select **Data Map** from the left navigation pane.
+1. On the home page of Azure Purview Studio, select **Data Map** from the left navigation pane.
1. Under **Source management** on the left pane, select **Integration runtimes**, and then select **+ New**.
There is only one way to set up authentication for SQL server on-premises:
### SQL authentication
-The SQL account must have access to the **master** database. This is because the `sys.databases` is in the database. The Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
+The SQL account must have access to the **master** database, because `sys.databases` resides there. The Azure Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
#### Create a new login and user
If you would like to create a new login and user to be able to scan your SQL ser
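The login-creation steps are truncated in this digest. A minimal sketch of the usual pattern, assuming a SQL login named `purview_reader`; the name, placeholder password, and grants are illustrative, and the full article lists the exact statements:

```sql
-- Run in the master database so the scanner can enumerate sys.databases
CREATE LOGIN purview_reader WITH PASSWORD = '<strong-password>';  -- placeholder password

-- In each database you want to scan: map a user to the login and grant read access
CREATE USER purview_reader FOR LOGIN purview_reader;
ALTER ROLE db_datareader ADD MEMBER purview_reader;
```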
:::image type="content" source="media/tutorial-register-scan-on-premises-sql-server/create-credential-secret.png" alt-text="Add values to key vault credential."::: 1. Select **Create** to complete.
-1. In the [Azure Purview Studio](#sign-in-to-purview-studio), navigate to the **Management** page in the left menu.
+1. In the [Azure Purview Studio](#sign-in-to-azure-purview-studio), navigate to the **Management** page in the left menu.
:::image type="content" source="media/tutorial-register-scan-on-premises-sql-server/select-management.png" alt-text="Select Management page on left menu.":::
If you would like to create a new login and user to be able to scan your SQL ser
## Register SQL Server
-1. Navigate to your Purview account in the [Azure portal](https://portal.azure.com), and select the [Purview Studio](#sign-in-to-purview-studio).
+1. Navigate to your Azure Purview account in the [Azure portal](https://portal.azure.com), and select the [Azure Purview Studio](#sign-in-to-azure-purview-studio).
1. Under Sources and scanning in the left navigation, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it's not set up, follow the steps mentioned [here](manage-integration-runtimes.md) to create a self-hosted integration runtime for scanning on an on-premises or Azure VM that has access to your on-premises network.
If you would like to create a new login and user to be able to scan your SQL ser
To create and run a new scan, do the following:
-1. Select the **Data Map** tab on the left pane in the Purview Studio.
+1. Select the **Data Map** tab on the left pane in the Azure Purview Studio.
1. Select the SQL Server source that you registered.
To create and run a new scan, do the following:
## Clean up resources
-If you're not going to continue to use this Purview or SQL source moving forward, you can follow the steps below to delete the integration runtime, SQL credential, and purview resources.
+If you're not going to continue to use this Azure Purview account or SQL source moving forward, you can follow the steps below to delete the integration runtime, SQL credential, and Azure Purview resources.
-### Remove SHIR from Purview
+### Remove SHIR from Azure Purview
-1. On the home page of [Purview Studio](https://web.purview.azure.com/resource/), select **Data Map** from the left navigation pane.
+1. On the home page of [Azure Purview Studio](https://web.purview.azure.com/resource/), select **Data Map** from the left navigation pane.
1. Under **Source management** on the left pane, select **Integration runtimes**.
If you're not going to continue to use this Purview or SQL source moving forward
### Remove SQL credentials
-1. Go to the [Azure portal](https://portal.azure.com) and navigate to the Key Vault resource where you stored your Purview credentials.
+1. Go to the [Azure portal](https://portal.azure.com) and navigate to the Key Vault resource where you stored your Azure Purview credentials.
1. Under **Settings** in the left menu, select **Secrets**
If you're not going to continue to use this Purview or SQL source moving forward
1. Select **Yes** to permanently delete the resource.
-### Delete Purview account
+### Delete Azure Purview account
-If you would like to delete your Purview account after completing this tutorial, follow these steps.
+If you would like to delete your Azure Purview account after completing this tutorial, follow these steps.
1. Go to the [Azure portal](https://portal.azure.com) and navigate to your Azure Purview account. 1. At the top of the page, select the **Delete** button.
- :::image type="content" source="media/tutorial-register-scan-on-premises-sql-server/select-delete.png" alt-text="Delete button on the Purview account page in the Azure portal is selected.":::
+ :::image type="content" source="media/tutorial-register-scan-on-premises-sql-server/select-delete.png" alt-text="Delete button on the Azure Purview account page in the Azure portal is selected.":::
1. When the process is complete, you'll receive a notification in the Azure portal. ## Next steps > [!div class="nextstepaction"]
-> [Use Purview REST APIs](tutorial-using-rest-apis.md)
+> [Use Azure Purview REST APIs](tutorial-using-rest-apis.md)
purview Tutorial Using Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-using-rest-apis.md
Title: "How to use REST APIs for Purview Data Planes"
-description: This tutorial describes how to use the Azure Purview REST APIs to access the contents of your Purview.
+ Title: "How to use REST APIs for Azure Purview Data Planes"
+description: This tutorial describes how to use the Azure Purview REST APIs to access the contents of your Azure Purview account.
Last updated 09/17/2021
-# Customer intent: I can call the Data plane REST APIs to perform CRUD operations on Purview account.
+# Customer intent: I can call the Data plane REST APIs to perform CRUD operations on an Azure Purview account.
# Tutorial: Use the REST APIs
-In this tutorial, you learn how to use the Azure Purview REST APIs. Anyone who wants to submit data to an Azure Purview, include Purview as part of an automated process, or build their own user experience on the Purview can use the REST APIs to do so.
+In this tutorial, you learn how to use the Azure Purview REST APIs. Anyone who wants to submit data to Azure Purview, include Azure Purview as part of an automated process, or build their own user experience on Azure Purview can use the REST APIs to do so.
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin. ## Prerequisites
-* To get started, you must have an existing Azure Purview account. If you don't have a catalog, see the [quickstart for creating a Azure Purview account](create-catalog-portal.md).
+* To get started, you must have an existing Azure Purview account. If you don't have a catalog, see the [quickstart for creating an Azure Purview account](create-catalog-portal.md).
## Create a service principal (application)
its password. Here's how:
Once the service principal is created, you need to assign your Azure Purview account's data plane roles to it. Follow the steps below to assign roles and establish trust between the service principal and the Azure Purview account.
-1. Navigate to your [Purview Studio](https://web.purview.azure.com/resource/).
+1. Navigate to your [Azure Purview Studio](https://web.purview.azure.com/resource/).
1. Select the Data Map in the left menu. 1. Select Collections.
-1. Select the root collection in the collections menu. This will be the top collection in the list, and will have the same name as your Purview account.
+1. Select the root collection in the collections menu. This will be the top collection in the list, and will have the same name as your Azure Purview account.
1. Select the Role assignments tab.
-1. Assign the following roles to service principal created above to access various data planes in Purview.
+1. Assign the following roles to the service principal created above to access various data planes in Azure Purview.
1. 'Data Curator' role to access Catalog Data plane. 1. 'Data Source Administrator' role to access Scanning Data plane. 1. 'Collection Admin' role to access Account Data Plane. 1. 'Collection Admin' role to access Metadata policy Data Plane. > [!Note]
-> Only 'Collection Admin' can assign data plane roles in Purview [Access Control in Azure Purview](./catalog-permissions.md).
+> Only 'Collection Admin' can assign data plane roles in Azure Purview. For more information, see [Access Control in Azure Purview](./catalog-permissions.md).
## Get token You can send a POST request to the following URL to get an access token.
https://login.microsoftonline.com/{your-tenant-id}/oauth2/token
The following parameters need to be passed to the above URL. -- **client_id**: client ID of the application registered in Azure Active directory and is assigned to a data plane role for the Purview account.
+- **client_id**: The client ID of the application registered in Azure Active Directory and assigned to a data plane role for the Azure Purview account.
- **client_secret**: The client secret created for the above application. - **grant_type**: This should be 'client_credentials'. - **resource**: This should be 'https://purview.azure.net'.
Use the access token above to call the Data plane APIs.
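For illustration, here's a minimal Python sketch of the token request and a subsequent data plane call using the `requests` library. The placeholder values and the catalog endpoint path are assumptions; consult the Azure Purview REST API reference for the operation you need.

```python
# Minimal sketch: acquire a token with the client credentials grant, then
# call a data plane API. All <...> values are placeholders you must supply.
import requests

tenant_id = "<your-tenant-id>"
client_id = "<your-client-id>"
client_secret = "<your-client-secret>"

token_response = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
    data={
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        "resource": "https://purview.azure.net",
    },
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Example data plane call; the endpoint path is illustrative only.
account_name = "<your-purview-account>"
response = requests.get(
    f"https://{account_name}.purview.azure.com/catalog/api/atlas/v2/types/typedefs",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(response.status_code)
```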
> [!div class="nextstepaction"] > [Manage data sources](manage-data-sources.md)
-> [Purview Data Plane REST APIs](/rest/api/purview/)
+> [Azure Purview Data Plane REST APIs](/rest/api/purview/)
purview Use Purview Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/use-purview-studio.md
Last updated 09/27/2021
-# Use Purview Studio
+# Use Azure Purview Studio
This article gives an overview of some of the main features of Azure Purview. ## Prerequisites
-* An Active Purview account is already created in Azure portal and the user has permissions to access [Purview Studio](https://web.purview.azure.com/resource/).
+* An active Azure Purview account has already been created in the Azure portal, and the user has permissions to access [Azure Purview Studio](https://web.purview.azure.com/resource/).
-## Launch Purview account
+## Launch Azure Purview account
-* To launch your Purview account, go to Purview accounts in Azure portal, select the account you want to launch and launch the account.
+* To launch your Azure Purview account, go to **Azure Purview accounts** in the Azure portal, select the account you want to open, and then launch it.
- :::image type="content" source="./media/use-purview-studio/open-purview-studio.png" alt-text="Screenshot of Purview window in Azure portal, with Purview Studio button highlighted." border="true":::
+ :::image type="content" source="./media/use-purview-studio/open-purview-studio.png" alt-text="Screenshot of Azure Purview window in Azure portal, with Azure Purview Studio button highlighted." border="true":::
-* Another way to launch Purview account is to go to `https://web.purview.azure.com`, select **Azure Active Directory** and an account name to launch the account.
+* Another way to launch an Azure Purview account is to go to `https://web.purview.azure.com`, and then select **Azure Active Directory** and an account name to open the account.
## Home page
The following list summarizes the main features of **Home page**. Each number in
5. The left navigation bar helps you locate the main pages of the application. 6. The **Recently accessed** tab shows a list of recently accessed data assets. For information about accessing assets, see [Search the Data Catalog](how-to-search-catalog.md) and [Browse by asset type](how-to-browse-catalog.md). **My items** tab is a list of data assets owned by the logged-on user.
-7. **Links** contains links to region status, documentation, pricing, overview, and Purview status
+7. **Links** contains links to region status, documentation, pricing, overview, and Azure Purview status.
8. The top navigation bar contains release notes/updates, an option to change the Azure Purview account, notifications, help, and feedback sections. ## Knowledge center
-Knowledge center is where you can find all the videos and tutorials related to Purview.
+Knowledge center is where you can find all the videos and tutorials related to Azure Purview.
## Guided tours
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [HDInsight Domain Services Contributor](#hdinsight-domain-services-contributor) | Can Read, Create, Modify and Delete Domain Services related operations needed for HDInsight Enterprise Security Package | 8d8d5a11-05d3-4bda-a417-a08778121c7c | > | [Log Analytics Contributor](#log-analytics-contributor) | Log Analytics Contributor can read all monitoring data and edit monitoring settings. Editing monitoring settings includes adding the VM extension to VMs; reading storage account keys to be able to configure collection of logs from Azure Storage; adding solutions; and configuring Azure diagnostics on all Azure resources. | 92aaf0da-9dab-42b6-94a3-d43ce8d16293 | > | [Log Analytics Reader](#log-analytics-reader) | Log Analytics Reader can view and search all monitoring data as well as view monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources. | 73c42c96-874c-492b-b04d-ab87d138a893 |
-> | [Purview Data Curator (Legacy)](#purview-data-curator-legacy) | The Microsoft.Purview data curator is a legacy role that can create, read, modify and delete catalog data objects and establish relationships between objects. We have recently deprecated this role from Azure role-based access and introduced a new data curator inside Azure Purview data plane. See [Access control in Azure Purview - Roles](../purview/catalog-permissions.md#roles) | 8a3c2885-9b38-4fd2-9d99-91af537c1347 |
-> | [Purview Data Reader (Legacy)](#purview-data-reader-legacy) | The Microsoft.Purview data reader is a legacy role that can read catalog data objects. We have recently deprecated this role from Azure role-based access and introduced a new data reader inside Azure Purview data plane. See [Access control in Azure Purview - Roles](../purview/catalog-permissions.md#roles) | ff100721-1b9d-43d8-af52-42b69c1272db |
-> | [Purview Data Source Administrator (Legacy)](#purview-data-source-administrator-legacy) | The Microsoft.Purview data source administrator is a legacy role that can manage data sources and data scans. We have recently deprecated this role from Azure role-based access and introduced a new data source admin inside Azure Purview data plane. See [Access control in Azure Purview - Roles](../purview/catalog-permissions.md#roles) | 200bba9e-f0c8-430f-892b-6f0794863803 |
+> | [Azure Purview Data Curator (Legacy)](#azure-purview-data-curator-legacy) | The Microsoft.Purview data curator is a legacy role that can create, read, modify and delete catalog data objects and establish relationships between objects. We have recently deprecated this role from Azure role-based access and introduced a new data curator inside Azure Purview data plane. See [Access control in Azure Purview - Roles](../purview/catalog-permissions.md#roles) | 8a3c2885-9b38-4fd2-9d99-91af537c1347 |
+> | [Azure Purview Data Reader (Legacy)](#azure-purview-data-reader-legacy) | The Microsoft.Purview data reader is a legacy role that can read catalog data objects. We have recently deprecated this role from Azure role-based access and introduced a new data reader inside Azure Purview data plane. See [Access control in Azure Purview - Roles](../purview/catalog-permissions.md#roles) | ff100721-1b9d-43d8-af52-42b69c1272db |
+> | [Azure Purview Data Source Administrator (Legacy)](#azure-purview-data-source-administrator-legacy) | The Microsoft.Purview data source administrator is a legacy role that can manage data sources and data scans. We have recently deprecated this role from Azure role-based access and introduced a new data source admin inside Azure Purview data plane. See [Access control in Azure Purview - Roles](../purview/catalog-permissions.md#roles) | 200bba9e-f0c8-430f-892b-6f0794863803 |
> | [Schema Registry Contributor (Preview)](#schema-registry-contributor-preview) | Read, write, and delete Schema Registry groups and schemas. | 5dffeca3-4936-4216-b2bc-10343a5abb25 | > | [Schema Registry Reader (Preview)](#schema-registry-reader-preview) | Read and list Schema Registry groups and schemas. | 2c56ea50-c6b3-40a6-83c0-9d98858bc7d2 | > | **Blockchain** | | |
Log Analytics Reader can view and search all monitoring data as well as and view
} ```
-### Purview Data Curator (Legacy)
+### Azure Purview Data Curator (Legacy)
The Microsoft.Purview data curator is a legacy role that can create, read, modify and delete catalog data objects and establish relationships between objects. We have recently deprecated this role from Azure role-based access and introduced a new data curator inside Azure Purview data plane. See [Access control in Azure Purview - Roles](../purview/catalog-permissions.md#roles) > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/read | Read account resource for Microsoft Purview provider. |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/read | Read account resource for Microsoft Azure Purview provider. |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
The Microsoft.Purview data curator is a legacy role that can create, read, modif
"notDataActions": [] } ],
- "roleName": "Purview Data Curator (Legacy)",
+ "roleName": "Azure Purview Data Curator (Legacy)",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" } ```
-### Purview Data Reader (Legacy)
+### Azure Purview Data Reader (Legacy)
The Microsoft.Purview data reader is a legacy role that can read catalog data objects. We have recently deprecated this role from Azure role-based access and introduced a new data reader inside Azure Purview data plane. See [Access control in Azure Purview - Roles](../purview/catalog-permissions.md#roles) > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/read | Read account resource for Microsoft Purview provider. |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/read | Read account resource for Microsoft Azure Purview provider. |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
The Microsoft.Purview data reader is a legacy role that can read catalog data ob
"notDataActions": [] } ],
- "roleName": "Purview Data Reader (Legacy)",
+ "roleName": "Azure Purview Data Reader (Legacy)",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" } ```
-### Purview Data Source Administrator (Legacy)
+### Azure Purview Data Source Administrator (Legacy)
The Microsoft.Purview data source administrator is a legacy role that can manage data sources and data scans. We have recently deprecated this role from Azure role-based access and introduced a new data source admin inside Azure Purview data plane. See [Access control in Azure Purview - Roles](../purview/catalog-permissions.md#roles) > [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/read | Read account resource for Microsoft Purview provider. |
+> | [Microsoft.Purview](resource-provider-operations.md#microsoftpurview)/accounts/read | Read account resource for Microsoft Azure Purview provider. |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
The Microsoft.Purview data source administrator is a legacy role that can manage
"notDataActions": [] } ],
- "roleName": "Purview Data Source Administrator (Legacy)",
+ "roleName": "Azure Purview Data Source Administrator (Legacy)",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" }
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/resource-provider-operations.md
Azure service: [Azure Purview](../purview/index.yml)
> [!div class="mx-tableFixed"] > | Action | Description | > | | |
-> | Microsoft.Purview/register/action | Register the subscription for Microsoft Purview provider. |
-> | Microsoft.Purview/unregister/action | Unregister the subscription for Microsoft Purview provider. |
+> | Microsoft.Purview/register/action | Register the subscription for Microsoft Azure Purview provider. |
+> | Microsoft.Purview/unregister/action | Unregister the subscription for Microsoft Azure Purview provider. |
> | Microsoft.Purview/setDefaultAccount/action | Sets the default account for the scope. | > | Microsoft.Purview/removeDefaultAccount/action | Removes the default account for the scope. |
-> | Microsoft.Purview/accounts/read | Read account resource for Microsoft Purview provider. |
-> | Microsoft.Purview/accounts/write | Write account resource for Microsoft Purview provider. |
-> | Microsoft.Purview/accounts/delete | Delete account resource for Microsoft Purview provider. |
-> | Microsoft.Purview/accounts/listkeys/action | List keys on the account resource for Microsoft Purview provider. |
-> | Microsoft.Purview/accounts/addrootcollectionadmin/action | Add root collection admin to account resource for Microsoft Purview provider. |
-> | Microsoft.Purview/accounts/move/action | Move account resource for Microsoft Purview provider. |
+> | Microsoft.Purview/accounts/read | Read account resource for Microsoft Azure Purview provider. |
+> | Microsoft.Purview/accounts/write | Write account resource for Microsoft Azure Purview provider. |
+> | Microsoft.Purview/accounts/delete | Delete account resource for Microsoft Azure Purview provider. |
+> | Microsoft.Purview/accounts/listkeys/action | List keys on the account resource for Microsoft Azure Purview provider. |
+> | Microsoft.Purview/accounts/addrootcollectionadmin/action | Add root collection admin to account resource for Microsoft Azure Purview provider. |
+> | Microsoft.Purview/accounts/move/action | Move account resource for Microsoft Azure Purview provider. |
> | Microsoft.Purview/accounts/PrivateEndpointConnectionsApproval/action | Approve Private Endpoint Connection. |
-> | Microsoft.Purview/accounts/operationresults/read | Read the operation status on the account resource for Microsoft Purview provider. |
+> | Microsoft.Purview/accounts/operationresults/read | Read the operation status on the account resource for Microsoft Azure Purview provider. |
> | Microsoft.Purview/accounts/privateEndpointConnectionProxies/read | Read Account Private Endpoint Connection Proxy. | > | Microsoft.Purview/accounts/privateEndpointConnectionProxies/write | Write Account Private Endpoint Connection Proxy. | > | Microsoft.Purview/accounts/privateEndpointConnectionProxies/delete | Delete Account Private Endpoint Connection Proxy. |
Azure service: [Azure Purview](../purview/index.yml)
> | Microsoft.Purview/accounts/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource. | > | Microsoft.Purview/accounts/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for the catalog. | > | Microsoft.Purview/accounts/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for the catalog. |
-> | Microsoft.Purview/checknameavailability/read | Check if name of purview account resource is available for Microsoft Purview provider. |
+> | Microsoft.Purview/checknameavailability/read | Check if name of purview account resource is available for Microsoft Azure Purview provider. |
> | Microsoft.Purview/getDefaultAccount/read | Gets the default account for the scope. | > | Microsoft.Purview/locations/operationResults/read | Monitor async operations. |
-> | Microsoft.Purview/operations/read | Reads all available operations for Microsoft Purview provider. |
+> | Microsoft.Purview/operations/read | Reads all available operations for Microsoft Azure Purview provider. |
> | **DataAction** | **Description** | > | Microsoft.Purview/accounts/data/read | Read data objects. | > | Microsoft.Purview/accounts/data/write | Create, update and delete data objects. |
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-concept-intro.md
Title: AI enrichment concepts
-description: Content extraction, natural language processing (NLP) and image processing are used to create searchable content in Azure Cognitive Search indexes with both pre-defined cognitive skills and custom AI algorithms.
+description: Content extraction, natural language processing (NLP), and image processing are used to create searchable content in Azure Cognitive Search indexes. AI enrichment can use both pre-defined cognitive skills and custom AI algorithms.
Previously updated : 08/10/2021 Last updated : 01/14/2022 # AI enrichment in Azure Cognitive Search
-In Azure Cognitive Search, AI enrichment refers to built-in cognitive skills and custom skills that add analysis, transformations, and content generation during indexing. Enrichments create new information where none previously existed: extracting information from images, detecting sentiment, key phrases, and entities from text, to name a few. Enrichments also add structure to undifferentiated text. All of these processes result in making previously unsearchable content available to full text search scenarios. In many instances, enriched documents are useful for scenarios other than search, such as for knowledge mining.
+In Azure Cognitive Search, AI enrichment refers to a pipeline process that adds machine learning to [indexer-based indexing](search-indexer-overview.md). Steps in the pipeline create new information where none previously existed: extracting information from images, detecting sentiment or key phrases from chunks of text, and recognizing entities, to name a few. All of these processes result in making previously unsearchable content available to full text search and knowledge mining scenarios.
-Enrichment is defined by a [**skillset**](cognitive-search-working-with-skillsets.md) that's attached to an [**indexer**](search-indexer-overview.md). The indexer will extract and set up the content, while the skillset identifies, analyzes, and creates new information and structures from images, blobs, and other unstructured data sources. The output of an enrichment pipeline is either a [**search index**](search-what-is-an-index.md) or a [**knowledge store**](knowledge-store-concept-intro.md).
+Azure Blob Storage is the most commonly used input, but any indexer-supported data source can provide the initial content. A [**skillset**](cognitive-search-working-with-skillsets.md), attached to an indexer, adds the AI processing. The indexer extracts content and sets up the pipeline, while the skillset identifies, analyzes, and creates new information out of blobs, images, and raw text. Output is a [**search index**](search-what-is-an-index.md) or optional [**knowledge store**](knowledge-store-concept-intro.md).
![Enrichment pipeline diagram](./media/cognitive-search-intro/cogsearch-architecture.png "enrichment pipeline overview")
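As a rough sketch of how those pieces fit together, the following Python example uses the `azure-search-documents` SDK to attach a skillset to an indexer. The resource names are placeholders, and the data source, index, and skillset are assumed to already exist.

```python
# Hedged sketch: wire an existing data source, index, and skillset into an
# indexer. All names and the endpoint/key values are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import SearchIndexer

client = SearchIndexerClient(
    "https://<service>.search.windows.net", AzureKeyCredential("<admin-key>")
)

indexer = SearchIndexer(
    name="blob-enrichment-indexer",       # hypothetical indexer name
    data_source_name="blob-datasource",   # existing blob data source
    target_index_name="enriched-index",   # the search index to populate
    skillset_name="enrichment-skillset",  # the skillset that adds AI steps
)

client.create_or_update_indexer(indexer)  # a new indexer runs immediately by default
```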
-A skillset can contain built-in skills from Cognitive Search or embed external processing that you provide in a [*custom skill*](cognitive-search-create-custom-skill-example.md). Examples of a custom skill might be a custom entity module or document classifier targeting a specific domain such as finance, scientific publications, or medicine.
+Skillsets are composed of built-in skills from Cognitive Search or [*custom skills*](cognitive-search-create-custom-skill-example.md) for external processing that you provide. Custom skills might sound complex but can be simple and straightforward in terms of implementation. If you have existing packages that provide pattern matching or document classification models, the content you extract during indexing could be passed to these models for processing.
Built-in skills fall into these categories:
-+ **Natural language processing** skills include [entity recognition](cognitive-search-skill-entity-recognition-v3.md), [language detection](cognitive-search-skill-language-detection.md), [key phrase extraction](cognitive-search-skill-keyphrases.md), text manipulation, [sentiment detection (including opinion mining)](cognitive-search-skill-sentiment-v3.md), and [PII detection](cognitive-search-skill-pii-detection.md). With these skills, unstructured text is mapped as searchable and filterable fields in an index.
++ **Machine translation** is provided by the [text translation](cognitive-search-skill-text-translation.md) skill, often paired with [language detection](cognitive-search-skill-language-detection.md) for multi-language solutions.
-+ **Image processing** skills include [Optical Character Recognition (OCR)](cognitive-search-skill-ocr.md) and identification of [visual features](cognitive-search-skill-image-analysis.md), such as facial detection, image interpretation, image recognition (famous people and landmarks) or attributes like image orientation. These skills create text representations of image content, making it searchable using the query capabilities of Azure Cognitive Search.
++ **Image processing** skills include [Optical Character Recognition (OCR)](cognitive-search-skill-ocr.md) and identification of [visual features](cognitive-search-skill-image-analysis.md), such as facial detection, image interpretation, image recognition (famous people and landmarks), or attributes like image orientation. These skills create text representations of image content for full text search in Azure Cognitive Search.
-Built-in skills in Azure Cognitive Search are based on pre-trained machine learning models in Cognitive Services APIs: [Computer Vision](../cognitive-services/computer-vision/index.yml) and [Language Service](../cognitive-services/language-service/overview.md). You can attach a Cognitive Services resource if you want to leverage these resources during content processing.
++ **Natural language processing** skills include [entity recognition](cognitive-search-skill-entity-recognition-v3.md), [language detection](cognitive-search-skill-language-detection.md), [key phrase extraction](cognitive-search-skill-keyphrases.md), text manipulation, [sentiment detection (including opinion mining)](cognitive-search-skill-sentiment-v3.md), and [personally identifiable information (PII) detection](cognitive-search-skill-pii-detection.md). With these skills, unstructured text is mapped as searchable and filterable fields in an index.
+Built-in skills are based on pre-trained machine learning models in Cognitive Services APIs: [Computer Vision](../cognitive-services/computer-vision/index.yml) and [Language Service](../cognitive-services/language-service/overview.md). You should [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md) if you want these resources for larger workloads.
Natural language and image processing are applied during the data ingestion phase, with results becoming part of a document's composition in a searchable index in Azure Cognitive Search. Data is sourced as an Azure data set and then pushed through an indexing pipeline using whichever [built-in skills](cognitive-search-predefined-skills.md) you need.
AI enrichment is available in regions where Azure Cognitive Services is also ava
+ Australia Southeast + China North 2
-+ Norway East
+ Germany West Central If your search service is located in one of these regions, you will not be able to create and use skillsets, but all other search service functionality is available and fully supported. ## When to use AI enrichment
-You should consider enrichment if your raw content is unstructured text, image content, or content that needs language detection and translation. Applying AI through the built-in cognitive skills can unlock this content, increasing its value and utility in your search and data science apps.
+You should consider enrichment if your raw content is unstructured text, image content, or content that needs language detection and translation. Applying AI through the built-in cognitive skills can unlock this content for full text search and data science applications.
Additionally, you might consider adding a custom skill if you have open-source, third-party, or first-party code that you'd like to integrate into the pipeline. Classification models that identify salient characteristics of various document types fall into this category, but any package that adds value to your content could be used.
Post-indexing, you can access content via search requests through all [query typ
### Step 1: Connection and document cracking phase
-Indexers connect to external sources using information provided in an indexer data source. When the indexer connects to the resource, it will ["crack documents"](search-indexer-overview.md#document-cracking) to extract text and images. Image content can be routed to skills that perform image processing, while text content is queued for text processing.
+Indexers connect to external sources using information provided in an indexer data source. When the indexer connects to the resource, it will ["crack documents"](search-indexer-overview.md#document-cracking) to extract text and images. Image content can be routed to skills that perform image processing, while text content is queued for text processing.
![Document cracking phase](./media/cognitive-search-intro/document-cracking-phase-blowup.png "document cracking")
Finally, an indexer can [**cache enriched documents**](cognitive-search-incremen
Indexes and knowledge stores are fully independent of each other. While you must attach an index to satisfy indexer requirements, if your sole objective is a knowledge store, you can ignore the index after it's populated. Avoid deleting it though. If you want to rerun the indexer and skillset, you'll need the index in order for the indexer to run.
-## Using enriched content
+## Consume enriched content
+
+The output of AI enrichment is either a [fully text-searchable index](search-what-is-an-index.md) on Azure Cognitive Search, or a [knowledge store](knowledge-store-concept-intro.md) in Azure Storage.
+
+### Accessing content in a search index
+
+[**Querying the index**](search-query-overview.md) is how developers and users access the enriched content generated by the pipeline. The index is like any other you might create for Azure Cognitive Search: you can supplement text analysis with custom analyzers, invoke fuzzy search queries, add filters, or experiment with scoring profiles to tune search relevance.
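A brief, hedged example of such a query with the `azure-search-documents` Python SDK follows; the endpoint, key, index name, and field names are hypothetical.

```python
# Hedged sketch: query an enriched index. All <...> values and field names
# are placeholders to adapt to your own index schema.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<service>.search.windows.net",
    index_name="<enriched-index>",
    credential=AzureKeyCredential("<query-key>"),
)

# Full text search, filtered on a collection field produced during enrichment.
results = client.search(
    search_text="contoso invoice",
    filter="organizations/any(org: org eq 'Contoso')",
    select=["metadata_storage_name", "keyPhrases"],
)
for doc in results:
    print(doc["metadata_storage_name"])
```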
+
+### Accessing content in a knowledge store
+
+In Azure Storage, a [knowledge store](knowledge-store-concept-intro.md) can take several forms: a blob container of JSON documents, a blob container of image objects, or tables in Table storage. You can use [Storage Browser](knowledge-store-view-storage-explorer.md), [Power BI](knowledge-store-connect-power-bi.md), or any app that connects to Azure Storage, as shown in the sketch after this section.
+++ A blob container captures enriched documents in their entirety, which is useful if you want to feed into other processes.
-When processing is finished, you have a [search index](search-what-is-an-index.md) consisting of enriched documents, fully text-searchable in Azure Cognitive Search. [**Querying the index**](search-query-overview.md) is how developers and users access the enriched content generated by the pipeline. The index is like any other you might create for Azure Cognitive Search: you can supplement text analysis with custom analyzers, invoke fuzzy search queries, add filters, or experiment with scoring profiles to tune search relevance.
++ In contrast, Table storage can accommodate physical projections of enriched documents. You can create slices or layers of enriched documents that include or exclude specific parts. For analysis in Power BI, the tables in Azure Table Storage become the data source for further visualization and exploration.
-You might also have a [knowledge store](knowledge-store-concept-intro.md). The knowledge store contains data that can be consumed in knowledge mining scenarios like analytics or machine learning. You can use [Storage Browser](knowledge-store-view-storage-explorer.md), [Power BI](knowledge-store-connect-power-bi.md), or any app that connects to Azure Storage.
+An enriched document at the output of the pipeline differs from its original source input by the presence of additional fields containing new information that was extracted or generated during enrichment. As such, you can work with a combination of original and created content, regardless of which output structure you use.
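As a small illustration of consuming a table projection, here's a hedged sketch using the `azure-data-tables` library; the connection string, table name, and property names are placeholders.

```python
# Hedged sketch: read rows from a knowledge store table projection.
# The connection string, table name, and property name are placeholders.
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string("<storage-connection-string>")
table = service.get_table_client("keyPhrases")  # hypothetical projection table

# Each entity is one row of the projected enrichment output.
for entity in table.list_entities():
    print(entity["PartitionKey"], entity.get("keyPhrase"))
```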
## Checklist: A typical workflow
search Cognitive Search Custom Skill Interface https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-custom-skill-interface.md
Previously updated : 12/30/2021 Last updated : 01/14/2022 # How to add a custom skill to an Azure Cognitive Search enrichment pipeline An [enrichment pipeline](cognitive-search-concept-intro.md) can include both [built-in skills](cognitive-search-predefined-skills.md) and [custom skills](cognitive-search-custom-skill-web-api.md) that you personally create and publish. Your custom code executes externally to the search service (for example, as an Azure function), but accepts inputs and sends outputs to the skillset just like any other skill.
+Custom skills might sound complex but can be simple and straightforward in terms of implementation. If you have existing packages that provide pattern matching or classification models, the content you extract from blobs could be passed to these models for processing. Since AI enrichment is Azure-based, your model should be on Azure also. Some common hosting methodologies include using [Azure Functions](cognitive-search-create-custom-skill-example.md) or [Containers](https://github.com/Microsoft/SkillsExtractorCognitiveSearch).
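As a minimal sketch of that hosting pattern, a custom skill written as an Azure Functions Python HTTP trigger might look like the following. The `text` and `uppercaseText` field names are hypothetical; your skillset's input and output mappings define the real ones.

```python
# Hedged sketch of the custom skill Web API contract: accept a batch of
# records under "values", process each one, and echo recordId back.
import json
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    body = req.get_json()
    results = {"values": []}
    for record in body.get("values", []):
        text = record["data"].get("text", "")  # hypothetical input field
        results["values"].append({
            "recordId": record["recordId"],           # must match the input record
            "data": {"uppercaseText": text.upper()},  # hypothetical output field
            "errors": [],
            "warnings": [],
        })
    return func.HttpResponse(json.dumps(results), mimetype="application/json")
```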
+ If you are building a custom skill, this article describes the interface you'll use to integrate the skill into the pipeline. The primary requirement is the ability to accept inputs and emit outputs in ways that are consumable within the [skillset](cognitive-search-defining-skillset.md) as a whole. As such, the focus of this article is on the input and output formats that the enrichment pipeline requires. ## Benefits of custom skills
search Search Blob Ai Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-blob-ai-integration.md
- Title: Use AI to enrich blob content-
-description: Learn about the natural language and image analysis capabilities in Azure Cognitive Search, and how those processes apply to content stored in Azure blobs.
------ Previously updated : 02/02/2021--
-# Use AI to process and analyze Blob content in Azure Cognitive Search
-
-Content in Azure Blob Storage that's composed of images or long undifferentiated text can undergo deep learning analysis to reveal and extract valuable information useful for downstream applications. By using [AI enrichment](cognitive-search-concept-intro.md), you can:
-
-+ Extract text from images using optical character recognition (OCR)
-+ Produce a scene description or tags from a photo
-+ Detect language and translate text into different languages
-+ Infer structure through entity recognition by finding references to people, dates, places, or organizations
-
-While you might need just one of these AI capabilities, it's common to combine multiple of them into the same pipeline (for example, extracting text from a scanned image and then finding all the dates and places referenced in it). It's also common to include custom AI or machine learning processing in the form of leading-edge external packages or in-house models tailored to your data and your requirements.
-
-Although you can apply AI enrichment to any data source supported by a search indexer, blobs are the most frequently used structures in an enrichment pipeline. Results are pulled into a search index for full text search, or rerouted back to Azure Storage to power new application experiences that include exploring data for discovery or analytics scenarios.
-
-In this article, we view AI enrichment through a wide lens so that you can quickly grasp the entire process, from transforming raw data in blobs, to queryable information in either a search index or a knowledge store.
-
-## What it means to "enrich" blob data with AI
-
-*AI enrichment* is part of the indexing architecture of Azure Cognitive Search that integrates machine learning models from Microsoft or custom learning models that you provide. It helps you implement end-to-end scenarios where you need to process blobs (both existing ones and new ones as they come in or are updated), crack open all file formats to extract images and text, extract the desired information using various AI capabilities, and index them in a search index for fast search, retrieval and exploration.
-
-Inputs are your blobs, in a single container, in Azure Blob Storage. Blobs can be almost any kind of text or image data.
-
-Output is always a search index, used for fast text search, retrieval, and exploration in client applications. Additionally, output can also be a [*knowledge store*](knowledge-store-concept-intro.md) that projects enriched documents into Azure blobs or Azure tables for downstream analysis in tools like Power BI or in data science workloads.
-
-In between is the pipeline architecture itself. The pipeline is based on the [*indexers*](search-indexer-overview.md), to which you can assign a [*skillset*](cognitive-search-working-with-skillsets.md), which is composed of one or more *skills* providing the AI. The purpose of the pipeline is to produce *enriched documents* that enter the pipeline as raw content but pick up additional structure, context, and information while moving through the pipeline. Enriched documents are consumed during indexing to create inverted indexes and other structures used in full text search or exploration and analytics.
-
-## Required resources
-
-In addition to Azure Blob Storage and Azure Cognitive Search, you need a third service or mechanism that provides the AI:
-
-+ For built-in AI, Cognitive Search integrates with Azure Cognitive Services vision and natural language processing APIs. You can [attach a Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to add Optical Character Recognition (OCR), image analysis, or natural language processing (language detection, text translation, entity recognition, key phrase extraction).
-
-+ For custom AI using Azure resources, you can define a custom skill that wraps the external function or model you want to use. [Custom skills](cognitive-search-custom-skill-interface.md) can use code provided by Azure Functions, Azure Machine Learning, Azure Form Recognizer, or another resource that is reachable over HTTPS.
-
-+ For custom non-Azure AI, your model or module needs to be accessible to an indexer over HTTP.
-
-If you don't have all of the services readily available, start directly in your Storage account portal page. In the left navigation page, under **Blob service** click **Add Azure Cognitive Search** to create a new service or select an existing one.
-
-Once you add Azure Cognitive Search to your storage account, you can follow the standard process to enrich data in any Azure data source. We recommend the **Import data** wizard in Azure Cognitive Search for an easy initial introduction to AI enrichment. You can attach a Cognitive Services resource during the workflow. This quickstart walks you through the steps: [Create an AI enrichment pipeline in the portal](cognitive-search-quickstart-blob.md).
-
-The following sections take a closer look at components and workflow.
-
-## Use a Blob indexer
-
-AI enrichment is an add-on to an indexing pipeline, and in Azure Cognitive Search, those pipelines are built on top of an *indexer*. An indexer is a data-source-aware subservice equipped with internal logic for sampling data, reading metadata data, retrieving data, and serializing data from native formats into JSON documents for subsequent import. Indexers are often used by themselves for import, separate from AI, but if you want to build an AI enrichment pipeline, you will need an indexer and a skillset to go with it. This section highlights the indexer; the next section focuses on skillsets.
-
-Blobs in Azure Storage are indexed using the [blob indexer](search-howto-indexing-azure-blob-storage.md). You can invoke this indexer by using the **Import data** wizard, a REST API, or an SDK. A blob indexer is invoked when the data source used by the indexer is an Azure Blob container. You can index a subset of your blobs by creating a virtual directory, which you can then pass as a parameter, or by filtering on a file type extension.
-
-An indexer ["cracks a document"](search-indexer-overview.md#document-cracking), opening a blob to inspect content. After connecting to the data source, it's the first step in the pipeline. For blob data, this is where PDF, office docs, image, and other content types are detected. Document cracking with text extraction is no charge. Document cracking with image extraction is charged at rates you can find on the [pricing page](https://azure.microsoft.com/pricing/details/search/).
-
-Although all documents will be cracked, enrichment only occurs if you explicitly provide the skills to do so. For example, if your pipeline consists exclusively of image analysis, text in your container or documents is ignored.
-
-The blob indexer comes with configuration parameters and supports change tracking if the underlying data provides sufficient information. You can learn more in [How to configure a blob indexer](search-howto-indexing-azure-blob-storage.md).
-
-## Add AI components
-
-AI enrichment refers to modules that look for patterns or characteristics, and then performs an operation accordingly. Facial recognition in photos, text descriptions of photos, detecting key phrases in a document, and OCR (or recognizing printed or handwritten text in binary files) are illustrative examples.
-
-In Azure Cognitive Search, *skills* are the individual components of AI processing that you can use standalone, or in combination with other skills.
-
-+ Built-in skills are backed by Cognitive Services, with image analysis based on Computer Vision, and natural language processing based on Azure Cognitive Services for Language. For the complete list, see [Built-in skills for content enrichment](cognitive-search-predefined-skills.md).
-
-+ Custom skills are custom code, wrapped in an [interface definition](cognitive-search-custom-skill-interface.md) that allows for integration into the pipeline. In customer solutions, it's common practice to use both, with custom skills providing open-source, third-party, or first-party AI modules.
-
-A *skillset* is the collection of skills used in a pipeline, and it's invoked after the document cracking phase makes content available. An indexer can consume exactly one skillset, but that skillset exists independently of an indexer so that you can reuse it in other scenarios.
-
-Custom skills might sound complex but can be simple and straightforward in terms of implementation. If you have existing packages that provide pattern matching or classification models, the content you extract from blobs could be passed to these models for processing. Since AI enrichment is Azure-based, your model should be on Azure also. Some common hosting methodologies include using [Azure Functions](cognitive-search-create-custom-skill-example.md) or [Containers](https://github.com/Microsoft/SkillsExtractorCognitiveSearch).
-
-Built-in skills backed by Cognitive Services require an [attached Cognitive Services](cognitive-search-attach-cognitive-services.md) all-in-one subscription key that gives you access to the resource. An all-in-one key gives you OCR and image analysis, language detection, text translation, and natural language processing. Other built-in skills are features of Azure Cognitive Search and require no additional service or key. Shaper, splitter, and merger are examples of helper skills that are sometimes necessary when designing the pipeline.
-
-If you use only custom skills and built-in utility skills, there is no dependency or costs related to Cognitive Services.
-
-## Consume AI-enriched output in downstream solutions
-
-The output of AI enrichment is either a search index on Azure Cognitive Search, or a [knowledge store](knowledge-store-concept-intro.md) in Azure Storage.
-
-In Azure Cognitive Search, a search index is used for interactive exploration using free text and filtered queries in a client app. Enriched documents created through AI are formatted in JSON and indexed in the same way all documents are indexed in Azure Cognitive Search, leveraging all of the benefits an indexer provides. For example, during indexing, the blob indexer refers to configuration parameters and settings to utilize any field mappings or change detection logic. Such settings are fully available to regular indexing and AI enriched workloads. Post-indexing, when content is stored on Azure Cognitive Search, you can build rich queries and filter expressions to understand your content.
-
-In Azure Storage, a knowledge store has two manifestations: a blob container, or tables in Table storage.
-
-+ A blob container captures enriched documents in their entirety, which is useful if you want to feed into other processes.
-
-+ In contrast, Table storage can accommodate physical projections of enriched documents. You can create slices or layers of enriched documents that include or exclude specific parts. For analysis in Power BI, the tables in Azure Table Storage become the data source for further visualization and exploration.
-
-An enriched document at the end of the pipeline differs from its original input version by the presence of additional fields containing new information that was extracted or generated during enrichment. As such, you can work with a combination of original and created content, regardless of which output structure you use.
-
-## Next steps
-
-There's a lot more you can do with AI enrichment to get the most out of your data in Azure Storage, including combining Cognitive Services in different ways, and authoring custom skills for cases where there's no existing Cognitive Service for the scenario. You can learn more by following the links below.
-
-+ [Upload, download, and list blobs with the Azure portal (Azure Blob Storage)](../storage/blobs/storage-quickstart-blobs-portal.md)
-+ [Set up a blob indexer (Azure Cognitive Search)](search-howto-indexing-azure-blob-storage.md)
-+ [AI enrichment overview (Azure Cognitive Search)](cognitive-search-concept-intro.md)
-+ [Create a skillset (Azure Cognitive Search)](cognitive-search-defining-skillset.md)
-+ [Map nodes in an annotation tree (Azure Cognitive Search)](cognitive-search-output-field-mapping.md)
search Search Blob Metadata Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-blob-metadata-properties.md
Title: Content metadata properties
-description: Metadata properties of documents can provide content to fields in a search index, or information that informs indexing behavior at run time. This article lists metadata properties supported in Azure Cognitive Search.
-
+description: Metadata properties can provide content to fields in a search index. This article lists metadata properties supported in Azure Cognitive Search.
++ -+ Previously updated : 02/22/2021 Last updated : 01/15/2022 # Content metadata properties used in Azure Cognitive Search
-SharePoint Online and Azure Blob Storage can contain various content, and many of those content types have metadata properties that can be useful to index. Just as you can create search fields for standard blob properties like **`metadata_storage_name`**, you can create fields for metadata properties that are specific to a document format.
+Several of the indexer-supported data sources, including Azure Blob Storage, Azure Data Lake Storage Gen2, and SharePoint Online, contain standalone files or embedded objects of various content types. Many of those content types have metadata properties that can be useful to index. Just as you can create search fields for standard blob properties like **`metadata_storage_name`**, you can create fields in a search index for metadata properties that are specific to a document format.
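For illustration, here's a hedged sketch using the `azure-search-documents` Python SDK that declares index fields for a few of these metadata properties. The service endpoint, key, index name, and the particular fields chosen are assumptions to adapt to your own data.

```python
# Hedged sketch: define index fields for standard and format-specific
# metadata properties. All <...> values and field choices are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchIndex, SimpleField, SearchableField, SearchFieldDataType,
)

index = SearchIndex(
    name="blob-index",  # hypothetical index name
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchableField(name="content", type=SearchFieldDataType.String),
        # Standard blob property
        SimpleField(name="metadata_storage_name",
                    type=SearchFieldDataType.String, filterable=True),
        # Format-specific metadata properties (see the table below)
        SimpleField(name="metadata_author",
                    type=SearchFieldDataType.String, filterable=True),
        SimpleField(name="metadata_creation_date",  # shown as a string here;
                    type=SearchFieldDataType.String),  # adjust per your mappings
    ],
)

client = SearchIndexClient("https://<service>.search.windows.net",
                           AzureKeyCredential("<admin-key>"))
client.create_or_update_index(index)
```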
## Supported document formats
The following table summarizes processing done for each document format, and des
| Document format / content type | Extracted metadata | Processing details | | | | |
-| HTML (text/html or application/xhtml+xml) |`metadata_content_encoding`<br/>`metadata_content_type`<br/>`metadata_language`<br/>`metadata_description`<br/>`metadata_keywords`<br/>`metadata_title` |Strip HTML markup and extract text |
-| PDF (application/pdf) |`metadata_content_type`<br/>`metadata_language`<br/>`metadata_author`<br/>`metadata_title`<br/>`metadata_creation_date` |Extract text, including embedded documents (excluding images) |
-| DOCX (application/vnd.openxmlformats-officedocument.wordprocessingml.document) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
+| CSV (text/csv) |`metadata_content_type`<br/>`metadata_content_encoding`<br/> | Extract text<br/>NOTE: If you need to extract multiple document fields from a CSV blob, see [Indexing CSV blobs](search-howto-index-csv-blobs.md) for details |
| DOC (application/msword) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents | | DOCM (application/vnd.ms-word.document.macroenabled.12) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
-| WORD XML (application/vnd.ms-word2006ml) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Strip XML markup and extract text |
+| DOCX (application/vnd.openxmlformats-officedocument.wordprocessingml.document) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
+| EML (message/rfc822) |`metadata_content_type`<br/>`metadata_message_from`<br/>`metadata_message_to`<br/>`metadata_message_cc`<br/>`metadata_creation_date`<br/>`metadata_subject` |Extract text, including attachments |
+| EPUB (application/epub+zip) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_title`<br/>`metadata_description`<br/>`metadata_language`<br/>`metadata_keywords`<br/>`metadata_identifier`<br/>`metadata_publisher` |Extract text from all documents in the archive |
+| GZ (application/gzip) |`metadata_content_type` |Extract text from all documents in the archive |
+| HTML (text/html or application/xhtml+xml) |`metadata_content_encoding`<br/>`metadata_content_type`<br/>`metadata_language`<br/>`metadata_description`<br/>`metadata_keywords`<br/>`metadata_title` |Strip HTML markup and extract text |
+| JSON (application/json) |`metadata_content_type`<br/>`metadata_content_encoding` |Extract text<br/>NOTE: If you need to extract multiple document fields from a JSON blob, see [Indexing JSON blobs](search-howto-index-json-blobs.md) for details |
+| KML (application/vnd.google-earth.kml+xml) |`metadata_content_type`<br/>`metadata_content_encoding`<br/>`metadata_language`<br/> |Strip XML markup and extract text |
+| MSG (application/vnd.ms-outlook) |`metadata_content_type`<br/>`metadata_message_from`<br/>`metadata_message_from_email`<br/>`metadata_message_to`<br/>`metadata_message_to_email`<br/>`metadata_message_cc`<br/>`metadata_message_cc_email`<br/>`metadata_message_bcc`<br/>`metadata_message_bcc_email`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_subject` |Extract text, including text extracted from attachments. `metadata_message_to_email`, `metadata_message_cc_email` and `metadata_message_bcc_email` are string collections, the rest of the fields are strings.|
+| ODP (application/vnd.oasis.opendocument.presentation) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_title` |Extract text, including embedded documents |
+| ODS (application/vnd.oasis.opendocument.spreadsheet) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
+| ODT (application/vnd.oasis.opendocument.text) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
+| PDF (application/pdf) |`metadata_content_type`<br/>`metadata_language`<br/>`metadata_author`<br/>`metadata_title`<br/>`metadata_creation_date` |Extract text, including embedded documents (excluding images) |
+| Plain text (text/plain) |`metadata_content_type`<br/>`metadata_content_encoding`<br/>`metadata_language`<br/> | Extract text|
+| PPT (application/vnd.ms-powerpoint) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
+| PPTM (application/vnd.ms-powerpoint.presentation.macroenabled.12) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
+| PPTX (application/vnd.openxmlformats-officedocument.presentationml.presentation) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
+| RTF (application/rtf) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count`<br/> | Extract text|
| WORD 2003 XML (application/vnd.ms-wordml) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date` |Strip XML markup and extract text |
-| XLSX (application/vnd.openxmlformats-officedocument.spreadsheetml.sheet) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
+| WORD XML (application/vnd.ms-word2006ml) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Strip XML markup and extract text |
| XLS (application/vnd.ms-excel) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents | | XLSM (application/vnd.ms-excel.sheet.macroenabled.12) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
-| PPTX (application/vnd.openxmlformats-officedocument.presentationml.presentation) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
-| PPT (application/vnd.ms-powerpoint) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
-| PPTM (application/vnd.ms-powerpoint.presentation.macroenabled.12) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_slide_count`<br/>`metadata_title` |Extract text, including embedded documents |
-| MSG (application/vnd.ms-outlook) |`metadata_content_type`<br/>`metadata_message_from`<br/>`metadata_message_from_email`<br/>`metadata_message_to`<br/>`metadata_message_to_email`<br/>`metadata_message_cc`<br/>`metadata_message_cc_email`<br/>`metadata_message_bcc`<br/>`metadata_message_bcc_email`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_subject` |Extract text, including text extracted from attachments. `metadata_message_to_email`, `metadata_message_cc_email` and `metadata_message_bcc_email` are string collections, the rest of the fields are strings.|
-| ODT (application/vnd.oasis.opendocument.text) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count` |Extract text, including embedded documents |
-| ODS (application/vnd.oasis.opendocument.spreadsheet) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
-| ODP (application/vnd.oasis.opendocument.presentation) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_title` |Extract text, including embedded documents |
-| ZIP (application/zip) |`metadata_content_type` |Extract text from all documents in the archive |
-| GZ (application/gzip) |`metadata_content_type` |Extract text from all documents in the archive |
-| EPUB (application/epub+zip) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_title`<br/>`metadata_description`<br/>`metadata_language`<br/>`metadata_keywords`<br/>`metadata_identifier`<br/>`metadata_publisher` |Extract text from all documents in the archive |
+| XLSX (application/vnd.openxmlformats-officedocument.spreadsheetml.sheet) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_creation_date`<br/>`metadata_last_modified` |Extract text, including embedded documents |
| XML (application/xml) |`metadata_content_type`<br/>`metadata_content_encoding`<br/>`metadata_language`<br/> |Strip XML markup and extract text |
-| KML (application/vnd.google-earth.kml+xml) |`metadata_content_type`<br/>`metadata_content_encoding`<br/>`metadata_language`<br/> |Strip XML markup and extract text |
-| JSON (application/json) |`metadata_content_type`<br/>`metadata_content_encoding` |Extract text<br/>NOTE: If you need to extract multiple document fields from a JSON blob, see [Indexing JSON blobs](search-howto-index-json-blobs.md) for details |
-| EML (message/rfc822) |`metadata_content_type`<br/>`metadata_message_from`<br/>`metadata_message_to`<br/>`metadata_message_cc`<br/>`metadata_creation_date`<br/>`metadata_subject` |Extract text, including attachments |
-| RTF (application/rtf) |`metadata_content_type`<br/>`metadata_author`<br/>`metadata_character_count`<br/>`metadata_creation_date`<br/>`metadata_last_modified`<br/>`metadata_page_count`<br/>`metadata_word_count`<br/> | Extract text|
-| Plain text (text/plain) |`metadata_content_type`<br/>`metadata_content_encoding`<br/>`metadata_language`<br/> | Extract text|
-| CSV (text/csv) |`metadata_content_type`<br/>`metadata_content_encoding`<br/> | Extract text<br/>NOTE: If you need to extract multiple document fields from a CSV blob, see [Indexing CSV blobs](search-howto-index-csv-blobs.md) for details |
+| ZIP (application/zip) |`metadata_content_type` |Extract text from all documents in the archive |
## See also

* [Indexers in Azure Cognitive Search](search-indexer-overview.md)
-* [Understand blobs using AI](search-blob-ai-integration.md)
+* [AI enrichment overview](cognitive-search-concept-intro.md)
* [Blob indexing overview](search-blob-storage-integration.md)
* [SharePoint Online indexing](search-howto-index-sharepoint-online.md)
search Search Blob Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-blob-storage-integration.md
Previously updated : 05/14/2021 Last updated : 01/14/2022
# Search over Azure Blob Storage content
-Searching across the variety of content types stored in Azure Blob Storage can be a difficult problem to solve. In this article, review the basic workflow for extracting content and metadata from blobs and sending it to a search index in Azure Cognitive Search. The resulting index can be queried using full text search.
+Searching across the variety of content types stored in Azure Blob Storage can be a difficult problem to solve, but [Azure Cognitive Search](search-what-is-azure-search.md) provides deep integration at the content layer, extracting and inferring textual information, which can then be queried in a search index.
+
+In this article, review the basic workflow for extracting content and metadata from blobs and sending it to a [search index](search-what-is-an-index.md) in Azure Cognitive Search. The resulting index can be queried using full text search. Optionally, you can send processed blob content to a [knowledge store](knowledge-store-concept-intro.md) for non-search scenarios.
> [!NOTE]
-> Already familiar with the workflow and composition? [How to configure a blob indexer](search-howto-indexing-azure-blob-storage.md) is your next step.
+> Already familiar with the workflow and composition? [Configure a blob indexer](search-howto-indexing-azure-blob-storage.md) is your next step.
## What it means to add full text search to blob data
-Azure Cognitive Search is a search service that supports indexing and query workloads over user-defined indexes that contain your remote searchable content hosted in the cloud. Co-locating your searchable content with the query engine is necessary for performance, returning results at a speed users have come to expect from search queries.
+Azure Cognitive Search is a standalone search service that supports indexing and query workloads over user-defined indexes that contain your remote searchable content hosted in the cloud. Co-locating your searchable content with the query engine is necessary for performance, returning results at a speed users have come to expect from search queries.
Cognitive Search integrates with Azure Blob Storage at the indexing layer, importing your blob content as search documents that are indexed into *inverted indexes* and other query structures that support free-form text queries and filter expressions. Because your blob content is indexed into a search index, you can use the full range of query features in Azure Cognitive Search to find information in your blob content.
-Inputs are your blobs, in a single container, in Azure Blob Storage. Blobs can be almost any kind of text data. If your blobs contain images, you can add [AI enrichment to blob indexing](search-blob-ai-integration.md) to create and extract text from images.
+Inputs are your blobs, in a single container, in Azure Blob Storage. Blobs can be almost any kind of text data. If your blobs contain images, you can add [AI enrichment](cognitive-search-concept-intro.md) to create and extract text from images.
Output is always an Azure Cognitive Search index, used for fast text search, retrieval, and exploration in client applications. In between is the indexing pipeline architecture itself. The pipeline is based on the *indexer* feature, discussed further on in this article.
You need both Azure Cognitive Search and Azure Blob Storage. Within blob storage
You can start directly in your Storage account portal page. In the left navigation pane, under **Blob service**, select **Add Azure Cognitive Search** to create a new service or select an existing one.
-Once you add Azure Cognitive Search to your storage account, you can follow the standard process to index blob data. We recommend the **Import data** wizard in Azure Cognitive Search for an easy initial introduction, or call the REST APIs using a tool like Postman. This tutorial walks you through the steps of calling the REST API in Postman: [Index and search semi-structured data (JSON blobs) in Azure Cognitive Search](search-semi-structured-data.md).
+Once you add Azure Cognitive Search to your storage account, you can follow the standard process to index blob data. We recommend the **Import data** wizard in Azure Cognitive Search for an easy initial introduction, or call the REST APIs using a tool like Postman. This tutorial walks you through the steps of calling the REST API in Postman: [Index and search semi-structured data (JSON blobs) in Azure Cognitive Search](search-semi-structured-data.md).
## Use a Blob indexer
An *indexer* is a data-source-aware subservice in Cognitive Search, equipped wit
Blobs in Azure Storage are indexed using the [Azure Cognitive Search Blob storage indexer](search-howto-indexing-azure-blob-storage.md). You can invoke this indexer by using the **Import data** wizard, a REST API, or the .NET SDK. In code, you use this indexer by setting the type, and by providing connection information that includes an Azure Storage account along with a blob container. You can subset your blobs by creating a virtual directory, which you can then pass as a parameter, or by filtering on a file type extension.
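To make the pieces concrete, here's a minimal sketch of creating a blob data source through the REST API from PowerShell. The service name, admin key, connection string, container, and API version are placeholders or assumptions for illustration; adjust them for your environment.

```powershell
# Minimal sketch (placeholders throughout): create a blob data source by
# calling the Azure Cognitive Search REST API.
$serviceName = "<search-service-name>"
$headers = @{ "api-key" = "<admin-api-key>"; "Content-Type" = "application/json" }

$dataSource = @{
    name        = "blob-datasource"
    type        = "azureblob"
    credentials = @{ connectionString = "<storage-connection-string>" }
    container   = @{ name = "<container-name>"; query = "<optional-virtual-directory>" }
} | ConvertTo-Json -Depth 5

# POST creates the data source; an indexer definition then references it by name.
Invoke-RestMethod -Method Post -Headers $headers -Body $dataSource `
    -Uri "https://$serviceName.search.windows.net/datasources?api-version=2020-06-30"
```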
-An indexer ["cracks a document"](search-indexer-overview.md#document-cracking), opening a blob to inspect content. After connecting to the data source, it's the first step in the pipeline. For blob data, this is where PDF, Office docs, and other content types are detected. Document cracking with text extraction is no charge. If your blobs contain image content, images are ignored unless you [add AI enrichment](search-blob-ai-integration.md). Standard indexing applies only to text content.
+An indexer ["cracks a document"](search-indexer-overview.md#document-cracking), opening a blob to inspect content. After connecting to the data source, it's the first step in the pipeline. For blob data, this is where PDF, Office docs, and other content types are detected. Document cracking with text extraction is no charge. If your blobs contain image content, images are ignored unless you [add AI enrichment](cognitive-search-concept-intro.md). Standard indexing applies only to text content.
The Blob indexer comes with configuration parameters and supports change tracking if the underlying data provides sufficient information. You can learn more about the core functionality in [Azure Cognitive Search Blob storage indexer](search-howto-indexing-azure-blob-storage.md).
search Search Howto Index Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-azure-data-lake-storage.md
You can also set [blob configuration properties](/rest/api/searchservice/create-
+ [C# Sample: Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md)
+ [Indexers in Azure Cognitive Search](search-indexer-overview.md)
+ [Create an indexer](search-howto-create-indexers.md)
-+ [AI enrichment over blobs overview](search-blob-ai-integration.md)
++ [AI enrichment overview](cognitive-search-concept-intro.md)
+ [Search over blobs overview](search-blob-storage-integration.md)
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-blob-storage.md
# Configure a Blob indexer to import data from Azure Blob Storage
-In Azure Cognitive Search, blob indexers are frequently used for both [AI enrichment](cognitive-search-concept-intro.md) and text-oriented processing. This article focuses on how to configure a blob indexer for text-oriented indexing, where just the textual content and metadata are loaded into a search index for full text search scenarios.
+In Azure Cognitive Search, blob indexers are frequently used for both [AI enrichment](cognitive-search-concept-intro.md) and text-based processing.
+
+This article focuses on how to configure a blob indexer for text-based indexing, where just the textual content and metadata are loaded into a search index for full text search scenarios. Inputs are your blobs, in a single container. Output is a search index with searchable content and metadata stored in individual fields.
## Prerequisites
-+ [Azure Blob storage](../storage/blobs/storage-blobs-overview.md), Standard performance (general-purpose v2).
++ [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), Standard performance (general-purpose v2).
+ [Access tiers](../storage/blobs/access-tiers-overview.md) for Blob storage include hot, cool, and archive. Only hot and cool can be accessed by search indexers.
A [search index](search-what-is-an-index.md) specifies the fields in a search do
}
```
-<a name="DocumentKeys"></a>
-
-1. Designate one string field as the document key that uniquely identifies each document. For blob content, the best candidates for a document key are metadata properties on the blob:
+1. <a name="DocumentKeys"></a> Designate one string field as the document key that uniquely identifies each document. For blob content, the best candidates for a document key are metadata properties on the blob:
+ **`metadata_storage_path`** (default). Using the full path ensures uniqueness, but the path contains `/` characters that are [invalid in a document key](/rest/api/searchservice/naming-rules). Use the [base64Encode function](search-indexer-field-mappings.md#base64EncodeFunction) to encode characters (see the example in the next section). If using the portal to define the indexer, the encoding step is built in.
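Roughly, that mapping looks like the following sketch; the indexer, data source, and index names are placeholders.

```powershell
# Sketch: an indexer definition whose field mapping base64-encodes
# metadata_storage_path so it can serve as the document key.
$serviceName = "<search-service-name>"
$headers = @{ "api-key" = "<admin-api-key>"; "Content-Type" = "application/json" }

$indexer = @'
{
  "name": "blob-indexer",
  "dataSourceName": "blob-datasource",
  "targetIndexName": "blob-index",
  "fieldMappings": [
    {
      "sourceFieldName": "metadata_storage_path",
      "targetFieldName": "id",
      "mappingFunction": { "name": "base64Encode" }
    }
  ]
}
'@

Invoke-RestMethod -Method Post -Headers $headers -Body $indexer `
    -Uri "https://$serviceName.search.windows.net/indexers?api-version=2020-06-30"
```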
You can also set [blob configuration parameters](/rest/api/searchservice/create-
+ [Indexers in Azure Cognitive Search](search-indexer-overview.md)
+ [Create an indexer](search-howto-create-indexers.md)
-+ [AI enrichment over blobs overview](search-blob-ai-integration.md)
-+ [Search over blobs overview](search-blob-storage-integration.md)
++ [AI enrichment overview](cognitive-search-concept-intro.md)
++ [Search over blobs overview](search-blob-storage-integration.md)
search Search Howto Schedule Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-schedule-indexers.md
Previously updated : 12/17/2021 Last updated : 01/11/2022
# Schedule indexers in Azure Cognitive Search
Visually, a schedule might look like the following: starting on January 1 and ru
## Scheduling behavior
-Only one execution of an indexer can run at a time. If an indexer is already running when its next execution is scheduled, that execution is postponed until the next scheduled time.
+The scheduler will only kick off one indexer at a time. If you have multiple indexers that are all scheduled to start at 6:00 a.m. every morning, the scheduler will kick off the jobs sequentially. You can only obtain multiple concurrent jobs if you [run indexers on demand](search-howto-run-reset-indexers.md).
+
+Only one instance of a given indexer can run at a time. If it's still running when the next scheduled execution is set to start, indexer execution is postponed until the next scheduled occurrence.
Let's consider an example to make this more concrete. Suppose we configure an indexer schedule with an **Interval** of hourly and a **Start Time** of June 1, 2021 at 8:00:00 AM UTC. Here's what could happen when an indexer run takes longer than an hour:
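A schedule is expressed on the indexer definition as an ISO 8601 interval plus a UTC start time. The following is a sketch of attaching the hourly schedule from this example via the REST API; the service name, key, and indexer names are placeholders.

```powershell
# Sketch: attach the hourly schedule described above to an indexer.
# "PT1H" is an ISO 8601 duration meaning one hour.
$serviceName = "<search-service-name>"
$headers = @{ "api-key" = "<admin-api-key>"; "Content-Type" = "application/json" }

$body = @'
{
  "dataSourceName": "blob-datasource",
  "targetIndexName": "blob-index",
  "schedule": { "interval": "PT1H", "startTime": "2021-06-01T08:00:00Z" }
}
'@

# PUT creates or updates the indexer named in the URI.
Invoke-RestMethod -Method Put -Headers $headers -Body $body `
    -Uri "https://$serviceName.search.windows.net/indexers/blob-indexer?api-version=2020-06-30"
```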
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-overview.md
Previously updated : 01/05/2022 Last updated : 01/15/2022
# Indexers in Azure Cognitive Search
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-overview.md
For Azure Cognitive Search, there is currently one built-in definition. It is fo
Watch this fast-paced video for an overview of the security architecture and each feature category.
-> [!VIDEO https://channel9.msdn.com/Shows/AI-Show/Azure-Cognitive-Search-Whats-new-in-security/player]
+> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Azure-Cognitive-Search-Whats-new-in-security/player]
## See also
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-search-overview.md
Previously updated : 12/17/2021 Last updated : 01/13/2022
# Semantic search in Azure Cognitive Search

> [!IMPORTANT]
-> Semantic search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and beta SDKs. These features are billable. For more information about, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+> Semantic search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and beta SDKs. These features are billable (see [Availability and pricing](semantic-search-overview.md#availability-and-pricing)).
-Semantic search is a collection of query-related capabilities that bring semantic relevance and language understanding to search results. This article is a high-level introduction to semantic search all-up, with descriptions of each feature and how they work collectively. The embedded video describes the technology, and the section at the end covers availability and pricing.
+Semantic search is a collection of query-related capabilities that bring semantic relevance and language understanding to search results. This article is a high-level introduction to semantic search. The embedded video describes the technology, and the section at the end covers availability and pricing.
Semantic search is a premium feature. We recommend this article for background, but if you'd rather get started, follow these steps:
Semantic search is a premium feature. We recommend this article for background,
> * [Check regional and service tier requirements](#availability-and-pricing).
> * [Enable semantic search](#enable-semantic-search) on your search service.
> * Create or modify queries to [return semantic captions and highlights](semantic-how-to-query-request.md).
-> * Add a few more query properties to also return [semantic answers](semantic-answers.md).
+> * Add a few more query properties to also [return semantic answers](semantic-answers.md).
## What is semantic search?
Semantic search is a collection of features that improve the quality of search r
| Feature | Description |
|---|---|
-| [Semantic re-ranking](semantic-ranking.md) | Uses the context or semantic meaning to compute a new relevance score over existing results. |
+| [Semantic re-ranking](semantic-ranking.md) | Uses the context or semantic meaning of a query to compute a new relevance score over existing results. |
| [Semantic captions and highlights](semantic-how-to-query-request.md) | Extracts sentences and phrases from a document that best summarize the content, with highlights over key passages for easy scanning. Captions that summarize a result are useful when individual content fields are too dense for the results page. Highlighted text elevates the most relevant terms and phrases so that users can quickly determine why a match was considered relevant. |
| [Semantic answers](semantic-answers.md) | An optional and additional substructure returned from a semantic query. It provides a direct answer to a query that looks like a question. It requires that a document have text with the characteristics of an answer. |
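As a hedged sketch of how these features are invoked, a semantic query sets `queryType` to `semantic` on the preview REST API. The service name, key, index name, and API version below are placeholders or assumptions for illustration.

```powershell
# Sketch: a semantic query requesting one extractive answer.
$serviceName = "<search-service-name>"
$headers = @{ "api-key" = "<query-api-key>"; "Content-Type" = "application/json" }

$query = @'
{
  "search": "which hotels are closest to the convention center",
  "queryType": "semantic",
  "queryLanguage": "en-us",
  "answers": "extractive|count-1"
}
'@

Invoke-RestMethod -Method Post -Headers $headers -Body $query `
    -Uri "https://$serviceName.search.windows.net/indexes/<index-name>/docs/search?api-version=2021-04-30-Preview"
```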
security Identity Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/identity-management-overview.md
Azure AD Identity Protection is a security service that provides a consolidated
Learn more:

* [Azure AD Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md)
-* [Channel 9: Azure AD and Identity Show: Identity Protection Preview](https://channel9.msdn.com/Series/Azure-AD-Identity/Azure-AD-and-Identity-Show-Identity-Protection-Preview)
+* Channel 9: Azure AD and Identity Show: Identity Protection Preview
## Hybrid identity management/Azure AD connect
security Management Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/management-monitoring-overview.md
By providing notifications and recommended remediation, Identity Protection help
Learn more:

* [Azure Active Directory Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md)
-* [Channel 9: Azure AD and Identity Show: Identity Protection Preview](https://channel9.msdn.com/Series/Azure-AD-Identity/Azure-AD-and-Identity-Show-Identity-Protection-Preview)
+* Channel 9: Azure AD and Identity Show: Identity Protection Preview
## Defender for Cloud
security Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/management.md
You can use Azure logon restrictions to constrain source IP addresses for access
Some applications or services that you deploy into Azure may have their own authentication mechanisms for both end-user and administrator access, whereas others take full advantage of Azure AD. Depending on whether you are federating credentials via Active Directory Federation Services (AD FS), using directory synchronization or maintaining user accounts solely in the cloud, using [Microsoft Identity Manager](/microsoft-identity-manager/) (part of Azure AD Premium) helps you manage identity lifecycles between the resources.

### Connectivity
-Several mechanisms are available to help secure client connections to your Azure virtual networks. Two of these mechanisms, [site-to-site VPN](https://channel9.msdn.com/series/Azure-Site-to-Site-VPN) (S2S) and [point-to-site VPN](../../vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md) (P2S), enable the use of industry standard IPsec (S2S) or the [Secure Socket Tunneling Protocol](/previous-versions/technet-magazine/cc162322(v=msdn.10)) (SSTP) (P2S) for encryption and tunneling. When Azure is connecting to public-facing Azure services management such as the Azure portal, Azure requires Hypertext Transfer Protocol Secure (HTTPS).
+Several mechanisms are available to help secure client connections to your Azure virtual networks. Two of these mechanisms, site-to-site VPN (S2S) and [point-to-site VPN](../../vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md) (P2S), enable the use of industry standard IPsec (S2S) or the [Secure Socket Tunneling Protocol](/previous-versions/technet-magazine/cc162322(v=msdn.10)) (SSTP) (P2S) for encryption and tunneling. When Azure is connecting to public-facing Azure services management such as the Azure portal, Azure requires Hypertext Transfer Protocol Secure (HTTPS).
A stand-alone hardened workstation that does not connect to Azure through an RD Gateway should use the SSTP-based point-to-site VPN to create the initial connection to the Azure Virtual Network, and then establish an RDP connection to individual virtual machines from within the VPN tunnel.
service-bus-messaging Service Bus Premium Messaging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-premium-messaging.md
To learn more about Service Bus Messaging, see the following links:
- [Automatically update messaging units](automate-update-messaging-units.md).
- [Introducing Azure Service Bus Premium Messaging (blog post)](https://azure.microsoft.com/blog/introducing-azure-service-bus-premium-messaging/)
-- [Introducing Azure Service Bus Premium Messaging (Channel9)](https://channel9.msdn.com/Blogs/Subscribe/Introducing-Azure-Service-Bus-Premium-Messaging)
service-fabric Service Fabric Concepts Scalability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-concepts-scalability.md
# Scaling in Service Fabric

Azure Service Fabric makes it easy to build scalable applications by managing the services, partitions, and replicas on the nodes of a cluster. Running many workloads on the same hardware enables maximum resource utilization, but also provides flexibility in terms of how you choose to scale your workloads. This Channel 9 video describes how you can build scalable microservices applications:
-> [!VIDEO https://channel9.msdn.com/Events/Connect/2017/T116/player]
+> [!VIDEO https://docs.microsoft.com/Events/Connect/2017/T116/player]
Scaling in Service Fabric is accomplished in several different ways:
service-fabric Service Fabric Overview Microservices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-overview-microservices.md
Azure Service Fabric emerged when Microsoft transitioned from delivering boxed p
***The aim of Service Fabric is to solve the hard problems of building and running a service and to use infrastructure resources efficiently, so teams can solve business problems by using a microservices approach.***
-This short video introduces Service Fabric and micro
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Azure-Service-Fabric/player]
- Service Fabric helps you build applications that use a microservices approach by providing: * A platform that provides system services to deploy, upgrade, detect, and restart failed services, discover services, route messages, manage state, and monitor health.
service-fabric Service Fabric Tutorial Create Dotnet App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-tutorial-create-dotnet-app.md
# Tutorial: Create and deploy an application with an ASP.NET Core Web API front-end service and a stateful back-end service
-This tutorial is part one of a series. You will learn how to create an Azure Service Fabric application with an ASP.NET Core Web API front end and a stateful back-end service to store your data. When you're finished, you have a voting application with an ASP.NET Core web front-end that saves voting results in a stateful back-end service in the cluster. This tutorial series requires a Windows developer machine. If you don't want to manually create the voting application, you can [download the source code](https://github.com/Azure-Samples/service-fabric-dotnet-quickstart/) for the completed application and skip ahead to [Walk through the voting sample application](#walkthrough_anchor). If you prefer, you can also watch a [video walk-through](https://channel9.msdn.com/Events/Connect/2017/E100) of this tutorial.
+This tutorial is part one of a series. You will learn how to create an Azure Service Fabric application with an ASP.NET Core Web API front end and a stateful back-end service to store your data. When you're finished, you have a voting application with an ASP.NET Core web front-end that saves voting results in a stateful back-end service in the cluster. This tutorial series requires a Windows developer machine. If you don't want to manually create the voting application, you can [download the source code](https://github.com/Azure-Samples/service-fabric-dotnet-quickstart/) for the completed application and skip ahead to [Walk through the voting sample application](#walkthrough_anchor). If you prefer, you can also watch a [video walk-through](/Events/Connect/2017/E100) of this tutorial.
![AngularJS+ASP.NET API Front End, Connecting to a stateful backend service on Service Fabric](./media/service-fabric-tutorial-create-dotnet-app/application-diagram.png)
site-recovery Recovery Plan Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/recovery-plan-overview.md
You can use a recovery plan to trigger a test failover. Use the following best p
Watch a quick example video showing an on-click failover for a recovery plan for a two-tier WordPress app.
-> [!VIDEO https://channel9.msdn.com/Series/Azure-Site-Recovery/One-click-failover-of-a-2-tier-WordPress-application-using-Azure-Site-Recovery/player]
---

## Next steps

- [Create](site-recovery-create-recovery-plans.md) a recovery plan.
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-runbook-automation.md
To deploy sample scripts to your Automation account, click the **Deploy to Azure
This video provides another example. It demonstrates how to recover a two-tier WordPress application to Azure:
-> [!VIDEO https://channel9.msdn.com/Series/Azure-Site-Recovery/One-click-failover-of-a-2-tier-WordPress-application-using-Azure-Site-Recovery/player]
--

## Next steps

- Learn about an [Azure Automation Run As account](../automation/manage-runas-account.md)
site-recovery Site Recovery Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-sharepoint.md
This article describes in detail how to protect a SharePoint application using [
You can watch the below video about recovering a multi-tier application to Azure.
-> [!VIDEO https://channel9.msdn.com/Series/Azure-Site-Recovery/Disaster-Recovery-of-load-balanced-multi-tier-applications-using-Azure-Site-Recovery/player]
--

## Prerequisites

Before you start, make sure you understand the following:
site-recovery Vmware Azure Prepare Failback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-azure-prepare-failback.md
Last updated 12/24/2019
After [failover](site-recovery-failover.md) of on-premises VMware VMs or physical servers to Azure, you reprotect the Azure VMs created after failover, so that they replicate back to the on-premises site. With replication from Azure to on-premises in place, you can then fail back by running a failover from Azure to on-premises when you're ready. Before you continue, get a quick overview with this video about how to fail back from Azure to an on-premises site.<br /><br />
-> [!VIDEO https://channel9.msdn.com/Series/Azure-Site-Recovery/VMware-to-Azure-with-ASR-Video5-Failback-from-Azure-to-On-premises/player]
## Reprotection/failback components
spatial-anchors Coarse Reloc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/concepts/coarse-reloc.md
Coarse relocalization works by tagging anchors with various on-device sensor rea
## When to use coarse relocalization
-If you're planning to handle more than 35 spatial anchors in a space larger than a tennis court, you'll probably benefit from coarse relocalization spatial indexing.
+If you're planning to handle anchors in a space larger than a tennis court, you'll probably benefit from coarse relocalization spatial indexing.
The fast lookup of anchors enabled by coarse relocalization is designed to simplify the development of applications backed by world-scale collections of, say, millions of geo-distributed anchors. The complexity of spatial indexing is all hidden, so you can focus on your application logic. All the difficult work is done behind the scenes by Azure Spatial Anchors.
sql-database Sql Database Import Purview Labels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-import-purview-labels.md
This document describes how to add Azure Purview labels in your Azure SQL Databa
## Provide permissions to the application
-1. In your Azure portal, search for **Purview accounts**.
-2. Select the Purview account where your SQL databases and Synapse are classified.
+1. In your Azure portal, search for **Azure Purview accounts**.
+2. Select the Azure Purview account where your SQL databases and Synapse are classified.
3. Open **Access control (IAM)**, select **Add**.
4. Select **Add role assignment**.
-5. In the **Role** section, search for **Purview Data Reader** and select it.
+5. In the **Role** section, search for **Azure Purview Data Reader** and select it.
6. In the **Select** section, search for the application you previously created, select it, and hit **Save**.

## Extract the classification from Azure Purview
-1. Open your Purview account, and in the Home page, search for your Azure SQL Database or Azure Synapse Analytics where you want to copy the labels.
+1. Open your Azure Purview account, and in the Home page, search for your Azure SQL Database or Azure Synapse Analytics where you want to copy the labels.
2. Copy the qualifiedName under **Properties**, and keep it for future use.
3. Open your PowerShell shell.
static-web-apps Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/custom-domain.md
By default, Azure Static Web Apps provides an auto-generated domain name. This article shows you how to map a custom domain name to an Azure Static Web Apps application.
-## Free SSL/TLS certificate
-
Azure Static Web Apps automatically provides a free SSL/TLS certificate for the auto-generated domain name and any custom domains you may add.

## Walkthrough Video
-> [!VIDEO https://channel9.msdn.com/Shows/5-Things/Configuring-a-custom-domain-with-Azure-Static-Web-Apps/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/5-Things/Configuring-a-custom-domain-with-Azure-Static-Web-Apps/player?format=ny]
+
+## Working with subdomains
+
+Domain names without a subdomain are known as apex or "naked" domains. For example, in `www.example.com`, *www* is the subdomain, while `example.com` is the apex domain.
+
+Some domain registrars (like Google and GoDaddy) don't allow you to point the apex domain to the generated Static Web Apps URL. If your registrar has this restriction, consider forwarding the apex domain to the *www* subdomain.
+
+With this configuration, requests to the *www* subdomain will resolve to the generated Static Web Apps location after following the steps in this article.
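If you host the domain in Azure DNS, the *www* CNAME can also be created from PowerShell. This is a sketch with placeholder names; it assumes the Az.Dns module and an existing DNS zone.

```powershell
# Sketch (placeholders; assumes an existing Azure DNS zone for example.com):
# point the www subdomain at the generated Static Web Apps hostname.
New-AzDnsRecordSet -Name "www" `
    -RecordType CNAME `
    -ZoneName "example.com" `
    -ResourceGroupName "<resource-group>" `
    -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Cname "<generated-name>.azurestaticapps.net")
```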
## Prerequisites
You'll need to configure a CNAME with your domain provider. Azure DNS is recomme
# [Azure DNS](#tab/azure-dns)
+> [!IMPORTANT]
+> If Azure DNS does not have the *Reader* permission to the static web app, the step to add an ALIAS record in Azure DNS returns an error. You need to grant permission to the Azure DNS application object to read the endpoint to update DNS.
+
1. Make sure **CNAME** is selected from the _Hostname record type_ dropdown list.
1. Copy the value in the _Value_ field to your clipboard by selecting the **copy** icon.
static-web-apps Enterprise Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/enterprise-edge.md
A manual setup gives you full control over the CDN configuration including the c
* [Custom domain](./custom-domain.md) configured for your static web app with a time to live (TTL) set to less than 48 hrs.
* An application deployed with [Azure Static Web Apps](./get-started-portal.md) that uses the Standard hosting plan.
+* The subscription has been re-registered with the Microsoft.CDN resource provider.
# [Azure portal](#tab/azure-portal)
+1. Navigate to your subscription in the Azure portal.
+
+1. Select **Resource providers** in the left menu.
+
+1. Select **Microsoft.CDN** in the list of resource providers.
+
+1. Click **Register** or **Reregister**.
+
1. Navigate to your static web app in the Azure portal.
1. Select **Enterprise-grade edge** in the left menu.
A manual setup gives you full control over the CDN configuration including the c
# [Azure CLI](#tab/azure-cli)

```azurecli
+az provider register --namespace 'Microsoft.CDN' --wait
+
az extension add -n enterprise-edge

az staticwebapp enterprise-edge enable -n my-static-webapp -g my-resource-group
static-web-apps Functions Bring Your Own https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/functions-bring-your-own.md
Previously updated : 05/07/2021 Last updated : 01/14/2022
Before you associate an existing Functions app, you first need to adjust to conf
1. From the _Environment_ dropdown, select **Production**.
-1. Next to the _Functions source_ label, select **Link to a Function app**.
+1. Next to the _Functions type_ label, select **Link to a Function app**.
1. From the _Subscription_ dropdown, select your Azure subscription name.
static-web-apps Local Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/local-development.md
Previously updated : 10/21/2021 Last updated : 01/14/2022
Now requests that go through port `4280` are routed to either the static content
For more information on different debugging scenarios, with guidance on how to customize ports and server addresses, see the [Azure Static Web Apps CLI repository](https://github.com/Azure/static-web-apps-cli).
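As a quick sketch of the typical commands (assuming Node.js is installed, and substituting your own app and API folder paths for these placeholders), you might install and start the CLI like this:

```powershell
# Sketch: install the Static Web Apps CLI globally, then serve the app folder
# and proxy API requests to a local Azure Functions project.
npm install -g @azure/static-web-apps-cli
swa start ./app --api-location ./api
```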
+### Sample debugging configuration
+
+Visual Studio Code uses a *launch.json* file to enable debugging sessions in the editor. If Visual Studio Code doesn't generate a *launch.json* file for you, you can place the following configuration in *.vscode/launch.json*.
+
+```json
+{
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "name": "Attach to Node Functions",
+ "type": "node",
+ "request": "attach",
+ "port": 9229,
+ "preLaunchTask": "func: host start"
+ }
+ ]
+}
+```
+ ## Next steps > [!div class="nextstepaction"]
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
SFTP support in Azure Blob Storage currently limits its cryptographic algorithm
- PowerShell and Azure CLI are not supported. You can use the Azure portal and ARM templates during the public preview.
+- `ssh-keyscan` is not supported.
+
## Troubleshooting

- To resolve the `Failed to update SFTP settings for account 'accountname'. Error: The value 'True' is not allowed for property isSftpEnabled.` error, ensure that the following prerequisites are met at the storage account level:
storage Storage Blob Static Website https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-static-website.md
You can use any of these tools to upload content to the **$web** container:
> - [AzCopy](../common/storage-use-azcopy-v10.md)
> - [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)
> - [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/)
-> - [Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) and [Channel 9 video demonstration](https://channel9.msdn.com/Shows/Docs-Azure/Deploy-static-website-to-Azure-from-Visual-Studio-Code/player)
+> - [Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) and [Channel 9 video demonstration](/Shows/Docs-Azure/Deploy-static-website-to-Azure-from-Visual-Studio-Code/player)
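For example, a hedged AzCopy sketch (assuming you've already authenticated with `azcopy login`, and substituting your own account and folder names) might look like:

```powershell
# Sketch: recursively upload local site files into the $web container.
# The backtick escapes $ so PowerShell doesn't treat $web as a variable.
azcopy copy "./public/*" "https://<storage-account>.blob.core.windows.net/`$web" --recursive
```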
## Viewing content
storage Customer Managed Keys Configure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/customer-managed-keys-configure-key-vault.md
Previously updated : 02/16/2021 Last updated : 01/13/2022
az keyvault set-policy \
## Add a key
-Next, add a key in the key vault.
+Next, add a key to the key vault.
Azure Storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more information about keys, see [About keys](../../key-vault/keys/about-keys.md).
When you configure encryption with customer-managed keys, you can choose to auto
Azure Storage can automatically update the customer-managed key that is used for encryption to use the latest key version. When the customer-managed key is rotated in Azure Key Vault, Azure Storage will automatically begin using the latest version of the key for encryption.
-# [Azure portal](#tab/portal)
+### [Azure portal](#tab/portal)
To configure customer-managed keys with automatic updating of the key version in the Azure portal, follow these steps:
To configure customer-managed keys with automatic updating of the key version in
1. Select the **Customer Managed Keys** option.
1. Choose the **Select from Key Vault** option.
1. Select **Select a key vault and key**.
-1. Select the key vault containing the key you want to use.
-1. Select the key from the key vault.
+1. Select the key vault containing the key you want to use. You can also create a new key vault.
+1. Select the key from the key vault. You can also create a new key.
![Screenshot showing how to select key vault and key](./media/customer-managed-keys-configure-key-vault/portal-select-key-from-key-vault.png)
+1. Select the type of identity to use to authenticate access to the key vault. The options include **System-assigned** (the default) or **User-assigned**. To learn more about each type of managed identity, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+
+ 1. If you select **System-assigned**, the system-assigned managed identity for the storage account is created under the covers, if it does not already exist.
+ 1. If you select **User-assigned**, then you must select an existing user-assigned identity that has permissions to access the key vault. To learn how to create a user-assigned identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+
+ :::image type="content" source="media/customer-managed-keys-configure-key-vault/select-user-assigned-managed-identity-portal.png" alt-text="Screenshot showing how to select a user-assigned managed identity for key vault authentication":::
+
1. Save your changes.

After you've specified the key, the Azure portal indicates that automatic updating of the key version is enabled and displays the key version currently in use for encryption.

:::image type="content" source="media/customer-managed-keys-configure-key-vault/portal-auto-rotation-enabled.png" alt-text="Screenshot showing automatic updating of the key version enabled":::
-# [PowerShell](#tab/powershell)
+### [PowerShell](#tab/powershell)
To configure customer-managed keys with automatic updating of the key version with PowerShell, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later.
-To automatically update the key version for a customer-managed key, omit the key version when you configure encryption with customer-managed keys for the storage account. Call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings, as shown in the following example, and include the **-KeyvaultEncryption** option to enable customer-managed keys for the storage account.
+You can use either a system-assigned managed identity or a user-assigned managed identity to authenticate access to the key vault. To learn more about each type of managed identity, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
-Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
+To authenticate access to the key vault with a system-assigned managed identity, assign the system-assigned managed identity to the storage account by calling [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount):
+
+```powershell
+$storageAccount = Set-AzStorageAccount -ResourceGroupName <resource_group> `
+ -Name <storage-account> `
+ -AssignIdentity
+$objectId = $storageAccount.Identity.PrincipalId
+```
+
+To authenticate access to the key vault with a user-assigned managed identity, first find the object ID of the user-assigned managed identity. To run this example, you'll need the resource ID of the user-assigned managed identity.
+
+```powershell
+$userManagedIdentityResourceId = '/subscriptions/{my subscription ID}/resourceGroups/{my resource group name}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{my managed identity name}'
+$objectId = (Get-AzResource -ResourceId $userManagedIdentityResourceId).Properties.PrincipalId
+```
+
+Next, to set the access policy for the key vault, call [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy), providing the object ID of the managed identity:
+
+```powershell
+Set-AzKeyVaultAccessPolicy `
+ -VaultName $keyVault.VaultName `
+ -ObjectId $objectId `
+ -PermissionsToKeys wrapkey,unwrapkey,get
+```
+
+For more information, see [Assign a Key Vault access policy using Azure PowerShell](../../key-vault/general/assign-access-policy-powershell.md).
+
+Finally, configure the customer-managed key. To automatically update the key version for the customer-managed key, omit the key version when you configure encryption with customer-managed keys for the storage account. Call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings, as shown in the following example, and include the **-KeyvaultEncryption** option to enable customer-managed keys for the storage account.
```powershell Set-AzStorageAccount -ResourceGroupName $storageAccount.ResourceGroupName `
Set-AzStorageAccount -ResourceGroupName $storageAccount.ResourceGroupName `
To configure customer-managed keys with automatic updating of the key version with Azure CLI, install [Azure CLI version 2.4.0](/cli/azure/release-notes-azure-cli#april-21-2020) or later. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-To automatically update the key version for a customer-managed key, omit the key version when you configure encryption with customer-managed keys for the storage account. Call [az storage account update](/cli/azure/storage/account#az_storage_account_update) to update the storage account's encryption settings, as shown in the following example. Include the `--encryption-key-source` parameter and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account.
+You can use either a system-assigned managed identity or a user-assigned managed identity to authenticate access to the key vault. To learn more about each type of managed identity, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+
+To authenticate access to the key vault with a system-assigned managed identity, assign the system-assigned managed identity to the storage account by calling [az storage account update](/cli/azure/storage/account#az_storage_account_update):
+
+```azurecli-interactive
+az storage account update \
+ --name <storage-account> \
+ --resource-group <resource_group> \
+ --assign-identity
+```
+
+To authenticate access to the key vault with a user-assigned managed identity, first find the object ID of the user-assigned managed identity.
+
+```azurecli-interactive
+az identity show \
+ --name <name-of-user-assigned-managed-identity> \
+ --resource-group <resource-group>
+```
+
+Next, to set the access policy for the key vault, call [az keyvault set-policy](/cli/azure/keyvault#az_keyvault_set_policy) and provide the object ID of the managed identity:
+
+```azurecli-interactive
+az keyvault set-policy \
+ --name <key-vault> \
    --resource-group <resource_group> \
+ --object-id <object-id> \
+ --key-permissions get unwrapKey wrapKey
+```
+
+Finally, configure the customer-managed key. To automatically update the key version for a customer-managed key, omit the key version when you configure encryption with customer-managed keys for the storage account. Call [az storage account update](/cli/azure/storage/account#az_storage_account_update) to update the storage account's encryption settings, as shown in the following example. Include the `--encryption-key-source` parameter and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account.
Remember to replace the placeholder values in brackets with your own values.
To configure customer-managed keys with manual updating of the key version in th
![Screenshot showing how to enter key URI](./media/customer-managed-keys-configure-key-vault/portal-specify-key-uri.png)

1. Specify the subscription that contains the key vault.
+1. Specify either a system-assigned or user-assigned managed identity.
1. Save your changes.

# [PowerShell](#tab/powershell)
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/customer-managed-keys-overview.md
Previously updated : 06/01/2021 Last updated : 01/13/2022
You can either create your own keys and store them in the key vault or managed H
## About customer-managed keys
-The following diagram shows how Azure Storage uses Azure Active Directory and a key vault or managed HSM to make requests using the customer-managed key:
+The following diagram shows how Azure Storage uses Azure AD and a key vault or managed HSM to make requests using the customer-managed key:
-![Diagram showing how customer-managed keys work in Azure Storage](media/customer-managed-keys-overview/encryption-customer-managed-keys-diagram.png)
The following list explains the numbered steps in the diagram:
-1. An Azure Key Vault admin grants permissions to encryption keys to the managed identity that's associated with the storage account.
-2. An Azure Storage admin configures encryption with a customer-managed key for the storage account.
-3. Azure Storage uses the managed identity that's associated with the storage account to authenticate access to Azure Key Vault via Azure Active Directory.
-4. Azure Storage wraps the account encryption key with the customer-managed key in Azure Key Vault.
-5. For read/write operations, Azure Storage sends requests to Azure Key Vault to unwrap the account encryption key to perform encryption and decryption operations.
+1. An Azure Key Vault admin grants permissions to encryption keys to either a user-assigned managed identity, or to the system-assigned managed identity that's associated with the storage account.
+1. An Azure Storage admin configures encryption with a customer-managed key for the storage account.
+1. Azure Storage uses the managed identity to which the Azure Key Vault admin granted permissions in step 1 to authenticate access to Azure Key Vault via Azure AD.
+1. Azure Storage wraps the account encryption key with the customer-managed key in Azure Key Vault.
+1. For read/write operations, Azure Storage sends requests to Azure Key Vault to unwrap the account encryption key to perform encryption and decryption operations.
The managed identity that's associated with the storage account must have these permissions at a minimum to access a customer-managed key in Azure Key Vault:
When you configure a customer-managed key, Azure Storage wraps the root data enc
When you enable or disable customer-managed keys, or when you modify the key or the key version, the protection of the root encryption key changes, but the data in your Azure Storage account does not need to be re-encrypted.
-Customer-managed keys can enabled only on existing storage accounts. The key vault or managed HSM must be configured to grant permissions to the managed identity that is associated with the storage account. The managed identity is available only after the storage account is created.
+You can enable customer-managed keys on existing storage accounts or on new accounts when you create them. When you enable customer-managed keys while creating an account, only user-assigned managed identities are available. To use a system-assigned managed identity, you must first create the account and then enable customer-managed keys, because the system-assigned managed identity can exist only after the account is created. For more information on system-assigned versus user-assigned managed identities, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
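As a sketch of what enabling customer-managed keys at account creation can look like with a user-assigned identity (all names are placeholders, and the parameter set is assumed from recent Az.Storage versions):

```powershell
# Sketch: create a storage account with customer-managed keys and a
# user-assigned managed identity (assumes a recent Az.Storage module).
$identity = Get-AzUserAssignedIdentity -ResourceGroupName "<resource-group>" -Name "<identity-name>"

New-AzStorageAccount -ResourceGroupName "<resource-group>" `
    -Name "<storage-account>" `
    -Location "<location>" `
    -SkuName Standard_LRS `
    -Kind StorageV2 `
    -IdentityType UserAssigned `
    -UserAssignedIdentityId $identity.Id `
    -KeyVaultUri "<key-vault-uri>" `
    -KeyName "<key-name>" `
    -KeyVaultUserAssignedIdentityId $identity.Id
```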
You can switch between customer-managed keys and Microsoft-managed keys at any time. For more information about Microsoft-managed keys, see [About encryption key management](storage-service-encryption.md#about-encryption-key-management). To learn how to configure Azure Storage encryption with customer-managed keys in a key vault, see [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md). To configure customer-managed keys in a managed HSM, see [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md).

> [!IMPORTANT]
-> Customer-managed keys rely on managed identities for Azure resources, a feature of Azure AD. Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned to your storage account under the covers. If you subsequently move the subscription, resource group, or storage account from one Azure AD directory to another, the managed identity associated with the storage account is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
+> Customer-managed keys rely on managed identities for Azure resources, a feature of Azure AD. Managed identities do not currently support cross-tenant scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned to your storage account under the covers. If you subsequently move the subscription, resource group, or storage account from one Azure AD tenant to another, the managed identity associated with the storage account is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
Azure storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more information about keys, see [About keys](../../key-vault/keys/about-keys.md).
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-account-create.md
Previously updated : 05/18/2021 Last updated : 01/13/2022
None.
# [PowerShell](#tab/azure-powershell)
-To create an Azure storage account with PowerShell, make sure you have installed the [Az PowerShell module](https://www.powershellgallery.com/packages/Az), version 0.7 or later. For more information, see [Introducing the Azure PowerShell Az module](/powershell/azure/new-azureps-module-az).
-
-To find your current version, run the following command:
-
-```powershell
-Get-InstalledModule -Name "Az"
-```
-
-To install or upgrade Azure PowerShell, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+To create an Azure storage account with PowerShell, make sure you have installed the latest [Azure Az PowerShell module](https://www.powershellgallery.com/packages/Az). See [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
# [Azure CLI](#tab/azure-cli)
The button launches an interactive shell that you can use to run the steps outli
### Install the CLI locally
-You can also install and use the Azure CLI locally. The examples in this article require Azure CLI version 2.0.4 or later. Run `az --version` to find your installed version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+You can also install and use the Azure CLI locally. If you plan to use Azure CLI locally, make sure you have installed the latest version of the Azure CLI. See [Install the Azure CLI](/cli/azure/install-azure-cli).
# [Template](#tab/template)
The following table describes the fields on the **Advanced** tab.
| Section | Field | Required or optional | Description |
|--|--|--|--|
-| Security | Enable secure transfer | Optional | Enable secure transfer to require that incoming requests to this storage account are made only via HTTPS (default). Recommended for optimal security. For more information, see [Require secure transfer to ensure secure connections](storage-require-secure-transfer.md). |
-| Security | Enable infrastructure encryption | Optional | By default, infrastructure encryption is not enabled. Enable infrastructure encryption to encrypt your data at both the service level and the infrastructure level. For more information, see [Create a storage account with infrastructure encryption enabled for double encryption of data](infrastructure-encryption-enable.md). |
+| Security | Require secure transfer for REST API operations | Optional | Require secure transfer to ensure that incoming requests to this storage account are made only via HTTPS (default). Recommended for optimal security. For more information, see [Require secure transfer to ensure secure connections](storage-require-secure-transfer.md). |
| Security | Enable blob public access | Optional | When enabled, this setting allows a user with the appropriate permissions to enable anonymous public access to a container in the storage account (default). Disabling this setting prevents all anonymous public access to the storage account. For more information, see [Prevent anonymous public read access to containers and blobs](../blobs/anonymous-read-access-prevent.md).<br> <br> Enabling blob public access does not make blob data available for public access unless the user takes the additional step to explicitly configure the container's public access setting. |
-| Security | Enable storage account key access (preview) | Optional | When enabled, this setting allows clients to authorize requests to the storage account using either the account access keys or an Azure Active Directory (Azure AD) account (default). Disabling this setting prevents authorization with the account access keys. For more information, see [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md). |
+| Security | Enable storage account key access | Optional | When enabled, this setting allows clients to authorize requests to the storage account using either the account access keys or an Azure Active Directory (Azure AD) account (default). Disabling this setting prevents authorization with the account access keys. For more information, see [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md). |
+| Security | Default to Azure Active Directory authorization in the Azure portal | Optional | When enabled, the Azure portal authorizes data operations with the user's Azure AD credentials by default. If the user does not have the appropriate permissions assigned via Azure role-based access control (Azure RBAC) to perform data operations, then the portal will use the account access keys for data access instead. The user can also choose to switch to using the account access keys. For more information, see [Default to Azure AD authorization in the Azure portal](../blobs/authorize-data-operations-portal.md#default-to-azure-ad-authorization-in-the-azure-portal). |
| Security | Minimum TLS version | Required | Select the minimum version of Transport Layer Security (TLS) for incoming requests to the storage account. The default value is TLS version 1.2. When set to the default value, incoming requests made using TLS 1.0 or TLS 1.1 are rejected. For more information, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account](transport-layer-security-configure-minimum-version.md). | | Data Lake Storage Gen2 | Enable hierarchical namespace | Optional | To use this storage account for Azure Data Lake Storage Gen2 workloads, configure a hierarchical namespace. For more information, see [Introduction to Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md). | | Secure File Transfer Protocol (SFTP) | Enable SFTP | Optional | Enable the use of Secure File Transfer Protocol (SFTP) to securely transfer data over the internet. For more information, see [Secure File Transfer (SFTP) protocol support in Azure Blob Storage](../blobs/secure-file-transfer-protocol-support.md). | | Blob storage | Enable network file share (NFS) v3 | Optional | NFS v3 provides Linux file system compatibility at object storage scale and enables Linux clients to mount a container in Blob storage from an Azure Virtual Machine (VM) or a computer on-premises. For more information, see [Network File System (NFS) 3.0 protocol support in Azure Blob storage](../blobs/network-file-system-protocol-support.md). |
+| Blob storage | Allow cross-tenant replication | Required | By default, users with appropriate permissions can configure object replication across Azure AD tenants. To prevent replication across tenants, deselect this option. For more information, see [Prevent replication across Azure AD tenants](../blobs/object-replication-overview.md#prevent-replication-across-azure-ad-tenants). |
| Blob storage | Access tier | Required | Blob access tiers enable you to store blob data in the most cost-effective manner, based on usage. Select the hot tier (default) for frequently accessed data. Select the cool tier for infrequently accessed data. For more information, see [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md). | | Azure Files | Enable large file shares | Optional | Available only for standard file shares with the LRS or ZRS redundancies. | | Tables and queues | Enable support for customer-managed keys | Optional | To enable support for customer-managed keys for tables and queues, you must select this setting at the time that you create the storage account. For more information, see [Create an account that supports customer-managed keys for tables and queues](account-encryption-key-create.md). |
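Several of the security settings above map to creation-time CLI parameters. A hedged sketch (parameter names reflect the `az storage account create` surface at the time of writing and may differ by CLI version):

```azurecli
# Sketch: create an account that requires secure transfer and TLS 1.2, and
# disables anonymous blob access, Shared Key authorization, and
# cross-tenant object replication.
az storage account create \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --sku Standard_LRS \
    --kind StorageV2 \
    --https-only true \
    --min-tls-version TLS1_2 \
    --allow-blob-public-access false \
    --allow-shared-key-access false \
    --allow-cross-tenant-replication false
```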
The following table describes the fields on the **Data protection** tab.
|--|--|--|--| | Recovery | Enable point-in-time restore for containers | Optional | Point-in-time restore provides protection against accidental deletion or corruption by enabling you to restore block blob data to an earlier state. For more information, see [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md).<br /><br />Enabling point-in-time restore also enables blob versioning, blob soft delete, and blob change feed. These prerequisite features may have a cost impact. For more information, see [Pricing and billing](../blobs/point-in-time-restore-overview.md#pricing-and-billing) for point-in-time restore. | | Recovery | Enable soft delete for blobs | Optional | Blob soft delete protects an individual blob, snapshot, or version from accidental deletes or overwrites by maintaining the deleted data in the system for a specified retention period. During the retention period, you can restore a soft-deleted object to its state at the time it was deleted. For more information, see [Soft delete for blobs](../blobs/soft-delete-blob-overview.md).<br /><br />Microsoft recommends enabling blob soft delete for your storage accounts and setting a minimum retention period of seven days. |
-| Recovery | Enable soft delete for containers (preview) | Optional | Container soft delete protects a container and its contents from accidental deletes by maintaining the deleted data in the system for a specified retention period. During the retention period, you can restore a soft-deleted container to its state at the time it was deleted. For more information, see [Soft delete for containers (preview)](../blobs/soft-delete-container-overview.md).<br /><br />Microsoft recommends enabling container soft delete for your storage accounts and setting a minimum retention period of seven days. |
+| Recovery | Enable soft delete for containers | Optional | Container soft delete protects a container and its contents from accidental deletes by maintaining the deleted data in the system for a specified retention period. During the retention period, you can restore a soft-deleted container to its state at the time it was deleted. For more information, see [Soft delete for containers (preview)](../blobs/soft-delete-container-overview.md).<br /><br />Microsoft recommends enabling container soft delete for your storage accounts and setting a minimum retention period of seven days. |
| Recovery | Enable soft delete for file shares | Optional | Soft delete for file shares protects a file share and its contents from accidental deletes by maintaining the deleted data in the system for a specified retention period. During the retention period, you can restore a soft-deleted file share to its state at the time it was deleted. For more information, see [Prevent accidental deletion of Azure file shares](../files/storage-files-prevent-file-share-deletion.md).<br /><br />Microsoft recommends enabling soft delete for file shares for Azure Files workloads and setting a minimum retention period of seven days. | | Tracking | Enable versioning for blobs | Optional | Blob versioning automatically saves the state of a blob in a previous version when the blob is overwritten. For more information, see [Blob versioning](../blobs/versioning-overview.md).<br /><br />Microsoft recommends enabling blob versioning for optimal data protection for the storage account. | | Tracking | Enable blob change feed | Optional | The blob change feed provides transaction logs of all changes to all blobs in your storage account, as well as to their metadata. For more information, see [Change feed support in Azure Blob Storage](../blobs/storage-blob-change-feed.md). |
+| Access control | Enable version-level immutability support | Optional | Enable support for immutability policies that are scoped to the blob version. If this option is selected, then after you create the storage account, you can configure a default time-based retention policy for the account or for the container, which blob versions within the account or container will inherit by default. For more information, see [Enable version-level immutability support on a storage account](../blobs/immutable-policy-configure-version-scope.md#enable-version-level-immutability-support-on-a-storage-account). |
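Most of these data-protection options can also be enabled on an existing account. A hedged sketch using the blob service properties surface (flag names may vary by CLI version):

```azurecli
# Sketch: enable blob and container soft delete with a seven-day retention,
# plus blob versioning and the change feed, on an existing account.
az storage account blob-service-properties update \
    --account-name mystorageaccount \
    --resource-group myresourcegroup \
    --enable-delete-retention true \
    --delete-retention-days 7 \
    --enable-container-delete-retention true \
    --container-delete-retention-days 7 \
    --enable-versioning true \
    --enable-change-feed true
```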
+
+### Encryption tab
+
+On the **Encryption** tab, you can configure options that relate to how your data is encrypted when it is persisted to the cloud. Some of these options can be configured only when you create the storage account.
+
+| Field | Required or optional | Description |
+|--|--|--|
+| Encryption type| Required | By default, data in the storage account is encrypted by using Microsoft-managed keys. You can rely on Microsoft-managed keys for the encryption of your data, or you can manage encryption with your own keys. For more information, see [Azure Storage encryption for data at rest](storage-service-encryption.md). |
+| Enable support for customer-managed keys | Required | By default, customer managed keys can be used to encrypt only blobs and files. You can use the options presented in this section to enable support for tables and queues as well. This option can be configured only when you create the storage account. For more information, see [Customer-managed keys for Azure Storage encryption](customer-managed-keys-overview.md). |
+| Enable infrastructure encryption | Optional | By default, infrastructure encryption is not enabled. Enable infrastructure encryption to encrypt your data at both the service level and the infrastructure level. For more information, see [Create a storage account with infrastructure encryption enabled for double encryption of data](infrastructure-encryption-enable.md). |
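As a sketch of the creation-time-only options above (infrastructure encryption, and customer-managed key support for tables and queues), assuming the current `az storage account create` encryption parameters:

```azurecli
# Sketch: double encryption at the infrastructure level, and account-scoped
# (customer-managed-key capable) encryption key types for tables and queues.
az storage account create \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --sku Standard_LRS \
    --kind StorageV2 \
    --require-infrastructure-encryption true \
    --encryption-key-type-for-table Account \
    --encryption-key-type-for-queue Account
```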
### Tags tab
az deployment group create --resource-group $resourceGroupName --template-uri "h
``` > [!NOTE]
-> This template serves only as an example. There are many storage account settings that aren't configured as part of this template. For example, if you want to use [Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/), you would modify this template by setting the `isHnsEnabledad` property of the `StorageAccountPropertiesCreateParameters` object to `true`.
+> This template serves only as an example. There are many storage account settings that aren't configured as part of this template. For example, if you want to use [Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/), you would modify this template by setting the `isHnsEnabled` property of the `StorageAccountPropertiesCreateParameters` object to `true`.
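If you'd rather not edit the template for that one property, a hedged CLI sketch achieves the same creation-time setting (parameter name assumed from the current CLI):

```azurecli
# Sketch: create an account with the hierarchical namespace enabled for
# Data Lake Storage Gen2 workloads.
az storage account create \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --sku Standard_LRS \
    --kind StorageV2 \
    --enable-hierarchical-namespace true
```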
To learn how to modify this template or create new ones, see:
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-network-security.md
You can use the same technique for an account that has the hierarchical namespac
| Azure Machine Learning Service | Microsoft.MachineLearningServices | Authorized Azure Machine Learning workspaces write experiment output, models, and logs to Blob storage and read the data. [Learn more](../../machine-learning/how-to-network-security-overview.md#secure-the-workspace-and-associated-resources). | | Azure Media Services | Microsoft.Media/mediaservices | Allows access to storage accounts through Media Services. | | Azure Migrate | Microsoft.Migrate/migrateprojects | Allows access to storage accounts through Azure Migrate. |
-| Azure Purview | Microsoft.Purview/accounts | Allows Purview to access storage accounts. |
+| Azure Purview | Microsoft.Purview/accounts | Allows Azure Purview to access storage accounts. |
| Azure Remote Rendering | Microsoft.MixedReality/remoteRenderingAccounts | Allows access to storage accounts through Remote Rendering. | | Azure Site Recovery | Microsoft.RecoveryServices/vaults | Allows access to storage accounts through Site Recovery. | | Azure SQL Database | Microsoft.Sql | Allows [writing](../../azure-sql/database/audit-write-storage-account-behind-vnet-firewall.md) audit data to storage accounts behind firewall. |
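These trusted-service entries are granted per resource instance. As a hedged sketch of adding such a rule with the CLI (the `--resource-id`/`--tenant-id` form is assumed from the resource-instance rule syntax; all IDs are placeholders):

```azurecli
# Sketch: allow a specific resource instance (here, a hypothetical Synapse
# workspace) through the storage account firewall.
az storage account network-rule add \
    --account-name mystorageaccount \
    --resource-group myresourcegroup \
    --resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Synapse/workspaces/<workspace>" \
    --tenant-id "<tenant-id>"
```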
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-redundancy.md
The following table shows which types of storage accounts support ZRS in which r
| Storage account type | Supported regions | Supported services | |--|--|--| | General-purpose v2<sup>1</sup> | (Africa) South Africa North<br /> (Asia Pacific) Southeast Asia<br /> (Asia Pacific) Australia East<br /> (Asia Pacific) Japan East<br /> (Canada) Canada Central<br /> (Europe) North Europe<br /> (Europe) West Europe<br /> (Europe) France Central<br /> (Europe) Germany West Central<br /> (Europe) UK South<br /> (South America) Brazil South<br /> (US) Central US<br /> (US) East US<br /> (US) East US 2<br /> (US) South Central US<br /> (US) West US 2 | Block blobs<br /> Page blobs<sup>2</sup><br /> File shares (standard)<br /> Tables<br /> Queues<br /> |
-| Premium block blobs<sup>1</sup> | Asia Southeast<br /> Australia East<br /> Europe North<br /> Europe West<br /> France Central <br /> Japan East<br /> UK South <br /> US East <br /> US East 2 <br /> US West 2| Premium block blobs only |
-| Premium file shares | Asia Southeast<br /> Australia East<br /> Europe North<br /> Europe West<br /> France Central <br /> Japan East<br /> UK South <br /> US East <br /> US East 2 <br /> US West 2 | Premium files shares only |
+| Premium block blobs<sup>1</sup> | Asia Southeast<br /> Australia East<br /> Brazil South<br /> Europe North<br /> Europe West<br /> France Central <br /> Japan East<br /> UK South <br /> US East <br /> US East 2 <br /> US West 2| Premium block blobs only |
+| Premium file shares | Asia Southeast<br /> Australia East<br /> Brazil South<br /> Europe North<br /> Europe West<br /> France Central <br /> Japan East<br /> UK South <br /> US East <br /> US East 2 <br /> US West 2 | Premium file shares only |
<sup>1</sup> The archive tier is not currently supported for ZRS accounts.<br /> <sup>2</sup> Azure unmanaged disks should also use LRS. It is possible to create a storage account for Azure unmanaged disks that uses GRS, but it is not recommended due to potential issues with consistency over asynchronous geo-replication. Unmanaged disks don't support ZRS or GZRS.
storsimple Storsimple 8000 System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storsimple/storsimple-8000-system-requirements.md
StorSimple device model 8600 includes an Extended Bunch of Disks (EBOD) enclosur
Carefully review these best practices to ensure the high availability of hosts connected to your StorSimple device. * Configure StorSimple with [two-node file server cluster configurations][1]. By removing single points of failure and building in redundancy on the host side, the entire solution becomes highly available.
-* Use Continuously available (CA) shares available with Windows Server 2012 (SMB 3.0) for high availability during failover of the storage controllers. For additional information for configuring file server clusters and Continuously Available shares with Windows Server 2012, refer to this [video demo](https://channel9.msdn.com/Events/IT-Camps/IT-Camps-On-Demand-Windows-Server-2012/DEMO-Continuously-Available-File-Shares).
+* Use Continuously Available (CA) shares, available with Windows Server 2012 (SMB 3.0), for high availability during failover of the storage controllers. For additional information about configuring file server clusters and Continuously Available shares, refer to the Windows Server 2012 documentation.
## Next steps
stream-analytics Stream Analytics Define Outputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-define-outputs.md
Previously updated : 12/9/2020 Last updated : 01/14/2022 # Outputs from Azure Stream Analytics
Some output types support [partitioning](#partitioning), and [output batch size
|[Azure Data Explorer](azure-database-explorer-output.md)|Yes|Managed Identity| |[Azure Database for PostgreSQL](postgresql-database-output.md)|Yes|Username and password auth| |[Azure SQL Database](sql-database-output.md)|Yes, optional.|SQL user auth, </br> Managed Identity|
-|[Azure Synapse Analytics](azure-synapse-analytics-output.md)|Yes|SQL user auth, </br> Managed Identity (preview)|
+|[Azure Synapse Analytics](azure-synapse-analytics-output.md)|Yes|SQL user auth, </br> Managed Identity|
|[Blob storage and Azure Data Lake Gen 2](blob-storage-azure-data-lake-gen2-output.md)|Yes|Access key, </br> Managed Identity| |[Azure Event Hubs](event-hubs-output.md)|Yes, need to set the partition key column in output configuration.|Access key, </br> Managed Identity| |[Power BI](power-bi-output.md)|No|Azure Active Directory user, </br> Managed Identity|
stream-analytics Stream Analytics Machine Learning Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-machine-learning-anomaly-detection.md
The machine learning operations do not support seasonality trends or multi-varia
The following video demonstrates how to detect an anomaly in real time using machine learning functions in Azure Stream Analytics.
-> [!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Real-Time-ML-Based-Anomaly-Detection-In-Azure-Stream-Analytics/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Real-Time-ML-Based-Anomaly-Detection-In-Azure-Stream-Analytics/player]
## Model behavior
synapse-analytics How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/how-to-access-secured-purview-account.md
This article describes how to access a secured Azure Purview account from Azure
## Azure Purview private endpoint deployment scenarios
-You can use [Azure private endpoints](../../private-link/private-endpoint-overview.md) for your Azure Purview accounts to allow secure access from a virtual network (VNet) to the catalog over a Private Link. Purview provides different types of private points for various access need: *account* private endpoint, *portal* private endpoint, and *ingestion* private endpoints. Learn more from [Purview private endpoints conceptual overview](../../purview/catalog-private-link.md#conceptual-overview).
+You can use [Azure private endpoints](../../private-link/private-endpoint-overview.md) for your Azure Purview accounts to allow secure access from a virtual network (VNet) to the catalog over a Private Link. Azure Purview provides different types of private endpoints for various access needs: *account* private endpoint, *portal* private endpoint, and *ingestion* private endpoints. Learn more from [Azure Purview private endpoints conceptual overview](../../purview/catalog-private-link.md#conceptual-overview).
-If your Purview account is protected by firewall and denies public access, make sure you follow below checklist to set up the private endpoints so Synapse can successfully connect to Purview.
+If your Azure Purview account is protected by a firewall and denies public access, make sure you follow the checklist below to set up the private endpoints so that Synapse can successfully connect to Azure Purview.
-| Scenario | Required Purview private endpoints |
+| Scenario | Required Azure Purview private endpoints |
| | |
-| [Run pipeline and report lineage to Purview](../../purview/how-to-lineage-azure-synapse-analytics.md) | For Synapse pipeline to push lineage to Purview, Purview ***account*** and ***ingestion*** private endpoints are required. <br>- When using **Azure Integration Runtime**, follow the steps in [Managed private endpoints for Purview](#managed-private-endpoints-for-purview) section to create managed private endpoints in the Synapse managed virtual network.<br>- When using **Self-hosted Integration Runtime**, follow the steps in [this section](../../purview/catalog-private-link-end-to-end.md#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts) to create the *account* and *ingestion* private endpoints in your integration runtime's virtual network. |
-| [Discover and explore data using Purview on Synapse Studio](how-to-discover-connect-analyze-azure-purview.md) | To use the search bar at the top center of Synapse Studio to search for Purview data and perform actions, you need to create Purview ***account*** and ***portal*** private endpoints in the virtual network that you launch the Synapse Studio. Follow the steps in [Enable *account* and *portal* private endpoint](../../purview/catalog-private-link-account-portal.md#option-2enable-account-and-portal-private-endpoint-on-existing-azure-purview-accounts). |
+| [Run pipeline and report lineage to Azure Purview](../../purview/how-to-lineage-azure-synapse-analytics.md) | For Synapse pipeline to push lineage to Azure Purview, Azure Purview ***account*** and ***ingestion*** private endpoints are required. <br>- When using **Azure Integration Runtime**, follow the steps in [Managed private endpoints for Azure Purview](#managed-private-endpoints-for-azure-purview) section to create managed private endpoints in the Synapse managed virtual network.<br>- When using **Self-hosted Integration Runtime**, follow the steps in [this section](../../purview/catalog-private-link-end-to-end.md#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts) to create the *account* and *ingestion* private endpoints in your integration runtime's virtual network. |
+| [Discover and explore data using Azure Purview on Synapse Studio](how-to-discover-connect-analyze-azure-purview.md) | To use the search bar at the top center of Synapse Studio to search for Azure Purview data and perform actions, you need to create Azure Purview ***account*** and ***portal*** private endpoints in the virtual network that you launch the Synapse Studio. Follow the steps in [Enable *account* and *portal* private endpoint](../../purview/catalog-private-link-account-portal.md#option-2enable-account-and-portal-private-endpoint-on-existing-azure-purview-accounts). |
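For the self-hosted integration runtime path, the *account* private endpoint can also be created with the generic private-endpoint CLI; a hedged sketch (the `--group-id account` value mirrors the endpoint types above, and all names are placeholders):

```azurecli
# Sketch: create the Azure Purview *account* private endpoint in a VNet.
# The *portal* endpoint is analogous with --group-id portal.
az network private-endpoint create \
    --name purview-account-pe \
    --resource-group myresourcegroup \
    --vnet-name myvnet \
    --subnet mysubnet \
    --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Purview/accounts/<purview-account>" \
    --group-id account \
    --connection-name purview-account-connection
```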
-## Managed private endpoints for Purview
+## Managed private endpoints for Azure Purview
-[Managed private endpoints](../security/synapse-workspace-managed-private-endpoints.md) are private endpoints created a Managed Virtual Network associated with your Azure Synapse workspace. When you run pipeline and report lineage to a firewall protected Azure Purview account, make sure your Synapse workspace is created with "Managed virtual network" option enabled, then create the Purview ***account*** and ***ingestion*** managed private endpoints as follows.
+[Managed private endpoints](../security/synapse-workspace-managed-private-endpoints.md) are private endpoints created in a Managed Virtual Network associated with your Azure Synapse workspace. When you run pipelines and report lineage to a firewall-protected Azure Purview account, make sure your Synapse workspace is created with the "Managed virtual network" option enabled, then create the Azure Purview ***account*** and ***ingestion*** managed private endpoints as follows.
### Create managed private endpoints
-To create managed private endpoints for Purview on Synapse Studio:
+To create managed private endpoints for Azure Purview on Synapse Studio:
-1. Go to **Manage** -> **Azure Purview**, and click **Edit** to edit your existing connected Purview account or click **Connect to a Purview account** to connect to a new Purview account.
+1. Go to **Manage** -> **Azure Purview**, and click **Edit** to edit your existing connected Azure Purview account or click **Connect to an Azure Purview account** to connect to a new Azure Purview account.
2. Select **Yes** for **Create managed private endpoints**. You need to have the "**workspaces/managedPrivateEndpoint/write**" permission, for example, through the Synapse Administrator or Synapse Linked Data Manager role.
-3. Click **+ Create all** button to batch create the needed Purview private endpoints, including the ***account*** private endpoint and the ***ingestion*** private endpoints for the Purview managed resources - Blob storage, Queue storage, and Event Hubs namespace. You need to have at least **Reader** role on your Purview account for Synapse to retrieve the Purview managed resources' information.
+3. Click the **+ Create all** button to batch create the needed Azure Purview private endpoints, including the ***account*** private endpoint and the ***ingestion*** private endpoints for the Azure Purview managed resources - Blob storage, Queue storage, and Event Hubs namespace. You need at least the **Reader** role on your Azure Purview account for Synapse to retrieve the Azure Purview managed resources' information.
- :::image type="content" source="./media/purview-create-all-managed-private-endpoints.png" alt-text="Create managed private endpoint for your connected Purview account.":::
+ :::image type="content" source="./media/purview-create-all-managed-private-endpoints.png" alt-text="Create managed private endpoint for your connected Azure Purview account.":::
4. On the next page, specify a name for the private endpoint. The name is also used to generate the names of the ingestion private endpoints, with suffixes appended.
- :::image type="content" source="./media/name-purview-private-endpoints.png" alt-text="Name the managed private endpoints for your connected Purview account.":::
+ :::image type="content" source="./media/name-purview-private-endpoints.png" alt-text="Name the managed private endpoints for your connected Azure Purview account.":::
-5. Click **Create** to create the private endpoints. After creation, 4 private endpoint requests will be generated that must [get approved by an owner of Purview](#approve-private-endpoint-connections).
+5. Click **Create** to create the private endpoints. After creation, four private endpoint requests are generated that must [be approved by an owner of Azure Purview](#approve-private-endpoint-connections).
-Such batch managed private endpoint creation is provided on the Synapse Studio only. If you want to create the managed private endpoints programmatically, you need to create those PEs individually. You can find Purview managed resources' information from Azure portal -> your Purview account -> Managed resources.
+Such batch creation of managed private endpoints is available only in Synapse Studio. If you want to create the managed private endpoints programmatically, you need to create each private endpoint individually, as sketched below. You can find the Azure Purview managed resources' information in the Azure portal -> your Azure Purview account -> Managed resources.
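The exact programmatic surface isn't shown in this article; as a hedged sketch, the `az synapse managed-private-endpoints` CLI group (assumed here, along with a hypothetical definition file pairing a `privateLinkResourceId` with a `groupId`) creates one managed private endpoint per call:

```azurecli
# Hypothetical sketch: create the *account* managed private endpoint for Azure Purview.
# purview-account-pe.json is assumed to contain:
#   { "privateLinkResourceId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Purview/accounts/<purview-account>",
#     "groupId": "account" }
az synapse managed-private-endpoints create \
    --workspace-name myworkspace \
    --pe-name purview-account-pe \
    --file @purview-account-pe.json
```

Repeat the call with the managed resources' resource IDs and group IDs to cover the ingestion endpoints.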
### Approve private endpoint connections
-After you create the managed private endpoints for Purview, you see "Pending" state first. The Purview owner need to approve the private endpoint connections for each resource.
+After you create the managed private endpoints for Azure Purview, they first appear in a "Pending" state. The Azure Purview owner needs to approve the private endpoint connections for each resource.
-If you have permission to approve the Purview private endpoint connection, from Synapse Studio:
+If you have permission to approve the Azure Purview private endpoint connection, from Synapse Studio:
1. Go to **Manage** -> **Azure Purview** -> **Edit** 2. In the private endpoint list, click the **Edit** (pencil) button next to each private endpoint name
If you have permission to approve the Purview private endpoint connection, from
4. On the given resource, go to **Networking** -> **Private endpoint connection** to approve it. The private endpoint is named `data_factory_name.your_defined_private_endpoint_name` with the description "Requested by data_factory_name". 5. Repeat this operation for all private endpoints.
-If you don't have permission to approve the Purview private endpoint connection, ask the Purview account owner to do as follows.
+If you don't have permission to approve the Azure Purview private endpoint connection, ask the Azure Purview account owner to do as follows.
-- For *account* private endpoint, go to Azure portal -> your Purview account -> Networking -> Private endpoint connection to approve.-- For *ingestion* private endpoints, go to Azure portal -> your Purview account -> Managed resources, click into the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
+- For *account* private endpoint, go to Azure portal -> your Azure Purview account -> Networking -> Private endpoint connection to approve.
+- For *ingestion* private endpoints, go to Azure portal -> your Azure Purview account -> Managed resources, click into the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
### Monitor managed private endpoints
-You can monitor the created managed private endpoints for Purview at two places:
+You can monitor the created managed private endpoints for Azure Purview in two places:
-- Go to **Manage** -> **Azure Purview** -> **Edit** to open your existing connected Purview account. To see all the relevant private endpoints, you need to have at least **Reader** role on your Purview account for Synapse to retrieve the Purview managed resources' information. Otherwise, you only see *account* private endpoint with warning.-- Go to **Manage** -> **Managed private endpoints** where you see all the managed private endpoints created under the Synapse workspace. If you have at least **Reader** role on your Purview account, you see Purview relevant private endpoints being grouped together. Otherwise, they show up separately in the list.
+- Go to **Manage** -> **Azure Purview** -> **Edit** to open your existing connected Azure Purview account. To see all the relevant private endpoints, you need at least the **Reader** role on your Azure Purview account for Synapse to retrieve the Azure Purview managed resources' information. Otherwise, you only see the *account* private endpoint, with a warning.
+- Go to **Manage** -> **Managed private endpoints** where you see all the managed private endpoints created under the Synapse workspace. If you have at least the **Reader** role on your Azure Purview account, you see the Azure Purview relevant private endpoints grouped together. Otherwise, they show up separately in the list.
## Next steps
synapse-analytics How To Discover Connect Analyze Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/how-to-discover-connect-analyze-azure-purview.md
In this document, you will learn the type of interactions that you can perform
## Prerequisites -- [Azure Purview account](../../purview/create-catalog-portal.md)
+- [Azure Purview account](../../purview/create-catalog-portal.md)
- [Synapse workspace](../quickstart-create-workspace.md) - [Connect an Azure Purview Account into Synapse](quickstart-connect-azure-purview.md) ## Using Azure Purview in Synapse
-The use Azure Purview in Synapse requires you to have access to that Purview account. Synapse passes-through your Purview permission. As an example, if you have a curator permission role, you will be able to edit metadata scanned by Azure Purview.
+Using Azure Purview in Synapse requires you to have access to that Azure Purview account. Synapse passes through your Azure Purview permissions. As an example, if you have a curator permission role, you will be able to edit metadata scanned by Azure Purview.
### Data discovery: search datasets
synapse-analytics Quickstart Connect Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md
# QuickStart: Connect a Synapse workspace to an Azure Purview account
-In this quickstart, you will register an Azure Purview Account to a Synapse workspace. That connection allows you to discover Azure Purview assets, interact with them through Synapse capabilities, and push lineage information to Purview.
+In this quickstart, you will register an Azure Purview Account to a Synapse workspace. That connection allows you to discover Azure Purview assets, interact with them through Synapse capabilities, and push lineage information to Azure Purview.
You can perform the following tasks in Synapse:-- Use the search box at the top to find Purview assets based on keywords
+- Use the search box at the top to find Azure Purview assets based on keywords
- Understand the data based on metadata, [lineage](../../purview/catalog-lineage-user-guide.md), annotations - Connect those data to your workspace with linked services or integration datasets - Analyze those datasets with Synapse Apache Spark, Synapse SQL, and Data Flow -- Execute pipelines and [push lineage information to Purview](../../purview/how-to-lineage-azure-synapse-analytics.md)
+- Execute pipelines and [push lineage information to Azure Purview](../../purview/how-to-lineage-azure-synapse-analytics.md)
## Prerequisites -- [Azure Purview account](../../purview/create-catalog-portal.md)
+- [Azure Purview account](../../purview/create-catalog-portal.md)
- [Synapse workspace](../quickstart-create-workspace.md) ## Permissions for connecting an Azure Purview account
To connect an Azure Purview Account to a Synapse workspace, you need a **Contrib
Follow the steps to connect an Azure Purview account: 1. Go to [https://web.azuresynapse.net](https://web.azuresynapse.net) and sign in to your Synapse workspace.
-2. Go to **Manage** -> **Azure Purview**, select **Connect to a Purview account**.
+2. Go to **Manage** -> **Azure Purview**, select **Connect to an Azure Purview account**.
3. You can choose **From Azure subscription** or **Enter manually**. If you choose **From Azure subscription**, you can select an account that you have access to.
-4. Once connected, you can see the name of the Purview account in the tab **Azure Purview account**.
+4. Once connected, you can see the name of the Azure Purview account in the tab **Azure Purview account**.
-If your Purview account is protected by firewall, create the managed private endpoints for Purview. Learn more about how to let Azure Synapse [access a secured Purview account](how-to-access-secured-purview-account.md). You can either do it during the initial connection or edit an existing connection later.
+If your Azure Purview account is protected by a firewall, create the managed private endpoints for Azure Purview. Learn more about how to let Azure Synapse [access a secured Azure Purview account](how-to-access-secured-purview-account.md). You can either do this during the initial connection or edit an existing connection later.
-The Purview connection information is stored in the Synapse workspace resource like the following. To establish the connection programmatically, you can update the Synapse workspace and add the `purviewConfiguration` settings.
+The Azure Purview connection information is stored in the Synapse workspace resource, as shown in the following example. To establish the connection programmatically, you can update the Synapse workspace and add the `purviewConfiguration` settings.
```json {
The Purview connection information is stored in the Synapse workspace resource l
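As a hedged sketch of that programmatic update, the generic `az resource update` command can patch the workspace resource; the `purviewConfiguration` property name comes from the text above, while the `purviewResourceId` sub-property and all IDs are assumptions/placeholders:

```azurecli
# Sketch only: point the Synapse workspace at an Azure Purview account by
# patching the workspace resource. The property path under `properties` is
# an assumption; all IDs are placeholders.
az resource update \
    --ids "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Synapse/workspaces/<workspace>" \
    --set properties.purviewConfiguration.purviewResourceId="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Purview/accounts/<purview-account>"
```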
## Set up authentication
-Synapse workspace's managed identity is used to authenticate lineage push operations from Synapse workspace to Purview.
+The Synapse workspace's managed identity is used to authenticate lineage push operations from the Synapse workspace to Azure Purview.
-Grant the Synapse workspace's managed identity **Data Curator** role on your Purview **root collection**. Learn more about [Access control in Azure Purview](../../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
+Grant the Synapse workspace's managed identity **Data Curator** role on your Azure Purview **root collection**. Learn more about [Access control in Azure Purview](../../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
-When connecting Synapse workspace to Purview in Synapse Studio, Synapse tries to add such role assignment automatically. If you have **Collection admins** role on the Purview root collection and have access to Purview account from your network, this operation is done successfully.
+When connecting a Synapse workspace to Azure Purview in Synapse Studio, Synapse tries to add such a role assignment automatically. If you have the **Collection admins** role on the Azure Purview root collection and have access to the Azure Purview account from your network, this operation succeeds.
-## Monitor Purview connection
+## Monitor Azure Purview connection
-Once you connect the Synapse workspace to a Purview account, you see the following page with details on the enabled integration capabilities.
+Once you connect the Synapse workspace to an Azure Purview account, you see the following page with details on the enabled integration capabilities.
For **Data Lineage - Synapse Pipeline**, you may see one of the below statuses: -- **Connected**: The Synapse workspace is successfully connected to the Purview account. Note this indicates Synapse workspace is associated with a Purview account and has permission to push lineage to it. If your Purview account is protected by firewall, you also need to make sure the integration runtime used to execute the activities and conduct lineage push can reach the Purview account. Learn more from [Access a secured Azure Purview account](how-to-access-secured-purview-account.md).-- **Disconnected**: The Synapse workspace cannot push lineage to Purview because Purview Data Curator role is not granted to Synapse workspace's managed identity. To fix this issue, go to your Purview account to check the role assignments, and manually grant the role as needed. Learn more from [Set up authentication](#set-up-authentication) section.
+- **Connected**: The Synapse workspace is successfully connected to the Azure Purview account. Note that this indicates the Synapse workspace is associated with an Azure Purview account and has permission to push lineage to it. If your Azure Purview account is protected by a firewall, you also need to make sure the integration runtime used to execute the activities and conduct the lineage push can reach the Azure Purview account. Learn more from [Access a secured Azure Purview account](how-to-access-secured-purview-account.md).
+- **Disconnected**: The Synapse workspace cannot push lineage to Azure Purview because the Azure Purview Data Curator role is not granted to the Synapse workspace's managed identity. To fix this issue, go to your Azure Purview account to check the role assignments, and manually grant the role as needed. Learn more from the [Set up authentication](#set-up-authentication) section.
- **Unknown**: Azure Synapse cannot check the status. Possible reasons are:
- - Cannot reach the Purview account from your current network because the account is protected by firewall. You can launch the Synapse Studio from a private network that has connectivity to your Purview account instead.
- - You don't have permission to check role assignments on the Purview account. You can contact the Purview account admin to check the role assignments for you. Learn about the needed Purview role from [Set up authentication](#set-up-authentication) section.
+ - Cannot reach the Azure Purview account from your current network because the account is protected by a firewall. You can launch Synapse Studio from a private network that has connectivity to your Azure Purview account instead.
+ - You don't have permission to check role assignments on the Azure Purview account. You can contact the Azure Purview account admin to check the role assignments for you. Learn about the needed Azure Purview role in the [Set up authentication](#set-up-authentication) section.
## Report lineage to Azure Purview
-Once you connect the Synapse workspace to a Purview account, when you execute pipelines, Synapse reports lineage information to the Purview account. For detailed supported capabilities and an end to end walkthrough, see [Metadata and lineage from Azure Synapse Analytics](../../purview/how-to-lineage-azure-synapse-analytics.md).
+Once you connect the Synapse workspace to an Azure Purview account, when you execute pipelines, Synapse reports lineage information to the Azure Purview account. For detailed supported capabilities and an end-to-end walkthrough, see [Metadata and lineage from Azure Synapse Analytics](../../purview/how-to-lineage-azure-synapse-analytics.md).
-## Discover and explore data using Purview
+## Discover and explore data using Azure Purview
-Once you connect the Synapse workspace to a Purview account, you can use the search bar at the top center of Synapse workspace to search for data and perform actions. Learn more from [Discover, connect and explore data in Synapse using Azure Purview](how-to-discover-connect-analyze-azure-purview.md).
+Once you connect the Synapse workspace to an Azure Purview account, you can use the search bar at the top center of Synapse workspace to search for data and perform actions. Learn more from [Discover, connect and explore data in Synapse using Azure Purview](how-to-discover-connect-analyze-azure-purview.md).
## Next steps
Once you connect the Synapse workspace to a Purview account, you can use the sea
[Metadata and lineage from Azure Synapse Analytics](../../purview/how-to-lineage-azure-synapse-analytics.md)
-[Access a secured Purview account](how-to-access-secured-purview-account.md)
+[Access a secured Azure Purview account](how-to-access-secured-purview-account.md)
[Register and scan Azure Synapse assets in Azure Purview](../../purview/register-scan-azure-synapse-analytics.md)
synapse-analytics Create Empty Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/database-designer/create-empty-lake-database.md
In this article, you'll learn how to create an empty [lake database](./concepts-
- Your database will be validated for errors before it's published. Any errors found will be shown in the notifications tab with instructions on how to remedy the error. ![Screenshot of the validation pane showing validation errors in the database](./media/create-empty-lake-database/validation-error.png)
- - Publishing will create your database schema in the Azure Synapse Metastore. After publishing, the database and table objects will be visible to other Azure services and allow the metadata from your database to flow into apps like Power BI or Purview.
+ - Publishing will create your database schema in the Azure Synapse Metastore. After publishing, the database and table objects will be visible to other Azure services and allow the metadata from your database to flow into apps like Power BI or Azure Purview.
11. You've now created an empty lake database in Azure Synapse, and added tables to it using the **Custom** and **From data lake** options.
synapse-analytics Create Lake Database From Lake Database Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/database-designer/create-lake-database-from-lake-database-templates.md
In this article, you'll learn how to use the Azure Synapse database templates to
- Your database will be validated for errors before it's published. Any errors found will be shown in the notifications tab with instructions on how to remedy the error. ![Screenshot of the validation pane showing validation errors in the database](./media/create-lake-database-from-lake-database-template/validation-error.png)
- - Publishing will create your database schema in the Azure Synapse Metastore. After publishing, the database and table objects will be visible to other Azure services and allow the metadata from your database to flow into apps like Power BI or Purview.
+ - Publishing will create your database schema in the Azure Synapse Metastore. After publishing, the database and table objects will be visible to other Azure services and allow the metadata from your database to flow into apps like Power BI or Azure Purview.
12. You've now created a lake database using a lake database template in Azure Synapse.
synapse-analytics Modify Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/database-designer/modify-lake-database.md
In this article, you'll learn how to modify an existing [lake database](./concep
- Your database will be validated for errors before it's published. Any errors found will be shown in the notifications tab with instructions on how to remedy the error. ![Screenshot of the validation pane showing validation errors in the database](./media/create-lake-database-from-lake-database-template/validation-error.png)
- - Publishing will create your database schema in the Azure Synapse Metastore. After publishing, the database and table objects will be visible to other Azure services and allow the metadata from your database to flow into apps like Power BI or Purview.
+ - Publishing will create your database schema in the Azure Synapse Metastore. After publishing, the database and table objects will be visible to other Azure services and allow the metadata from your database to flow into apps like Power BI or Azure Purview.
## Customize tables within a database
synapse-analytics Overview Terminology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/overview-terminology.md
Previously updated : 11/02/2021 Last updated : 01/13/2022
A workspace can contain any number of **Linked service**, essentially connection
Inside Synapse Studio, you can work with SQL pools by running **SQL scripts**.
+> [!NOTE]
+> Dedicated SQL pools in Azure Synapse are different from the dedicated SQL pool (formerly SQL DW). Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to the dedicated SQL pool (formerly SQL DW), and vice versa. To enable workspace features for an existing dedicated SQL pool (formerly SQL DW), see [How to enable a workspace for your dedicated SQL pool (formerly SQL DW)](sql-data-warehouse/workspace-connected-create.md).
+ ## Apache Spark for Synapse To use Spark analytics, create and use **serverless Apache Spark pools** in your Synapse workspace. When you start using a Spark pool, the workspace creates a **Spark session** to handle the resources associated with that session.
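For example, a minimal, hedged sketch of creating such a pool with the Azure CLI (all names, sizes, and the Spark version are placeholders):

```azurecli
# Sketch: create a serverless Apache Spark pool in an existing Synapse workspace.
az synapse spark pool create \
    --name mysparkpool \
    --workspace-name myworkspace \
    --resource-group myresourcegroup \
    --spark-version 3.1 \
    --node-count 3 \
    --node-size Medium
```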
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/overview-features.md
Query languages used in Synapse SQL can have different supported features depend
| | Dedicated | Serverless | | | | | | **SELECT statement** | Yes. `SELECT` statement is supported, but some Transact-SQL query clauses, such as [FOR XML/FOR JSON](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), OFFSET/FETCH are not supported. | Yes, `SELECT` statement is supported, but some Transact-SQL query clauses like [FOR XML](/sql/t-sql/queries/select-for-clause-transact-sql?view=azure-sqldw-latest&preserve-view=true), [MATCH](/sql/t-sql/queries/match-sql-graph?view=azure-sqldw-latest&preserve-view=true), [PREDICT](/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest&preserve-view=true), GROUPING SETS, and query hints are not supported. |
-| **INSERT statement** | Yes | No, upload new data to Data lake using Spark or other tools. Use Cosmos DB with the analytical storage for highly transactional workloads. |
+| **INSERT statement** | Yes | No. Upload new data to Data lake using Spark or other tools. Use Cosmos DB with the analytical storage for highly transactional workloads. You can use [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) to create an external table and insert data. |
| **UPDATE statement** | Yes | No, update Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. Use Cosmos DB with the analytical storage for highly transactional workloads. | | **DELETE statement** | Yes | No, delete Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. Use Cosmos DB with the analytical storage for highly transactional workloads.| | **MERGE statement** | Yes ([preview](/sql/t-sql/statements/merge-transact-sql?view=azure-sqldw-latest&preserve-view=true)) | No, merge Parquet/CSV data using Spark and the changes will be automatically available in serverless pool. |
+| **CTAS statement** | Yes | No |
+| **CETAS statement** | Yes, you can perform initial load into an external table using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). | Yes, you can perform initial load into an external table using [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
| **[Transactions](develop-transactions.md)** | Yes | Yes, applicable only on the meta-data objects. | | **[Labels](develop-label.md)** | Yes | No | | **Data load** | Yes. Preferred utility is [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement, but the system supports both BULK load (BCP) and [CETAS](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=azure-sqldw-latest&preserve-view=true) for data loading. | No, you can initially load data into an external table using CETAS statement. |
Synapse SQL pools enable you to use built-in security features to secure your da
| **Azure Active Directory (Azure AD) authentication**| Yes, Azure AD users | Yes, Azure AD logins and users can access serverless SQL pools using their Azure AD identities. | | **Storage Azure Active Directory (Azure AD) passthrough authentication** | Yes | Yes, [Azure AD passthrough authentication](develop-storage-files-storage-access-control.md?tabs=user-identity#supported-storage-authorization-types) is applicable to Azure AD logins. The identity of the Azure AD user is passed to the storage if a credential is not specified. Azure AD passthrough authentication is not available for the SQL users. | | **Storage SAS token authentication** | No | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) or instance-level [CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true). |
-| **Storage Access Key authentication** | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No |
+| **Storage Access Key authentication** | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No, use a SAS token instead of a storage access key. |
| **Storage [Managed Identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics) authentication** | Yes, using [Managed Service Identity Credential](../../azure-sql/database/vnet-service-endpoint-rule-overview.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&preserve-view=true&toc=%2fazure%2fsynapse-analytics%2ftoc.json&view=azure-sqldw-latest&preserve-view=true) | Yes, The query can access the storage using the workspace [Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) credential. | | **Storage Application identity/Service principal (SPN) authentication** | [Yes](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can create a [credential](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential) with a [service principal application ID](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types) that will be used to authenticate on the storage. |
-| **Server-level roles** | No | Yes, sysadmin, public, and other server-roles are supported. |
+| **Server roles** | No | Yes, sysadmin, public, and other server-roles are supported. |
| **SERVER SCOPED CREDENTIAL** | No | Yes, server scoped credentials are used by the `OPENROWSET` function when it doesn't use an explicit data source. | | **Permissions - [Server-level](/sql/relational-databases/security/authentication-access/server-level-roles)** | No | Yes, for example, `CONNECT ANY DATABASE` and `SELECT ALL USER SECURABLES` enable a user to read data from any database. |
-| **Database-scoped roles** | Yes | Yes, you can use `db_owner`, `db_datareader` and `db_ddladmin` roles. |
+| **Database roles** | Yes | Yes, you can use `db_owner`, `db_datareader` and `db_ddladmin` roles. |
| **DATABASE SCOPED CREDENTIAL** | Yes, used in external data sources. | Yes, used in external data sources. | | **Permissions - [Database-level](/sql/relational-databases/security/authentication-access/database-level-roles?view=azure-sqldw-latest&preserve-view=true)** | Yes | Yes | | **Permissions - Schema-level** | Yes, including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema | Yes, you can specify schema-level permissions including ability to GRANT, DENY, and REVOKE permissions to users/logins on the schema |
Synapse SQL pools enable you to use built-in security features to secure your da
| **Data Discovery & Classification** | [Yes](../../azure-sql/database/data-discovery-and-classification-overview.md) | No | | **Vulnerability Assessment** | [Yes](../../azure-sql/database/sql-vulnerability-assessment.md) | No | | **Advanced Threat Protection** | [Yes](../../azure-sql/database/threat-detection-overview.md) | No |
-| **Auditing** | [Yes](../../azure-sql/database/auditing-overview.md) | [Yes](../../azure-sql/database/auditing-overview.md) |
-| **[Firewall rules](../security/synapse-workspace-ip-firewall.md)**| Yes | Yes, the firewall rules can be set on serverless SQL endpoint. |
-| **[Private endpoint](../security/synapse-workspace-managed-private-endpoints.md)**| Yes | Yes, the private endpoint can be set on serverless SQL pool. |
+| **Auditing** | [Yes](../../azure-sql/database/auditing-overview.md) | Yes, [auditing is supported](../../azure-sql/database/auditing-overview.md) in serverless SQL pools. |
+| **[Firewall rules](../security/synapse-workspace-ip-firewall.md)**| Yes | Yes, the firewall rules can be set on the serverless SQL endpoint. |
+| **[Private endpoint](../security/synapse-workspace-managed-private-endpoints.md)**| Yes | Yes, the private endpoint can be set on the serverless SQL pool. |
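As an illustration of the firewall-rules row above, a hedged sketch that opens the workspace (and thus its serverless SQL endpoint) to a single client IP, assuming the `az synapse workspace firewall-rule` command group; names and IPs are placeholders:

```azurecli
# Sketch: add a workspace-level firewall rule that also governs access to the
# serverless SQL endpoint.
az synapse workspace firewall-rule create \
    --name allow-my-client \
    --workspace-name myworkspace \
    --resource-group myresourcegroup \
    --start-ip-address 203.0.113.10 \
    --end-ip-address 203.0.113.10
```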
Dedicated SQL pool and serverless SQL pool use standard Transact-SQL language to query data. For detailed differences, look at the [Transact-SQL language reference](/sql/t-sql/language-reference).
You can use various tools to connect to Synapse SQL to query data.
| | Dedicated | Serverless | | | | |
-| **Synapse Studio** | Yes, SQL scripts | Yes, SQL scripts. Use SSMS or ADS instead of Synapse Studio if you are returning a large amount of data as a result. |
-| **Power BI** | Yes | [Yes](tutorial-connect-power-bi-desktop.md) |
+| **Synapse Studio** | Yes, SQL scripts | Yes, SQL scripts can be used in Synapse Studio. Use SSMS or ADS instead of Synapse Studio if your queries return large result sets. |
+| **Power BI** | Yes | Yes, you can [use Power BI](tutorial-connect-power-bi-desktop.md) to create reports on the serverless SQL pool. Import mode is recommended for reporting. |
| **Azure Analysis Service** | Yes | Yes |
-| **Azure Data Studio** | Yes | [Yes](get-started-azure-data-studio.md), version 1.18.0 or higher. SQL scripts and SQL Notebooks are supported. |
-| **SQL Server Management Studio** | Yes | [Yes](get-started-ssms.md), version 18.5 or higher |
+| **Azure Data Studio (ADS)** | Yes | Yes, you can [use ADS](get-started-azure-data-studio.md) (version 1.18.0 or higher) to query a serverless SQL pool. SQL scripts and SQL Notebooks are supported. |
+| **SQL Server Management Studio (SSMS)** | Yes | Yes, you can [use SSMS](get-started-ssms.md) (version 18.5 or higher) to query a serverless SQL pool. |
> [!NOTE] > You can use SSMS to connect to a serverless SQL pool and run queries. SSMS is partially supported starting from version 18.5; you can use it only to connect and query.
Data that is analyzed can be stored on various storage types. The following tabl
| **Azure Data Lake v2** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from ADLS. | | **Azure Blob Storage** | Yes | Yes, you can use external tables and the `OPENROWSET` function to read data from Azure Blob Storage. | | **Azure SQL/SQL Server (remote)** | No | No, serverless SQL pool cannot reference Azure SQL database. You can reference serverless SQL pools from Azure SQL using [elastic queries](https://devblogs.microsoft.com/azure-sql/read-azure-storage-files-using-synapse-sql-external-tables/) or [linked servers](https://devblogs.microsoft.com/azure-sql/linked-server-to-synapse-sql-to-implement-polybase-like-scenarios-in-managed-instance). |
-| **Dataverse** | No | Yes, using [Synapse link](https://docs.microsoft.com/powerapps/maker/data-platform/azure-synapse-link-data-lake). |
-| **Azure CosmosDB transactional storage** | No | No, use Spark pools to update the Cosmos DB transactional storage. |
-| **Azure CosmosDB analytical storage** | No | Yes, using [Synapse Link](../../cosmos-db/synapse-link.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) |
-| **Apache Spark tables (in workspace)** | No | Only PARQUET and CSV tables using [metadata synchronization](develop-storage-files-spark-tables.md) |
-| **Apache Spark tables (remote)** | No | No |
-| **Databricks tables (remote)** | No | No |
+| **Dataverse** | No | Yes, you can read Dataverse tables using [Synapse Link](https://docs.microsoft.com/powerapps/maker/data-platform/azure-synapse-link-data-lake). |
+| **Azure CosmosDB transactional storage** | No | No, you can't access Cosmos DB containers to read or update data in the transactional storage. Use Spark pools to update the Cosmos DB transactional storage. |
+| **Azure CosmosDB analytical storage** | No | Yes, you can access Cosmos DB analytical storage using [Synapse Link](../../cosmos-db/synapse-link.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json). |
+| **Apache Spark tables (in workspace)** | No | Yes, serverless SQL pool can read PARQUET and CSV tables using [metadata synchronization](develop-storage-files-spark-tables.md).
+| **Apache Spark tables (remote)** | No | No, serverless SQL pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md).
+| **Databricks tables (remote)** | No | No, serverless SQL pool can access only the PARQUET and CSV tables that are [created in Apache Spark pools in the same Synapse workspace](develop-storage-files-spark-tables.md).
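As a hedged sketch of the `OPENROWSET` pattern these rows refer to (the workspace, storage account, and path are placeholders; sqlcmd with Azure AD authentication via `-G` is assumed):

```bash
# Minimal sketch: ad hoc read of Parquet files in ADLS Gen2 through the
# serverless SQL endpoint. All names and paths are placeholders.
sqlcmd -S myworkspace-ondemand.sql.azuresynapse.net -d master -G -Q "
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/mycontainer/data/*.parquet',
    FORMAT = 'PARQUET') AS rows;"
```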
## Data formats
Data that is analyzed can be stored in various storage formats. The following ta
| | Dedicated | Serverless | | | | |
-| **Delimited** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | [Yes](query-single-csv-file.md) |
-| **CSV** | Yes (multi-character delimiters not supported) | [Yes](query-single-csv-file.md) |
-| **Parquet** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | [Yes](query-parquet-files.md), including files with [nested types](query-parquet-nested-types.md) |
+| **Delimited** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can [query delimited files](query-single-csv-file.md). |
+| **CSV** | Yes (multi-character delimiters not supported) | Yes, you can [query CSV files](query-single-csv-file.md). |
+| **Parquet** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can [query Parquet files](query-parquet-files.md), including files with [nested types](query-parquet-nested-types.md). |
| **Hive ORC** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No | | **Hive RC** | [Yes](/sql/t-sql/statements/create-external-file-format-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No |
-| **JSON** | Yes | [Yes](query-json-files.md) |
+| **JSON** | Yes | Yes, you can [query JSON files](query-json-files.md) using delimited text format and JSON functions. |
| **Avro** | No | No | | **[Delta Lake](https://delta.io/)** | No | [Yes](query-delta-lake-format.md), including files with [nested types](query-parquet-nested-types.md) | | **[CDM](/common-data-model/)** | No | No |
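To make the JSON row above concrete, here is a minimal sketch (placeholder names and properties; sqlcmd with `-G` is assumed) that reads each JSON document as a single `NVARCHAR(MAX)` column and extracts fields with `JSON_VALUE`:

```bash
# Minimal sketch: query JSON files by reading each document as one column and
# extracting fields with JSON functions. Names, paths, and properties are placeholders.
sqlcmd -S myworkspace-ondemand.sql.azuresynapse.net -d master -G -Q "
SELECT JSON_VALUE(doc, '$.id') AS id, JSON_VALUE(doc, '$.name') AS name
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/mycontainer/docs/*.json',
    FORMAT = 'CSV', FIELDTERMINATOR = '0x0b', FIELDQUOTE = '0x0b')
WITH (doc NVARCHAR(MAX)) AS rows;"
```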
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/whats-new.md
The following updates are new to Azure Synapse Analytics this month.
* Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse) * Custom partitions for Synapse link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](../cosmos-db/custom-partitioning-analytical-store.md)
-* Map data tool (Public Preview), a no-code guided ETL experience [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](/database-designer/overview-map-data.md)
+* Map data tool (Public Preview), a no-code guided ETL experience [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](/azure/synapse-analytics/database-designer/overview-map-data)
* Quick reuse of spark cluster [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF7) [article](../data-factory/concepts-integration-runtime-performance.md#time-to-live) * External Call transformation [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF9) [article](../data-factory/data-flow-external-call.md) * Flowlets (Public Preview) [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-december-2021-update/ba-p/3042904#REF10) [article](../data-factory/concepts-data-flow-flowlet.md)
time-series-insights Concepts Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/concepts-power-bi.md
Azure Time Series Insights now seamlessly integrates with [Power BI](https://pow
### Learn more about integrating Azure Time Series Insights with Power BI
-> [!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Power-BI-integration-with-TSI/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Power-BI-integration-with-TSI/player]
## Summary
time-series-insights Overview What Is Tsi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/overview-what-is-tsi.md
Azure Time Series Insights Gen2 is designed for ad hoc data exploration and oper
Learn more about Azure Time Series Insights Gen2.
-> [!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Using-Azure-Time-Series-Insights-to-create-an-Industrial-IoT-analytics-platform/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Using-Azure-Time-Series-Insights-to-create-an-Industrial-IoT-analytics-platform/player]
## Definition of IoT data
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/start-virtual-machine-connect.md
Now that you've assigned your subscription the role, it's time to configure the
### Deployment considerations
-Start VM on Connect is a host pool setting. If you only want a select group of users to use this feature, make sure you only assign the required role to the users you want to add.
+Start VM on Connect is a host pool setting.
For personal desktops, the feature will only turn on an existing VM that the service has already assigned or will assign to a user. In a pooled host pool scenario, the service will only turn on a VM when none are turned on. The feature will only turn on additional VMs when the first VM reaches the session limit.
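As a rough sketch only (this assumes the `desktopvirtualization` Azure CLI extension and its `--start-vm-on-connect` flag; the host pool and resource group names are placeholders), the setting can be enabled on an existing host pool like this:

```azurecli-interactive
# Hypothetical sketch: enable Start VM on Connect on an existing host pool.
az desktopvirtualization hostpool update \
    --resource-group myResourceGroup \
    --name myHostPool \
    --start-vm-on-connect true
```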
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/teams-on-avd.md
Title: Microsoft Teams on Azure Virtual Desktop - Azure
description: How to use Microsoft Teams on Azure Virtual Desktop. Previously updated : 01/07/2021 Last updated : 01/14/2022
>Media optimization for Teams is supported for Microsoft 365 Government (GCC) and GCC-High environments. Media optimization for Teams is not supported for Microsoft 365 DoD. >[!NOTE]
->Media optimization for Microsoft Teams is only available for the following two Windows 10 clients:
->
-> - Windows Desktop client, version 1.2.1026.0 or later
-> - macOS Remote Desktop client, version 10.7.2 or later
->
-> Teams for the macOS Remote Desktop client is currently in public preview. In order for the macOS client version of Teams to work properly, you must go to **App Preferences** > **General** and enable Teams optimizations.
+>Media optimization for Microsoft Teams is only available for the Windows Desktop client on Windows 10 machines. Media optimizations require Windows Desktop client version 1.2.1026.0 or later.
Microsoft Teams on Azure Virtual Desktop supports chat and collaboration. With media optimizations, it also supports calling and meeting functionality. To learn more about how to use Microsoft Teams in Virtual Desktop Infrastructure (VDI) environments, see [Teams for Virtualized Desktop Infrastructure](/microsoftteams/teams-for-vdi/).
virtual-machines Field Programmable Gate Arrays Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/field-programmable-gate-arrays-attestation.md
Prior to performing any operations with Azure, you must log into Azure and set t
Your netlist file must be uploaded to an Azure storage blob container for access by the attestation service.
-Refer to this page for more information on creating the account, a container, and uploading your netlist as a blob to that container: [https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-cli](../storage/blobs/storage-quickstart-blobs-cli.md).
+Refer to this page for more information on creating the account, a container, and uploading your netlist as a blob to that container: [https://docs.microsoft.com/azure/storage/blobs/storage-quickstart-blobs-cli](../storage/blobs/storage-quickstart-blobs-cli.md).
You can also use the Azure portal for these steps.
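For orientation, a minimal CLI sketch of those three steps might look like the following (the account, container, and file names are placeholders):

```azurecli-interactive
# Sketch: create a storage account and container, then upload the netlist blob.
az storage account create --resource-group myResourceGroup --name mystorageaccount --location eastus
az storage container create --account-name mystorageaccount --name netlists
az storage blob upload --account-name mystorageaccount --container-name netlists \
    --name my-netlist --file ./my-netlist
```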
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/image-builder-overview.md
To allow Azure VM Image Builder to distribute images to either the managed image
In API version 2021-10-01 and beyond, Azure VM Image Builder supports adding Azure user-assigned identities to the build VM to enable scenarios where you will need to authenticate with services like Azure Key Vault in your subscription.
-For more information on permissions, please see the following links: [PowerShell](./linux/image-builder-permissions-powershell.md), [AZ CLI](./linux/image-builder-permissions-cli.md) and [Image Builder template reference: Identity](https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-json#identity).
+For more information on permissions, please see the following links: [PowerShell](./linux/image-builder-permissions-powershell.md), [AZ CLI](./linux/image-builder-permissions-cli.md) and [Image Builder template reference: Identity](./linux/image-builder-json.md#identity).
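As a small sketch of the first step in that scenario (the identity and group names are placeholders; the identity is then referenced in the image template as described in the Identity reference linked above):

```azurecli-interactive
# Sketch: create a user-assigned identity that a build VM could use to
# authenticate to services such as Azure Key Vault.
az identity create --resource-group myResourceGroup --name aibBuildVmIdentity
```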
## Costs You will incur some compute, networking and storage costs when creating, building and storing images with Azure Image Builder. These costs are similar to the costs incurred in manually creating custom images. For the resources, you will be charged at your Azure rates.
az vm image list --publisher Canonical --sku gen2 --output table --all
``` For more information on which Azure VM images support Gen2, please visit: [Generation 2 VM images in Azure Marketplace
-](https://docs.microsoft.com/azure/virtual-machines/generation-2)
+](./generation-2.md)
## Next steps
-To try out the Azure Image Builder, see the articles for building [Linux](./linux/image-builder.md) or [Windows](./windows/image-builder.md) images.
+To try out the Azure Image Builder, see the articles for building [Linux](./linux/image-builder.md) or [Windows](./windows/image-builder.md) images.
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/migrate-to-premium-storage-using-azure-site-recovery.md
For specific scenarios for migrating virtual machines, see the following resourc
* [Migrate Azure Virtual Machines between Storage Accounts](https://azure.microsoft.com/blog/2014/10/22/migrate-azure-virtual-machines-between-storage-accounts/) * [Upload a Linux virtual hard disk](upload-vhd.md)
-* [Migrating Virtual Machines from Amazon AWS to Microsoft Azure](https://channel9.msdn.com/Series/Migrating-Virtual-Machines-from-Amazon-AWS-to-Microsoft-Azure)
+* Migrating Virtual Machines from Amazon AWS to Microsoft Azure
Also, see the following resources to learn more about Azure Storage and Azure Virtual Machines:
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/scheduled-events.md
if __name__ == '__main__':
``` ## Next steps -- Watch [Scheduled Events on Azure Friday](https://channel9.msdn.com/Shows/Azure-Friday/Using-Azure-Scheduled-Events-to-Prepare-for-VM-Maintenance) to see a demo. - Review the Scheduled Events code samples in the [Azure Instance Metadata Scheduled Events GitHub repository](https://github.com/Azure-Samples/virtual-machines-scheduled-events-discover-endpoint-for-non-vnet-vm). - Read more about the APIs that are available in the [Instance Metadata Service](instance-metadata-service.md). - Learn about [planned maintenance for Linux virtual machines in Azure](../maintenance-and-updates.md?bc=/azure/virtual-machines/linux/breadcrumb/toc.json&toc=/azure/virtual-machines/linux/toc.json).
virtual-machines Spot Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/spot-cli.md
You can simulate an eviction of an Azure Spot Virtual Machine using REST, PowerS
In most cases, you will want to use the REST API [Virtual Machines - Simulate Eviction](/rest/api/compute/virtualmachines/simulateeviction) to help with automated testing of applications. For REST, a `Response Code: 204` means the simulated eviction was successful. You can combine simulated evictions with the [Scheduled Event service](scheduled-events.md), to automate how your app will respond when the VM is evicted.
-To see scheduled events in action, watch [Azure Friday - Using Azure Scheduled Events to prepare for VM maintenance](https://channel9.msdn.com/Shows/Azure-Friday/Using-Azure-Scheduled-Events-to-Prepare-for-VM-Maintenance).
+To see scheduled events in action, watch Azure Friday - Using Azure Scheduled Events to prepare for VM maintenance.
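For reference, a minimal sketch of triggering the simulated eviction from the CLI (the resource group and VM names are placeholders):

```azurecli-interactive
# Sketch: simulate an eviction of a Spot VM to test how the app responds.
az vm simulate-eviction --resource-group myResourceGroup --name mySpotVM
```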
### Quick test
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/time-sync.md
Time sync is important for security and event correlation. Sometimes it is used
Azure is backed by infrastructure running Windows Server 2016. Windows Server 2016 has improved algorithms used to correct time and condition the local clock to synchronize with UTC. The Windows Server 2016 Accurate Time feature greatly improved how the VMICTimeSync service that governs VMs with the host for accurate time. Improvements include more accurate initial time on VM start or VM restore and interrupt latency correction. > [!NOTE]
-> For a quick overview of Windows Time service, take a look at this [high-level overview video](https://aka.ms/WS2016TimeVideo).
+> For a quick overview of Windows Time service, take a look at this [high-level overview video](/shows/).
> > For more information, see [Accurate time for Windows Server 2016](/windows-server/networking/windows-time-service/accurate-time).
On SUSE and Ubuntu releases before 19.10, time sync is configured using [systemd
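To check which service is disciplining the clock, a quick sketch like the following can help (it assumes chrony is the active time service on the distribution in question):

```bash
# Sketch: confirm the system clock is synchronized and inspect the source.
timedatectl status    # shows whether the system clock is synchronized
chronyc tracking      # if chronyd is active, shows the reference source and offsets
```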
## Next steps
-For more information, see [Accurate time for Windows Server 2016](/windows-server/networking/windows-time-service/accurate-time).
--
+For more information, see [Accurate time for Windows Server 2016](/windows-server/networking/windows-time-service/accurate-time).
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/managed-disks-overview.md
Refer to our [design for high performance](premium-storage-performance.md) artic
## Next steps
-If you'd like a video going into more detail on managed disks, check out: [Better Azure VM Resiliency with Managed Disks](https://channel9.msdn.com/Blogs/Azure/Managed-Disks-for-Azure-Resiliency).
+If you'd like a video going into more detail on managed disks, check out Better Azure VM Resiliency with Managed Disks.
Learn more about the individual disk types Azure offers, which type is a good fit for your needs, and learn about their performance targets in our article on disk types.
virtual-machines Monitor Vm Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/monitor-vm-reference.md
This section lists the platform metrics that are collected for Azure virtual mac
| Metric type | Resource provider / type namespace<br/> and link to individual metrics | |-|--|
-| Virtual machines | [Microsoft.Compute/virtualMachines](/azure/azure-monitor/essentials/metrics-supported#microsoftcomputevirtualmachines) |
-| Virtual machine scale sets | [Microsoft.Compute/virtualMachineScaleSets](/azure/azure-monitor/essentials/metrics-supported#microsoftcomputevirtualmachinescalesets)|
-| Virtual machine scale sets and virtual machines | [Microsoft.Compute/virtualMachineScaleSets/virtualMachines](/azure/azure-monitor/essentials/metrics-supported#microsoftcomputevirtualmachinescalesetsvirtualmachines)|
+| Virtual machines | [Microsoft.Compute/virtualMachines](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachines) |
+| Virtual machine scale sets | [Microsoft.Compute/virtualMachineScaleSets](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)|
+| Virtual machine scale sets and virtual machines | [Microsoft.Compute/virtualMachineScaleSets/virtualMachines](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines)|
| | | For more information, see a list of [platform metrics that are supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
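As a quick sketch of pulling one of these platform metrics from the CLI (the subscription ID, resource group, and VM name are placeholders):

```azurecli-interactive
# Sketch: list the "Percentage CPU" platform metric for a single VM.
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
    --metric "Percentage CPU" \
    --interval PT5M
```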
For reference documentation about Azure Monitor Logs and Log Analytics tables, s
## Activity log
-The following table lists a few example operations that relate to creating virtual machines in the activity log. For a complete list of possible log entries, see [Microsoft.Compute Resource Provider options](/azure/role-based-access-control/resource-provider-operations#compute).
+The following table lists a few example operations that relate to creating virtual machines in the activity log. For a complete list of possible log entries, see [Microsoft.Compute Resource Provider options](../role-based-access-control/resource-provider-operations.md#compute).
| Operation | Description | |:|:|
The following table lists a few example operations that relate to creating virtu
| Microsoft.Compute/virtualMachineScaleSets/write | Starts the instances of the virtual machine scale set | | | |
-For more information about the schema of activity log entries, see [Activity log schema](/azure/azure-monitor/essentials/activity-log-schema).
+For more information about the schema of activity log entries, see [Activity log schema](../azure-monitor/essentials/activity-log-schema.md).
## See also
-For a description of monitoring Azure virtual machines, see [Monitoring Azure virtual machines](../virtual-machines/monitor-vm.md).
+For a description of monitoring Azure virtual machines, see [Monitoring Azure virtual machines](../virtual-machines/monitor-vm.md).
virtual-machines Monitor Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/monitor-vm.md
Last updated 11/17/2021
# Monitor Azure virtual machines
-When you have critical applications and business processes that rely on Azure resources, it's important to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure virtual machines (VMs), and it discusses how to use the features of [Azure Monitor](/azure/azure-monitor/overview) to analyze and alert you about this data.
+When you have critical applications and business processes that rely on Azure resources, it's important to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure virtual machines (VMs), and it discusses how to use the features of [Azure Monitor](../azure-monitor/overview.md) to analyze and alert you about this data.
> [!NOTE] > This article provides basic information to help you get started with monitoring your VMs. For a complete guide to monitoring your entire environment of Azure and hybrid virtual machines, see [Monitor virtual machines with Azure Monitor](../azure-monitor/vm/monitor-virtual-machine.md). ## What is Azure Monitor?
-[Azure Monitor](/azure/azure-monitor/overview) is a full stack monitoring service that provides a complete set of features to monitor your Azure resources. You don't need to directly interact with Azure Monitor, though, to perform a variety of monitoring tasks, because its features are integrated with the Azure portal for the Azure services that it monitors. For a tutorial with an overview of how Azure Monitor works with Azure resources, see [Monitor Azure resources by using Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+[Azure Monitor](../azure-monitor/overview.md) is a full stack monitoring service that provides a complete set of features to monitor your Azure resources. You don't need to directly interact with Azure Monitor, though, to perform a variety of monitoring tasks, because its features are integrated with the Azure portal for the Azure services that it monitors. For a tutorial with an overview of how Azure Monitor works with Azure resources, see [Monitor Azure resources by using Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
## Monitoring virtual machine data
After you've enabled VM insights, install the [Azure Monitor agent](../azure-mon
## Analyze metrics Metrics are numerical values that describe some aspect of a system at a particular point in time. Although platform metrics for the virtual machine host are collected automatically, you must [install the Azure Monitor agent](#collect-guest-metrics-and-logs) to collect guest metrics.
-The **Overview** pane includes the most common host metrics, and you can access others by using the **Metrics** pane. With this tool, you can create charts from metric values and visually correlate trends. You can also create a metric alert rule or pin a chart to an Azure dashboard. For a tutorial on using this tool, see [Analyze metrics for an Azure resource](/azure/azure-monitor/essentials/tutorial-metrics).
+The **Overview** pane includes the most common host metrics, and you can access others by using the **Metrics** pane. With this tool, you can create charts from metric values and visually correlate trends. You can also create a metric alert rule or pin a chart to an Azure dashboard. For a tutorial on using this tool, see [Analyze metrics for an Azure resource](../azure-monitor/essentials/tutorial-metrics.md).
:::image type="content" source="media/monitor-vm/metrics-explorer.png" lightbox="media/monitor-vm/metrics-explorer.png" alt-text="Screenshot of the 'Metrics' pane in Azure Monitor.":::
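As a hedged sketch of the metric alert rule mentioned above (all names and the resource ID are placeholders):

```azurecli-interactive
# Sketch: create a metric alert that fires when average CPU exceeds 90%.
az monitor metrics alert create \
    --name HighCpuAlert \
    --resource-group myResourceGroup \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
    --condition "avg Percentage CPU > 90" \
    --description "Average CPU above 90 percent"
```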
For more information about the various alerts for Azure virtual machines, see th
## Next steps
-For documentation about the logs and metrics that are generated by Azure virtual machines, see [Reference: Monitoring Azure virtual machine data](monitor-vm-reference.md).
+For documentation about the logs and metrics that are generated by Azure virtual machines, see [Reference: Monitoring Azure virtual machine data](monitor-vm-reference.md).
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/migrate-to-premium-storage-using-azure-site-recovery.md
For specific scenarios for migrating virtual machines, see the following resourc
* [Migrate Azure Virtual Machines between Storage Accounts](https://azure.microsoft.com/blog/2014/10/22/migrate-azure-virtual-machines-between-storage-accounts/) * [Create and upload a Windows Server VHD to Azure](upload-generalized-managed.md)
-* [Migrating Virtual Machines from Amazon AWS to Microsoft Azure](https://channel9.msdn.com/Series/Migrating-Virtual-Machines-from-Amazon-AWS-to-Microsoft-Azure)
+* Migrating Virtual Machines from Amazon AWS to Microsoft Azure
Also, see the following resources to learn more about Azure Storage and Azure Virtual Machines:
virtual-machines Ps Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/ps-template.md
Creating an Azure virtual machine usually includes two steps:
- Create a resource group. An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a virtual machine. - Create a virtual machine.
-The following example creates an [Azure Generation 2 VM](https://docs.microsoft.com/azure/virtual-machines/generation-2) by default from an [Azure Quickstart template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/vm-simple-windows/azuredeploy.json). Here is a copy of the template:
+The following example creates an [Azure Generation 2 VM](../generation-2.md) by default from an [Azure Quickstart template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/vm-simple-windows/azuredeploy.json). Here is a copy of the template:
[!code-json[create-windows-vm](~/quickstart-templates/quickstarts/microsoft.compute/vm-simple-windows/azuredeploy.json)]
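The article deploys this template with PowerShell; as a rough Azure CLI equivalent for orientation (the resource group name, location, and parameter values are placeholders):

```azurecli-interactive
# Sketch: deploy the quickstart template with the Azure CLI instead of PowerShell.
az group create --name myResourceGroup --location eastus
az deployment group create \
    --resource-group myResourceGroup \
    --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/vm-simple-windows/azuredeploy.json" \
    --parameters adminUsername=azureuser adminPassword='<secure-password>'
```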
To learn more about creating templates, view the JSON syntax and properties for
- [Microsoft.Network/publicIPAddresses](/azure/templates/microsoft.network/publicipaddresses) - [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks) - [Microsoft.Network/networkInterfaces](/azure/templates/microsoft.network/networkinterfaces)-- [Microsoft.Compute/virtualMachines](/azure/templates/microsoft.compute/virtualmachines)
+- [Microsoft.Compute/virtualMachines](/azure/templates/microsoft.compute/virtualmachines)
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/scheduled-events.md
if __name__ == '__main__':
``` ## Next steps -- Watch [Scheduled Events on Azure Friday](https://channel9.msdn.com/Shows/Azure-Friday/Using-Azure-Scheduled-Events-to-Prepare-for-VM-Maintenance) to see a demo. - Review the Scheduled Events code samples in the [Azure Instance Metadata Scheduled Events GitHub repository](https://github.com/Azure-Samples/virtual-machines-scheduled-events-discover-endpoint-for-non-vnet-vm). - Read more about the APIs that are available in the [Instance Metadata Service](instance-metadata-service.md). - Learn about [planned maintenance for Windows virtual machines in Azure](../maintenance-and-updates.md?bc=/azure/virtual-machines/windows/breadcrumb/toc.json&toc=/azure/virtual-machines/windows/toc.json).
virtual-machines Spot Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/spot-powershell.md
You can simulate an eviction of an Azure Spot Virtual Machine using REST, PowerS
In most cases, you will want to use the REST API [Virtual Machines - Simulate Eviction](/rest/api/compute/virtualmachines/simulateeviction) to help with automated testing of applications. For REST, a `Response Code: 204` means the simulated eviction was successful. You can combine simulated evictions with the [Scheduled Event service](scheduled-events.md), to automate how your app will respond when the VM is evicted.
-To see scheduled events in action, watch [Azure Friday - Using Azure Scheduled Events to prepare for VM maintenance](https://channel9.msdn.com/Shows/Azure-Friday/Using-Azure-Scheduled-Events-to-Prepare-for-VM-Maintenance).
+To see scheduled events in action, watch Azure Friday - Using Azure Scheduled Events to prepare for VM maintenance.
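For reference, here is a sketch of calling the Simulate Eviction REST operation directly through `az rest` (the subscription, resource group, and VM names are placeholders, and the API version shown is an assumption; substitute a current one):

```azurecli-interactive
# Sketch: POST to the simulateEviction operation; a 204 response means success.
az rest --method post \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/mySpotVM/simulateEviction?api-version=2021-07-01"
```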
### Quick test
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/time-sync.md
Azure is now backed by infrastructure running Windows Server 2016. Windows Serve
>[!NOTE]
->For a quick overview of Windows Time service, take a look at this [high-level overview video](https://aka.ms/WS2016TimeVideo).
+>For a quick overview of Windows Time service, take a look at this [high-level overview video](/shows/).
> > For more information, see [Accurate time for Windows Server 2016](/windows-server/networking/windows-time-service/accurate-time).
Below are links to more details about the time sync:
- [Windows Server 2016 Improvements ](/windows-server/networking/windows-time-service/windows-server-2016-improvements) - [Accurate Time for Windows Server 2016](/windows-server/networking/windows-time-service/accurate-time)-- [Support boundary to configure the Windows Time service for high-accuracy environments](/windows-server/networking/windows-time-service/support-boundary)
+- [Support boundary to configure the Windows Time service for high-accuracy environments](/windows-server/networking/windows-time-service/support-boundary)
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-configure-system.md
By default the SAP System deployment uses the credentials from the SAP Workload
The high availability configuration for the database tier and the SCS tier is configured using the `database_high_availability` and `scs_high_availability` flags.
-High availability configurations use Pacemaker with Azure fencing agents. The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information see [Create Fencing Agent](high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device)
+High availability configurations use Pacemaker with Azure fencing agents. The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information see [Create Fencing Agent](high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device)
```azurecli-interactive az ad sp create-for-rbac --role="Linux Fence Agent Role" --scopes="/subscriptions/<subscriptionID>" --name="<prefix>-Fencing-Agent"
virtual-machines Hana Vm Troubleshoot Scale Out Ha On Sles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-vm-troubleshoot-scale-out-ha-on-sles.md
The **corosync** config file has to be correct on every node in the cluster incl
The content of **corosync.conf** from the test system is an example.
-The first section is **totem**, as described in [Cluster installation](./high-availability-guide-suse-pacemaker.md#cluster-installation), step 11. You can ignore the value for **mcastaddr**. Just keep the existing entry. The entries for **token** and **consensus** must be set according to [Microsoft Azure SAP HANA documentation][sles-pacemaker-ha-guide].
+The first section is **totem**, as described in [Cluster installation](./high-availability-guide-suse-pacemaker.md#install-the-cluster), step 11. You can ignore the value for **mcastaddr**. Just keep the existing entry. The entries for **token** and **consensus** must be set according to [Microsoft Azure SAP HANA documentation][sles-pacemaker-ha-guide].
<pre><code> totem {
systemctl restart corosync
## SBD device
-How to set up an SBD device on an Azure VM is described in [SBD fencing](./high-availability-guide-suse-pacemaker.md#sbd-device-using-iscsi-target-server).
+How to set up an SBD device on an Azure VM is described in [SBD fencing](./high-availability-guide-suse-pacemaker.md#sbd-with-an-iscsi-target-server).
First, check on the SBD server VM if there are ACL entries for every node in the cluster. Run the following command on the SBD server VM:
On the target VM side, **hso-hana-vm-s2-2** in this example, you can find the fo
/dev/disk/by-id/scsi-36001405e614138d4ec64da09e91aea68: notice: servant: Received command test from hso-hana-vm-s2-1 on disk /dev/disk/by-id/scsi-36001405e614138d4ec64da09e91aea68 </code></pre>
-Check that the entries in **/etc/sysconfig/sbd** correspond to the description in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](./high-availability-guide-suse-pacemaker.md#sbd-device-using-iscsi-target-server). Verify that the startup setting in **/etc/iscsi/iscsid.conf** is set to automatic.
+Check that the entries in **/etc/sysconfig/sbd** correspond to the description in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](./high-availability-guide-suse-pacemaker.md#sbd-with-an-iscsi-target-server). Verify that the startup setting in **/etc/iscsi/iscsid.conf** is set to automatic.
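A quick way to sanity-check the device from a cluster node is a sketch like the following, using the example device ID from this test system (substitute the **id** value from your **/etc/sysconfig/sbd**):

<pre><code># Sketch: list the SBD slots and messages visible on the device
sudo sbd -d /dev/disk/by-id/scsi-36001405e614138d4ec64da09e91aea68 list
</code></pre>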
The following entries are important in **/etc/sysconfig/sbd**. Adapt the **id** value if necessary:
virtual-machines High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
Title: Setting up Pacemaker on SLES in Azure | Microsoft Docs
-description: Setting up Pacemaker on SUSE Linux Enterprise Server in Azure
+ Title: Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure | Microsoft Docs
+description: This article discusses how to set up Pacemaker on SUSE Linux Enterprise Server in Azure.
documentationcenter: saponazure
-# Setting up Pacemaker on SUSE Linux Enterprise Server in Azure
+# Set up Pacemaker on SUSE Linux Enterprise Server in Azure
+
+This article discusses how to set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure.
## Overview
[sles-nfs-guide]:high-availability-guide-suse-nfs.md [sles-guide]:high-availability-guide-suse.md
-In Azure, there are two options to set up stonith in Pacemaker cluster for SLES. You can either use an Azure fence agent, which takes care of restarting a failed node via the Azure APIs or you can use an SBD device. To configure stonith using SBD device, two different methods available in Azure.
+In Azure, you have two options for setting up STONITH in the Pacemaker cluster for SLES. You can use an Azure fence agent, which restarts a failed node via the Azure APIs, or you can use a STONITH block device (SBD device).
+
+### Use an SBD device
+
+You can configure the SBD device by using either of two options:
-- SBD device using iSCSI target server
+- SBD with an iSCSI target server:
- The SBD device requires at least one additional virtual machine that acts as an iSCSI target server and provides an SBD device. These iSCSI target servers can however be shared with other Pacemaker clusters. The advantage of using an SBD device is, if you are already using SBD devices on-premises, doesn't require any changes on how you operate the pacemaker cluster. You can use up to three SBD devices for a Pacemaker cluster to allow an SBD device to become unavailable, for example during OS patching of the iSCSI target server. If you want to use more than one SBD device per Pacemaker, make sure to deploy multiple iSCSI target servers and connect one SBD from each iSCSI target server. We recommend using either one SBD device or three. Pacemaker will not be able to automatically fence a cluster node if you only configure two SBD devices and one of them is not available. If you want to be able to fence when one iSCSI target server is down, you have to use three SBD devices and therefore three iSCSI target servers, which is the most resilient configuration when using SBDs.
+ The SBD device requires at least one additional virtual machine (VM) that acts as an Internet Small Computer System Interface (iSCSI) target server and provides an SBD device. These iSCSI target servers can, however, be shared with other Pacemaker clusters. The advantage of using an SBD device is that if you're already using SBD devices on-premises, they don't require any changes to how you operate the Pacemaker cluster.
- ![Pacemaker on SLES overview](./media/high-availability-guide-suse-pacemaker/pacemaker.png)
+ You can use up to three SBD devices for a Pacemaker cluster to allow an SBD device to become unavailable (for example, during OS patching of the iSCSI target server). If you want to use more than one SBD device per Pacemaker, be sure to deploy multiple iSCSI target servers and connect one SBD from each iSCSI target server. We recommend using either one SBD device or three. Pacemaker can't automatically fence a cluster node if only two SBD devices are configured and one of them is unavailable. If you want to be able to fence when one iSCSI target server is down, you have to use three SBD devices and, therefore, three iSCSI target servers. That's the most resilient configuration when you're using SBDs.
+
+ ![Diagram of Pacemaker on SLES overview.](./media/high-availability-guide-suse-pacemaker/pacemaker.png)
>[!IMPORTANT]
- > When planning and deploying Linux Pacemaker clustered nodes and SBD devices, it is essential for the overall reliability of the complete cluster configuration that the routing between the VMs involved and the VM(s) hosting the SBD device(s) is not passing through any other devices like [NVAs](https://azure.microsoft.com/solutions/network-appliances/). Otherwise, issues and maintenance events with the NVA can have a negative impact on the stability and reliability of the overall cluster configuration. In order to avoid such obstacles, don't define routing rules of NVAs or [User Defined Routing rules](../../../virtual-network/virtual-networks-udr-overview.md) that route traffic between clustered nodes and SBD devices through NVAs and similar devices when planning and deploying Linux Pacemaker clustered nodes and SBD devices.
+ > When you're planning and deploying Linux Pacemaker clustered nodes and SBD devices, do not allow the routing between your virtual machines and the VMs that are hosting the SBD devices to pass through any other devices, such as a [network virtual appliance (NVA)](https://azure.microsoft.com/solutions/network-appliances/).
+ >
+ >Maintenance events and other issues with the NVA can have a negative impact on the stability and reliability of the overall cluster configuration. For more information, see [User-defined routing rules](../../../virtual-network/virtual-networks-udr-overview.md).
-- SBD device using Azure shared disk
+- SBD with an Azure shared disk:
-   To configure SBD device, you need to attach at least one [Azure shared disk](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machines/disks-shared.md) to all virtual machines that are part of Pacemaker cluster. The advantage of SBD device using Azure shared disk is that you don't need to deploy additional virtual machines.
+   To configure an SBD device, you need to attach at least one [Azure shared disk](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/virtual-machines/disks-shared.md) to all virtual machines that are part of the Pacemaker cluster. The advantage of an SBD device that uses an Azure shared disk is that you don't need to deploy additional virtual machines.
- ![Azure shared disk SBD device for SLES Pacemaker cluster](./media/high-availability-guide-suse-pacemaker/azure-shared-disk-sbd-device.png)
+ ![Diagram of the Azure shared disk SBD device for SLES Pacemaker cluster.](./media/high-availability-guide-suse-pacemaker/azure-shared-disk-sbd-device.png)
- **Important Consideration for SBD device using Azure shared disk**
-
- - Azure shared disk with Premium SSD is supported as SBD device.
- - SBD device using Azure shared disk is supported on SLES HA 15 SP01 and above.
- - SBD device using Azure premium shared disk is supported on [locally redundant storage (LRS)](../../disks-redundancy.md#locally-redundant-storage-for-managed-disks) and [zone-redundant storage (ZRS)](../../disks-redundancy.md#zone-redundant-storage-for-managed-disks).
- - Depending on the type of your deployment - availability set or availability zones, choose the appropriate redundant storage for Azure shared disk as your SBD device.
- - SBD device using LRS for Azure premium shared disk (skuName - Premium_LRS) is only supported with deployment in availability set.
- - SBD device using ZRS for Azure premium shared disk (skuName - Premium_ZRS) is recommended with deployment in availability zones.
- - ZRS for managed disk is currently not available in all regions with availability zones. Review the [limitations](../../disks-redundancy.md#limitations) section of ZRS for managed disks for more details.
- - The Azure shared disk used for SBD device doesnΓÇÖt need to be large. The [maxShares](../../disks-shared-enable.md#disk-sizes) value determines how many cluster nodes can use the shared disk. For example, you can use P1 or P2 disk sizes for your SBD device on two-node cluster like SAP ASCS/ERS, SAP HANA scale-up.
- - For [HANA scale-out with HANA system replication (HSR) and pacemaker](sap-hana-high-availability-scale-out-hsr-suse.md), you can use Azure shared disk for SBD device in clusters with up to four nodes per replication site because of the current limit of [maxShares](../../disks-shared-enable.md#disk-sizes).
-  - We don't recommend to attach Azure shared disk SBD device across Pacemaker clusters.
- - If multiple Azure shared disk SBD devices are used, check on the limit for maximum number of data disks that can be attached to VM.
- - For further details on limitations for Azure shared disk, review carefully the [limitations](../../disks-shared.md#limitations) section of Azure Shared Disk documentation.
--- Azure fence agent-
- Azure fence agent requires service principal that takes care of restarting failed nodes via Azure APIs. Azure Fence agent doesn't require deploying additional virtual machine(s).
-
-## SBD device using iSCSI target server
-
-Follow these steps if you want to use SBD device using iSCSI target server for fencing.
+ Here are some important considerations about SBD devices when you're using an Azure shared disk:
-### Set up iSCSI target servers
+ - An Azure shared disk with Premium SSD is supported as an SBD device.
+ - SBD devices that use an Azure shared disk are supported on SLES High Availability 15 SP01 and later.
+ - SBD devices that use an Azure premium shared disk are supported on [locally redundant storage (LRS)](../../disks-redundancy.md#locally-redundant-storage-for-managed-disks) and [zone-redundant storage (ZRS)](../../disks-redundancy.md#zone-redundant-storage-for-managed-disks).
+ - Depending on the type of your deployment (availability set or availability zones), choose the appropriate redundant storage for an Azure shared disk as your SBD device.
+   - An SBD device using LRS for an Azure premium shared disk (skuName - Premium_LRS) is supported only with deployment in an availability set.
+ - An SBD device using ZRS for an Azure premium shared disk (skuName - Premium_ZRS) is recommended with deployment in availability zones.
+   - ZRS for managed disks isn't currently available in all regions that have availability zones. For more information, review the ZRS "Limitations" section in [Redundancy options for managed disks](../../disks-redundancy.md#limitations).
+   - The Azure shared disk that you use for SBD devices doesn't need to be large. The [maxShares](../../disks-shared-enable.md#disk-sizes) value determines how many cluster nodes can use the shared disk. For example, you can use P1 or P2 disk sizes for your SBD device on a two-node cluster such as SAP ASCS/ERS or SAP HANA scale-up.
+ - For [HANA scale-out with HANA system replication (HSR) and Pacemaker](sap-hana-high-availability-scale-out-hsr-suse.md), you can use an Azure shared disk for SBD devices in clusters with up to four nodes per replication site because of the current limit of [maxShares](../../disks-shared-enable.md#disk-sizes).
+ - We do *not* recommend attaching an Azure shared disk SBD device across Pacemaker clusters.
+   - If you use multiple Azure shared disk SBD devices, check the limit on the maximum number of data disks that can be attached to a VM.
+ - For more information about limitations for Azure shared disks, carefully review the "Limitations" section of [Azure shared disk documentation](../../disks-shared.md#limitations).
-You first need to create the iSCSI target virtual machines. iSCSI target servers can be shared with multiple Pacemaker clusters.
+### Use an Azure fence agent
+You can set up STONITH by using an Azure fence agent. Azure fence agents require a service principal that manages restarting failed nodes via Azure APIs. Azure fence agents don't require the deployment of additional virtual machines.
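As a sketch of creating such a service principal (the name and scope are placeholders; this mirrors the `az ad sp create-for-rbac` pattern used in the SAP deployment automation hunk earlier in this digest and assumes the custom Linux Fence Agent Role already exists):

```azurecli-interactive
# Sketch: create a service principal for the fence agent, scoped to the subscription.
az ad sp create-for-rbac --name "myPacemakerFencingAgent" \
    --role "Linux Fence Agent Role" \
    --scopes "/subscriptions/<subscription-id>"
```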
-1. Deploy new SLES 12 SP3 or higher virtual machines and connect to them via ssh. The machines don't need to be large. A virtual machine size like Standard_E2s_v3 or Standard_D2s_v3 is sufficient. Make sure to use Premium storage the OS disk.
-
-Run the following commands on all **iSCSI target virtual machines**.
-
-1. Update SLES
-
- <pre><code>sudo zypper update
- </code></pre>
-
- > [!NOTE]
- > You might need to reboot the OS after you upgrade or update the OS.
-
-1. Remove packages
-
- To avoid a known issue with targetcli and SLES 12 SP3, uninstall the following packages. You can ignore errors about packages that cannot be found
-
- <pre><code>sudo zypper remove lio-utils python-rtslib python-configshell targetcli
- </code></pre>
-
-1. Install iSCSI target packages
-
- <pre><code>sudo zypper install targetcli-fb dbus-1-python
- </code></pre>
-
-1. Enable the iSCSI target service
-
- <pre><code>sudo systemctl enable targetcli
- sudo systemctl start targetcli
- </code></pre>
-
-### Create iSCSI device on iSCSI target server
-
-Run the following commands on all **iSCSI target virtual machines** to create the iSCSI disks for the clusters used by your SAP systems. In the following example, SBD devices for multiple clusters are created. It shows you how you would use one iSCSI target server for multiple clusters. The SBD devices are placed on the OS disk. Make sure that you have enough space.
-
-**`nfs`** is used to identify the NFS cluster, **ascsnw1** is used to identify the ASCS cluster of **NW1**, **dbnw1** is used to identify the database cluster of **NW1**, **nfs-0** and **nfs-1** are the hostnames of the NFS cluster nodes, **nw1-xscs-0** and **nw1-xscs-1** are the hostnames of the **NW1** ASCS cluster nodes, and **nw1-db-0** and **nw1-db-1** are the hostnames of the database cluster nodes. Replace them with the hostnames of your cluster nodes and the SID of your SAP system.
-
-<pre><code># Create the root folder for all SBD devices
-sudo mkdir /sbd
-
-# Create the SBD device for the NFS server
-sudo targetcli backstores/fileio create sbdnfs /sbd/sbdnfs 50M write_back=false
-sudo targetcli iscsi/ create iqn.2006-04.nfs.local:nfs
-sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/luns/ create /backstores/fileio/sbdnfs
-sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.<b>nfs-0.local:nfs-0</b>
-sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.<b>nfs-1.local:nfs-1</b>
-
-# Create the SBD device for the ASCS server of SAP System NW1
-sudo targetcli backstores/fileio create sbdascs<b>nw1</b> /sbd/sbdascs<b>nw1</b> 50M write_back=false
-sudo targetcli iscsi/ create iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>
-sudo targetcli iscsi/iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>/tpg1/luns/ create /backstores/fileio/sbdascs<b>nw1</b>
-sudo targetcli iscsi/iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-xscs-0.local:nw1-xscs-0</b>
-sudo targetcli iscsi/iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-xscs-1.local:nw1-xscs-1</b>
-
-# Create the SBD device for the database cluster of SAP System NW1
-sudo targetcli backstores/fileio create sbddb<b>nw1</b> /sbd/sbddb<b>nw1</b> 50M write_back=false
-sudo targetcli iscsi/ create iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>
-sudo targetcli iscsi/iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>/tpg1/luns/ create /backstores/fileio/sbddb<b>nw1</b>
-sudo targetcli iscsi/iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-db-0.local:nw1-db-0</b>
-sudo targetcli iscsi/iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-db-1.local:nw1-db-1</b>
-
-# save the targetcli changes
-sudo targetcli saveconfig
-</code></pre>
-
-You can check if everything was set up correctly with
-
-<pre><code>sudo targetcli ls
-
-o- / .......................................................................................................... [...]
- o- backstores ............................................................................................... [...]
- | o- block ................................................................................... [Storage Objects: 0]
- | o- fileio .................................................................................. [Storage Objects: 3]
- | | o- <b>sbdascsnw1</b> ................................................ [/sbd/sbdascsnw1 (50.0MiB) write-thru activated]
- | | | o- alua .................................................................................... [ALUA Groups: 1]
- | | | o- default_tg_pt_gp ........................................................ [ALUA state: Active/optimized]
- | | o- <b>sbddbnw1</b> .................................................... [/sbd/sbddbnw1 (50.0MiB) write-thru activated]
- | | | o- alua .................................................................................... [ALUA Groups: 1]
- | | | o- default_tg_pt_gp ........................................................ [ALUA state: Active/optimized]
- | | o- <b>sbdnfs</b> ........................................................ [/sbd/sbdnfs (50.0MiB) write-thru activated]
- | | o- alua .................................................................................... [ALUA Groups: 1]
- | | o- default_tg_pt_gp ........................................................ [ALUA state: Active/optimized]
- | o- pscsi ................................................................................... [Storage Objects: 0]
- | o- ramdisk ................................................................................. [Storage Objects: 0]
- o- iscsi ............................................................................................. [Targets: 3]
- | o- <b>iqn.2006-04.ascsnw1.local:ascsnw1</b> .................................................................. [TPGs: 1]
- | | o- tpg1 ................................................................................ [no-gen-acls, no-auth]
- | | o- acls ........................................................................................... [ACLs: 2]
- | | | o- <b>iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0</b> ............................................... [Mapped LUNs: 1]
- | | | | o- mapped_lun0 ............................................................ [lun0 fileio/<b>sbdascsnw1</b> (rw)]
- | | | o- <b>iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1</b> ............................................... [Mapped LUNs: 1]
- | | | o- mapped_lun0 ............................................................ [lun0 fileio/<b>sbdascsnw1</b> (rw)]
- | | o- luns ........................................................................................... [LUNs: 1]
- | | | o- lun0 .......................................... [fileio/sbdascsnw1 (/sbd/sbdascsnw1) (default_tg_pt_gp)]
- | | o- portals ..................................................................................... [Portals: 1]
- | | o- 0.0.0.0:3260 ...................................................................................... [OK]
- | o- <b>iqn.2006-04.dbnw1.local:dbnw1</b> ...................................................................... [TPGs: 1]
- | | o- tpg1 ................................................................................ [no-gen-acls, no-auth]
- | | o- acls ........................................................................................... [ACLs: 2]
- | | | o- <b>iqn.2006-04.nw1-db-0.local:nw1-db-0</b> ................................................... [Mapped LUNs: 1]
- | | | | o- mapped_lun0 .............................................................. [lun0 fileio/<b>sbddbnw1</b> (rw)]
- | | | o- <b>iqn.2006-04.nw1-db-1.local:nw1-db-1</b> ................................................... [Mapped LUNs: 1]
- | | | o- mapped_lun0 .............................................................. [lun0 fileio/<b>sbddbnw1</b> (rw)]
- | | o- luns ........................................................................................... [LUNs: 1]
- | | | o- lun0 .............................................. [fileio/sbddbnw1 (/sbd/sbddbnw1) (default_tg_pt_gp)]
- | | o- portals ..................................................................................... [Portals: 1]
- | | o- 0.0.0.0:3260 ...................................................................................... [OK]
- | o- <b>iqn.2006-04.nfs.local:nfs</b> .......................................................................... [TPGs: 1]
- | o- tpg1 ................................................................................ [no-gen-acls, no-auth]
- | o- acls ........................................................................................... [ACLs: 2]
- | | o- <b>iqn.2006-04.nfs-0.local:nfs-0</b> ......................................................... [Mapped LUNs: 1]
- | | | o- mapped_lun0 ................................................................ [lun0 fileio/<b>sbdnfs</b> (rw)]
- | | o- <b>iqn.2006-04.nfs-1.local:nfs-1</b> ......................................................... [Mapped LUNs: 1]
- | | o- mapped_lun0 ................................................................ [lun0 fileio/<b>sbdnfs</b> (rw)]
- | o- luns ........................................................................................... [LUNs: 1]
- | | o- lun0 .................................................. [fileio/sbdnfs (/sbd/sbdnfs) (default_tg_pt_gp)]
- | o- portals ..................................................................................... [Portals: 1]
- | o- 0.0.0.0:3260 ...................................................................................... [OK]
- o- loopback .......................................................................................... [Targets: 0]
- o- vhost ............................................................................................. [Targets: 0]
- o- xen-pvscsi ........................................................................................ [Targets: 0]
-</code></pre>
-
-### Set up iSCSI target server SBD device
-
-Connect to the iSCSI device that was created in the last step from the cluster.
-Run the following commands on the nodes of the new cluster you want to create.
-The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1 or **[2]** - only applicable to node 2.
-
-1. **[A]** Connect to the iSCSI devices
-
- First, enable the iSCSI and SBD services.
+## SBD with an iSCSI target server
+
+To use an SBD device backed by an iSCSI target server for fencing, follow the instructions in the next sections.
+
+### Set up the iSCSI target server
+
+You first need to create the iSCSI target virtual machines. You can share iSCSI target servers with multiple Pacemaker clusters.
+
+1. Deploy new SLES 12 SP3 or later virtual machines and connect to them via SSH. The machines don't need to be large; virtual machine sizes Standard_E2s_v3 or Standard_D2s_v3 are sufficient. Be sure to use Premium storage for the OS disk.
+
+1. On **iSCSI target virtual machines**, run the following commands:
+
+ a. Update SLES.
+
+ <pre><code>sudo zypper update
+ </code></pre>
+
+ > [!NOTE]
+ > You might need to reboot after you upgrade or update the OS.
+
+ b. Remove packages.
+
+ To avoid a known issue with targetcli and SLES 12 SP3, uninstall the following packages. You can ignore errors about packages that can't be found.
+
+ <pre><code>sudo zypper remove lio-utils python-rtslib python-configshell targetcli
+ </code></pre>
+
+ c. Install iSCSI target packages.
+
+ <pre><code>sudo zypper install targetcli-fb dbus-1-python
+ </code></pre>
+
+ d. Enable the iSCSI target service.
+
+ <pre><code>sudo systemctl enable targetcli
+ sudo systemctl start targetcli
+ </code></pre>
+
+### Create an iSCSI device on the iSCSI target server
+
+To create the iSCSI disks for the clusters that your SAP systems will use, run the following commands on all iSCSI target virtual machines. The example creates SBD devices for multiple clusters and shows how you would use one iSCSI target server for them all. The SBD devices are placed on the OS disk, so make sure that there's enough space.
+
+* **nfs**: Identifies the NFS cluster.
+* **ascsnw1**: Identifies the ASCS cluster of **NW1**.
+* **dbnw1**: Identifies the database cluster of **NW1**.
+* **nfs-0** and **nfs-1**: The hostnames of the NFS cluster nodes.
+* **nw1-xscs-0** and **nw1-xscs-1**: The hostnames of the **NW1** ASCS cluster nodes.
+* **nw1-db-0** and **nw1-db-1**: The hostnames of the database cluster nodes.
+
+In the following instructions, replace the bold-formatted placeholder text with the hostnames of your cluster nodes and the SID of your SAP system.
+
+1. Create the root folder for all SBD devices.
+ <pre><code>sudo mkdir /sbd</code></pre>
+
+1. Create the SBD device for the NFS server.
+ <pre><code>sudo targetcli backstores/fileio create sbdnfs /sbd/sbdnfs 50M write_back=false
+ sudo targetcli iscsi/ create iqn.2006-04.nfs.local:nfs
+ sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/luns/ create /backstores/fileio/sbdnfs
+ sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.<b>nfs-0.local:nfs-0</b>
+ sudo targetcli iscsi/iqn.2006-04.nfs.local:nfs/tpg1/acls/ create iqn.2006-04.<b>nfs-1.local:nfs-1</b></code></pre>
+
+1. Create the SBD device for the ASCS server of SAP System NW1.
+ <pre><code>sudo targetcli backstores/fileio create sbdascs<b>nw1</b> /sbd/sbdascs<b>nw1</b> 50M write_back=false
+ sudo targetcli iscsi/ create iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>
+ sudo targetcli iscsi/iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>/tpg1/luns/ create /backstores/fileio/sbdascs<b>nw1</b>
+ sudo targetcli iscsi/iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-xscs-0.local:nw1-xscs-0</b>
+ sudo targetcli iscsi/iqn.2006-04.ascs<b>nw1</b>.local:ascs<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-xscs-1.local:nw1-xscs-1</b></code></pre>
+
+1. Create the SBD device for the database cluster of SAP System NW1.
+ <pre><code>sudo targetcli backstores/fileio create sbddb<b>nw1</b> /sbd/sbddb<b>nw1</b> 50M write_back=false
+ sudo targetcli iscsi/ create iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>
+ sudo targetcli iscsi/iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>/tpg1/luns/ create /backstores/fileio/sbddb<b>nw1</b>
+ sudo targetcli iscsi/iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-db-0.local:nw1-db-0</b>
+ sudo targetcli iscsi/iqn.2006-04.db<b>nw1</b>.local:db<b>nw1</b>/tpg1/acls/ create iqn.2006-04.<b>nw1-db-1.local:nw1-db-1</b></code></pre>
+
+1. Save the targetcli changes.
+ <pre><code>sudo targetcli saveconfig</code></pre>
+
+1. Check to ensure that everything was set up correctly.
+ <pre><code>sudo targetcli ls
+
+ o- / .......................................................................................................... [...]
+ o- backstores ............................................................................................... [...]
+ | o- block ................................................................................... [Storage Objects: 0]
+ | o- fileio .................................................................................. [Storage Objects: 3]
+ | | o- <b>sbdascsnw1</b> ................................................ [/sbd/sbdascsnw1 (50.0MiB) write-thru activated]
+ | | | o- alua .................................................................................... [ALUA Groups: 1]
+ | | | o- default_tg_pt_gp ........................................................ [ALUA state: Active/optimized]
+ | | o- <b>sbddbnw1</b> .................................................... [/sbd/sbddbnw1 (50.0MiB) write-thru activated]
+ | | | o- alua .................................................................................... [ALUA Groups: 1]
+ | | | o- default_tg_pt_gp ........................................................ [ALUA state: Active/optimized]
+ | | o- <b>sbdnfs</b> ........................................................ [/sbd/sbdnfs (50.0MiB) write-thru activated]
+ | | o- alua .................................................................................... [ALUA Groups: 1]
+ | | o- default_tg_pt_gp ........................................................ [ALUA state: Active/optimized]
+ | o- pscsi ................................................................................... [Storage Objects: 0]
+ | o- ramdisk ................................................................................. [Storage Objects: 0]
+ o- iscsi ............................................................................................. [Targets: 3]
+ | o- <b>iqn.2006-04.ascsnw1.local:ascsnw1</b> .................................................................. [TPGs: 1]
+ | | o- tpg1 ................................................................................ [no-gen-acls, no-auth]
+ | | o- acls ........................................................................................... [ACLs: 2]
+ | | | o- <b>iqn.2006-04.nw1-xscs-0.local:nw1-xscs-0</b> ............................................... [Mapped LUNs: 1]
+ | | | | o- mapped_lun0 ............................................................ [lun0 fileio/<b>sbdascsnw1</b> (rw)]
+ | | | o- <b>iqn.2006-04.nw1-xscs-1.local:nw1-xscs-1</b> ............................................... [Mapped LUNs: 1]
+ | | | o- mapped_lun0 ............................................................ [lun0 fileio/<b>sbdascsnw1</b> (rw)]
+ | | o- luns ........................................................................................... [LUNs: 1]
+ | | | o- lun0 .......................................... [fileio/sbdascsnw1 (/sbd/sbdascsnw1) (default_tg_pt_gp)]
+ | | o- portals ..................................................................................... [Portals: 1]
+ | | o- 0.0.0.0:3260 ...................................................................................... [OK]
+ | o- <b>iqn.2006-04.dbnw1.local:dbnw1</b> ...................................................................... [TPGs: 1]
+ | | o- tpg1 ................................................................................ [no-gen-acls, no-auth]
+ | | o- acls ........................................................................................... [ACLs: 2]
+ | | | o- <b>iqn.2006-04.nw1-db-0.local:nw1-db-0</b> ................................................... [Mapped LUNs: 1]
+ | | | | o- mapped_lun0 .............................................................. [lun0 fileio/<b>sbddbnw1</b> (rw)]
+ | | | o- <b>iqn.2006-04.nw1-db-1.local:nw1-db-1</b> ................................................... [Mapped LUNs: 1]
+ | | | o- mapped_lun0 .............................................................. [lun0 fileio/<b>sbddbnw1</b> (rw)]
+ | | o- luns ........................................................................................... [LUNs: 1]
+ | | | o- lun0 .............................................. [fileio/sbddbnw1 (/sbd/sbddbnw1) (default_tg_pt_gp)]
+ | | o- portals ..................................................................................... [Portals: 1]
+ | | o- 0.0.0.0:3260 ...................................................................................... [OK]
+ | o- <b>iqn.2006-04.nfs.local:nfs</b> .......................................................................... [TPGs: 1]
+ | o- tpg1 ................................................................................ [no-gen-acls, no-auth]
+ | o- acls ........................................................................................... [ACLs: 2]
+ | | o- <b>iqn.2006-04.nfs-0.local:nfs-0</b> ......................................................... [Mapped LUNs: 1]
+ | | | o- mapped_lun0 ................................................................ [lun0 fileio/<b>sbdnfs</b> (rw)]
+ | | o- <b>iqn.2006-04.nfs-1.local:nfs-1</b> ......................................................... [Mapped LUNs: 1]
+ | | o- mapped_lun0 ................................................................ [lun0 fileio/<b>sbdnfs</b> (rw)]
+ | o- luns ........................................................................................... [LUNs: 1]
+ | | o- lun0 .................................................. [fileio/sbdnfs (/sbd/sbdnfs) (default_tg_pt_gp)]
+ | o- portals ..................................................................................... [Portals: 1]
+ | o- 0.0.0.0:3260 ...................................................................................... [OK]
+ o- loopback .......................................................................................... [Targets: 0]
+ o- vhost ............................................................................................. [Targets: 0]
+ o- xen-pvscsi ........................................................................................ [Targets: 0]
+ </code></pre>
+
+### Set up the iSCSI target server SBD device
+
+From the cluster, connect to the iSCSI device that you created in the last step.
+Run the following commands on the nodes of the new cluster that you want to create.
+
+> [!NOTE]
+> * **[A]**: Applies to all nodes.
+> * **[1]**: Applies only to node 1.
+> * **[2]**: Applies only to node 2.
+
+1. **[A]** Connect to the iSCSI devices. First, enable the iSCSI and SBD services.
   <pre><code>sudo systemctl enable iscsid
   sudo systemctl enable iscsi
   sudo systemctl enable sbd
   </code></pre>
-1. **[1]** Change the initiator name on the first node
+1. **[1]** Change the initiator name on the first node.
<pre><code>sudo vi /etc/iscsi/initiatorname.iscsi </code></pre>
- Change the content of the file to match the ACLs you used when creating the iSCSI device on the iSCSI target server, for example for the NFS server.
+1. **[1]** Change the contents of the file to match the access control lists (ACLs) you used when you created the iSCSI device on the iSCSI target server (for example, for the NFS server).
- <pre><code>InitiatorName=<b>iqn.2006-04.nfs-0.local:nfs-0</b>
- </code></pre>
+ <pre><code>InitiatorName=<b>iqn.2006-04.nfs-0.local:nfs-0</b></code></pre>
-1. **[2]** Change the initiator name on the second node
+1. **[2]** Change the initiator name on the second node.
<pre><code>sudo vi /etc/iscsi/initiatorname.iscsi </code></pre>
- Change the content of the file to match the ACLs you used when creating the iSCSI device on the iSCSI target server
+1. **[2]** Change the contents of the file to match the ACLs you used when you created the iSCSI device on the iSCSI target server.
<pre><code>InitiatorName=<b>iqn.2006-04.nfs-1.local:nfs-1</b> </code></pre>
-1. **[A]** Restart the iSCSI service
-
- Now restart the iSCSI service to apply the change
+1. **[A]** Restart the iSCSI service to apply the change.
   <pre><code>sudo systemctl restart iscsid
   sudo systemctl restart iscsi
   </code></pre>
- Connect the iSCSI devices. In the example below, 10.0.0.17 is the IP address of the iSCSI target server and 3260 is the default port. <b>iqn.2006-04.nfs.local:nfs</b> is one of the target names that is listed when you run the first command below (iscsiadm -m discovery).
+1. **[A]** Connect the iSCSI devices. In the following example, 10.0.0.17 is the IP address of the iSCSI target server, and 3260 is the default port. <b>iqn.2006-04.nfs.local:nfs</b> is one of the target names that's listed when you run the first command, `iscsiadm -m discovery`.
   <pre><code>sudo iscsiadm -m discovery --type=st --portal=<b>10.0.0.17:3260</b>
   sudo iscsiadm -m node -T <b>iqn.2006-04.nfs.local:nfs</b> --login --portal=<b>10.0.0.17:3260</b>
- sudo iscsiadm -m node -p <b>10.0.0.17:3260</b> -T <b>iqn.2006-04.nfs.local:nfs</b> --op=update --name=node.startup --value=automatic
+ sudo iscsiadm -m node -p <b>10.0.0.17:3260</b> -T <b>iqn.2006-04.nfs.local:nfs</b> --op=update --name=node.startup --value=automatic</code></pre>
- # If you want to use multiple SBD devices, also connect to the second iSCSI target server
- sudo iscsiadm -m discovery --type=st --portal=<b>10.0.0.18:3260</b>
+1. **[A]** If you want to use multiple SBD devices, also connect to the second iSCSI target server.
+
+ <pre><code>sudo iscsiadm -m discovery --type=st --portal=<b>10.0.0.18:3260</b>
sudo iscsiadm -m node -T <b>iqn.2006-04.nfs.local:nfs</b> --login --portal=<b>10.0.0.18:3260</b>
- sudo iscsiadm -m node -p <b>10.0.0.18:3260</b> -T <b>iqn.2006-04.nfs.local:nfs</b> --op=update --name=node.startup --value=automatic
+ sudo iscsiadm -m node -p <b>10.0.0.18:3260</b> -T <b>iqn.2006-04.nfs.local:nfs</b> --op=update --name=node.startup --value=automatic</code></pre>
- # If you want to use multiple SBD devices, also connect to the third iSCSI target server
- sudo iscsiadm -m discovery --type=st --portal=<b>10.0.0.19:3260</b>
+1. **[A]** If you want to use multiple SBD devices, also connect to the third iSCSI target server.
+
+ <pre><code>sudo iscsiadm -m discovery --type=st --portal=<b>10.0.0.19:3260</b>
   sudo iscsiadm -m node -T <b>iqn.2006-04.nfs.local:nfs</b> --login --portal=<b>10.0.0.19:3260</b>
   sudo iscsiadm -m node -p <b>10.0.0.19:3260</b> -T <b>iqn.2006-04.nfs.local:nfs</b> --op=update --name=node.startup --value=automatic
   </code></pre>
- Make sure that the iSCSI devices are available and note down the device name (in the following example /dev/sde)
+1. **[A]** Make sure that the iSCSI devices are available and note the device name (**/dev/sde**, in the following example).
<pre><code>lsscsi
# <b>[8:0:0:0] disk LIO-ORG sbdnfs 4.0 /dev/sdf</b> </code></pre>
- Now, retrieve the IDs of the iSCSI devices.
+1. **[A]** Retrieve the IDs of the iSCSI devices.
<pre><code>ls -l /dev/disk/by-id/scsi-* | grep <b>sdd</b>
# lrwxrwxrwx 1 root root 9 Aug 9 13:32 /dev/disk/by-id/scsi-SLIO-ORG_sbdnfs_f88f30e7-c968-4678-bc87-fe7bfcbdb625 -> ../../sdf </code></pre>
- The command list three device IDs for every SBD device. We recommend using the ID that starts with scsi-3, in the example above this is
+ The command lists three device IDs for every SBD device. We recommend using the ID that starts with scsi-3. In the preceding example, the IDs are:
   * **/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03**
   * **/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df**
   * **/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf**
-1. **[1]** Create the SBD device
+1. **[1]** Create the SBD device.
- Use the device ID of the iSCSI devices to create the new SBD devices on the first cluster node.
+ a. Use the device ID of the iSCSI devices to create the new SBD devices on the first cluster node.
- <pre><code>sudo sbd -d <b>/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03</b> -1 60 -4 120 create
- # Also create the second and third SBD devices if you want to use more than one.
- sudo sbd -d <b>/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df</b> -1 60 -4 120 create
+ <pre><code>sudo sbd -d <b>/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03</b> -1 60 -4 120 create</code></pre>
+
+ b. Also create the second and third SBD devices if you want to use more than one.
+ <pre><code>sudo sbd -d <b>/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df</b> -1 60 -4 120 create
sudo sbd -d <b>/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf</b> -1 60 -4 120 create </code></pre>
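+
+   c. Optionally, verify the metadata that <code>sbd create</code> wrote. The <code>dump</code> subcommand prints the header of an initialized SBD device; the device ID below is the first example ID from the previous step.
+
+   <pre><code>sudo sbd -d <b>/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03</b> dump
+   </code></pre>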
-1. **[A]** Adapt the SBD config
+1. **[A]** Adapt the SBD configuration.
- Open the SBD config file
+ a. Open the SBD config file.
<pre><code>sudo vi /etc/sysconfig/sbd </code></pre>
- Change the property of the SBD device, enable the pacemaker integration, and change the start mode of SBD.
+ b. Change the property of the SBD device, enable the Pacemaker integration, and change the start mode of SBD.
   <pre><code>[...]
   <b>SBD_DEVICE="/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-360014053fe4da371a5a4bb69a419a4df;/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf"</b>
[...] </code></pre>
- Create the `softdog` configuration file
+1. **[A]** Create the `softdog` configuration file.
<pre><code>echo softdog | sudo tee /etc/modules-load.d/softdog.conf </code></pre>
- Now load the module
+1. **[A]** Load the module.
<pre><code>sudo modprobe -v softdog </code></pre>
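+
+1. **[A]** Optionally, verify that the watchdog module is loaded:
+
+   <pre><code>lsmod | grep softdog
+   </code></pre>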
-## SBD device using Azure shared disk
+## SBD with an Azure shared disk
-This section is only applicable, if you want to use SBD device using Azure shared disk.
+This section applies only if you want to use an SBD device with an Azure shared disk.
-### Create and attach Azure shared disk with PowerShell
+### Create and attach an Azure shared disk with PowerShell
-Adjust the values for your resource group, Azure region, virtual machines, LUN, and so on.
+1. Adjust the values for your resource group, Azure region, virtual machines, logical unit numbers (LUNs), and so on.
-<pre><code>$ResourceGroup = "<b>MyResourceGroup</b>"
-$Location = "<b>MyAzureRegion</b>"
+ <pre><code>$ResourceGroup = "<b>MyResourceGroup</b>"
+ $Location = "<b>MyAzureRegion</b>"</code></pre>
-# Define the size of the disk based on available disk size for Premium SSDs. In this example, P1 disk size of 4G is mentioned.
-$DiskSizeInGB = <b>4</b>
-$DiskName = "<b>SBD-disk1</b>"
+1. Define the size of the disk based on the available disk sizes for Premium SSDs. This example uses the P1 disk size of 4 GiB.
+ <pre><code>$DiskSizeInGB = <b>4</b>
+ $DiskName = "<b>SBD-disk1</b>"</code></pre>
-# With parameter '-MaxSharesCount', we define the maximum number of cluster nodes to attach the shared disk for SBD device
-$ShareNodes = <b>2</b>
+1. Use the `-MaxSharesCount` parameter to define the maximum number of cluster nodes that can attach the shared disk for the SBD device.
+ <pre><code>$ShareNodes = <b>2</b></code></pre>
-# For SBD device using LRS for Azure premium shared disk, use below storage SkuName
-$SkuName = "<b>Premium_LRS</b>"
-# For SBD device using ZRS for Azure premium shared disk, use below storage SkuName
-$SkuName = "<b>Premium_ZRS</b>"
+1. For an SBD device that uses LRS for an Azure premium shared disk, use the following storage SkuName:
+ <pre><code>$SkuName = "<b>Premium_LRS</b>"</code></pre>
+1. For an SBD device that uses ZRS for an Azure premium shared disk, use the following storage SkuName:
+ <pre><code>$SkuName = "<b>Premium_ZRS</b>"</code></pre>
-# Provision Azure shared disk
-$diskConfig = New-AzDiskConfig -Location $Location -SkuName $SkuName -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $ShareNodes
-$dataDisk = New-AzDisk -ResourceGroupName $ResourceGroup -DiskName $DiskName -Disk $diskConfig
+1. Set up an Azure shared disk.
+ <pre><code>$diskConfig = New-AzDiskConfig -Location $Location -SkuName $SkuName -CreateOption Empty -DiskSizeGB $DiskSizeInGB -MaxSharesCount $ShareNodes
+ $dataDisk = New-AzDisk -ResourceGroupName $ResourceGroup -DiskName $DiskName -Disk $diskConfig</code></pre>
-# Attach the disk to the cluster VMs
-$VM1 = "<b>prod-cl1-0</b>"
-$VM2 = "<b>prod-cl1-1</b>"
+1. Attach the disk to the cluster VMs.
+ <pre><code>$VM1 = "<b>prod-cl1-0</b>"
+ $VM2 = "<b>prod-cl1-1</b>"</code></pre>
-# Add the Azure shared disk to cluster node 1.
-$vm = Get-AzVM -ResourceGroupName $ResourceGroup -Name $VM1
-$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun <b>0</b>
-Update-AzVm -VM $vm -ResourceGroupName $ResourceGroup -Verbose
+ a. Add the Azure shared disk to cluster node 1.
+ <pre><code>$vm = Get-AzVM -ResourceGroupName $ResourceGroup -Name $VM1
+ $vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun <b>0</b>
+ Update-AzVm -VM $vm -ResourceGroupName $ResourceGroup -Verbose</code></pre>
-# Add the Azure shared disk to cluster node 2
-$vm = Get-AzVM -ResourceGroupName $ResourceGroup -Name $VM2
-$vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun <b>0</b>
-Update-AzVm -VM $vm -ResourceGroupName $ResourceGroup -Verbose
-</code></pre>
+ b. Add the Azure shared disk to cluster node 2.
+ <pre><code>$vm = Get-AzVM -ResourceGroupName $ResourceGroup -Name $VM2
+ $vm = Add-AzVMDataDisk -VM $vm -Name $DiskName -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun <b>0</b>
+ Update-AzVm -VM $vm -ResourceGroupName $ResourceGroup -Verbose</code></pre>
-You can also refer to [Deploy a ZRS disk](../../disks-deploy-zrs.md) document if you want to deploy resources using Azure CLI or Azure portal.
+If you want to deploy resources by using the Azure CLI or the Azure portal, you can also refer to [Deploy a ZRS disk](../../disks-deploy-zrs.md).
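+
+For reference, a minimal Azure CLI sketch that mirrors the preceding PowerShell steps might look like the following. It assumes the same example names and values that are used above.
+
+<pre><code># Create the shared disk. This example uses Premium_LRS; use Premium_ZRS for a ZRS disk.
+az disk create --resource-group <b>MyResourceGroup</b> --name <b>SBD-disk1</b> --size-gb 4 --sku Premium_LRS --max-shares 2
+
+# Attach the shared disk to both cluster nodes at LUN 0
+az vm disk attach --resource-group <b>MyResourceGroup</b> --vm-name <b>prod-cl1-0</b> --name <b>SBD-disk1</b> --lun 0
+az vm disk attach --resource-group <b>MyResourceGroup</b> --vm-name <b>prod-cl1-1</b> --name <b>SBD-disk1</b> --lun 0
+</code></pre>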
-### Set up Azure shared disk SBD device
+### Set up an Azure shared disk SBD device
-1. **[A]** Make sure the attached disk is available.
+1. **[A]** Make sure that the attached disk is available.
   <pre><code># lsblk
   NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
<b>[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc</b> </code></pre>
-2. **[A]** Retrieve the IDs of the attached disks.
+1. **[A]** Retrieve the IDs of the attached disks.
   <pre><code># ls -l /dev/disk/by-id/scsi-* | grep sdc
   lrwxrwxrwx 1 root root 9 Nov 8 16:55 /dev/disk/by-id/scsi-14d534654202020204208a67da80744439b513b2a9728af19 -> ../../sdc
   <b>lrwxrwxrwx 1 root root 9 Nov 8 16:55 /dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19 -> ../../sdc</b>
   </code></pre>
- The command list device IDs for SBD device. We recommend using the ID that starts with scsi-3, in the example above this is
+ The command lists the device IDs for the SBD device. We recommend using the ID that starts with scsi-3. In the preceding example, the ID is **/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19**.
- - **/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19**
-
-3. **[1]** Create the SBD device
+1. **[1]** Create the SBD device.
   Use the device ID from step 2 to create the new SBD device on the first cluster node.
   <pre><code># sudo sbd -d <b>/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19</b> -1 60 -4 120 create
   </code></pre>
-4. **[A]** Adapt the SBD config
+1. **[A]** Adapt the SBD configuration.
- Open the SBD config file
+ a. Open the SBD config file.
<pre><code>sudo vi /etc/sysconfig/sbd </code></pre>
- Change the property of the SBD device, enable the pacemaker integration, and change the start mode of SBD.
+ b. Change the property of the SBD device, enable the Pacemaker integration, and change the start mode of the SBD device.
   <pre><code>[...]
   <b>SBD_DEVICE="/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19"</b>
[...] </code></pre>
- Create the `softdog` configuration file
+1. Create the `softdog` configuration file.
<pre><code>echo softdog | sudo tee /etc/modules-load.d/softdog.conf </code></pre>
- Now load the module
+1. Load the module.
<pre><code>sudo modprobe -v softdog </code></pre>
-## STONITH device using Azure fence agent
+## Use an Azure fence agent
-This section is only applicable, if you want to use STONITH device using Azure shared disk.
+This section applies only if you want to use a STONITH device with an Azure fence agent.
-### Create Azure Fence agent STONITH device
+### Create an Azure fence agent STONITH device
-This section of the documentation is only applicable, if using STONITH, based on Azure Fence agent.
-The STONITH device uses a Service Principal to authorize against Microsoft Azure. Follow these steps to create a Service Principal.
+This section applies only if you're using a STONITH device that's based on an Azure fence agent. The STONITH device uses a service principal to authorize against Microsoft Azure. To create a service principal, do the following:
-1. Go to <https://portal.azure.com>
-1. Open the Azure Active Directory blade
- Go to Properties and write down the Directory ID. This is the **tenant ID**.
-1. Click App registrations
-1. Click New Registration
-1. Enter a Name, select "Accounts in this organization directory only"
-2. Select Application Type "Web", enter a sign-on URL (for example http:\//localhost) and click Add.
- The sign-on URL is not used and can be any valid URL
-1. Select Certificates and Secrets, then click New client secret
-1. Enter a description for a new key, select "Never expires" and click Add
-1. Write down the Value. It is used as the **password** for the Service Principal
-1. Select Overview. Write down the Application ID. It is used as the username of the Service Principal
+1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** > **Properties**, and then write down the Directory ID. This is the **tenant ID**.
+1. Select **App registrations**.
+1. Select **New registration**.
+1. Enter a name for the registration, and then select **Accounts in this organization directory only**.
+1. For **Application type**, select **Web**, enter a sign-on URL (for example, <code>http://localhost</code>), and then select **Add**.
+ The sign-on URL is not used and can be any valid URL.
+1. Select **Certificates and secrets**, and then select **New client secret**.
+1. Enter a description for a new key, select **Never expires**, and then select **Add**.
+1. Write down the value, which you'll use as the password for the service principal.
+1. Select **Overview**, and then write down the application ID, which you'll use as the username of the service principal.
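+
+As an alternative to the portal steps, you can also create the service principal with the Azure CLI. The following is only a sketch, and the display name is illustrative. In the output, *appId* is the username, *password* is the client secret, and *tenant* is the tenant ID.
+
+<pre><code>az ad sp create-for-rbac --name <b>fence-agent-sp</b>
+</code></pre>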
### **[1]** Create a custom role for the fence agent
-The Service Principal doesn't have permissions to access your Azure resources by default. You need to give the Service Principal permissions to start and stop (deallocate) all virtual machines of the cluster. If you did not already create the custom role, you can create it using [PowerShell](../../../role-based-access-control/custom-roles-powershell.md#create-a-custom-role) or [Azure CLI](../../../role-based-access-control/custom-roles-cli.md)
+By default, the service principal doesn't have permissions to access your Azure resources. You need to give the service principal permissions to start and stop (deallocate) all virtual machines in the cluster. If you didn't already create the custom role, you can do so by using [PowerShell](../../../role-based-access-control/custom-roles-powershell.md#create-a-custom-role) or the [Azure CLI](../../../role-based-access-control/custom-roles-cli.md).
-Use the following content for the input file. You need to adapt the content to your subscriptions that is, replace c276fc76-9cd4-44c9-99a7-4fd71546436e and e91d47c4-76f3-4271-a796-21b4ecfe3624 with the Ids of your subscription. If you only have one subscription, remove the second entry in AssignableScopes.
+Use the following content for the input file. You need to adapt the content to your subscriptions. That is, replace the example subscription IDs in *AssignableScopes* with your own subscription IDs. If you have only one subscription, remove the second entry under *AssignableScopes*.
```json
{
- "Name": "Linux Fence Agent Role",
+ "Name": "Linux fence agent Role",
"description": "Allows to power-off and start virtual machines", "assignableScopes": [ "/subscriptions/e663cc2d-722b-4be1-b636-bbd9e4c60fd9",
}
```
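
For example, if you saved the role definition to a file, one way to create the custom role is with the Azure CLI (the file name is only an example):

<pre><code>az role definition create --role-definition @<b>fence-agent-role.json</b>
</code></pre>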
-### **[A]** Assign the custom role to the Service Principal
+### **[A]** Assign the custom role to the service principal
-Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the Service Principal. Do not use the Owner role anymore! For detailed steps, see [Assign Azure roles using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
+Assign the custom role *Linux fence agent Role* that you already created to the service principal. Do *not* use the *Owner* role anymore. For more information, see [Assign Azure roles by using the Azure portal](../../../role-based-access-control/role-assignments-portal.md).
-Make sure to assign the role for both cluster nodes.
+Be sure to assign the role for both cluster nodes.
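+
+For example, one way to assign the role with the Azure CLI looks like the following sketch. All bold-formatted values are placeholders for your own IDs and names. Repeat the assignment with the scope of the second virtual machine.
+
+<pre><code>az role assignment create --assignee "<b>application-id</b>" --role "Linux fence agent Role" --scope "/subscriptions/<b>subscription-id</b>/resourceGroups/<b>resource-group</b>/providers/Microsoft.Compute/virtualMachines/<b>vm-name</b>"
+</code></pre>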
-## Cluster installation
+## Install the cluster
-The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1 or **[2]** - only applicable to node 2.
+> [!NOTE]
+> * **[A]**: Applies to all nodes.
+> * **[1]**: Applies only to node 1.
+> * **[2]**: Applies only to node 2.
-1. **[A]** Update SLES
+1. **[A]** Update SLES.
<pre><code>sudo zypper update </code></pre>
-2. **[A]** Install component, needed for cluster resources
+1. **[A]** Install the *socat* component, which you'll need for the cluster resources.
<pre><code>sudo zypper in socat </code></pre>
-3. **[A]** Install azure-lb component, needed for cluster resources
+1. **[A]** Install the azure-lb component, which you'll need for the cluster resources.
   <pre><code>sudo zypper in resource-agents
   </code></pre>

   > [!NOTE]
- > Check the version of package resource-agents and make sure the minimum version requirements are met:
- > - For SLES 12 SP4/SP5, the version must be at least resource-agents-4.3.018.a7fb5035-3.30.1.
- > - For SLES 15/15 SP1, the version must be at least resource-agents-4.3.0184.6ee15eb2-4.13.1.
+ > Check the version of the *resource-agents* package, and make sure that the minimum version requirements are met:
+ > - **SLES 12 SP4/SP5**: The version must be resource-agents-4.3.018.a7fb5035-3.30.1 or later.
+ > - **SLES 15/15 SP1**: The version must be resource-agents-4.3.0184.6ee15eb2-4.13.1 or later.
-4. **[A]** Configure the operating system
+1. **[A]** Configure the operating system.
- In some cases, Pacemaker creates many processes and thereby exhausts the allowed number of processes. In such a case, a heartbeat between the cluster nodes might fail and lead to failover of your resources. We recommend increasing the maximum allowed processes by setting the following parameter.
+ a. Pacemaker occasionally creates many processes, which can exhaust the allowed number. When this happens, a heartbeat between the cluster nodes might fail and lead to a failover of your resources. We recommend increasing the maximum number of allowed processes by setting the following parameter:
   <pre><code># Edit the configuration file
   sudo vi /etc/systemd/system.conf
The following items are prefixed with either **[A]** - applicable to all nodes,
#DefaultTasksMax=512 DefaultTasksMax=4096
- # and to activate this setting
+ # Activate this setting
sudo systemctl daemon-reload
- # test if the change was successful
+ # Test to ensure that the change was successful
sudo systemctl --no-pager show | grep DefaultTasksMax </code></pre>
- Reduce the size of the dirty cache. For more information, see [Low write performance on SLES 11/12 servers with large RAM](https://www.suse.com/support/kb/doc/?id=7010287).
+ b. Reduce the size of the dirty cache. For more information, see [Low write performance on SLES 11/12 servers with large RAM](https://www.suse.com/support/kb/doc/?id=7010287).
   <pre><code>sudo vi /etc/sysctl.conf
   # Change/set the following settings
vm.dirty_background_bytes = 314572800 </code></pre>
-5. **[A]** Configure cloud-netconfig-azure for HA Cluster
+1. **[A]** Configure *cloud-netconfig-azure* for the high availability cluster.
>[!NOTE]
- > Check the installed version of package **cloud-netconfig-azure** by running **zypper info cloud-netconfig-azure**. If the version in your environment is 1.3 or higher, it is no longer necessary to suppress the management of network interfaces by the cloud network plugin. If the version is lower than 1.3, we suggest to update package **cloud-netconfig-azure** to the latest available version.
+ > Check the installed version of the *cloud-netconfig-azure* package by running **zypper info cloud-netconfig-azure**. If the version in your environment is 1.3 or later, it's no longer necessary to suppress the management of network interfaces by the cloud network plug-in. If the version is earlier than 1.3, we recommend that you update the *cloud-netconfig-azure* package to the latest available version.
- Change the configuration file for the network interface as shown below to prevent the cloud network plugin from removing the virtual IP address (Pacemaker must control the VIP assignment). For more information, see [SUSE KB 7023633](https://www.suse.com/support/kb/doc/?id=7023633).
+ To prevent the cloud network plug-in from removing the virtual IP address (Pacemaker must control the assignment), change the configuration file for the network interface as shown in the following code. For more information, see [SUSE KB 7023633](https://www.suse.com/support/kb/doc/?id=7023633).
   <pre><code># Edit the configuration file
   sudo vi /etc/sysconfig/network/ifcfg-eth0
CLOUD_NETCONFIG_MANAGE="no" </code></pre>
-6. **[1]** Enable ssh access
+1. **[1]** Enable SSH access.
<pre><code>sudo ssh-keygen
- # Enter file in which to save the key (/root/.ssh/id_rsa): -> Press ENTER
- # Enter passphrase (empty for no passphrase): -> Press ENTER
- # Enter same passphrase again: -> Press ENTER
+ # Enter file in which to save the key (/root/.ssh/id_rsa), and then select Enter
+ # Enter passphrase (empty for no passphrase), and then select Enter
+ # Enter same passphrase again, and then select Enter
   # Copy the public key
   sudo cat /root/.ssh/id_rsa.pub
   </code></pre>
-7. **[2]** Enable ssh access
+1. **[2]** Enable SSH access.
<pre><code>sudo ssh-keygen
- # Enter file in which to save the key (/root/.ssh/id_rsa): -> Press ENTER
- # Enter passphrase (empty for no passphrase): -> Press ENTER
- # Enter same passphrase again: -> Press ENTER
+ # Enter file in which to save the key (/root/.ssh/id_rsa), and then select Enter
+ # Enter passphrase (empty for no passphrase), and then select Enter
+ # Enter same passphrase again, and then select Enter
- # insert the public key you copied in the last step into the authorized keys file on the second server
+ # Insert the public key you copied in the last step into the authorized keys file on the second server
   sudo vi /root/.ssh/authorized_keys

   # Copy the public key
   sudo cat /root/.ssh/id_rsa.pub
   </code></pre>
-8. **[1]** Enable ssh access
+1. **[1]** Enable SSH access.
   <pre><code># Insert the public key you copied in the last step into the authorized keys file on the first server
   sudo vi /root/.ssh/authorized_keys
   </code></pre>
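+
+1. **[A]** Optionally, verify that passwordless SSH now works in both directions. Replace the IP address with the address of the other node; 10.0.0.6 and 10.0.0.7 are the example addresses used later in this article.
+
+   <pre><code>sudo ssh <b>10.0.0.7</b> "hostname"
+   </code></pre>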
-9. **[A]** Install Fence agents package, if using STONITH device, based on Azure Fence Agent.
+1. **[A]** Install the *fence-agents* package if you're using a STONITH device based on the Azure fence agent.
   <pre><code>sudo zypper install fence-agents
   </code></pre>

   >[!IMPORTANT]
- > The installed version of package **fence-agents** must be at least **4.4.0** to benefit from the faster failover times with Azure Fence Agent, if a cluster nodes needs to be fenced. We recommend that you update the package, if running a lower version.
-
+ > The installed version of the *fence-agents* package must be 4.4.0 or later to benefit from the faster failover times with the Azure fence agent, when a cluster node is fenced. If you're running an earlier version, we recommend that you update the package.
-10. **[A]** Install Azure Python SDK
- - On SLES 12 SP4 or SLES 12 SP5
- <pre><code># You may need to activate the Public cloud extention first
+1. **[A]** Install the Azure Python SDK on SLES 12 SP4 or SLES 12 SP5.
+ <pre><code># You might need to activate the public cloud extension first
   SUSEConnect -p sle-module-public-cloud/12/x86_64
   sudo zypper install python-azure-mgmt-compute
   </code></pre>
- - On SLES 15 and higher
- <pre><code># You may need to activate the Public cloud extention first. In this example the SUSEConnect command is for SLES 15 SP1
+ Install the Azure Python SDK on SLES 15 or later:
+ <pre><code># You might need to activate the public cloud extension first. In this example, the SUSEConnect command is for SLES 15 SP1
   SUSEConnect -p sle-module-public-cloud/15.1/x86_64
   sudo zypper install python3-azure-mgmt-compute
   </code></pre>

   >[!IMPORTANT]
- >Depending on your version and image type, you may need to activate the Public cloud extension for your OS release, before you can install Azure Python SDK.
- >You can check the extension, by running SUSEConnect list-extensions.
- >To achieve the faster failover times with Azure Fence Agent:
- > - on SLES 12 SP4 or SLES 12 SP5 install version **4.6.2** or higher of package python-azure-mgmt-compute.
- > - If python-azure-mgmt-compute or python**3**-azure-mgmt-compute version is **17.0.0-6.7.1**, follow the instrustion in [SUSE KBA](https://www.suse.com/support/kb/doc/?id=000020377) to update fence-agents version and install Azure identity python module if it is missing
+ >Depending on your version and image type, you might need to activate the public cloud extension for your OS release before you can install the Azure Python SDK.
+ >You can check the extension by running `SUSEConnect list-extensions`.
+ >To achieve the faster failover times with the Azure fence agent:
+ > - On SLES 12 SP4 or SLES 12 SP5, install version 4.6.2 or later of the *python-azure-mgmt-compute* package.
+ > - If your *python-azure-mgmt-compute* or *python3-azure-mgmt-compute* package version is 17.0.0-6.7.1, follow the instructions in the [SUSE KBA](https://www.suse.com/support/kb/doc/?id=000020377) to update the fence-agents version and install the Azure Identity client library for Python, if it's missing.
-11. **[A]** Setup host name resolution
+1. **[A]** Set up the hostname resolution.
- You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file.
+ You can either use a DNS server or modify the */etc/hosts* file on all nodes. This example shows how to use the */etc/hosts* file.
   Replace the IP address and the hostname in the following commands.

   >[!IMPORTANT]
- > If using host names in the cluster configuration, it is vital to have reliable host name resolution. The cluster communication will fail, if the names are not available and that can lead to cluster failover delays.
- > The benefit of using /etc/hosts is that your cluster becomes independent of DNS, which could be a single point of failures too.
+ > If you're using hostnames in the cluster configuration, it's essential to have a reliable hostname resolution. The cluster communication will fail if the names are unavailable, and that can lead to cluster failover delays.
+ >
+ > The benefit of using */etc/hosts* is that your cluster becomes independent of the DNS, which could be a single point of failure too.
<pre><code>sudo vi /etc/hosts </code></pre>
- Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment.
+ Insert the following lines in the */etc/hosts* file. Change the IP address and the hostname to match your environment.
   <pre><code># IP address of the first cluster node
   <b>10.0.0.6 prod-cl1-0</b>
   # IP address of the second cluster node
   <b>10.0.0.7 prod-cl1-1</b>
   </code></pre>
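+
+   Optionally, verify the name resolution (a quick check that uses the example hostnames):
+
+   <pre><code>getent hosts <b>prod-cl1-0</b> <b>prod-cl1-1</b>
+   </code></pre>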
-12. **[1]** Install Cluster
+1. **[1]** Install the cluster.
- - If using SBD devices for fencing, which either be iSCSI target server or Azure shared disk.
-
- <pre><code>sudo ha-cluster-init -u
- # ! NTP is not configured to start at system boot.
- # Do you want to continue anyway (y/n)? <b>y</b>
- # /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b>
- # Address for ring0 [10.0.0.6] <b>Press ENTER</b>
- # Port for ring0 [5405] <b>Press ENTER</b>
- # SBD is already configured to use /dev/disk/by-id/scsi-36001405639245768818458b930abdf69;/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf - overwrite (y/n)? <b>n</b>
- # Do you wish to configure an administration IP (y/n)? <b>n</b>
- </code></pre>
+ - If you're using SBD devices for fencing (for either the iSCSI target server or Azure shared disk):
+
+ <pre><code>sudo ha-cluster-init -u
+ # ! NTP is not configured to start at system boot.
+ # Do you want to continue anyway (y/n)? <b>y</b>
+ # /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b>
+ # Address for ring0 [10.0.0.6] <b>Select Enter</b>
+ # Port for ring0 [5405] <b>Select Enter</b>
+ # SBD is already configured to use /dev/disk/by-id/scsi-36001405639245768818458b930abdf69;/dev/disk/by-id/scsi-36001405afb0ba8d3a3c413b8cc2cca03;/dev/disk/by-id/scsi-36001405f88f30e7c9684678bc87fe7bf - overwrite (y/n)? <b>n</b>
+ # Do you wish to configure an administration IP (y/n)? <b>n</b>
+ </code></pre>
- - If *not using* SBD devices for fencing
+ - If you're *not* using SBD devices for fencing:
- <pre><code>sudo ha-cluster-init -u
- # ! NTP is not configured to start at system boot.
- # Do you want to continue anyway (y/n)? <b>y</b>
- # /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b>
- # Address for ring0 [10.0.0.6] <b>Press ENTER</b>
- # Port for ring0 [5405] <b>Press ENTER</b>
- # Do you wish to use SBD (y/n)? <b>n</b>
- #WARNING: Not configuring SBD - STONITH will be disabled.
- # Do you wish to configure an administration IP (y/n)? <b>n</b>
- </code></pre>
-
-13. **[2]** Add node to cluster
+ <pre><code>sudo ha-cluster-init -u
+ # ! NTP is not configured to start at system boot.
+ # Do you want to continue anyway (y/n)? <b>y</b>
+ # /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b>
+ # Address for ring0 [10.0.0.6] <b>Select Enter</b>
+ # Port for ring0 [5405] <b>Select Enter</b>
+ # Do you wish to use SBD (y/n)? <b>n</b>
+ #WARNING: Not configuring SBD - STONITH will be disabled.
+ # Do you wish to configure an administration IP (y/n)? <b>n</b>
+ </code></pre>
+
+1. **[2]** Add the node to the cluster.
   <pre><code>sudo ha-cluster-join
   # ! NTP is not configured to start at system boot.
   # Do you want to continue anyway (y/n)? <b>y</b>
- # IP address or hostname of existing node (e.g.: 192.168.1.1) []<b>10.0.0.6</b>
+ # IP address or hostname of existing node (for example, 192.168.1.1) []<b>10.0.0.6</b>
# /root/.ssh/id_rsa already exists - overwrite (y/n)? <b>n</b> </code></pre>
-14. **[A]** Change hacluster password to the same password
+1. **[A]** Change the hacluster password to the same password.
<pre><code>sudo passwd hacluster </code></pre>
-15. **[A]** Adjust corosync settings.
+1. **[A]** Adjust the corosync settings.
<pre><code>sudo vi /etc/corosync/corosync.conf </code></pre>
- Add the following bold content to the file if the values are not there or different. Make sure to change the token to 30000 to allow Memory preserving maintenance. For more information, see [this article for Linux][virtual-machines-linux-maintenance] or [Windows][virtual-machines-windows-maintenance].
+ a. Add the following bold-formatted content to the file if the values are not there or are different. Be sure to change the token to 30000 to allow memory-preserving maintenance. For more information, see the "Maintenance for virtual machines in Azure" article for [Linux][virtual-machines-linux-maintenance] or [Windows][virtual-machines-windows-maintenance].
   <pre><code>[...]
   <b>token: 30000
   }
   quorum {
        # Enable and configure quorum subsystem (default: off)
- # see also corosync.conf.5 and votequorum.5
+ # See also corosync.conf.5 and votequorum.5
        provider: corosync_votequorum
        <b>expected_votes: 2</b>
        <b>two_node: 1</b>
   }
   </code></pre>
- Then restart the corosync service
+ b. Restart the corosync service.
<pre><code>sudo service corosync restart </code></pre>
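+
+   c. Optionally, check the cluster state. Both nodes should be reported as online; the output shown here is abbreviated and illustrative.
+
+   <pre><code>sudo crm status
+   # 2 nodes configured
+   # Online: [ <b>prod-cl1-0</b> <b>prod-cl1-1</b> ]
+   </code></pre>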
-### Create STONITH device on pacemaker cluster
+### Create a STONITH device on the Pacemaker cluster
-1. **[1]** Execute following commands, if you are using SDB device (iSCSI target server or Azure shared disk) as STONITH. Enable the use of a STONITH device and set the fence delay.
+1. **[1]** If you're using an SBD device (iSCSI target server or Azure shared disk) as STONITH, run the following commands. Enable the use of a STONITH device, and set the fence delay.
   <pre><code>sudo crm configure property stonith-timeout=144
   sudo crm configure property stonith-enabled=true
op monitor interval="600" timeout="15" </code></pre>
-2. **[1]** Execute following commands, if you are using Azure fence agent as STONITH. After assigning roles to both cluster nodes, you can configure the STONITH devices in the cluster.
+1. **[1]** If you're using an Azure fence agent as STONITH, run the following commands. After you've assigned roles to both cluster nodes, you can configure the STONITH devices in the cluster.
> [!NOTE]
- > Option 'pcmk_host_map' is ONLY required in the command, if the host names and the Azure VM names are NOT identical. Specify the mapping in the format **hostname:vm-name**.
- > Refer to the bold section in the command.
+ > The 'pcmk_host_map' option is required in the command only if the hostnames and the Azure VM names are *not* identical. Specify the mapping in the format *hostname:vm-name*.
+ > Refer to the bold section in the following command.
   <pre><code>sudo crm configure property stonith-enabled=true
   crm configure property concurrent-fencing=true
   </code></pre>

   > [!IMPORTANT]
- > The monitoring and fencing operations are de-serialized. As a result, if there is a longer running monitoring operation and simultaneous fencing event, there is no delay to the cluster failover, due to the already running monitoring operation.
+ > The monitoring and fencing operations are deserialized. As a result, if there's a longer-running monitoring operation and simultaneous fencing event, there's no delay to the cluster failover because the monitoring operation is already running.
> [!TIP]
- >Azure Fence Agent requires outbound connectivity to public end points as documented, along with possible solutions, in [Public endpoint connectivity for VMs using standard ILB](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+ >The Azure fence agent requires outbound connectivity to the public endpoints, as documented, along with possible solutions, in [Public endpoint connectivity for VMs using standard ILB](./high-availability-guide-standard-load-balancer-outbound-connections.md).
-## Pacemaker configuration for Azure scheduled events
+## Configure Pacemaker for Azure scheduled events
-Azure offers [scheduled events](../../linux/scheduled-events.md). Scheduled events are provided via meta-data service and allow time for the application to prepare for events like VM shutdown, VM redeployment, etc. Resource agent **[azure-events](https://github.com/ClusterLabs/resource-agents/pull/1161)** monitors for scheduled Azure events. If events are detected and the resource agent determines that there is another available cluster node, the azure-events agent will place the target cluster node in standby mode, in order to force the cluster to migrate resources away from the VM with pending [Azure scheduled events](../../linux/scheduled-events.md). To achieve that additional Pacemaker resources must be configured.
+Azure offers [scheduled events](../../linux/scheduled-events.md). Scheduled events are provided via the metadata service and allow time for the application to prepare for such events as VM shutdown, VM redeployment, and so on. Resource agent [azure-events](https://github.com/ClusterLabs/resource-agents/pull/1161) monitors for scheduled Azure events. If events are detected and the resource agent determines that another cluster node is available, the azure-events agent will place the target cluster node in standby mode to force the cluster to migrate resources away from the VM with pending [Azure scheduled events](../../linux/scheduled-events.md). To achieve that, you must configure additional Pacemaker resources.
-1. **[A]** Make sure the package for the **azure-events** agent is already installed and up to date.
+1. **[A]** Make sure that the package for the azure-events agent is already installed and up to date.
<pre><code>sudo zypper info resource-agents </code></pre>
-2. **[1]** Configure the resources in Pacemaker.
+1. **[1]** Configure the resources in Pacemaker.
   <pre><code># Place the cluster in maintenance mode
   sudo crm configure property maintenance-mode=true
   </code></pre>

   > [!NOTE]
- > After you configure the Pacemaker resources for azure-events agent, when you place the cluster in or out of maintenance mode, you may get warning messages like:
+ > After you've configured the Pacemaker resources for the azure-events agent, if you place the cluster in or out of maintenance mode, you might get warning messages such as:
   WARNING: cib-bootstrap-options: unknown attribute 'hostName_<strong>hostname</strong>'
   WARNING: cib-bootstrap-options: unknown attribute 'azure-events_globalPullState'
   WARNING: cib-bootstrap-options: unknown attribute 'hostName_<strong>hostname</strong>'
* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
* [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server][sles-nfs-guide]
* [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications][sles-guide]
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]
+* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High availability of SAP HANA on Azure Virtual Machines][sap-hana-ha]
virtual-machines High Availability Guide Windows Azure Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-windows-azure-files-smb.md
8. Verify the ACLs on the SID and trans directory.

## Disaster Recovery setup
-Disaster Recovery scenarios or Cross-Region Replication scenarios are supported with Azure Files Premium SMB. All data in Azure Files Premium SMB directories can be continuously synchronized to a DR region storage account using [Synchronize Files under Transfer data with AzCopy and file storage.](/azure/storage/common/storage-use-azcopy-files#synchronize-files) After a Disaster Recovery event and failover of the ASCS instance to the DR region, change the SAPGLOBALHOST profile parameter to the point to Azure Files SMB in the DR region. The same preparation steps should be performed on the DR storage account to join the storage account to Active Directory and assign RBAC roles for SAP users and groups.
+Disaster Recovery scenarios or Cross-Region Replication scenarios are supported with Azure Files Premium SMB. All data in Azure Files Premium SMB directories can be continuously synchronized to a DR region storage account by using [Synchronize files under Transfer data with AzCopy and file storage](../../../storage/common/storage-use-azcopy-files.md#synchronize-files). After a Disaster Recovery event and failover of the ASCS instance to the DR region, change the SAPGLOBALHOST profile parameter to point to Azure Files SMB in the DR region. The same preparation steps should be performed on the DR storage account to join the storage account to Active Directory and assign RBAC roles for SAP users and groups.
## Troubleshooting

The PowerShell scripts downloaded in step 3.c contain a debug script to conduct some basic checks to validate the configuration.
virtual-machines Sap Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-deployment-checklist.md
3. If you use SUSE and Red Hat Linux images from the Azure Compute Gallery, you need to use the images for SAP provided by the Linux vendors in the Azure Compute Gallery.
4. Make sure to fulfill the SAP support requirements for Microsoft support agreements. See [SAP support note #2015553](https://launchpad.support.sap.com/#/notes/2015553). For HANA Large Instances, see [Onboarding requirements](./hana-onboarding-requirements.md).
4. Make sure the right people get [planned maintenance notifications](https://azure.microsoft.com/blog/a-new-planned-maintenance-experience-for-your-virtual-machines/) so you can choose the best downtimes.
-5. Frequently check for Azure presentations on channels like [Channel 9](https://channel9.msdn.com/) for new functionality that might apply to your deployments.
+5. Frequently check for Azure presentations on channels like [Channel 9](/teamblog/channel9joinedmicrosoftlearn) for new functionality that might apply to your deployments.
6. Check SAP notes related to Azure, like [support note #1928533](https://launchpad.support.sap.com/#/notes/1928533), for new VM SKUs and newly supported OS and DBMS releases. Compare the pricing of new VM types against that of older VM types, so you can deploy VMs with the best price/performance ratio.
7. Recheck SAP support notes, the SAP HANA hardware directory, and the SAP PAM. Make sure there were no changes in supported VMs for Azure, supported OS releases on those VMs, and supported SAP and DBMS releases.
8. Check [the SAP website](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure) for new HANA-certified SKUs in Azure. Compare the pricing of new SKUs with the ones you planned to use. Eventually, make necessary changes to use the ones that have the best price/performance ratio.
virtual-machines Sap Ha Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-ha-availability-zones.md
The following considerations apply for this configuration:
- For SUSE Linux, an NFS share that's built as documented in [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server](./high-availability-guide-suse-nfs.md). Currently, the solution that uses Microsoft Scale-Out File Server, as documented in [Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances](./sap-high-availability-infrastructure-wsfc-file-share.md), is not supported across zones.
-- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent. Or for additional application instances.
+- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent, or to host additional application instances.
- To achieve run time consistency for critical business processes, you can try to direct certain batch jobs and users to application instances that are in-zone with the active DBMS instance by using SAP batch server groups, SAP logon groups, or RFC groups. However, in the case of a zonal failover, you would need to manually move these groups to instances running on VMs that are in-zone with the active DB VM.
- You might want to deploy dormant dialog instances in each of the zones.
The following considerations apply for this configuration:
- For SUSE Linux, an NFS share that's built as documented in [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server](./high-availability-guide-suse-nfs.md). Currently, the solution that uses Microsoft Scale-Out File Server, as documented in [Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances](./sap-high-availability-infrastructure-wsfc-file-share.md), is not supported across zones.
-- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent. Or for additional application instances.
+- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent, or to host additional application instances.
- You should deploy dormant VMs in the passive zone (from a DBMS point of view) so you can start application resources in the case of a zone failure.
- [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) is currently unable to replicate active VMs to dormant VMs between zones.
- You should invest in automation that allows you to automatically start the SAP application layer in the second zone if a zonal outage occurs.
The following considerations apply for this configuration:
- For SUSE Linux, an NFS share that's built as documented in [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server](./high-availability-guide-suse-nfs.md). Currently, the solution that uses Microsoft Scale-Out File Server, as documented in [Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances](./sap-high-availability-infrastructure-wsfc-file-share.md), is not supported across zones.
-- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent.
+- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device) and use SBD devices instead of the Azure Fencing Agent.
virtual-machines Sap Hana Availability One Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-availability-one-region.md
In this scenario, data that's replicated to the HANA instance in the second VM i
### SAP HANA system replication with automatic failover
-In the standard and most common availability configuration within one Azure region, two Azure VMs running SLES Linux have a failover cluster defined. The SLES Linux cluster is based on the [Pacemaker](./high-availability-guide-suse-pacemaker.md) framework, in conjunction with a [STONITH](./high-availability-guide-suse-pacemaker.md#create-azure-fence-agent-stonith-device) device.
+In the standard and most common availability configuration within one Azure region, two Azure VMs running SLES Linux have a failover cluster defined. The SLES Linux cluster is based on the [Pacemaker](./high-availability-guide-suse-pacemaker.md) framework, in conjunction with a [STONITH](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-stonith-device) device.
From an SAP HANA perspective, the replication mode that's used is synchronous and an automatic failover is configured. In the second VM, the SAP HANA instance acts as a hot standby node. The standby node receives a synchronous stream of change records from the primary SAP HANA instance. As transactions are committed by the application at the HANA primary node, the primary HANA node waits to confirm the commit to the application until the secondary SAP HANA node confirms that it received the commit record. SAP HANA offers two synchronous replication modes. For details and for a description of differences between these two synchronous replication modes, see the SAP article [Replication modes for SAP HANA system replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/c039a1a5b8824ecfa754b55e0caffc01.html).
virtual-machines Sap Proximity Placement Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-proximity-placement-scenarios.md
The scenarios where you used proximity placement groups so far were:
- You wanted to deploy the critical resources of your SAP workload across different Availability Zones and on the other hand wanted to make sure that the VMs of the application tier in each of the zones would be spread across different fault domains by using availability sets. In this case, as later described in the document, proximity placement groups are the glue needed.
- You used proximity placement groups to group VMs together to achieve optimal network latency between the services hosted in the VMs.
-As for deployment scenario #1, in many regions, especially regions without Availability Zones and most regions with Availability Zones, the network latency independent on where the VMs land is acceptable. Though there some regions of Azure that cannot provide a sufficiently good experience without collocating the three different availability sets with the usage of availability sets.
+As for deployment scenario #1, in many regions, especially regions without Availability Zones and most regions with Availability Zones, the network latency is acceptable independent of where the VMs land. However, there are some Azure regions that cannot provide a sufficiently good experience unless the three different availability sets are collocated through the use of proximity placement groups.
As for deployment scenario #2, we recommend a different way of using proximity placement groups in the following sections of this document.
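As a minimal, hedged sketch of the first pattern (resource names are placeholders; you would repeat this once per Availability Zone), an availability set is bound to a proximity placement group at creation time so its VMs land both in the set's fault domains and in the group's collocated data center:

```azurepowershell-interactive
# Placeholder names; create one proximity placement group and availability set per zone.
$ppg = New-AzProximityPlacementGroup -ResourceGroupName "myRG" -Name "myPpgZone1" -Location "westus2"

New-AzAvailabilitySet -ResourceGroupName "myRG" -Name "myAvSetZone1" -Location "westus2" `
  -ProximityPlacementGroupId $ppg.Id -Sku Aligned `
  -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5
```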
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/nat-gateway/nat-gateway-resource.md
Any outbound configuration from a load-balancing rule or outbound rules is super
A network security group allows you to filter inbound and outbound traffic to and from a virtual machine. To monitor outbound traffic flowing from NAT, you can enable NSG flow logs.
-To learn more about NSG flow logs, see [NSG Flow Log Overview](/azure/network-watcher/network-watcher-nsg-flow-logging-overview).
+To learn more about NSG flow logs, see [NSG Flow Log Overview](../../network-watcher/network-watcher-nsg-flow-logging-overview.md).
-For guides on how to enable NSG flow logs, see [Enabling NSG Flow Logs](/azure/network-watcher/network-watcher-nsg-flow-logging-overview#enabling-nsg-flow-logs).
+For guides on how to enable NSG flow logs, see [Enabling NSG Flow Logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md#enabling-nsg-flow-logs).
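A minimal, hedged sketch of enabling a flow log on an NSG from PowerShell might look like the following (the resource names are placeholders; it assumes Network Watcher is enabled in the region and a storage account exists in the same region):

```azurepowershell-interactive
# Placeholder resource names; Network Watcher and the storage account must already exist.
$nw  = Get-AzNetworkWatcher -ResourceGroupName "NetworkWatcherRG" -Name "NetworkWatcher_eastus"
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "myRG" -Name "myNsg"
$sa  = Get-AzStorageAccount -ResourceGroupName "myRG" -Name "mystorageacct"

# Enable flow logging for the NSG, writing version 2 JSON logs to the storage account.
Set-AzNetworkWatcherConfigFlowLog -NetworkWatcher $nw -TargetResourceId $nsg.Id `
  -StorageAccountId $sa.Id -EnableFlowLog $true -FormatType Json -FormatVersion 2
```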
## Performance
Here are some design recommendations for configuring timers:
## Limitations

- Basic load balancers and basic Public IP addresses are not compatible with NAT. Use standard SKU load balancers and Public IPs instead.
+ - To upgrade a basic load balancer to standard, see [Upgrade Azure Public Load Balancer](../../load-balancer/upgrade-basic-standard.md)
+ - To upgrade a basic public IP address to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
- IP fragmentation isn't available for NAT gateway.

## Next steps

- Review [virtual network NAT](nat-overview.md).
- Learn about [metrics and alerts for NAT gateway](nat-metrics.md).
-- Learn how to [troubleshoot NAT gateway](troubleshoot-nat.md).
+- Learn how to [troubleshoot NAT gateway](troubleshoot-nat.md).
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/nat-gateway/nat-overview.md
NAT is fully scaled out from the start. There's no ramp up or scale-out operatio
* Public IP
* Public IP prefix
* NAT is compatible with Standard SKU public IP address or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. NAT will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as Basic Load Balancer or Basic Public IP, aren't compatible with NAT. Basic resources must be placed on a subnet not associated to a NAT Gateway. Basic Load Balancer and Basic Public IP can be upgraded to standard in order to work with NAT gateway. A minimal sketch of creating a NAT gateway with a standard public IP follows this list.
- * To upgrade a basic load balancer to standard, see [Upgrade Azure Public Load Balancer](/azure/load-balancer/upgrade-basic-standard)
- * To upgrade a basic public IP to standard, see [Upgrade a public IP address](/azure/virtual-network/ip-services/public-ip-upgrade-portal)
-* NAT is the recommended method for outbound connectivity. A NAT gateway does not have the same limitations of SNAT port exhaustion as does [default outbound access](/azure/virtual-network/ip-services/default-outbound-access) and [outbound rules of a load balancer](/azure/load-balancer/outbound-rules).
- * To migrate outbound access to NAT gateway from default outbound access or from outbound rules of a load balancer, see [Migrate outbound access to Azure Virtual Network NAT](/azure/virtual-network/nat-gateway/tutorial-migrate-outbound-nat)
+ * To upgrade a basic load balancer to standard, see [Upgrade Azure Public Load Balancer](../../load-balancer/upgrade-basic-standard.md)
+ * To upgrade a basic public IP to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
+* NAT is the recommended method for outbound connectivity. A NAT gateway does not have the same limitations of SNAT port exhaustion as does [default outbound access](../ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../../load-balancer/outbound-rules.md).
+ * To migrate outbound access to NAT gateway from default outbound access or from outbound rules of a load balancer, see [Migrate outbound access to Azure Virtual Network NAT](./tutorial-migrate-outbound-nat.md)
* NAT cannot be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. However, it can be associated to a dual stack subnet.
* NAT allows flows to be created from the virtual network to the services outside your VNet. Return traffic from the Internet is only allowed in response to an active flow. Services outside your VNet cannot initiate a connection to instances.
* NAT cannot span multiple virtual networks.
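The following minimal, hedged sketch (all resource names are placeholders) shows creating a standard SKU public IP and a NAT gateway, then attaching the gateway to an existing subnet:

```azurepowershell-interactive
# Placeholder names; a Standard SKU public IP is required for NAT gateway.
$pip = New-AzPublicIpAddress -ResourceGroupName "myRG" -Name "myNatIp" `
  -Location "eastus2" -Sku Standard -AllocationMethod Static

$nat = New-AzNatGateway -ResourceGroupName "myRG" -Name "myNatGateway" `
  -Location "eastus2" -Sku Standard -IdleTimeoutInMinutes 4 -PublicIpAddress $pip

# Attach the NAT gateway to a subnet so the subnet's outbound flows use it.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myRG" -Name "myVnet"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet" `
  -AddressPrefix "10.0.0.0/24" -NatGateway $nat | Set-AzVirtualNetwork
```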
For pricing details, see [Virtual Network pricing](https://azure.microsoft.com/p
* Learn [how to get better outbound connectivity using an Azure NAT Gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4).
* Learn about [NAT gateway resource](./nat-gateway-resource.md).
-* Learn more about [NAT gateway metrics](./nat-metrics.md).
+* Learn more about [NAT gateway metrics](./nat-metrics.md).
virtual-network Virtual Network Service Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-service-endpoints-overview.md
Service endpoints provide the following benefits:
- The feature is available only to virtual networks deployed through the Azure Resource Manager deployment model.
- Endpoints are enabled on subnets configured in Azure virtual networks. Endpoints can't be used for traffic from your premises to Azure services. For more information, see [Secure Azure service access from on-premises](#secure-azure-services-to-virtual-networks).
-- For Azure SQL, a service endpoint applies only to Azure service traffic within a virtual network's region. For Azure Storage, you can [enable access to virtual networks in other regions](https://docs.microsoft.com/azure/storage/common/storage-network-security?tabs=azure-portal) in preview.
+- For Azure SQL, a service endpoint applies only to Azure service traffic within a virtual network's region. For Azure Storage, you can [enable access to virtual networks in other regions](../storage/common/storage-network-security.md?tabs=azure-portal) in preview.
- For Azure Data Lake Storage (ADLS) Gen 1, the VNet Integration capability is only available for virtual networks within the same region. Also note that virtual network integration for ADLS Gen1 uses the virtual network service endpoint security between your virtual network and Azure Active Directory (Azure AD) to generate additional security claims in the access token. These claims are then used to authenticate your virtual network to your Data Lake Storage Gen1 account and allow access. The *Microsoft.AzureActiveDirectory* tag listed under services supporting service endpoints is used only for supporting service endpoints to ADLS Gen 1. Azure AD doesn't support service endpoints natively. For more information about Azure Data Lake Store Gen 1 VNet integration, see [Network security in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json).

## Secure Azure services to virtual networks
Service endpoints provide the following benefits:
- Configure service endpoints on a subnet in a virtual network. Endpoints work with any type of compute instances running within that subnet. A minimal sketch of enabling an endpoint follows this list.
- You can configure multiple service endpoints for all supported Azure services (Azure Storage or Azure SQL Database, for example) on a subnet.
-- For Azure SQL Database, virtual networks must be in the same region as the Azure service resource. For Azure Storage, you can [enable access to virtual networks in other regions](https://docs.microsoft.com/azure/storage/common/storage-network-security?tabs=azure-portal) in preview. For all other services, you can secure Azure service resources to virtual networks in any region.
+- For Azure SQL Database, virtual networks must be in the same region as the Azure service resource. For Azure Storage, you can [enable access to virtual networks in other regions](../storage/common/storage-network-security.md?tabs=azure-portal) in preview. For all other services, you can secure Azure service resources to virtual networks in any region.
- The virtual network where the endpoint is configured can be in the same or different subscription than the Azure service resource. For more information on permissions required for setting up endpoints and securing Azure services, see [Provisioning](#provisioning).
- For supported services, you can secure new or existing resources to virtual networks using service endpoints.
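As a minimal, hedged sketch (the virtual network and subnet names are placeholders), enabling a `Microsoft.Storage` service endpoint on an existing subnet looks like this:

```azurepowershell-interactive
# Placeholder names; adds a Microsoft.Storage service endpoint to an existing subnet.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myRG" -Name "myVnet"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet" `
  -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
```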
For FAQs, see [Virtual Network Service Endpoint FAQs](./virtual-networks-faq.md#
- [Secure an Azure Synapse Analytics to a virtual network](../azure-sql/database/vnet-service-endpoint-rule-overview.md?toc=%2fazure%2fsql-data-warehouse%2ftoc.json)
- [Compare Private Endpoints and Service Endpoints](./vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints)
- [Virtual Network Service Endpoint Policies](./virtual-network-service-endpoint-policies-overview.md)
-- [Azure Resource Manager template](https://azure.microsoft.com/resources/templates/vnet-2subnets-service-endpoints-storage-integration)
+- [Azure Resource Manager template](https://azure.microsoft.com/resources/templates/vnet-2subnets-service-endpoints-storage-integration)
virtual-wan Site To Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/site-to-site-powershell.md
+
+ Title: 'Create a Site-to-Site connection to Azure Virtual WAN using PowerShell'
+description: Learn how to create a Site-to-Site connection from your branch site to Azure Virtual WAN using PowerShell.
+Last updated : 01/13/2022
+# Create a site-to-site connection to Azure Virtual WAN using PowerShell
+
+This article shows you how to use Virtual WAN to connect to your resources in Azure over an IPsec/IKE (IKEv1 and IKEv2) VPN connection via PowerShell. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about Virtual WAN, see the [Virtual WAN Overview](virtual-wan-about.md).
++
+## Prerequisites
+
+* Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
+* Decide the IP address range that you want to use for your virtual hub private address space. This information is used when configuring your virtual hub. A virtual hub is a virtual network that is created and used by Virtual WAN. It's the core of your Virtual WAN network in a region. The address space range must conform to certain rules:
+
+ * The address range that you specify for the hub can't overlap with any of the existing virtual networks that you connect to.
+ * The address range can't overlap with the on-premises address ranges that you connect to.
+ * If you are unfamiliar with the IP address ranges located in your on-premises network configuration, coordinate with someone who can provide those details for you.
+
+### Azure PowerShell
++
+## <a name="signin"></a>Sign in
++
+## <a name="openvwan"></a>Create a virtual WAN
+
+Before you can create a virtual WAN, you have to create a resource group to host the virtual WAN or use an existing resource group. Create a resource group with [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). This example creates a new resource group named **testRG** in the **West US** location:
+
+Create a resource group:
+
+```azurepowershell-interactive
+New-AzResourceGroup -Location "West US" -Name "testRG"
+```
+
+Create the virtual WAN:
+
+```azurepowershell-interactive
+$virtualWan = New-AzVirtualWan -ResourceGroupName testRG -Name myVirtualWAN -Location "West US"
+```
+
+### To create the virtual WAN in an already existing resource group
+
+Use the steps in this section if you need to create the virtual WAN in an already existing resource group.
+
+1. Set the variables for the existing resource group.
+
+ ```azurepowershell-interactive
+ $resourceGroup = Get-AzResourceGroup -ResourceGroupName "testRG"
+ ```
+
+2. Create the virtual WAN.
+
+ ```azurepowershell-interactive
+ $virtualWan = New-AzVirtualWan -ResourceGroupName testRG -Name myVirtualWAN -Location "West US"
+ ```
++
+## <a name="hub"></a>Create the hub and configure hub settings
+
+A hub is a virtual network that can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. Create a virtual hub with [New-AzVirtualHub](/powershell/module/az.Network/New-AzVirtualHub). This example creates a default virtual hub named **westushub** with the specified address prefix and a location for the hub:
+
+```azurepowershell-interactive
+$virtualHub = New-AzVirtualHub -VirtualWan $virtualWan -ResourceGroupName "testRG" -Name "westushub" -AddressPrefix "10.11.0.0/24" -Location "westus"
+```
+
+## <a name="gateway"></a>Create a site-to-site VPN gateway
+
+In this section, you create a site-to-site VPN gateway that will be in the same location as the referenced VirtualHub. The site-to-site VPN gateway scales based on the scale unit specified and can take about 30 minutes to create.
+
+```azurepowershell-interactive
+New-AzVpnGateway -ResourceGroupName "testRG" -Name "testvpngw" -VirtualHubId $virtualHub.Id -VpnGatewayScaleUnit 2
+```
+
+Once your VPN gateway is created, you can view it using the following example.
+
+```azurepowershell-interactive
+Get-AzVpnGateway -ResourceGroupName "testRG" -Name "testvpngw"
+```
+
+## <a name="site"></a>Create a site and the connections
+
+In this section, you create sites that correspond to your physical locations and the connections. These sites contain your on-premises VPN device endpoints. You can create up to 1,000 sites per virtual hub in a virtual WAN. If you have multiple hubs, you can create 1,000 sites per hub.
+
+Set the variables for the VPN gateway and for the IP address space that is located on your on-premises site. Traffic destined for this address space is routed to your local site. This is required when BGP is not enabled for the site:
+
+```azurepowershell-interactive
+$vpnGateway = Get-AzVpnGateway -ResourceGroupName "testRG" -Name "testvpngw"
+$vpnSiteAddressSpaces = New-Object string[] 2
+$vpnSiteAddressSpaces[0] = "192.168.2.0/24"
+$vpnSiteAddressSpaces[1] = "192.168.3.0/24"
+```
+
+Create links to add information about the physical links at the branch, including metadata about the link speed and the link provider name, and the public IP address of the on-premises device.
+
+```azurepowershell-interactive
+$vpnSiteLink1 = New-AzVpnSiteLink -Name "testVpnSiteLink1" -IpAddress "15.25.35.45" -LinkProviderName "SomeTelecomProvider" -LinkSpeedInMbps "10"
+$vpnSiteLink2 = New-AzVpnSiteLink -Name "testVpnSiteLink2" -IpAddress "15.25.35.55" -LinkProviderName "SomeTelecomProvider2" -LinkSpeedInMbps "100"
+```
+
+Create the vpnSite and reference the variables of the vpnSiteLinks just created:
+
+```azurepowershell-interactive
+
+$vpnSite = New-AzVpnSite -ResourceGroupName "testRG" -Name "testVpnSite" -Location "West US" -VirtualWan $virtualWan -AddressSpace $vpnSiteAddressSpaces -DeviceModel "SomeDevice" -DeviceVendor "SomeDeviceVendor" -VpnSiteLink @($vpnSiteLink1, $vpnSiteLink2)
+```
+
+Next, create the VPN site link connections, which are composed of two active-active tunnels from the branch site (the VPN site) to the scalable gateway:
+
+```azurepowershell-interactive
+$vpnSiteLinkConnection1 = New-AzVpnSiteLinkConnection -Name "testLinkConnection1" -VpnSiteLink $vpnSite.VpnSiteLinks[0] -ConnectionBandwidth 100
+$vpnSiteLinkConnection2 = New-AzVpnSiteLinkConnection -Name "testLinkConnection2" -VpnSiteLink $vpnSite.VpnSiteLinks[1] -ConnectionBandwidth 10
+```
+
+## <a name="connectsites"></a>Connect the VPN site to a hub
+
+Finally, you connect your VPN site to the hub Site-to-Site VPN gateway:
+
+```azurepowershell-interactive
+New-AzVpnConnection -ResourceGroupName $vpnGateway.ResourceGroupName -ParentResourceName $vpnGateway.Name -Name "testConnection" -VpnSite $vpnSite -VpnSiteLinkConnection @($vpnSiteLinkConnection1, $vpnSiteLinkConnection2)
+```
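+
+To verify that the connection was provisioned, you can retrieve it afterward (a hedged check; the names match the ones used above):
+
+```azurepowershell-interactive
+Get-AzVpnConnection -ResourceGroupName $vpnGateway.ResourceGroupName -ParentResourceName $vpnGateway.Name -Name "testConnection"
+```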
+
+## <a name="cleanup"></a>Clean up resources
+
+When you no longer need the resources that you created, delete them. Some of the Virtual WAN resources must be deleted in a certain order due to dependencies. Deleting can take about 30 minutes to complete.
+
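+As a minimal sketch (assuming the resource names used earlier in this article), the dependency order is connection, gateway, site, hub, virtual WAN, and finally the resource group:
+
+```azurepowershell-interactive
+# Delete in dependency order; each step waits for the previous resource to be removed.
+Remove-AzVpnConnection -ResourceGroupName "testRG" -ParentResourceName "testvpngw" -Name "testConnection"
+Remove-AzVpnGateway -ResourceGroupName "testRG" -Name "testvpngw"
+Remove-AzVpnSite -ResourceGroupName "testRG" -Name "testVpnSite"
+Remove-AzVirtualHub -ResourceGroupName "testRG" -Name "westushub"
+Remove-AzVirtualWan -ResourceGroupName "testRG" -Name "myVirtualWAN"
+Remove-AzResourceGroup -Name "testRG"
+```
+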
+## Next steps
+
+Next, to learn more about Virtual WAN, see:
+
+> [!div class="nextstepaction"]
+> * [Virtual WAN FAQ](virtual-wan-faq.md)
vpn-gateway Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/design.md
Previously updated : 04/28/2021 Last updated : 01/14/2022
It's important to know that there are different configurations available for VPN gateway connections. You need to determine which configuration best fits your needs. In the sections below, you can view design information and topology diagrams about the following VPN gateway connections. Use the diagrams and descriptions to help select the connection topology to match your requirements. The diagrams show the main baseline topologies, but it's possible to build more complex configurations using the diagrams as guidelines.
-## <a name="s2smulti"></a>Site-to-Site and Multi-Site (IPsec/IKE VPN tunnel)
-
-### <a name="S2S"></a>Site-to-Site
+## <a name="s2smulti"></a>Site-to-Site VPN
A Site-to-Site (S2S) VPN gateway connection is a connection over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. S2S connections can be used for cross-premises and hybrid configurations. An S2S connection requires a VPN device located on-premises that has a public IP address assigned to it. For information about selecting a VPN device, see the [VPN Gateway FAQ - VPN devices](vpn-gateway-vpn-faq.md#s2s).
A Site-to-Site (S2S) VPN gateway connection is a connection over IPsec/IKE (IKEv
VPN Gateway can be configured in active-standby mode using one public IP or in active-active mode using two public IPs. In active-standby mode, one IPsec tunnel is active and the other tunnel is in standby. In this setup, traffic flows through the active tunnel, and if an issue occurs on this tunnel, the traffic switches over to the standby tunnel. Setting up VPN Gateway in active-active mode is *recommended*; in this mode, both IPsec tunnels are active simultaneously, with data flowing through both tunnels at the same time. An additional advantage of active-active mode is that customers experience higher throughputs.
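As a hedged sketch (the gateway subnet and the two standard public IPs are assumed to already exist; all names are placeholders), an active-active gateway is created by passing two IP configurations and the `-EnableActiveActiveFeature` switch:

```azurepowershell-interactive
# Placeholder names; $gwSubnet, $pip1, and $pip2 are assumed to exist already.
$ipConf1 = New-AzVirtualNetworkGatewayIpConfig -Name "gwIpConf1" -SubnetId $gwSubnet.Id -PublicIpAddressId $pip1.Id
$ipConf2 = New-AzVirtualNetworkGatewayIpConfig -Name "gwIpConf2" -SubnetId $gwSubnet.Id -PublicIpAddressId $pip2.Id

# Two IP configurations plus -EnableActiveActiveFeature yield two active tunnels.
New-AzVirtualNetworkGateway -ResourceGroupName "myRG" -Name "myVpnGateway" -Location "West US" `
  -IpConfigurations $ipConf1, $ipConf2 -GatewayType Vpn -VpnType RouteBased `
  -GatewaySku VpnGw2 -EnableActiveActiveFeature
```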
-### <a name="Multi"></a>Multi-Site
-
-This type of connection is a variation of the Site-to-Site connection. You create more than one VPN connection from your virtual network gateway, typically connecting to multiple on-premises sites. When working with multiple connections, you must use a RouteBased VPN type (known as a dynamic gateway when working with classic VNets). Because each virtual network can only have one VPN gateway, all connections through the gateway share the available bandwidth. This type of connection is often called a "multi-site" connection.
+You can create more than one VPN connection from your virtual network gateway, typically connecting to multiple on-premises sites. When working with multiple connections, you must use a RouteBased VPN type (known as a dynamic gateway when working with classic VNets). Because each virtual network can only have one VPN gateway, all connections through the gateway share the available bandwidth. This type of connection is sometimes referred to as a "multi-site" connection.
![Azure VPN Gateway Multi-Site connection example](./media/design/vpngateway-multisite-connection-diagram.png)
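A hedged sketch of a multi-site setup (the gateway object, local network gateways, and shared keys are placeholders you would have created separately) adds one connection per on-premises site to the same RouteBased gateway:

```azurepowershell-interactive
# Placeholder names; $gw is the virtual network gateway, $site1/$site2 are local network gateways.
New-AzVirtualNetworkGatewayConnection -ResourceGroupName "myRG" -Name "toSite1" -Location "West US" `
  -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $site1 -ConnectionType IPsec -SharedKey "placeholderKey1"

New-AzVirtualNetworkGatewayConnection -ResourceGroupName "myRG" -Name "toSite2" -Location "West US" `
  -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $site2 -ConnectionType IPsec -SharedKey "placeholderKey2"
```

Both connections share the gateway's available bandwidth, as noted above.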
-### Deployment models and methods for Site-to-Site and Multi-Site
+### Deployment models and methods for S2S
## <a name="P2S"></a>Point-to-Site VPN
You may be able to use VNet peering to create your connection, as long as your v
[!INCLUDE [vpn-gateway-table-vnet-to-vnet](../../includes/vpn-gateway-table-vnet-to-vnet-include.md)]
-## <a name="ExpressRoute"></a>ExpressRoute (private connection)
-
-ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure, Microsoft 365, and CRM Online. Connectivity can be from an any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a colocation facility.
-
-ExpressRoute connections do not go over the public Internet. This allows ExpressRoute connections to offer more reliability, faster speeds, lower latencies, and higher security than typical connections over the Internet.
-
-An ExpressRoute connection uses a virtual network gateway as part of its required configuration. In an ExpressRoute connection, the virtual network gateway is configured with the gateway type 'ExpressRoute', rather than 'Vpn'. While traffic that travels over an ExpressRoute circuit is not encrypted by default, it is possible to create a solution that allows you to send encrypted traffic over an ExpressRoute circuit. For more information about ExpressRoute, see the [ExpressRoute technical overview](../expressroute/expressroute-introduction.md).
-
## <a name="coexisting"></a>Site-to-Site and ExpressRoute coexisting connections
-ExpressRoute is a direct, private connection from your WAN (not over the public Internet) to Microsoft Services, including Azure. Site-to-Site VPN traffic travels encrypted over the public Internet. Being able to configure Site-to-Site VPN and ExpressRoute connections for the same virtual network has several advantages.
+[ExpressRoute](../expressroute/expressroute-introduction.md) is a direct, private connection from your WAN (not over the public Internet) to Microsoft Services, including Azure. Site-to-Site VPN traffic travels encrypted over the public Internet. Being able to configure Site-to-Site VPN and ExpressRoute connections for the same virtual network has several advantages.
You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute, or use Site-to-Site VPNs to connect to sites that are not part of your network, but that are connected through ExpressRoute. Notice that this configuration requires two virtual network gateways for the same virtual network, one using the gateway type 'Vpn', and the other using the gateway type 'ExpressRoute'.
web-application-firewall Per Site Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/ag/per-site-policies.md
$wafPolicyURI = New-AzApplicationGatewayFirewallPolicy `
-PolicySetting $PolicySettingURI `
-CustomRule $rule4, $rule5
-$Gateway = Get-AzApplicationGateway -Name "myAppGateway"
+$appgw = Get-AzApplicationGateway `
+ -ResourceGroupName myResourceGroupAG `
+ -Name myAppGateway
$PathRuleConfig = New-AzApplicationGatewayPathRuleConfig -Name "base" `
-Paths "/base" `
$URLPathMap = New-AzApplicationGatewayUrlPathMapConfig -Name "PathMap" `
-DefaultBackendAddressPoolId $defaultPool.Id `
-DefaultBackendHttpSettingsId $poolSettings.Id
-Add-AzApplicationGatewayRequestRoutingRule -ApplicationGateway $AppGw `
+Add-AzApplicationGatewayRequestRoutingRule -ApplicationGateway $appgw `
-Name "RequestRoutingRule" ` -RuleType PathBasedRouting ` -HttpListener $siteListener `