Updates from: 01/15/2022 02:11:49
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/billing.md
Previously updated : 11/16/2021 Last updated : 01/14/2022
# Billing model for Azure Active Directory B2C
-Azure Active Directory B2C (Azure AD B2C) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This billing model applies to both Azure AD B2C tenants and [Azure AD guest user collaboration (B2B)](../active-directory/external-identities/external-identities-pricing.md). MAU billing helps you reduce costs by offering a free tier and flexible, predictable pricing. In this article, learn about MAU billing, linking your Azure AD B2C tenants to a subscription, and changing your pricing tier.
+Azure Active Directory B2C (Azure AD B2C) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This billing model applies to both Azure AD B2C tenants and [Azure AD guest user collaboration (B2B)](../active-directory/external-identities/external-identities-pricing.md). MAU billing helps you reduce costs by offering a free tier and flexible, predictable pricing. In this article, learn about MAU billing, linking Azure AD B2C tenants to a subscription, and changing the pricing tier.
## MAU overview

A monthly active user (MAU) is a unique user that performs an authentication within a given month. A user that authenticates multiple times within a given month is counted as one MAU. Customers are not charged for a MAU's subsequent authentications during the month, nor for inactive users. Authentications may include:

-- Active, interactive sign-in by the user, for example through [sign-up or sign-in](add-sign-up-and-sign-in-policy.md), [self-service password reset](add-password-reset-policy.md), [profile editing](add-profile-editing-policy.md), or any type of [user flow](user-flow-overview.md) or [custom policy](custom-policy-overview.md).
-- Passive, non-interactive sign-in such as [single sign-on (SSO)](session-behavior.md), or any type of token acquisition, such as authorization code flow, token refresh, or [resource owner password credentials (ROPC)](add-ropc-policy.md).
+- Active, interactive sign-in by the user. For example, [sign-up or sign-in](add-sign-up-and-sign-in-policy.md), [self-service password reset](add-password-reset-policy.md), or any type of [user flow](user-flow-overview.md) or [custom policy](custom-policy-overview.md).
+- Passive, non-interactive sign-in such as [single sign-on (SSO)](session-behavior.md), or any type of token acquisition. For example, authorization code flow, token refresh, or [resource owner password credentials flow](add-ropc-policy.md).
If you choose to provide higher levels of assurance using Multi-factor Authentication (MFA) for Voice and SMS, you will continue to be charged a worldwide flat fee for each MFA attempt that month, whether the sign-in is successful or unsuccessful.
To take advantage of MAU billing, your Azure AD B2C tenant must be linked to an Azure subscription.
## About the monthly active users (MAU) billing model
-MAU billing went into effect for Azure AD B2C tenants on **November 1, 2019**. Any Azure AD B2C tenants that you created and linked to a subscription on or after that date have been billed on a per-MAU basis. If you have an Azure AD B2C tenant that hasn't been linked to a subscription, you'll need to do so now. If you have an existing Azure AD B2C tenant that was linked to a subscription before November 1, 2019, we recommend you upgrade to the monthly active users (MAU) billing model, or you can stay on the per-authentication billing model.
+MAU billing went into effect for Azure AD B2C tenants on **November 1, 2019**. Any Azure AD B2C tenants that you created and linked to a subscription on or after that date have been billed on a per-MAU basis.
+
+- If you have an Azure AD B2C tenant that hasn't been linked to a subscription, link it now.
+- If you have an existing Azure AD B2C tenant that was linked to a subscription before November 1, 2019, upgrade to the monthly active users (MAU) billing model. You can also choose to stay on the per-authentication billing model.
Your Azure AD B2C tenant must also be linked to the appropriate Azure pricing tier based on the features you want to use. Premium features require Azure AD B2C [Premium P1 or P2 pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/). You might need to upgrade your pricing tier as you use new features. For example, for risk-based Conditional Access policies, you’ll need to select the Azure AD B2C Premium P2 pricing tier for your tenant.

> [!NOTE]
> Your first 50,000 MAUs per month are free for both Premium P1 and Premium P2 features, but the **free tier doesn’t apply to free trial, credit-based, or sponsorship subscriptions**. Once the free trial period or credits expire for these types of subscriptions, you'll begin to be charged for Azure AD B2C MAUs. To determine the total number of MAUs, we combine MAUs from all your tenants (both Azure AD and Azure AD B2C) that are linked to the same subscription.
+
## Link an Azure AD B2C tenant to a subscription
-Usage charges for Azure Active Directory B2C (Azure AD B2C) are billed to an Azure subscription. You need to explicitly link an Azure AD B2C tenant to an Azure subscription by creating an Azure AD B2C *resource* within the target Azure subscription. Several Azure AD B2C resources can be created in a single Azure subscription, along with other Azure resources like virtual machines, Storage accounts, and Logic Apps. You can see all of the resources within a subscription by going to the Azure Active Directory (Azure AD) tenant that the subscription is associated with.
+Usage charges for Azure Active Directory B2C (Azure AD B2C) are billed to an Azure subscription. You need to explicitly link an Azure AD B2C tenant to an Azure subscription by creating an Azure AD B2C *resource* within the target Azure subscription. Several Azure AD B2C resources can be created in a single Azure subscription, along with other Azure resources like virtual machines and storage accounts. You can see all of the resources within a subscription by going to the Azure Active Directory (Azure AD) tenant that the subscription is associated with.
A subscription linked to an Azure AD B2C tenant can be used for the billing of Azure AD B2C usage or other Azure resources, including additional Azure AD B2C resources. It can't be used to add other Azure license-based services or Office 365 licenses within the Azure AD B2C tenant.
After you complete these steps for an Azure AD B2C tenant, your Azure subscription is billed in accordance with your Azure Direct or Enterprise Agreement details.
## Change your Azure AD pricing tier
-A tenant must be linked to the appropriate Azure pricing tier based on the features you want to use with your Azure AD B2C tenant. Premium features require Azure AD B2C Premium P1 or P2, as described in the [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/). In some cases, you'll need to upgrade your pricing tier as you use new features. For example, if you want to use Identity Protection, risk-based Conditional Access policies, and any future Premium P2 capabilities with Azure AD B2C, you’ll need to select the Azure AD B2C Premium P2 pricing tier for your tenant.
+A tenant must be linked to the appropriate Azure pricing tier based on the features you want to use with your Azure AD B2C tenant. Premium features require Azure AD B2C Premium P1 or P2, as described in the [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
+
+In some cases, you'll need to upgrade your pricing tier as you use new features. For example, to use [Identity Protection](conditional-access-identity-protection-overview.md), risk-based Conditional Access policies, and any future Premium P2 capabilities with Azure AD B2C, you'll need the Azure AD B2C Premium P2 pricing tier.
-To change your pricing tier, follow these steps.
+To change your pricing tier, follow these steps:
1. Sign in to the Azure portal.
To change your pricing tier, follow these steps.
1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
1. In the search box at the top of the portal, enter the name of your Azure AD B2C tenant. Then select the tenant in the search results under **Resources**.
+
+ ![Screenshot that shows how to select an Azure AD B2C tenant in Azure portal.](media/billing/select-azure-ad-b2c-tenant.png)
1. On the resource **Overview** page, under **Pricing tier**, select **change**.
- ![Change pricing tier](media/billing/change-pricing-tier.png)
+ ![Screenshot that shows how to change the pricing tier.](media/billing/change-pricing-tier.png)
1. Select the pricing tier that includes the features you want to enable.
- ![Select the pricing tier](media/billing/select-tier.png)
+ ![Screenshot that shows how to select the pricing tier.](media/billing/select-tier.png)
## Switch to MAU billing (pre-November 2019 Azure AD B2C tenants)
Here's how to make the switch to MAU billing for an existing Azure AD B2C resource:
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**.
1. On the **Overview** page of the Azure AD B2C tenant, select the link under **Resource name**. You're directed to the Azure AD B2C resource in your Azure AD tenant.<br/>
- ![Azure AD B2C resource link highlighted in Azure portal](./media/billing/portal-mau-02-b2c-resource-link.png)
+ ![Screenshot that shows how to select the Azure AD B2C resource in Azure portal.](./media/billing/portal-mau-02-b2c-resource-link.png)
1. On the **Overview** page of the Azure AD B2C resource, under **Billable Units**, select the **Per Authentication (Change to MAU)** link.<br/>
- ![Change to MAU link highlighted in Azure portal](./media/billing/portal-mau-03-change-to-mau-link.png)
+ ![Screenshot that shows the Change to MAU link in the Azure portal.](./media/billing/portal-mau-03-change-to-mau-link.png)
1. Select **Confirm** to complete the upgrade to MAU billing.<br/>
- ![MAU-based billing confirmation dialog in Azure portal](./media/billing/portal-mau-04-confirm-change-to-mau.png)
+ ![Screenshot that shows the MAU-based billing confirmation dialog in Azure portal.](./media/billing/portal-mau-04-confirm-change-to-mau.png)
### What to expect when you transition to MAU billing from per-authentication billing
During the billing period of the transition, the subscription owner will likely see two entries on the invoice:
* An entry for the usage until the date/time of change that reflects per-authentication.
* An entry for the usage after the change that reflects monthly active users (MAU).
-For the latest information about usage billing and pricing for Azure AD B2C, see [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
+For the latest information about usage billing and pricing, see [Azure Active Directory B2C pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/).
## Manage your Azure AD B2C tenant resources
active-directory-b2c Date Transformations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/date-transformations.md
Previously updated : 02/16/2020 Last updated : 01/14/2022
# Date claims transformations
-
This article provides examples for using the date claims transformations of the Identity Experience Framework schema in Azure Active Directory B2C (Azure AD B2C). For more information, see [ClaimsTransformations](claimstransformations.md).

## AssertDateTimeIsGreaterThan
-Checks that one date and time claim (string data type) is later than a second date and time claim (string data type), and throws an exception.
+Asserts that one date is later than a second date. Determines whether the `rightOperand` is greater than the `leftOperand`. If yes, throws an exception.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
| InputClaim | leftOperand | string | First claim's type, which should be later than the second claim. |
| InputClaim | rightOperand | string | Second claim's type, which should be earlier than the first claim. |
-| InputParameter | AssertIfEqualTo | boolean | Specifies whether this assertion should throw an error if the left operand is equal to the right operand. An error will be thrown if the left operand is equal to the right operand and the value is set to `true`. Possible values: `true` (default), or `false`. |
+| InputParameter | AssertIfEqualTo | boolean | Specifies whether this assertion should throw an error if the left operand is equal to the right operand. Possible values: `true` (default), or `false`. |
| InputParameter | AssertIfRightOperandIsNotPresent | boolean | Specifies whether this assertion should pass if the right operand is missing. |
| InputParameter | TreatAsEqualIfWithinMillseconds | int | Specifies the number of milliseconds to allow between the two date times to consider the times equal (for example, to account for clock skew). |
The **AssertDateTimeIsGreaterThan** claims transformation is always executed from a [validation technical profile](validation-technical-profile.md) that is called by a [self-asserted technical profile](self-asserted-technical-profile.md), or a display control.
![AssertDateTimeIsGreaterThan execution](./media/date-transformations/assert-execution.png)
+### AssertDateTimeIsGreaterThan example
+ The following example compares the `currentDateTime` claim with the `approvedDateTime` claim. An error is thrown if `currentDateTime` is later than `approvedDateTime`. The transformation treats values as equal if they are within 5 minutes (300,000 milliseconds) of each other. It won't throw an error if the values are equal because `AssertIfEqualTo` is set to `false`.
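The transformation definition for this example takes the following general shape (a sketch based on the claim and parameter names above; the `Id` matches the transformation referenced later in this section):

```xml
<ClaimsTransformation Id="AssertApprovedDateTimeLaterThanCurrentDateTime" TransformationMethod="AssertDateTimeIsGreaterThan">
  <InputClaims>
    <!-- The assertion throws if rightOperand is later than leftOperand. -->
    <InputClaim ClaimTypeReferenceId="currentDateTime" TransformationClaimType="leftOperand" />
    <InputClaim ClaimTypeReferenceId="approvedDateTime" TransformationClaimType="rightOperand" />
  </InputClaims>
  <InputParameters>
    <!-- Don't throw when the two values are equal. -->
    <InputParameter Id="AssertIfEqualTo" DataType="boolean" Value="false" />
    <InputParameter Id="AssertIfRightOperandIsNotPresent" DataType="boolean" Value="true" />
    <!-- Treat values within 5 minutes (300,000 ms) of each other as equal. -->
    <InputParameter Id="TreatAsEqualIfWithinMillseconds" DataType="int" Value="300000" />
  </InputParameters>
</ClaimsTransformation>
```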
The following example compares the `currentDateTime` claim with the `approvedDat
> In the example above, if you remove the `AssertIfEqualTo` input parameter, and the `currentDateTime` is equal to `approvedDateTime`, an error will be thrown. The `AssertIfEqualTo` default value is `true`.
>
-The `login-NonInteractive` validation technical profile calls the `AssertApprovedDateTimeLaterThanCurrentDateTime` claims transformation.
+- Input claims:
+ - **leftOperand**: 2022-01-01T15:00:00
+ - **rightOperand**: 2022-01-22T15:00:00
+- Input parameters:
+ - **AssertIfEqualTo**: false
+ - **AssertIfRightOperandIsNotPresent**: true
+ - **TreatAsEqualIfWithinMillseconds**: 300000 (5 minutes)
+- Result: Error thrown
+
+### Call the claims transformation
+
+The following `Example-AssertDates` validation technical profile calls the `AssertApprovedDateTimeLaterThanCurrentDateTime` claims transformation.
+ ```xml
-<TechnicalProfile Id="login-NonInteractive">
- ...
+<TechnicalProfile Id="Example-AssertDates">
+ <DisplayName>Unit test</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.ClaimsTransformationProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="ComparisonResult" DefaultValue="false" />
+ </OutputClaims>
<OutputClaimsTransformations>
- <OutputClaimsTransformation ReferenceId="AssertApprovedDateTimeLaterThanCurrentDateTime" />
+ <OutputClaimsTransformation ReferenceId="AssertApprovedDateTimeLaterThanCurrentDateTime" />
</OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
</TechnicalProfile>
```
-The self-asserted technical profile calls the validation **login-NonInteractive** technical profile.
+The self-asserted technical profile calls the `Example-AssertDates` validation technical profile.
```xml
-<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
+<TechnicalProfile Id="SelfAsserted-AssertDateTimeIsGreaterThan">
+ <DisplayName>User ID signup</DisplayName>
+ <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
<Metadata>
- <Item Key="DateTimeGreaterThan">Custom error message if the provided left operand is greater than the right operand.</Item>
+ <Item Key="ContentDefinitionReferenceId">api.selfasserted</Item>
+ <Item Key="DateTimeGreaterThan">Custom error message if the provided right operand is greater than the right operand.</Item>
</Metadata>
+ ...
<ValidationTechnicalProfiles>
- <ValidationTechnicalProfile ReferenceId="login-NonInteractive" />
+ <ValidationTechnicalProfile ReferenceId="ClaimsTransformation-AssertDateTimeIsGreaterThan" />
 </ValidationTechnicalProfiles>
</TechnicalProfile>
```
-### Example
-
-- Input claims:
-  - **leftOperand**: 2020-03-01T15:00:00.0000000Z
-  - **rightOperand**: 2020-03-01T14:00:00.0000000Z
-- Result: Error thrown
-
## ConvertDateToDateTimeClaim
-Converts a **Date** ClaimType to a **DateTime** ClaimType. The claims transformation converts the time format and adds 12:00:00 AM to the date.
+Converts a `Date` claim type to a `DateTime` claim type. The claims transformation converts the time format and adds 12:00:00 AM to the date.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
-| InputClaim | inputClaim | date | The ClaimType to be converted. |
-| OutputClaim | outputClaim | dateTime | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| InputClaim | inputClaim | date | The claim type to be converted. |
+| OutputClaim | outputClaim | dateTime | The claim type that is produced after this claims transformation has been invoked. |
+
+### ConvertDateToDateTimeClaim example
The following example demonstrates the conversion of the claim `dateOfBirth` (date data type) to another claim `dateOfBirthWithTime` (dateTime data type).
The following example demonstrates the conversion of the claim `dateOfBirth` (da
</ClaimsTransformation>
```
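A minimal sketch of the full transformation, assuming the claim names from the description (the `Id` is illustrative):

```xml
<ClaimsTransformation Id="ConvertToDateTime" TransformationMethod="ConvertDateToDateTimeClaim">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="dateOfBirth" TransformationClaimType="inputClaim" />
  </InputClaims>
  <OutputClaims>
    <!-- Same date, with the time portion 00:00:00 appended. -->
    <OutputClaim ClaimTypeReferenceId="dateOfBirthWithTime" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```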
-### Example
- - Input claims:
- - **inputClaim**: 2020-15-03
+ - **inputClaim**: 2022-01-03
- Output claims:
- - **outputClaim**: 2020-15-03T00:00:00.0000000Z
+ - **outputClaim**: 2022-01-03T00:00:00.0000000Z
## ConvertDateTimeToDateClaim
-Converts a **DateTime** ClaimType to a **Date** ClaimType. The claims transformation removes the time format from the date.
+Converts a `DateTime` claim type to a `Date` claim type. The claims transformation removes the time format from the date.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
-| InputClaim | inputClaim | dateTime | The ClaimType to be converted. |
-| OutputClaim | outputClaim | date | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| InputClaim | inputClaim | dateTime | The claim type to be converted. |
+| OutputClaim | outputClaim | date | The claim type that is produced after this claims transformation has been invoked. |
+
+### ConvertDateTimeToDateClaim example
The following example demonstrates the conversion of the claim `systemDateTime` (dateTime data type) to another claim `systemDate` (date data type).
The following example demonstrates the conversion of the claim `systemDateTime`
</ClaimsTransformation>
```
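A minimal sketch of the full transformation, assuming the claim names from the description (the `Id` is illustrative):

```xml
<ClaimsTransformation Id="ConvertToDate" TransformationMethod="ConvertDateTimeToDateClaim">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="systemDateTime" TransformationClaimType="inputClaim" />
  </InputClaims>
  <OutputClaims>
    <!-- Same value, with the time portion removed. -->
    <OutputClaim ClaimTypeReferenceId="systemDate" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```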
-### Example
- - Input claims:
- - **inputClaim**: 2020-15-03T11:34:22.0000000Z
+ - **inputClaim**: 2022-01-03T11:34:22.0000000Z
- Output claims:
- - **outputClaim**: 2020-15-03
-
-## GetCurrentDateTime
-
-Get the current UTC date and time and add the value to a ClaimType.
-
-| Item | TransformationClaimType | Data Type | Notes |
-| - | -- | | -- |
-| OutputClaim | currentDateTime | dateTime | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
-
-```xml
-<ClaimsTransformation Id="GetSystemDateTime" TransformationMethod="GetCurrentDateTime">
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="systemDateTime" TransformationClaimType="currentDateTime" />
- </OutputClaims>
-</ClaimsTransformation>
-```
-
-### Example
-
-* Output claims:
- * **currentDateTime**: 2020-15-03T11:40:35.0000000Z
+ - **outputClaim**: 2022-01-03
## DateTimeComparison
-Determine whether one dateTime is later, earlier, or equal to another. The result is a new boolean ClaimType boolean with a value of `true` or `false`.
+Compares two dates and determines whether the first date is later than, earlier than, or equal to the second. The result is a new Boolean claim with a value of `true` or `false`.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
-| InputClaim | firstDateTime | dateTime | The first dateTime to compare whether it is earlier or later than the second dateTime. Null value throws an exception. |
-| InputClaim | secondDateTime | dateTime | The second dateTime to compare whether it is earlier or later than the first dateTime. Null value is treated as the current datetTime. |
+| InputClaim | firstDateTime | dateTime | The first date to compare whether it's later, earlier, or equal to the second date. Null value throws an exception. |
+| InputClaim | secondDateTime | dateTime | The second date to compare. Null value is treated as the current dateTime. |
+| InputParameter | timeSpanInSeconds | int | Timespan to add to the first date. Possible values: from -2,147,483,648 through 2,147,483,647. |
| InputParameter | operator | string | One of following values: same, later than, or earlier than. |
-| InputParameter | timeSpanInSeconds | int | Add the timespan to the first datetime. |
-| OutputClaim | result | boolean | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| OutputClaim | result | boolean | The claim that is produced after this claims transformation has been invoked. |
+
+Use this claims transformation to determine whether the first date plus the timespan parameter is later than, earlier than, or equal to the second date. For example, you may store the last time a user accepted your terms of service (TOS). After three months, you can ask the user to accept the TOS again.
+To run the claims transformation, you first need to get the current date and the last time the user accepted the TOS.
+
+### DateTimeComparison example
-Use this claims transformation to determine if two ClaimTypes are equal, later, or earlier than each other. For example, you may store the last time a user accepted your terms of services (TOS). After 3 months, you can ask the user to access the TOS again.
-To run the claim transformation, you first need to get the current dateTime and also the last time user accepts the TOS.
+The following example shows that the first date (2022-01-01T00:00:00) plus 90 days is later than the second date (2022-03-16T00:00:00).
```xml <ClaimsTransformation Id="CompareLastTOSAcceptedWithCurrentDateTime" TransformationMethod="DateTimeComparison"> <InputClaims>
- <InputClaim ClaimTypeReferenceId="currentDateTime" TransformationClaimType="firstDateTime" />
<InputClaim ClaimTypeReferenceId="extension_LastTOSAccepted" TransformationClaimType="secondDateTime" />
+ <InputClaim ClaimTypeReferenceId="currentDateTime" TransformationClaimType="firstDateTime" />
  </InputClaims>
  <InputParameters>
    <InputParameter Id="operator" DataType="string" Value="later than" />
To run the claim transformation, you first need to get the current dateTime and
</ClaimsTransformation>
```
-### Example
- - Input claims:
- - **firstDateTime**: 2020-01-01T00:00:00.100000Z
- - **secondDateTime**: 2020-04-01T00:00:00.100000Z
+ - **firstDateTime**: 2022-01-01T00:00:00.100000Z
+ - **secondDateTime**: 2022-03-16T00:00:00.100000Z
- Input parameters:
  - **operator**: later than
  - **timeSpanInSeconds**: 7776000 (90 days)
- Output claims:
  - **result**: true
+
+## GetCurrentDateTime
+
+Get the current UTC date and time and add the value to a claim type.
+
+| Item | TransformationClaimType | Data Type | Notes |
+| - | -- | | -- |
+| OutputClaim | currentDateTime | dateTime | The claim type that is produced after this claims transformation has been invoked. |
+
+### GetCurrentDateTime example
+
+The following example shows how to get the current date and time:
+
+```xml
+<ClaimsTransformation Id="GetSystemDateTime" TransformationMethod="GetCurrentDateTime">
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="systemDateTime" TransformationClaimType="currentDateTime" />
+ </OutputClaims>
+</ClaimsTransformation>
+```
+
+* Output claims:
+ * **currentDateTime**: 2022-01-14T11:40:35.0000000Z
+
+## Next steps
+
+- Find more [claims transformation samples](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-transformation) on the Azure AD B2C community GitHub repo
active-directory-b2c General Transformations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/general-transformations.md
Previously updated : 02/03/2020 Last updated : 01/14/2022
[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
-This article provides examples for using general claims transformations of the Identity Experience Framework schema in Azure Active Directory B2C (Azure AD B2C). For more information, see [ClaimsTransformations](claimstransformations.md).
+This article provides examples for using general claims transformations of the Identity Experience Framework schema in Azure Active Directory B2C (Azure AD B2C). For more information, see [claims transformations](claimstransformations.md).
## CopyClaim
Copies the value of one claim to another. Both claims must be of the same type.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
-| InputClaim | inputClaim | string, int | The claim type which is to be copied. |
-| OutputClaim | outputClaim | string, int | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| InputClaim | inputClaim | string, int | The claim type to be copied. |
+| OutputClaim | outputClaim | string, int | The claim that is produced after this claims transformation has been invoked. |
Use this claims transformation to copy a value from a string or numeric claim to another claim. The following example copies the `externalEmail` claim value to the `email` claim.
Use this claims transformation to copy a value from a string or numeric claim, t
</ClaimsTransformation>
```
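A sketch of the full definition, assuming the `externalEmail` and `email` claim types from the example (the `Id` is illustrative):

```xml
<ClaimsTransformation Id="CopyEmailAddress" TransformationMethod="CopyClaim">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="externalEmail" TransformationClaimType="inputClaim" />
  </InputClaims>
  <OutputClaims>
    <!-- Receives a copy of the externalEmail value. Both claims are strings. -->
    <OutputClaim ClaimTypeReferenceId="email" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```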
-### Example
+### CopyClaim example
- Input claims:
  - **inputClaim**: bob@contoso.com
Checks if the **inputClaim** exists or not and sets **outputClaim** to true or false accordingly.
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
| InputClaim | inputClaim | Any | The input claim whose existence needs to be verified. |
-| OutputClaim | outputClaim | boolean | The ClaimType that is produced after this ClaimsTransformation has been invoked. |
+| OutputClaim | outputClaim | boolean | The claim that is produced after this claims transformation has been invoked. |
Use this claims transformation to check if a claim exists or contains any value. The return value is a boolean that indicates whether the claim exists. The following example checks if the email address exists.
Use this claims transformation to check if a claim exists or contains any value.
</ClaimsTransformation>
```
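A sketch of the full definition, assuming an `email` input claim and an `isEmailPresent` Boolean output claim (both names illustrative):

```xml
<ClaimsTransformation Id="CheckIfEmailPresent" TransformationMethod="DoesClaimExist">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="inputClaim" />
  </InputClaims>
  <OutputClaims>
    <!-- true if the email claim exists and has a value; otherwise false. -->
    <OutputClaim ClaimTypeReferenceId="isEmailPresent" TransformationClaimType="outputClaim" />
  </OutputClaims>
</ClaimsTransformation>
```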
-### Example
+### DoesClaimExist example
- Input claims:
  - **inputClaim**: someone@contoso.com
Hash the provided plain text using the salt and a secret. The hashing algorithm used is HMAC-SHA256.
| InputClaim | plaintext | string | The input claim to be hashed. |
| InputClaim | salt | string | The salt parameter. You can create a random value by using the `CreateRandomString` claims transformation. |
| InputParameter | randomizerSecret | string | Points to an existing Azure AD B2C **policy key**. To create a new policy key: In your Azure AD B2C tenant, under **Manage**, select **Identity Experience Framework**. Select **Policy keys** to view the keys that are available in your tenant. Select **Add**. For **Options**, select **Manual**. Provide a name (the prefix *B2C_1A_* might be added automatically). In the **Secret** text box, enter any secret you want to use, such as 1234567890. For **Key usage**, select **Signature**. Select **Create**. |
-| OutputClaim | hash | string | The ClaimType that is produced after this claims transformation has been invoked. The claim configured in the `plaintext` inputClaim. |
+| OutputClaim | hash | string | The claim that is produced after this claims transformation has been invoked. The claim configured in the `plaintext` inputClaim. |
```xml <ClaimsTransformation Id="HashPasswordWithEmail" TransformationMethod="Hash">
Hash the provided plain text using the salt and a secret. The hashing algorithm
</ClaimsTransformation>
```
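A sketch of the full transformation, assuming `password`, `email`, and `hashedPassword` claim types (names illustrative; the policy key matches the example below):

```xml
<ClaimsTransformation Id="HashPasswordWithEmail" TransformationMethod="Hash">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="password" TransformationClaimType="plaintext" />
    <!-- The salt; here, the user's email address. -->
    <InputClaim ClaimTypeReferenceId="email" TransformationClaimType="salt" />
  </InputClaims>
  <InputParameters>
    <!-- References an existing policy key in the tenant. -->
    <InputParameter Id="randomizerSecret" DataType="string" Value="B2C_1A_AccountTransformSecret" />
  </InputParameters>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="hashedPassword" TransformationClaimType="hash" />
  </OutputClaims>
</ClaimsTransformation>
```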
-### Example
+### Hash example
- Input claims:
  - **plaintext**: MyPass@word1
Hash the provided plain text using the salt and a secret. The hashing algorithm
  - **randomizerSecret**: B2C_1A_AccountTransformSecret
- Output claims:
  - **outputClaim**: CdMNb/KTEfsWzh9MR1kQGRZCKjuxGMWhA5YQNihzV6U=
+
+## Next steps
+
+- Find more [claims transformation samples](https://github.com/azure-ad-b2c/unit-tests/tree/main/claims-transformation) on the Azure AD B2C community GitHub repo
active-directory-b2c Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/multi-factor-authentication.md
Previously updated : 12/09/2021 Last updated : 01/14/2022
A customer account is created in your tenant before the multifactor authentication step.
::: zone pivot="b2c-custom-policy"
-To enable multifactor authentication, get the custom policy starter packs from GitHub as follows:
+To enable multifactor authentication, get the custom policy starter pack from GitHub as follows:
-- [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository from `https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack`, and then update the XML files in the **SocialAndLocalAccountsWithMFA** starter pack with your Azure AD B2C tenant name. The **SocialAndLocalAccountsWithMFA** enables social, local, and multifactor authentication options, except the Authenticator app - TOTP MFA option.
+- [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository from `https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack`, and then update the XML files in the **SocialAndLocalAccountsWithMFA** starter pack with your Azure AD B2C tenant name. The **SocialAndLocalAccountsWithMFA** starter pack enables social and local sign-in options, and multifactor authentication options, except for the Authenticator app - TOTP option.
- To support the **Authenticator app - TOTP** MFA option, download the custom policy files from `https://github.com/azure-ad-b2c/samples/tree/master/policies/totp`, and then update the XML files with your Azure AD B2C tenant name. Make sure to include the `TrustFrameworkExtensions.xml`, `TrustFrameworkLocalization.xml`, and `TrustFrameworkBase.xml` XML files from the **SocialAndLocalAccounts** starter pack.
- Update your page layout to version `2.1.9`. For more information, see [Select a page layout](contentdefinitions.md#select-a-page-layout).
When an Azure AD B2C application enables MFA using the TOTP option, end users need to use an authenticator app to generate TOTP codes.
1. Select **+ Add account**.
1. Select **Other account (Google, Facebook, etc.)**, and then scan the QR code shown in the application (for example, *Contoso webapp*) to enroll your account. If you're unable to scan the QR code, you can add the account manually:
    1. In the Microsoft Authenticator app on your phone, select **OR ENTER CODE MANUALLY**.
- 1. In the application (for example, *Contoso webapp*), select **Still having trouble?** to show **Account Name** and **Secret**.
+ 1. In the application (for example, *Contoso webapp*), select **Still having trouble?**. This displays **Account Name** and **Secret**.
    1. Enter the **Account Name** and **Secret** in your Microsoft Authenticator app, and then select **FINISH**.
1. In the application (for example, *Contoso webapp*), select **Continue**.
1. In **Enter your code**, enter the code that appears in your Microsoft Authenticator app.
Learn about [OATH software tokens](../active-directory/authentication/concept-au
## Delete a user's TOTP authenticator enrollment (for system admins)
-In Azure AD B2C, you can delete a user's TOTP authenticator app enrollment. Then the user would be required to re-enroll their account to use TOTP authentication again. To delete a user's TOTP enrollment, you can use either the Azure portal or the Microsoft Graph API.
+In Azure AD B2C, you can delete a user's TOTP authenticator app enrollment. Then the user would be required to re-enroll their account to use TOTP authentication again. To delete a user's TOTP enrollment, you can use either the [Azure portal](https://portal.azure.com) or the [Microsoft Graph API](/graph/api/softwareoathauthenticationmethod-delete).
> [!NOTE]
> - Deleting a user's TOTP authenticator app enrollment from Azure AD B2C doesn't remove the user's account in the TOTP authenticator app. The system admin needs to direct the user to manually delete their account from the TOTP authenticator app before trying to enroll again.
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/self-asserted-technical-profile.md
Previously updated : 03/10/2021 Last updated : 01/14/2022
You can also call a REST API technical profile with your business logic, overwri
| setting.enableRememberMe <sup>2</sup>| No| Displays the [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) checkbox. Possible values: `true`, or `false` (default). |
| setting.inputVerificationDelayTimeInMilliseconds <sup>3</sup>| No| Improves user experience by waiting for the user to stop typing, and then validating the value. Default value 2000 milliseconds. |
| IncludeClaimResolvingInClaimsHandling | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
-|forgotPasswordLinkOverride <sup>4</sup>| No | A password reset claims exchange to be executed. For more information, see [Self-service password reset](add-password-reset-policy.md). |
+|setting.forgotPasswordLinkOverride <sup>4</sup>| No | A password reset claims exchange to be executed. For more information, see [Self-service password reset](add-password-reset-policy.md). |
Notes:

1. Available for content definition [DataUri](contentdefinitions.md#datauri) type of `unifiedssp`, or `unifiedssd`.
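As a sketch, a metadata setting such as `setting.forgotPasswordLinkOverride` is applied as an `Item` in the self-asserted technical profile's `Metadata` element (the claims exchange name `ForgotPasswordExchange` here is illustrative):

```xml
<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
  <Metadata>
    <!-- Claims exchange to run when the user selects the password-reset link. -->
    <Item Key="setting.forgotPasswordLinkOverride">ForgotPasswordExchange</Item>
  </Metadata>
</TechnicalProfile>
```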
active-directory-b2c Solution Articles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/solution-articles.md
Azure Active Directory B2C (Azure AD B2C) enables organizations to implement bus
| Title | Medium | Description |
| ----- | ------ | ----------- |
-| [Customer Identity Management with Azure AD B2C](https://channel9.msdn.com/Shows/On-NET/Customer-Identity-Management-with-Azure-AD-B2C) | Video (20 minutes) | In this overview of the service, Parakh Jain ([@jainparakh](https://twitter.com/jainparakh)) from the Azure AD B2C team provides us an overview of how the service works, and also show how we can quickly connect B2C to an ASP.NET Core application. |
+| [Customer Identity Management with Azure AD B2C](/Shows/On-NET/Customer-Identity-Management-with-Azure-AD-B2C) | Video (20 minutes) | In this overview of the service, Parakh Jain ([@jainparakh](https://twitter.com/jainparakh)) from the Azure AD B2C team provides an overview of how the service works, and also shows how to quickly connect B2C to an ASP.NET Core application. |
| [Benefits of using Azure AD B2C](https://aka.ms/b2coverview) | PDF | Understand the benefits and common scenarios of Azure AD B2C, and how your application(s) can leverage this CIAM service. |
| [Gaining Expertise in Azure AD B2C: A Course for Developers](https://aka.ms/learnAADB2C) | PDF | This end-to-end course takes developers through a complete journey on developing applications with Azure AD B2C as the authentication mechanism. Ten in-depth modules with labs cover everything from setting up an Azure subscription to creating custom policies that define the journeys that engage your customers. |
| [Enabling partners, Suppliers, and Customers to Access Applications with Azure Active Directory](https://aka.ms/aadexternalidentities) | PDF | Every organization's success, regardless of its size, industry, or compliance and security posture, relies on organizational ability to collaborate with other organizations and connect with customers.<br><br>Bringing together Azure AD, Azure AD B2C, and Azure AD B2B Collaboration, this guide details the business value and the mechanics of building an application or web experience that provides a consolidated authentication experience tailored to the contexts of your employees, business partners and suppliers, and customers. |
active-directory Console App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/console-app-quickstart.md
+
+ Title: "Quickstart: Call Microsoft Graph from a console application | Azure"
+
+description: In this quickstart, you learn how a console application can get an access token and call an API protected by the Microsoft identity platform, using the app's own identity
++++++++ Last updated : 12/06/2021++
+zone_pivot_groups: console-app-quickstart
+#Customer intent: As an app developer, I want to learn how my console app can get an access token and call an API that's protected by the Microsoft identity platform by using the client credentials flow.
++
+# Quickstart: Acquire a token and call the Microsoft Graph API by using a console app's identity
++++
active-directory Desktop App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/desktop-app-quickstart.md
+
+ Title: "Quickstart: Sign in users and call Microsoft Graph in a desktop app | Azure"
+
+description: In this quickstart, learn how a desktop application can get an access token and call an API protected by the Microsoft identity platform.
++++++++ Last updated : 01/14/2022++
+zone_pivot_groups: desktop-app-quickstart
+#Customer intent: As an application developer, I want to learn how my desktop application can get an access token and call an API that's protected by the Microsoft identity platform.
++
+# Quickstart: Acquire a token and call Microsoft Graph API from a desktop application
+++
active-directory Mobile App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/mobile-app-quickstart.md
+
+ Title: "Quickstart: Add sign in with Microsoft to a mobile app | Azure"
+
+description: In this quickstart, learn how a mobile app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
++++++++ Last updated : 01/14/2022++
+zone_pivot_groups: mobile-app-quickstart
+#Customer intent: As an application developer, I want to learn how to sign in users and call Microsoft Graph from my mobile application.
++
+# Quickstart: Sign in users and call the Microsoft Graph API from a mobile application
+++
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-register-app.md
Previously updated : 10/27/2021 Last updated : 01/13/2022 #Customer intent: As developer, I want to know how to register my application with the Microsoft identity platform so that the security token service can issue ID and/or access tokens to client applications that request them.
# Quickstart: Register an application with the Microsoft identity platform
-In this quickstart, you register an app in the Azure portal so the Microsoft identity platform can provide authentication and authorization services for your application and its users.
+Get started with the Microsoft identity platform by registering an application in the Azure portal.
The Microsoft identity platform performs identity and access management (IAM) only for registered applications. Whether it's a client application like a web or mobile app, or it's a web API that backs a client app, registering it establishes a trust relationship between your application and the identity provider, the Microsoft identity platform.
You add and modify redirect URIs for your registered applications by configuring their platform settings.
Settings for each application type, including redirect URIs, are configured in **Platform configurations** in the Azure portal. Some platforms, like **Web** and **Single-page applications**, require you to manually specify a redirect URI. For other platforms, like mobile and desktop, you can select from redirect URIs generated for you when you configure their other settings.
-To configure application settings based on the platform or device you're targeting:
+To configure application settings based on the platform or device you're targeting, follow these steps:
1. In the Azure portal, in **App registrations**, select your application.
1. Under **Manage**, select **Authentication**.
Sometimes called a _public key_, a certificate is the recommended credential type because it's considered more secure than a client secret.
Sometimes called an _application password_, a client secret is a string value your app can use in place of a certificate to identify itself.
-Client secrets are considered less secure than certificate credentials. Application developers sometimes use client secrets during local app development because of their ease of use. However, you should use certificate credentials for any application you have running in production.
+Client secrets are considered less secure than certificate credentials. Application developers sometimes use client secrets during local app development because of their ease of use. However, you should use certificate credentials for any of your applications that are running in production.
1. In the Azure portal, in **App registrations**, select your application.
1. Select **Certificates & secrets** > **Client secrets** > **New client secret**.
active-directory Quickstart V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-android.md
-+ Previously updated : 10/15/2019 Last updated : 01/14/2022 #Customer intent: As an application developer, I want to learn how Android native apps can call protected APIs that require login and access tokens using the Microsoft identity platform.
Applications must be represented by an app object in Azure Active Directory so t
* Android Studio
* Android 16+
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Step 1: Configure your application in the Azure portal
-> For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
->
-> ### Step 2: Download the project
-> [!div class="sxs-lookup" renderon="portal"]
-> Run the project using Android Studio.
-> [!div class="sxs-lookup" renderon="portal" id="autoupdate" class="nextstepaction"]
+### Step 1: Configure your application in the Azure portal
+For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
+
+### Step 2: Download the project
+
+Run the project using Android Studio.
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip)
->
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Step 3: Your app is configured and ready to run
-> We have configured your project with values of your app's properties and it's ready to run.
-> The sample app starts on the **Single Account Mode** screen. A default scope, **user.read**, is provided by default, which is used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is provided by default. You can change both of these if you wish.
->
-> ![MSAL sample app showing single and multiple account usage](./media/quickstart-v2-android/quickstart-sample-app.png)
->
-> Use the app menu to change between single and multiple account modes.
->
-> In single account mode, sign in using a work or home account:
->
-> 1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
-> 2. Once signed in, select **Get graph data silently** to make a call to the Microsoft Graph API without prompting the user for credentials again. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
->
-> In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
-
-> [!div class="sxs-lookup" renderon="portal"]
++
+### Step 3: Your app is configured and ready to run
+
+We've configured your project with the values of your app's properties, and it's ready to run.
+The sample app starts on the **Single Account Mode** screen. The default scope, **user.read**, is used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is provided by default. You can change both of these if you wish.
+
+![MSAL sample app showing single and multiple account usage](./media/quickstart-v2-android/quickstart-sample-app.png)
+
+Use the app menu to change between single and multiple account modes.
+
+In single account mode, sign in using a work or home account:
+
+1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API at the bottom of the screen.
+2. Once signed in, select **Get graph data silently** to make a call to the Microsoft Graph API without prompting the user for credentials again. You'll see the output from the call to the Microsoft Graph API at the bottom of the screen.
+
+In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
+
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> ## Step 1: Get the sample app
->
-> [Download the code](https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip).
->
-> ## Step 2: Run the sample app
->
-> Select your emulator, or physical device, from Android Studio's **available devices** dropdown and run the app.
->
-> The sample app starts on the **Single Account Mode** screen. A default scope, **user.read**, is provided by default, which is used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is provided by default. You can change both of these if you wish.
->
-> ![MSAL sample app showing single and multiple account usage](./media/quickstart-v2-android/quickstart-sample-app.png)
->
-> Use the app menu to change between single and multiple account modes.
->
-> In single account mode, sign in using a work or home account:
->
-> 1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
-> 2. Once signed in, select **Get graph data silently** to make a call to the Microsoft Graph API without prompting the user for credentials again. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
->
-> In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
## How the sample works

![Screenshot of the sample app](media/quickstart-v2-android/android-intro.svg)
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
-+ Previously updated : 09/22/2020 Last updated : 01/11/2022 #Customer intent: As an application developer, I want to know how to write an ASP.NET Core web API that uses the Microsoft identity platform to authorize API requests from clients.
In this quickstart, you download an ASP.NET Core web API code sample and review the way it restricts resource access to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
-> [!div renderon="docs"]
-> ## Prerequisites
->
-> - Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-> - [Azure Active Directory tenant](quickstart-create-new-tenant.md)
-> - [.NET Core SDK 3.1+](https://dotnet.microsoft.com/)
-> - [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
->
-> ## Step 1: Register the application
->
-> First, register the web API in your Azure AD tenant and add a scope by following these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. For **Name**, enter a name for your application. For example, enter **AspNetCoreWebApi-Quickstart**. Users of your app will see this name, and you can change it later.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Expose an API** > **Add a scope**. For **Application ID URI**, accept the default by selecting **Save and continue**, and then enter the following details:
-> - **Scope name**: `access_as_user`
-> - **Who can consent?**: **Admins and users**
-> - **Admin consent display name**: `Access AspNetCoreWebApi-Quickstart`
-> - **Admin consent description**: `Allows the app to access AspNetCoreWebApi-Quickstart as the signed-in user.`
-> - **User consent display name**: `Access AspNetCoreWebApi-Quickstart`
-> - **User consent description**: `Allow the application to access AspNetCoreWebApi-Quickstart on your behalf.`
-> - **State**: **Enabled**
-> 1. Select **Add scope** to complete the scope addition.
+
+## Prerequisites
+
+- Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Azure Active Directory tenant](quickstart-create-new-tenant.md)
+- [.NET Core SDK 3.1+](https://dotnet.microsoft.com/)
+- [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
+
+## Step 1: Register the application
+
+First, register the web API in your Azure AD tenant and add a scope by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations** > **New registration**.
+1. For **Name**, enter a name for your application. For example, enter **AspNetCoreWebApi-Quickstart**. Users of your app will see this name, and you can change it later.
+1. Select **Register**.
+1. Under **Manage**, select **Expose an API** > **Add a scope**. For **Application ID URI**, accept the default by selecting **Save and continue**, and then enter the following details:
+ - **Scope name**: `access_as_user`
+ - **Who can consent?**: **Admins and users**
+ - **Admin consent display name**: `Access AspNetCoreWebApi-Quickstart`
+ - **Admin consent description**: `Allows the app to access AspNetCoreWebApi-Quickstart as the signed-in user.`
+ - **User consent display name**: `Access AspNetCoreWebApi-Quickstart`
+ - **User consent description**: `Allow the application to access AspNetCoreWebApi-Quickstart on your behalf.`
+ - **State**: **Enabled**
+1. Select **Add scope** to complete the scope addition.
## Step 2: Download the ASP.NET Core project
-> [!div renderon="docs"]
-> [Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/archive/aspnetcore3-1.zip) from GitHub.
+[Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/archive/aspnetcore3-1.zip) from GitHub.
[!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-> [!div renderon="docs"]
-> ## Step 3: Configure the ASP.NET Core project
->
-> In this step, configure the sample code to work with the app registration that you created earlier.
->
-> 1. Extract the .zip archive into a folder near the root of your drive. For example, extract into *C:\Azure-Samples*.
->
-> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
->
-> 1. Open the solution in the *webapi* folder in your code editor.
-> 1. Open the *appsettings.json* file and modify the following code:
->
-> ```json
-> "ClientId": "Enter_the_Application_Id_here",
-> "TenantId": "Enter_the_Tenant_Info_Here"
-> ```
->
-> - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the application (client) ID on the app's **Overview** page.
-> - Replace `Enter_the_Tenant_Info_Here` with one of the following:
-> - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). You can find the directory (tenant) ID on the app's **Overview** page.
-> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
-> - If your application supports **All Microsoft account users**, leave this value as `common`.
->
-> For this quickstart, don't change any other values in the *appsettings.json* file.
+
+## Step 3: Configure the ASP.NET Core project
+
+In this step, configure the sample code to work with the app registration that you created earlier.
+
+1. Extract the .zip archive into a folder near the root of your drive. For example, extract into *C:\Azure-Samples*.
+
+ We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
+
+1. Open the solution in the *webapi* folder in your code editor.
+1. Open the *appsettings.json* file and modify the following code:
+
+ ```json
+ "ClientId": "Enter_the_Application_Id_here",
+ "TenantId": "Enter_the_Tenant_Info_Here"
+ ```
+
+ - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the application (client) ID on the app's **Overview** page.
+ - Replace `Enter_the_Tenant_Info_Here` with one of the following:
+ - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). You can find the directory (tenant) ID on the app's **Overview** page.
+ - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+ - If your application supports **All Microsoft account users**, leave this value as `common`.
+
+For this quickstart, don't change any other values in the *appsettings.json* file.
## How the sample works
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
-+ Previously updated : 10/05/2020 Last updated : 01/11/2022 #Customer intent: As an application developer, I want to know how to set up OpenId Connect authentication in a web application that's built by using Node.js with Express.
You can obtain the sample in either of two ways:
Register your web API in **App registrations** in the Azure portal.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
1. Find and select **Azure Active Directory**.
1. Under **Manage**, select **App registrations** > **New registration**.
active-directory Quickstart V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-ios.md
-+ Previously updated : 09/24/2019 Last updated : 01/14/2022
The quickstart applies to both iOS and macOS apps. Some steps are needed only fo
![Shows how the sample app generated by this quickstart works](media/quickstart-v2-ios/ios-intro.svg)
-> [!div renderon="docs"]
-> ## Register and download your quickstart app
-> You have two options to start your quickstart application:
-> * [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-the-code-sample)
-> * [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
->
-> ### Option 1: Register and auto configure your app and then download the code sample
-> #### Step 1: Register your application
-> To register your app,
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/IosQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application with just one click.
->
-> ### Option 2: Register and manually configure your application and code sample
->
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Authentication** > **Add Platform** > **iOS**.
-> 1. Enter the **Bundle Identifier** for your application. The bundle identifier is a unique string that uniquely identifies your application, for example `com.<yourname>.identitysample.MSALMacOS`. Make a note of the value you use. Note that the iOS configuration is also applicable to macOS applications.
-> 1. Select **Configure** and save the **MSAL Configuration** details for later in this quickstart.
-> 1. Select **Done**.
-
-> [!div renderon="portal" class="sxs-lookup"]
->
-> #### Step 1: Configure your application
-> For the code sample for this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes
->
-> #### Step 2: Download the sample project
-> > [!div id="autoupdate_ios" class="nextstepaction"]
-> > [Download the code sample for iOS]()
->
-> > [!div id="autoupdate_macos" class="nextstepaction"]
-> > [Download the code sample for macOS]()
-> [!div renderon="docs"]
-> #### Step 2: Download the sample project
->
-> - [Download the code sample for iOS](https://github.com/Azure-Samples/active-directory-ios-swift-native-v2/archive/master.zip)
-> - [Download the code sample for macOS](https://github.com/Azure-Samples/active-directory-macOS-swift-native-v2/archive/master.zip)
+#### Step 1: Configure your application
+For the code sample for this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+> [!div class="nextstepaction"]
+> [Make this change for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes
+
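+For reference, the broker-compatible redirect URI that this step registers typically follows the pattern `msauth.<Bundle ID>://auth`. Below is a hedged Python illustration of that derivation; the bundle ID is hypothetical.
+
+```python
+# Hedged illustration: the broker-compatible redirect URI is derived from
+# your app's Bundle Identifier. The bundle ID below is hypothetical.
+def broker_redirect_uri(bundle_id: str) -> str:
+    return f"msauth.{bundle_id}://auth"
+
+print(broker_redirect_uri("com.contoso.identitysample.MSALMacOS"))
+# -> msauth.com.contoso.identitysample.MSALMacOS://auth
+```
+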
+#### Step 2: Download the sample project
+> [!div class="nextstepaction"]
+> [Download the code sample for iOS]()
+
+> [!div class="nextstepaction"]
+> [Download the code sample for macOS]()
#### Step 3: Install dependencies

1. Extract the zip file.
2. In a terminal window, navigate to the folder with the downloaded code sample and run `pod install` to install the latest MSAL library.
-> [!div renderon="portal" class="sxs-lookup"]
-> #### Step 4: Your app is configured and ready to run
-> We have configured your project with values of your app's properties and it's ready to run.
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
->
-> [!div renderon="docs"]
-> #### Step 4: Configure your project
-> If you selected Option 1 above, you can skip these steps.
-> 1. Open the project in XCode.
-> 1. Edit **ViewController.swift** and replace the line starting with 'let kClientID' with the following code snippet. Remember to update the value for `kClientID` with the clientID that you saved when you registered your app in the portal earlier in this quickstart:
->
-> ```swift
-> let kClientID = "Enter_the_Application_Id_Here"
-> ```
-
-> 1. If you're building an app for [Azure AD national clouds](/graph/deployments#app-registration-and-token-service-root-endpoints), replace the line starting with 'let kGraphEndpoint' and 'let kAuthority' with correct endpoints. For global access, use default values:
->
-> ```swift
-> let kGraphEndpoint = "https://graph.microsoft.com/"
-> let kAuthority = "https://login.microsoftonline.com/common"
-> ```
-
-> 1. Other endpoints are documented [here](/graph/deployments#app-registration-and-token-service-root-endpoints). For example, to run the quickstart with Azure AD Germany, use following:
->
-> ```swift
-> let kGraphEndpoint = "https://graph.microsoft.de/"
-> let kAuthority = "https://login.microsoftonline.de/common"
-> ```
-
-> 3. Open the project settings. In the **Identity** section, enter the **Bundle Identifier** that you entered into the portal.
-> 4. Right-click **Info.plist** and select **Open As** > **Source Code**.
-> 5. Under the dict root node, replace `Enter_the_bundle_Id_Here` with the ***Bundle Id*** that you used in the portal. Notice the `msauth.` prefix in the string.
->
-> ```xml
-> <key>CFBundleURLTypes</key>
-> <array>
-> <dict>
-> <key>CFBundleURLSchemes</key>
-> <array>
-> <string>msauth.Enter_the_Bundle_Id_Here</string>
-> </array>
-> </dict>
-> </array>
-> ```
-
-> 6. Build and run the app!
+#### Step 4: Your app is configured and ready to run
+We have configured your project with the values of your app's properties, and it's ready to run.
+> [!NOTE]
+> `Enter_the_Supported_Account_Info_Here`
+
+1. If you're building an app for [Azure AD national clouds](/graph/deployments#app-registration-and-token-service-root-endpoints), replace the lines starting with `let kGraphEndpoint` and `let kAuthority` with the correct endpoints. For global access, use the default values:
+
+ ```swift
+ let kGraphEndpoint = "https://graph.microsoft.com/"
+ let kAuthority = "https://login.microsoftonline.com/common"
+ ```
+
+2. Other endpoints are documented in the [national cloud deployment guide](/graph/deployments#app-registration-and-token-service-root-endpoints). For example, to run the quickstart with Azure AD Germany, use the following:
+
+ ```swift
+ let kGraphEndpoint = "https://graph.microsoft.de/"
+ let kAuthority = "https://login.microsoftonline.de/common"
+ ```
+
+3. Open the project settings. In the **Identity** section, enter the **Bundle Identifier** that you entered into the portal.
+4. Right-click **Info.plist** and select **Open As** > **Source Code**.
+5. Under the dict root node, replace `Enter_the_Bundle_Id_Here` with the **Bundle Identifier** that you used in the portal. Notice the `msauth.` prefix in the string.
+
+ ```xml
+ <key>CFBundleURLTypes</key>
+ <array>
+ <dict>
+ <key>CFBundleURLSchemes</key>
+ <array>
+ <string>msauth.Enter_the_Bundle_Id_Here</string>
+ </array>
+ </dict>
+ </array>
+ ```
+
+6. Build and run the app!
## More Information
active-directory Quickstart V2 Java Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-daemon.md
-+ Previously updated : 01/22/2021 Last updated : 01/10/2022 #Customer intent: As an application developer, I want to learn how my Java app can get an access token and call an API that's protected by Microsoft identity platform endpoint using client credentials flow.
In this quickstart, you download and run a code sample that demonstrates how a Java application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
-> [!div renderon="docs"]
-> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-java-daemon/java-console-daemon.svg)
-
## Prerequisites

To run this sample, you need:
To run this sample, you need:
- [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or greater
- [Maven](https://maven.apache.org/)
-> [!div renderon="docs"]
-> ## Register and download your quickstart app
-
-> [!div renderon="docs" class="sxs-lookup"]
->
-> You have two options to start your quickstart application: Express (Option 1 below), and Manual (Option 2)
->
-> ### Option 1: Register and auto configure your app and then download your code sample
->
-> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application with just one click.
->
-> ### Option 2: Register and manually configure your application and code sample
-
-> [!div renderon="docs"]
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `Daemon-console`. Users of your app might see this name, and you can change it later.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Certificates & secrets**.
-> 1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
-> 1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
-> 1. Select **Application permissions**.
-> 1. Under **User** node, select **User.Read.All**, then select **Add permissions**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Download and configure the quickstart app
->
-> #### Step 1: Configure the application in Azure portal
-> For the code sample for this quickstart to work, you need to create a client secret, and add Graph API's **User.Read.All** application permission.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+> [!div class="sxs-lookup"]
+### Download and configure the quickstart app
-#### Step 2: Download the Java project
+#### Step 1: Configure the application in the Azure portal
+For the code sample for this quickstart to work, you need to create a client secret and add the Graph API's **User.Read.All** application permission.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
-> [!div renderon="docs"]
-> [Download the Java daemon project](https://github.com/Azure-Samples/ms-identity-java-daemon/archive/master.zip)
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+
+#### Step 2: Download the Java project
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+> [!div class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/ms-identity-java-daemon/archive/master.zip)
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure the Java project
->
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, *C:\Azure-Samples*.
-> 1. Navigate to the sub folder **msal-client-credential-secret**.
-> 1. Edit *src\main\resources\application.properties* and replace the values of the fields `AUTHORITY`, `CLIENT_ID`, and `SECRET` with the following snippet:
->
-> ```
-> AUTHORITY=https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/
-> CLIENT_ID=Enter_the_Application_Id_Here
-> SECRET=Enter_the_Client_Secret_Here
-> ```
-> Where:
-> - `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com).
-> - `Enter_the_Client_Secret_Here` - replace this value with the client secret created on step 1.
->
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, go to the app's **Overview** page in the Azure portal. To generate a new key, go to **Certificates & secrets** page.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Admin consent
-
-> [!div renderon="docs"]
-> #### Step 4: Admin consent
+#### Step 3: Admin consent
If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires admin consent: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:

##### Global tenant administrator
-> [!div renderon="docs"]
-> If you are a global tenant administrator, go to **API Permissions** page in **App registrations** in the Azure portal and select **Grant admin consent for {Tenant Name}** (Where {Tenant Name} is the name of your directory).
-
-> [!div renderon="portal" class="sxs-lookup"]
-> If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**.
-> > [!div id="apipermissionspage"]
-> > [Go to the API Permissions page]()
+If you are a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> [!div id="apipermissionspage"]
+> [Go to the API Permissions page]()
##### Standard user
If you're a standard user of your tenant, then you need to ask a global administ
```url
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
```
-
-> [!div renderon="docs"]
-> > Where:
-> > * `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
-> > * `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
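-
Rather than editing this URL by hand, you can also assemble it programmatically. Here's a minimal Python sketch; the tenant and client ID values below are placeholders, not real identifiers.

```python
# Hedged sketch: assemble the admin consent URL from your own values.
# Both arguments below are placeholders, not real identifiers.
from urllib.parse import urlencode

def admin_consent_url(tenant: str, client_id: str) -> str:
    return (f"https://login.microsoftonline.com/{tenant}/adminconsent?"
            + urlencode({"client_id": client_id}))

print(admin_consent_url("contoso.onmicrosoft.com",
                        "11111111-1111-1111-1111-111111111111"))
```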
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 4: Run the application
-
-> [!div renderon="docs"]
-> #### Step 5: Run the application
+#### Step 4: Run the application
You can test the sample directly by running the main method of ClientCredentialGrant.java from your IDE.
ConfidentialClientApplication cca =
> | Where: | Description |
> |---|---|
-> | `CLIENT_SECRET` | Is the client secret created for the application in Azure Portal. |
+> | `CLIENT_SECRET` | Is the client secret created for the application in Azure portal. |
> | `CLIENT_ID` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
> | `AUTHORITY` | The STS endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where {tenant} is the name of your tenant or your tenant ID. |
IAuthenticationResult result;
> | Where: | Description |
> |---|---|
-> | `SCOPE` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure Portal.|
+> | `SCOPE` | Contains the scopes requested. For confidential clients, this should use a format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
[!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
-+ Previously updated : 10/05/2020 Last updated : 01/10/2022
In this quickstart, you download and run a code sample that demonstrates how a .NET Core console application can get an access token to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample also demonstrates how a job or a Windows service can run with an application identity, instead of a user's identity. The sample console application in this quickstart is also a daemon application, so it's a confidential client application.
-> [!div renderon="docs"]
-> The following diagram shows how the sample app works:
->
-> ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
->
-
## Prerequisites

This quickstart requires [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) but will also work with .NET 5.0 SDK.
-> [!div renderon="docs"]
-> ## Register and download the app
-
-> [!div renderon="docs" class="sxs-lookup"]
->
-> You have two options to start building your application: automatic or manual configuration.
->
-> ### Automatic configuration
->
-> If you want to register and automatically configure your app and then download the code sample, follow these steps:
->
-> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/DotNetCoreDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal page for app registration</a>.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application in one click.
->
-> ### Manual configuration
->
-> If you want to manually configure your application and code sample, use the following procedures.
->
-> [!div renderon="docs"]
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</span></a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. For **Name**, enter a name for your application. For example, enter **Daemon-console**. Users of your app will see this name, and you can change it later.
-> 1. Select **Register** to create the application.
-> 1. Under **Manage**, select **Certificates & secrets**.
-> 1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
-> 1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
-> 1. Select **Application permissions**.
-> 1. Under the **User** node, select **User.Read.All**, and then select **Add permissions**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Download and configure your quickstart app
->
-> #### Step 1: Configure your application in the Azure portal
-> For the code sample in this quickstart to work, create a client secret and add the Graph API's **User.Read.All** application permission.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+> [!div class="sxs-lookup"]
+### Download and configure your quickstart app
-#### Step 2: Download your Visual Studio project
+#### Step 1: Configure your application in the Azure portal
+For the code sample in this quickstart to work, create a client secret and add the Graph API's **User.Read.All** application permission.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
-> [!div renderon="docs"]
-> [Download the Visual Studio project](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/archive/master.zip)
->
-> You can run the provided project in either Visual Studio or Visual Studio for Mac.
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+#### Step 2: Download your Visual Studio project
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> Run the project by using Visual Studio 2019.
-> [!div class="sxs-lookup" renderon="portal" id="autoupdate" class="nextstepaction"]
+> [!div id="autoupdate" class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/archive/master.zip)

[!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure your Visual Studio project
->
-> 1. Extract the .zip file to a local folder that's close to the root of the disk. For example, extract to *C:\Azure-Samples*.
->
-> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
->
-> 1. Open the solution in Visual Studio: *1-Call-MSGraph\daemon-console.sln* (optional).
-> 1. In *appsettings.json*, replace the values of `Tenant`, `ClientId`, and `ClientSecret`:
->
-> ```json
-> "Tenant": "Enter_the_Tenant_Id_Here",
-> "ClientId": "Enter_the_Application_Id_Here",
-> "ClientSecret": "Enter_the_Client_Secret_Here"
-> ```
-> In that code:
-> - `Enter_the_Application_Id_Here` is the application (client) ID for the application that you registered.
- To find the values for the application (client) ID and the directory (tenant) ID, go to the app's **Overview** page in the Azure portal.
-> - Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`).
-> - Replace `Enter_the_Client_Secret_Here` with the client secret that you created in step 1.
- To generate a new key, go to the **Certificates & secrets** page.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Admin consent
-
-> [!div renderon="docs"]
-> #### Step 4: Admin consent
+#### Step 3: Admin consent
If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: "Insufficient privileges to complete the operation." This error happens because any app-only permission requires a global administrator of your directory to give consent to your application. Select one of the following options, depending on your role.

##### Global tenant administrator
-> [!div renderon="docs"]
-> If you're a global tenant administrator, go to **Enterprise applications** in the Azure portal. Select your app registration, and select **Permissions** from the **Security** section of the left pane. Then select the large button labeled **Grant admin consent for {Tenant Name}** (where **{Tenant Name}** is the name of your directory).
-
-> [!div renderon="portal" class="sxs-lookup"]
-> If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
-> > [!div id="apipermissionspage"]
-> > [Go to the API Permissions page]()
+If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> [!div id="apipermissionspage"]
+> [Go to the API Permissions page]()
##### Standard user
If you're a standard user of your tenant, ask a global administrator to grant ad
```url
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
```
-> [!div renderon="docs"]
-> In that URL:
-> * Replace `Enter_the_Tenant_Id_Here` with the tenant ID or tenant name (for example, `contoso.microsoft.com`).
-> * `Enter_the_Application_Id_Here` is the application (client) ID for the application that you registered.
- You might see the error "AADSTS50011: No reply address is registered for the application" after you grant consent to the app by using the preceding URL. This error happens because this application and the URL don't have a redirect URI. You can ignore it.
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 4: Run the application
-
-> [!div renderon="docs"]
-> #### Step 5: Run the application
+#### Step 4: Run the application
If you're using Visual Studio or Visual Studio for Mac, press **F5** to run the application. Otherwise, run the application via command prompt, console, or terminal:
This quickstart application uses a client secret to identify itself as a confide
## More information

This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing .NET Core console application.
-> [!div class="sxs-lookup" renderon="portal"]
-> ### How the sample works
->
-> ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
+> [!div class="sxs-lookup"]
+### How the sample works
+
+![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
### MSAL.NET
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-console.md
- Previously updated : 02/17/2021+ Last updated : 01/10/2022 #Customer intent: As an application developer, I want to learn how my Node.js app can get an access token and call an API that is protected by a Microsoft identity platform endpoint using client credentials flow.
This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Nod
* [Node.js](https://nodejs.org/en/download/)
* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-> [!div renderon="docs"]
-> ## Register and download the sample application
->
-> Follow the steps below to get started.
->
-> [!div renderon="docs"]
-> #### Step 1: Register the application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `msal-node-cli`. Users of your app might see this name, and you can change it later.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Certificates & secrets**.
-> 1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
-> 1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
-> 1. Select **Application permissions**.
-> 1. Under **User** node, select **User.Read.All**, then select **Add permissions**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Download and configure the sample app
->
-> #### Step 1: Configure the application in Azure portal
-> For the code sample for this quickstart to work, you need to create a client secret, and add Graph API's **User.Read.All** application permission.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
-#### Step 2: Download the Node.js sample project
+### Download and configure the sample app
-> [!div renderon="docs"]
-> [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-console/archive/main.zip)
+#### Step 1: Configure the application in the Azure portal
+For the code sample for this quickstart to work, you need to create a client secret and add the Graph API's **User.Read.All** application permission.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+
+#### Step 2: Download the Node.js sample project
+
+> [!div class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-console/archive/main.zip)
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure the Node.js sample project
->
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, *C:/Azure-Samples*.
-> 1. Edit *.env* and replace the values of the fields `TENANT_ID`, `CLIENT_ID`, and `CLIENT_SECRET` with the following snippet:
->
-> ```
-> "TENANT_ID": "Enter_the_Tenant_Id_Here",
-> "CLIENT_ID": "Enter_the_Application_Id_Here",
-> "CLIENT_SECRET": "Enter_the_Client_Secret_Here"
-> ```
-> Where:
-> - `Enter_the_Application_Id_Here` - is the **Application (client) ID** of the application you registered earlier. Find this ID on the app registration's **Overview** pane in the Azure portal.
-> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant ID** or **Tenant name** (for example, contoso.microsoft.com). Find these values on the app registration's **Overview** pane in the Azure portal.
-> - `Enter_the_Client_Secret_Here` - replace this value with the client secret you created earlier. To generate a new key, use **Certificates & secrets** in the app registration settings in the Azure portal.
->
-> > [!WARNING]
-> > Any plaintext secret in source code poses an increased security risk. This article uses a plaintext client secret for simplicity only. Use [certificate credentials](active-directory-certificate-credentials.md) instead of client secrets in your confidential client applications, especially those apps you intend to deploy to production.
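->
-> To illustrate the certificate-based alternative that the warning recommends, here's a hedged sketch using MSAL Python for brevity (MSAL Node offers an equivalent `clientCertificate` option in its configuration). The path, IDs, and thumbprint are placeholders.
->
-> ```python
-> # Hedged sketch (MSAL Python shown for brevity): a confidential client
-> # that authenticates with a certificate instead of a plaintext secret.
-> # The path, IDs, and thumbprint are placeholders.
-> import msal
->
-> with open("path/to/private_key.pem") as key_file:
->     private_key = key_file.read()
->
-> app = msal.ConfidentialClientApplication(
->     client_id="Enter_the_Application_Id_Here",
->     authority="https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
->     client_credential={
->         "private_key": private_key,
->         "thumbprint": "Enter_the_Certificate_Thumbprint_Here",
->     },
-> )
-> ```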
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Admin consent
-
-> [!div renderon="docs"]
-> #### Step 4: Admin consent
+#### Step 3: Admin consent
If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires **admin consent**: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:

##### Global tenant administrator
-> [!div renderon="docs"]
-> If you are a global tenant administrator, go to **API Permissions** page in the Azure portal's Application Registration and select **Grant admin consent for {Tenant Name}** (where {Tenant Name} is the name of your directory).
-
-> [!div renderon="portal" class="sxs-lookup"]
-> If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**
+If you are a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
> > [!div id="apipermissionspage"]
> > [Go to the API Permissions page]()
If you're a standard user of your tenant, then you need to ask a global administ
```url
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
```
-> [!div renderon="docs"]
->> Where:
->> * `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
->> * `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 4: Run the application
-
-> [!div renderon="docs"]
-> #### Step 5: Run the application
+#### Step 4: Run the application
Locate the sample's root folder (where `package.json` resides) in a command prompt or console. You'll need to install the dependencies of this sample once:
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
Previously updated : 02/17/2021 Last updated : 01/14/2022 #Customer intent: As an application developer, I want to learn how my Node.js Electron desktop application can get an access token and call an API that's protected by a Microsoft identity platform endpoint.
This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Nod
* [Node.js](https://nodejs.org/en/download/)
* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-> [!div renderon="docs"]
-> ## Register and download the sample application
->
-> Follow the steps below to get started.
->
-> #### Step 1: Register the application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `msal-node-desktop`. Users of your app might see this name, and you can change it later.
-> 1. Select **Register** to create the application.
-> 1. Under **Manage**, select **Authentication**.
-> 1. Select **Add a platform** > **Mobile and desktop applications**.
-> 1. In the **Redirect URIs** section, enter `msal://redirect`.
-> 1. Select **Configure**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 1: Configure the application in Azure portal
-> For the code sample for this quickstart to work, you need to add a reply URL as **msal://redirect**.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
+#### Step 1: Configure the application in the Azure portal
+For the code sample for this quickstart to work, you need to add **msal://redirect** as a reply URL.
+> [!div class="nextstepaction"]
+> [Make this change for me]()
-#### Step 2: Download the Electron sample project
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
-> [!div renderon="docs"]
-> [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-desktop/archive/main.zip)
+#### Step 2: Download the Electron sample project
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-desktop/archive/main.zip)
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-> [!div renderon="docs"]
-> #### Step 3: Configure the Electron sample project
->
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, *C:/Azure-Samples*.
-> 1. Edit *.env* and replace the values of the fields `TENANT_ID` and `CLIENT_ID` with the following snippet:
->
-> ```
-> "TENANT_ID": "Enter_the_Tenant_Id_Here",
-> "CLIENT_ID": "Enter_the_Application_Id_Here"
-> ```
-> Where:
-> - `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
->
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, go to the app's **Overview** page in the Azure portal.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 4: Run the application
-
-> [!div renderon="docs"]
-> #### Step 4: Run the application
+#### Step 4: Run the application
You'll need to install the dependencies of this sample once:
active-directory Quickstart V2 Python Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-daemon.md
-+ Previously updated : 10/22/2019 Last updated : 01/10/2022 #Customer intent: As an application developer, I want to learn how my Python app can get an access token and call an API that's protected by the Microsoft identity platform using client credentials flow.
In this quickstart, you download and run a code sample that demonstrates how a Python application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
-> [!div renderon="docs"]
-> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-python-daemon/python-console-daemon.svg)
-
## Prerequisites

To run this sample, you need:
To run this sample, you need:
- [Python 2.7+](https://www.python.org/downloads/release/python-2713) or [Python 3+](https://www.python.org/downloads/release/python-364/)
- [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
-> [!div renderon="docs"]
-> ## Register and download your quickstart app
-
-> [!div renderon="docs" class="sxs-lookup"]
->
-> You have two options to start your quickstart application: Express (Option 1 below), and Manual (Option 2)
->
-> ### Option 1: Register and auto configure your app and then download your code sample
->
-> 1. Go to the <a href="https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/PythonDaemonQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application with just one click.
->
-> ### Option 2: Register and manually configure your application and code sample
-
-> [!div renderon="docs"]
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `Daemon-console`. Users of your app might see this name, and you can change it later.
-> 1. Select **Register**.
-> 1. Under **Manage**, select **Certificates & secrets**.
-> 1. Under **Client secrets**, select **New client secret**, enter a name, and then select **Add**. Record the secret value in a safe location for use in a later step.
-> 1. Under **Manage**, select **API Permissions** > **Add a permission**. Select **Microsoft Graph**.
-> 1. Select **Application permissions**.
-> 1. Under **User** node, select **User.Read.All**, then select **Add permissions**.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> ### Download and configure the quickstart app
->
-> #### Step 1: Configure your application in Azure portal
-> For the code sample in this quickstart to work, create a client secret and add Graph API's **User.Read.All** application permission.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make these changes for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+> [!div class="sxs-lookup"]
+### Download and configure the quickstart app
-#### Step 2: Download the Python project
+#### Step 1: Configure your application in the Azure portal
+For the code sample in this quickstart to work, create a client secret and add the Graph API's **User.Read.All** application permission.
+> [!div class="nextstepaction"]
+> [Make these changes for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
-> [!div renderon="docs"]
-> [Download the Python daemon project](https://github.com/Azure-Samples/ms-identity-python-daemon/archive/master.zip)
+#### Step 2: Download the Python project
-> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"]
+> [!div class="sxs-lookup nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/ms-identity-python-daemon/archive/master.zip)
-> [!div class="sxs-lookup" renderon="portal"]
+> [!div class="sxs-lookup"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
-
-> [!div renderon="docs"]
-> #### Step 3: Configure the Python project
->
-> 1. Extract the zip file to a local folder close to the root of the disk, for example, **C:\Azure-Samples**.
-> 1. Navigate to the sub folder **1-Call-MsGraph-WithSecret**.
-> 1. Edit **parameters.json** and replace the values of the fields `authority`, `client_id`, and `secret` with the following snippet:
->
-> ```json
-> "authority": "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
-> "client_id": "Enter_the_Application_Id_Here",
-> "secret": "Enter_the_Client_Secret_Here"
-> ```
-> Where:
-> - `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-> - `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
-> - `Enter_the_Client_Secret_Here` - replace this value with the client secret created on step 1.
->
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, go to the app's **Overview** page in the Azure portal. To generate a new key, go to **Certificates & secrets** page.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Admin consent
-
-> [!div renderon="docs"]
-> #### Step 4: Admin consent
+#### Step 3: Admin consent
If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires admin consent: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:

##### Global tenant administrator
-> [!div renderon="docs"]
-> If you are a global tenant administrator, go to **API Permissions** page in **App registrations** in the Azure portal and select **Grant admin consent for {Tenant Name}** (Where {Tenant Name} is the name of your directory).
-
-> [!div renderon="portal" class="sxs-lookup"]
-> If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**.
-> > [!div id="apipermissionspage"]
-> > [Go to the API Permissions page]()
+If you are a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> [!div id="apipermissionspage"]
+> [Go to the API Permissions page]()
##### Standard user
If you're a standard user of your tenant, ask a global administrator to grant ad
```url
https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
```
-> [!div renderon="docs"]
->> Where:
->> * `Enter_the_Tenant_Id_Here` - replace this value with the **Tenant Id** or **Tenant name** (for example, contoso.microsoft.com)
->> * `Enter_the_Application_Id_Here` - is the **Application (client) ID** for the application you registered.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 4: Run the application
-> [!div renderon="docs"]
-> #### Step 5: Run the application
+#### Step 4: Run the application
You'll need to install the dependencies of this sample once.
app = msal.ConfidentialClientApplication(
> | Where: | Description |
> |---|---|
-> | `config["secret"]` | Is the client secret created for the application in Azure Portal. |
+> | `config["secret"]` | Is the client secret created for the application in Azure portal. |
> | `config["client_id"]` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
> | `config["authority"]` | The STS endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where {tenant} is the name of your tenant or your tenant ID. |
if not result:
> | Where: | Description |
> |---|---|
-> | `config["scope"]` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure Portal.|
+> | `config["scope"]` | Contains the scopes requested. For confidential clients, this should use a format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
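
Putting the fragments above together, here's a minimal sketch of the full client credentials flow, consistent with the sample's *parameters.json* values. The placeholders are retained and error handling is abbreviated; treat this as an outline of the sample, not a replacement for it.

```python
# Minimal sketch of the client credentials flow with MSAL Python.
# All config values are placeholders; see parameters.json in the sample.
import msal
import requests

config = {
    "authority": "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
    "client_id": "Enter_the_Application_Id_Here",
    "secret": "Enter_the_Client_Secret_Here",
    "scope": ["https://graph.microsoft.com/.default"],
    "endpoint": "https://graph.microsoft.com/v1.0/users",
}

app = msal.ConfidentialClientApplication(
    config["client_id"],
    authority=config["authority"],
    client_credential=config["secret"],
)

# Look in the token cache first; fall back to a network call.
result = app.acquire_token_silent(config["scope"], account=None)
if not result:
    result = app.acquire_token_for_client(scopes=config["scope"])

if "access_token" in result:
    graph = requests.get(
        config["endpoint"],
        headers={"Authorization": "Bearer " + result["access_token"]},
    )
    print(graph.json())
else:
    print(result.get("error"), result.get("error_description"))
```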
For more information, please see the [reference documentation for `AcquireTokenForClient`](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client).
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-uwp.md
Previously updated : 10/07/2020 Last updated : 01/14/2022+ #Customer intent: As an application developer, I want to learn how my Universal Windows Platform (XAML) application can get an access token and call an API that's protected by the Microsoft identity platform.
In this quickstart, you download and run a code sample that demonstrates how a U
See [How the sample works](#how-the-sample-works) for an illustration.
-> [!div renderon="docs"]
-> ## Prerequisites
->
-> * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
->
-> ## Register and download your quickstart app
-> [!div renderon="docs" class="sxs-lookup"]
-> You have two options to start your quickstart application:
-> * [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-your-code-sample)
-> * [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
->
-> ### Option 1: Register and auto configure your app and then download your code sample
->
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/UwpQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application for you in one click.
->
-> ### Option 2: Register and manually configure your application and code sample
-> [!div renderon="docs"]
-> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution, follow these steps:
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `UWP-App-calling-MsGraph`. Users of your app might see this name, and you can change it later.
-> 1. In the **Supported account types** section, select **Accounts in any organizational directory and personal Microsoft accounts (for example, Skype, Xbox, Outlook.com)**.
-> 1. Select **Register** to create the application, and then record the **Application (client) ID** for use in a later step.
-> 1. Under **Manage**, select **Authentication**.
-> 1. Select **Add a platform** > **Mobile and desktop applications**.
-> 1. Under **Redirect URIs**, select `https://login.microsoftonline.com/common/oauth2/nativeclient`.
-> 1. Select **Configure**.
-
-> [!div renderon="portal" class="sxs-lookup"]
-> #### Step 1: Configure the application
-> For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient`.
-> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
-> > [Make this change for me]()
->
-> > [!div id="appconfigured" class="alert alert-info"]
-> > ![Already configured](media/quickstart-v2-uwp/green-check.png) Your application is configured with these attributes.
-#### Step 2: Download the Visual Studio project
+## Prerequisites
-> [!div renderon="docs"]
-> [Download the Visual Studio project](https://github.com/Azure-Samples/active-directory-dotnet-native-uwp-v2/archive/msal3x.zip)
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
+
+#### Step 1: Configure the application
+For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient`.
+> [!div class="nextstepaction"]
+> [Make this change for me]()
+
+> [!div class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-uwp/green-check.png) Your application is configured with these attributes.
+
+#### Step 2: Download the Visual Studio project
-> [!div class="sxs-lookup" renderon="portal"]
-> Run the project using Visual Studio 2019.
-> [!div class="sxs-lookup" renderon="portal" id="autoupdate" class="nextstepaction"]
+Run the project using Visual Studio 2019.
+> [!div class="nextstepaction"]
> [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnet-native-uwp-v2/archive/msal3x.zip)

[!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-> [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Your app is configured and ready to run
-> We have configured your project with values of your app's properties and it's ready to run.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-> [!div renderon="docs"]
-> #### Step 3: Configure the Visual Studio project
->
-> 1. Extract the .zip archive to a local folder close to the root of your drive. For example, into **C:\Azure-Samples**.
-> 1. Open the project in Visual Studio. Install the **Universal Windows Platform development** workload and any individual SDK components if prompted.
-> 1. In *MainPage.Xaml.cs*, change the value of the `ClientId` variable to the **Application (Client) ID** of the application you registered earlier.
->
-> ```csharp
-> private const string ClientId = "Enter_the_Application_Id_here";
-> ```
->
-> You can find the **Application (client) ID** on the app's **Overview** pane in the Azure portal (**Azure Active Directory** > **App registrations** > *{Your app registration}*).
-> 1. Create and then select a new self-signed test certificate for the package:
-> 1. In the **Solution Explorer**, double-click the *Package.appxmanifest* file.
-> 1. Select **Packaging** > **Choose Certificate...** > **Create...**.
-> 1. Enter a password and then select **OK**. A certificate called *Native_UWP_V2_TemporaryKey.pfx* is created.
-> 1. Select **OK** to dismiss the **Choose a certificate** dialog, and then verify that you see *Native_UWP_V2_TemporaryKey.pfx* in Solution Explorer.
-> 1. In the **Solution Explorer**, right-click the **Native_UWP_V2** project and select **Properties**.
-> 1. Select **Signing**, and then select the .pfx you created in the **Choose a strong name key file** drop-down.
+#### Step 3: Your app is configured and ready to run
+We have configured your project with values of your app's properties and it's ready to run.
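For reference, the heart of the configured sample is an interactive MSAL.NET token request along the following lines. This is a minimal sketch, assuming the `Microsoft.Identity.Client` NuGet package; it keeps the sample's `Enter_the_Application_Id_here` placeholder and the redirect URI configured in step 1.

```csharp
// Minimal sketch of the interactive sign-in the UWP sample performs (MSAL.NET).
using Microsoft.Identity.Client;

string clientId = "Enter_the_Application_Id_here"; // Application (client) ID from the portal

IPublicClientApplication app = PublicClientApplicationBuilder
    .Create(clientId)
    .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
    .Build();

// Ask for a delegated Microsoft Graph permission; on success, the result
// carries an access token for calling Graph.
AuthenticationResult result = await app
    .AcquireTokenInteractive(new[] { "User.Read" })
    .ExecuteAsync();
```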
#### Step 4: Run the application

To run the sample application on your local machine:
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-windows-desktop.md
Previously updated : 12/12/2019 Last updated : 01/14/2022 #Customer intent: As an application developer, I want to learn how my Windows desktop .NET application can get an access token and call an API that's protected by the Microsoft identity platform.
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-breaking-changes.md
You can review the current text of the 50105 error and more on the error lookup
**Change**
-For single tenant applications, a request to add/update AppId URI (identifierUris) will validate that domain in the value of URI is part of the verified domain list in the customer tenant or the value uses the default scheme (`api://{appId}`) provided by AAD.
-This could prevent applications from adding an AppId URI if the domain isn't in the verified domain list or value does not use the default scheme.
+For single tenant applications, adding or updating the AppId URI validates that the domain in the HTTPS scheme URI is listed in the verified domain list in the customer tenant or that the value uses the default scheme (`api://{appId}`) provided by Azure AD. This could prevent applications from adding an AppId URI if the domain isn't in the verified domain list or the value does not use the default scheme.
To find more information on verified domains, refer to the [custom domains documentation](../../active-directory/fundamentals/add-custom-domain.md). The change does not affect existing applications using unverified domains in their AppId URI. It validates only new applications, or existing applications that update an identifier URI or add a new one to the identifierUris collection. The new restrictions apply only to URIs added to an app's identifierUris collection after 10/15/2021. AppId URIs already in an application's identifierUris collection when the restriction takes effect on 10/15/2021 will continue to function even if you add new URIs to that collection.
Azure AD will no longer double-encode this parameter, allowing apps to correctly
**Protocol impacted**: All flows
-On 1 June 2018, the official Azure Active Directory (AAD) Authority for Azure Government changed from `https://login-us.microsoftonline.com` to `https://login.microsoftonline.us`. This change also applied to Microsoft 365 GCC High and DoD, which Azure Government AAD also services. If you own an application within a US Government tenant, you must update your application to sign users in on the `.us` endpoint.
+On 1 June 2018, the official Azure Active Directory (Azure AD) Authority for Azure Government changed from `https://login-us.microsoftonline.com` to `https://login.microsoftonline.us`. This change also applied to Microsoft 365 GCC High and DoD, which Azure Government Azure AD also services. If you own an application within a US Government tenant, you must update your application to sign users in on the `.us` endpoint.
Starting May 5th, Azure AD will begin enforcing the endpoint change, blocking government users from signing into apps hosted in US Government tenants using the public endpoint (`microsoftonline.com`). Impacted apps will begin seeing an error `AADSTS900439` - `USGClientNotSupportedOnPublicEndpoint`. This error indicates that the app is attempting to sign in a US Government user on the public cloud endpoint. If your app is in a public cloud tenant and intended to support US Government users, you will need to [update your app to support them explicitly](./authentication-national-cloud.md). This may require creating a new app registration in the US Government cloud.
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
Depending on the architecture or usage of your application, you may consider dif
> [!NOTE]
> Previously the Microsoft account system (personal accounts) did not support the "Known client application" field, nor could it show combined consent. This has been added, and all apps in the Microsoft identity platform can use the known client application approach for getting consent for OBO calls.
-### /.default and combined consent
+### .default and combined consent
-The middle tier application adds the client to the known client applications list in its manifest, and then the client can trigger a combined consent flow for both itself and the middle tier application. On the Microsoft identity platform, this is done using the [`/.default` scope](v2-permissions-and-consent.md#the-default-scope). When triggering a consent screen using known client applications and `/.default`, the consent screen will show permissions for **both** the client to the middle tier API, and also request whatever permissions are required by the middle-tier API. The user provides consent for both applications, and then the OBO flow works.
+The middle tier application adds the client to the known client applications list in its manifest. If a consent prompt is triggered by the client, the consent flow will be both for itself and the middle tier application. On the Microsoft identity platform, this is done using the [`.default` scope](v2-permissions-and-consent.md#the-default-scope). When triggering a consent screen using known client applications and `.default`, the consent screen will show permissions for **both** the client to the middle tier API, and also request whatever permissions are required by the middle-tier API. The user provides consent for both applications, and then the OBO flow works.
+
+The resource service (API) identified in the request should be the API for which the client application is requesting an access token as a result of the user's sign-in. For example, `scope=openid https://middle-tier-api.example.com/.default` (to request an access token for the middle tier API), or `scope=openid offline_access .default` (when a resource is not identified, it defaults to Microsoft Graph).
+
+Regardless of which API is identified in the authorization request, the consent prompt will be a combined consent prompt that includes all required permissions configured for the client app, as well as all required permissions configured for each middle-tier API that is listed in the client's required permissions list and that has identified the client as a known client application.
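For context, the middle tier's token exchange itself can be done with MSAL.NET along these lines. This is a hedged sketch: `clientId`, `clientSecret`, `userAssertionToken`, and the tenant placeholder are illustrative values, not names from this article.

```csharp
// Sketch of the on-behalf-of exchange performed by the middle-tier API (MSAL.NET).
using Microsoft.Identity.Client;

IConfidentialClientApplication middleTier = ConfidentialClientApplicationBuilder
    .Create(clientId)               // the middle-tier app registration's client ID
    .WithClientSecret(clientSecret) // or a certificate
    .WithAuthority("https://login.microsoftonline.com/{tenant}")
    .Build();

// userAssertionToken is the access token the client presented to the middle tier.
// The .default scope requests all Graph permissions configured on the registration.
AuthenticationResult downstream = await middleTier
    .AcquireTokenOnBehalfOf(
        new[] { "https://graph.microsoft.com/.default" },
        new UserAssertion(userAssertionToken))
    .ExecuteAsync();
```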
### Pre-authorized applications
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
Last updated 07/06/2021 -+
The `scope` parameter is a space-separated list of delegated permissions that th
After the user enters their credentials, the Microsoft identity platform checks for a matching record of *user consent*. If the user hasn't consented to any of the requested permissions in the past, and if the administrator hasn't consented to these permissions on behalf of the entire organization, the Microsoft identity platform asks the user to grant the requested permissions.
-At this time, the `offline_access` ("Maintain access to data you have given it access to") permission and `user.read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality. The `offline_access` permission gives the app access to refresh tokens that are critical for native apps and web apps. The `user.read` permission gives access to the `sub` claim. It allows the client or app to correctly identify the user over time and access rudimentary user information.
+At this time, the `offline_access` ("Maintain access to data you have given it access to") permission and `User.Read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality. The `offline_access` permission gives the app access to refresh tokens that are critical for native apps and web apps. The `User.Read` permission gives access to the `sub` claim. It allows the client or app to correctly identify the user over time and access rudimentary user information.
![Example screenshot that shows work account consent.](./media/v2-permissions-and-consent/work_account_consent.png)
To see a code sample that implements the steps, see the [admin-restricted scopes
### Request the permissions in the app registration portal
-In the app registration portal, applications can list the permissions they require, including both delegated permissions and application permissions. This setup allows the use of the `/.default` scope and the Azure portal's **Grant admin consent** option.
+In the app registration portal, applications can list the permissions they require, including both delegated permissions and application permissions. This setup allows the use of the `.default` scope and the Azure portal's **Grant admin consent** option.
In general, the permissions should be statically defined for a given application. They should be a superset of the permissions that the app will request dynamically or incrementally.

> [!NOTE]
->Application permissions can be requested only through the use of [`/.default`](#the-default-scope). So if your app needs application permissions, make sure they're listed in the app registration portal.
+>Application permissions can be requested only through the use of [`.default`](#the-default-scope). So if your app needs application permissions, make sure they're listed in the app registration portal.
To configure the list of statically requested permissions for an application:
https://graph.microsoft.com/mail.send
| `client_id` | Required | The application (client) ID that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
| `redirect_uri` | Required | The redirect URI where you want the response to be sent for your app to handle. It must exactly match one of the redirect URIs that you registered in the app registration portal. |
| `state` | Recommended | A value included in the request that will also be returned in the token response. It can be a string of any content you want. Use the state to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-|`scope` | Required | Defines the set of permissions being requested by the application. Scopes can be either static (using [`/.default`](#the-default-scope)) or dynamic. This set can include the OpenID Connect scopes (`openid`, `profile`, `email`). If you need application permissions, you must use `/.default` to request the statically configured list of permissions. |
+|`scope` | Required | Defines the set of permissions being requested by the application. Scopes can be either static (using [`.default`](#the-default-scope)) or dynamic. This set can include the OpenID Connect scopes (`openid`, `profile`, `email`). If you need application permissions, you must use `.default` to request the statically configured list of permissions. |
-At this point, Azure AD requires a tenant administrator to sign in to complete the request. The administrator is asked to approve all the permissions that you requested in the `scope` parameter. If you used a static (`/.default`) value, it will function like the v1.0 admin consent endpoint and request consent for all scopes found in the required permissions for the app.
+At this point, Azure AD requires a tenant administrator to sign in to complete the request. The administrator is asked to approve all the permissions that you requested in the `scope` parameter. If you used a static (`.default`) value, it will function like the v1.0 admin consent endpoint and request consent for all scopes found in the required permissions for the app.
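As a sketch, the URL an app sends the administrator to might be built like this; every parameter value below is a placeholder (reusing the sample client ID from the table above), not a working registration.

```csharp
// Hypothetical construction of a v2.0 admin consent URL with a static (.default) scope.
using System;

string tenant = "{tenant-id-or-domain}"; // placeholder
string adminConsentUrl =
    $"https://login.microsoftonline.com/{tenant}/v2.0/adminconsent" +
    "?client_id=6731de76-14a6-49ae-97bc-6eba6914391e" +
    "&state=12345" +
    "&redirect_uri=" + Uri.EscapeDataString("https://localhost/myapp/permissions") +
    "&scope=" + Uri.EscapeDataString("https://graph.microsoft.com/.default");
```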
#### Successful response
Content-Type: application/json
{ "grant_type": "authorization_code", "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
- "scope": "https://outlook.office.com/mail.read https://outlook.office.com/mail.send",
+ "scope": "https://outlook.office.com/Mail.Read https://outlook.office.com/mail.send",
"code": "AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...", "redirect_uri": "https://localhost/myapp", "client_secret": "zc53fwe80980293klaj9823" // NOTE: Only required for web apps
You can use the resulting access token in HTTP requests to the resource. It reli
For more information about the OAuth 2.0 protocol and how to get access tokens, see the [Microsoft identity platform endpoint protocol reference](active-directory-v2-protocols.md).
-## The /.default scope
+## The .default scope
-You can use the `/.default` scope to help migrate your apps from the v1.0 endpoint to the Microsoft identity platform endpoint. The `/.default` scope is built in for every application that refers to the static list of permissions configured on the application registration.
+The `.default` scope is used to refer generically to a resource service (API) in a request, without identifying specific permissions. If consent is necessary, using `.default` signals that consent should be prompted for all required permissions listed in the application registration (for all APIs in the list).
-A `scope` value of `https://graph.microsoft.com/.default` is functionally the same as `resource=https://graph.microsoft.com` on the v1.0 endpoint. By specifying the `https://graph.microsoft.com/.default` scope in its request, your application is requesting an access token that includes scopes for every Microsoft Graph permission you've selected for the app in the app registration portal. The scope is constructed by using the resource URI and `/.default`. So if the resource URI is `https://contosoApp.com`, the scope requested is `https://contosoApp.com/.default`. For cases where you must include a second slash to correctly request the token, see the [section about trailing slashes](#trailing-slash-and-default).
+The scope parameter value is constructed by using the identifier URI for the resource and `.default`, separated by a forward slash (`/`). For example, if the resource's identifier URI is `https://contoso.com`, the scope to request is `https://contoso.com/.default`. For cases where you must include a second slash to correctly request the token, see the [section about trailing slashes](#trailing-slash-and-default).
-The `/.default` scope can be used in any OAuth 2.0 flow. But it's necessary in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md). You also need it when you use the v2 admin consent endpoint to request application permissions.
+Using `scope={resource-identifier}/.default` is functionally the same as `resource={resource-identifier}` on the v1.0 endpoint (where `{resource-identifier}` is the identifier URI for the API, for example `https://graph.microsoft.com` for Microsoft Graph).
-Clients can't combine static (`/.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default+mail.read` results in an error because it combines scope types.
+The `.default` scope can be used in any OAuth 2.0 flow and to initiate [admin consent](v2-admin-consent.md). Its use is required in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md).
-### /.default and consent
+Clients can't combine static (`.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default Mail.Read` results in an error because it combines scope types.
-The `/.default` scope triggers the v1.0 endpoint behavior for `prompt=consent` as well. It requests consent for all permissions that the application registered, regardless of the resource. If it's included as part of the request, the `/.default` scope returns a token that contains the scopes for the resource requested.
+### .default when the user has already given consent
-### /.default when the user has already given consent
+The `.default` scope is functionally identical to the behavior of the `resource`-centric v1.0 endpoint. It carries the consent behavior of the v1.0 endpoint as well. That is, `.default` triggers a consent prompt only if consent has not been granted for any delegated permission between the client and the resource, on behalf of the signed-in user.
-The `/.default` scope is functionally identical to the behavior of the `resource`-centric v1.0 endpoint. It carries the consent behavior of the v1.0 endpoint as well. That is, `/.default` triggers a consent prompt only if the user has granted no permission between the client and the resource.
+If consent does exist, the returned token contains all scopes granted for that resource for the signed-in user. However, if no permission has been granted for the requested resource (or if the `prompt=consent` parameter has been provided), a consent prompt is shown for all required permissions configured on the client application registration, for all APIs in the list.
-If any such consent exists, the returned token contains all scopes the user granted for that resource. However, if no permission has been granted or if the `prompt=consent` parameter has been provided, a consent prompt is shown for all scopes that the client application registered.
+For example, if the scope `https://graph.microsoft.com/.default` is requested, your application is requesting an access token for the Microsoft Graph API. If at least one delegated permission has been granted for Microsoft Graph on behalf of the signed-in user, the sign-in will continue and all Microsoft Graph delegated permissions which have been granted for that user will be included in the access token. If no permissions have been granted for the requested resource (Microsoft Graph, in this example), then a consent prompt will be presented for all required permissions configured on the application, for all APIs in the list.
#### Example 1: The user, or tenant admin, has granted permissions
-In this example, the user or a tenant administrator has granted the `mail.read` and `user.read` Microsoft Graph permissions to the client.
+In this example, the user or a tenant administrator has granted the `Mail.Read` and `User.Read` Microsoft Graph permissions to the client.
-If the client requests `scope=https://graph.microsoft.com/.default`, no consent prompt is shown, regardless of the contents of the client application's registered permissions for Microsoft Graph. The returned token contains the scopes `mail.read` and `user.read`.
+If the client requests `scope=https://graph.microsoft.com/.default`, no consent prompt is shown, regardless of the contents of the client application's registered permissions for Microsoft Graph. The returned token contains the scopes `Mail.Read` and `User.Read`.
#### Example 2: The user hasn't granted permissions between the client and the resource
-In this example, the user hasn't granted consent between the client and Microsoft Graph. The client has registered for the permissions `user.read` and `contacts.read`. It has also registered for the Azure Key Vault scope `https://vault.azure.net/user_impersonation`.
+In this example, the user hasn't granted consent between the client and Microsoft Graph, nor has an administrator. The client has registered for the permissions `User.Read` and `Contacts.Read`. It has also registered for the Azure Key Vault scope `https://vault.azure.net/user_impersonation`.
-When the client requests a token for `scope=https://graph.microsoft.com/.default`, the user sees a consent page for the `user.read` scope, the `contacts.read` scope, and the Key Vault `user_impersonation` scopes. The returned token contains only the `user.read` and `contacts.read` scopes. It can be used only against Microsoft Graph.
+When the client requests a token for `scope=https://graph.microsoft.com/.default`, the user sees a consent page for the Microsoft Graph `User.Read` and `Contacts.Read` scopes, and for the Azure Key Vault `user_impersonation` scope. The returned token contains only the `User.Read` and `Contacts.Read` scopes, and it can be used only against Microsoft Graph.
#### Example 3: The user has consented, and the client requests more scopes
-In this example, the user has already consented to `mail.read` for the client. The client has registered for the `contacts.read` scope.
+In this example, the user has already consented to `Mail.Read` for the client. The client has registered for the `Contacts.Read` scope.
-When the client requests a token by using `scope=https://graph.microsoft.com/.default` and requests consent through `prompt=consent`, the user sees a consent page for all (and only) the permissions that the application registered. The `contacts.read` scope is on the consent page but `mail.read` isn't. The token returned is for Microsoft Graph. It contains `mail.read` and `contacts.read`.
+The client first performs a sign-in with `scope=https://graph.microsoft.com/.default`. Based on the `scopes` parameter of the response, the application's code detects that only `Mail.Read` has been granted. The client then initiates a second sign-in using `scope=https://graph.microsoft.com/.default`, and this time forces consent using `prompt=consent`. If the user is allowed to consent for all the permissions that the application registered, they will be shown the consent prompt. (If not, they will be shown an error message or the [admin consent request](../manage-apps/configure-admin-consent-workflow.md) form.) Both `Contacts.Read` and `Mail.Read` will be in the consent prompt. If consent is granted and the sign-in continues, the token returned is for Microsoft Graph, and contains `Mail.Read` and `Contacts.Read`.
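A hedged MSAL.NET sketch of that two-step pattern follows; the scope check is simplified for illustration, and `app` is assumed to be a previously built `IPublicClientApplication`.

```csharp
// Sketch: sign in with .default, inspect which scopes were actually granted,
// then force a consent prompt if a required permission is missing.
using System.Linq;
using Microsoft.Identity.Client;

string[] scopes = { "https://graph.microsoft.com/.default" };

AuthenticationResult first = await app.AcquireTokenInteractive(scopes).ExecuteAsync();

// The token's Scopes collection reveals what has been granted so far.
if (!first.Scopes.Contains("Contacts.Read"))
{
    // Prompt.Consent re-shows the consent screen for all registered permissions.
    AuthenticationResult second = await app
        .AcquireTokenInteractive(scopes)
        .WithPrompt(Prompt.Consent)
        .ExecuteAsync();
}
```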
-### Using the /.default scope with the client
+### Using the .default scope with the client
-In some cases, a client can request its own `/.default` scope. The following example demonstrates this scenario.
+In some cases, a client can request its own `.default` scope. The following example demonstrates this scenario.
-```HTTP
+```http
// Line breaks are for legibility only.
-GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
-response_type=token //Code or a hybrid flow is also possible here
-&client_id=9ada6f8a-6d83-41bc-b169-a306c21527a5
-&scope=9ada6f8a-6d83-41bc-b169-a306c21527a5/.default
-&redirect_uri=https%3A%2F%2Flocalhost
-&state=1234
+GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
+ ?response_type=token //Code or a hybrid flow is also possible here
+ &client_id=9ada6f8a-6d83-41bc-b169-a306c21527a5
+ &scope=9ada6f8a-6d83-41bc-b169-a306c21527a5/.default
+ &redirect_uri=https%3A%2F%2Flocalhost
+ &state=1234
```
-This code example produces a consent page for all registered permissions if the preceding descriptions of consent and `/.default` apply to the scenario. Then the code returns an `id_token`, rather than an access token.
+This code example produces a consent page for all registered permissions if the preceding descriptions of consent and `.default` apply to the scenario. Then the code returns an `id_token`, rather than an access token.
This behavior accommodates some legacy clients that are moving from Azure AD Authentication Library (ADAL) to the Microsoft Authentication Library (MSAL). This setup *shouldn't* be used by new clients that target the Microsoft identity platform.
-### Client credentials grant flow and /.default
+### Client credentials grant flow and .default
-Another use of `/.default` is to request application permissions (or *roles*) in a noninteractive application like a daemon app that uses the [client credentials](v2-oauth2-client-creds-grant-flow.md) grant flow to call a web API.
+Another use of `.default` is to request app roles (also known as application permissions) in a non-interactive application like a daemon app that uses the [client credentials](v2-oauth2-client-creds-grant-flow.md) grant flow to call a web API.
-To create application permissions (roles) for a web API, see [Add app roles in your application](howto-add-app-roles-in-azure-ad-apps.md).
+To define app roles (application permissions) for a web API, see [Add app roles in your application](howto-add-app-roles-in-azure-ad-apps.md).
-Client credentials requests in your client app *must* include `scope={resource}/.default`. Here, `{resource}` is the web API that your app intends to call. Issuing a client credentials request by using individual application permissions (roles) is *not* supported. All the application permissions (roles) that have been granted for that web API are included in the returned access token.
+Client credentials requests in your client service *must* include `scope={resource}/.default`. Here, `{resource}` is the web API that your app intends to call and for which it wants an access token. Issuing a client credentials request by using individual application permissions (roles) is *not* supported. All the app roles (application permissions) that have been granted for that web API are included in the returned access token.
-To grant access to the application permissions you define, including granting admin consent for the application, see [Configure a client application to access a web API](quickstart-configure-app-access-web-apis.md).
+To grant access to the app roles you define, including granting admin consent for the application, see [Configure a client application to access a web API](quickstart-configure-app-access-web-apis.md).
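As a sketch, a daemon app using MSAL.NET issues the request like this; the client ID, secret, and tenant values are placeholders.

```csharp
// Sketch of a client credentials request using the .default scope (MSAL.NET).
using Microsoft.Identity.Client;

IConfidentialClientApplication daemonApp = ConfidentialClientApplicationBuilder
    .Create("{client-id}")
    .WithClientSecret("{client-secret}")
    .WithAuthority("https://login.microsoftonline.com/{tenant-id}")
    .Build();

// All app roles granted to this app for Microsoft Graph come back in the token;
// individual application permissions can't be requested here.
AuthenticationResult token = await daemonApp
    .AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
    .ExecuteAsync();
```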
-### Trailing slash and /.default
+### Trailing slash and .default
-Some resource URIs have a trailing forward slash, for example, `https://contoso.com/` as opposed to `https://contoso.com`. The trailing slash can cause problems with token validation. Problems occur primarily when a token is requested for Azure Resource Manager (`https://management.azure.com/`). In this case, a trailing slash on the resource URI means the slash must be present when the token is requested. So when you request a token for `https://management.azure.com/` and use `/.default`, you must request `https://management.azure.com//.default` (notice the double slash!). In general, if you verify that the token is being issued, and if the token is being rejected by the API that should accept it, consider adding a second forward slash and trying again.
+Some resource URIs have a trailing forward slash, for example, `https://contoso.com/` as opposed to `https://contoso.com`. The trailing slash can cause problems with token validation. Problems occur primarily when a token is requested for Azure Resource Manager (`https://management.azure.com/`). In this case, a trailing slash on the resource URI means the slash must be present when the token is requested. So when you request a token for `https://management.azure.com/` and use `.default`, you must request `https://management.azure.com//.default` (notice the double slash!). In general, if you verify that the token is being issued, and if the token is being rejected by the API that should accept it, consider adding a second forward slash and trying again.
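For example, reusing the confidential client from the earlier sketch, an Azure Resource Manager request would look like this (note the double slash):

```csharp
// The ARM resource URI ends in a slash, so the scope needs "//" before .default.
AuthenticationResult armToken = await daemonApp
    .AcquireTokenForClient(new[] { "https://management.azure.com//.default" })
    .ExecuteAsync();
```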
## Troubleshooting permissions and consent
active-directory V2 Saml Bearer Assertion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-saml-bearer-assertion.md
Title: Microsoft identity platform & SAML bearer assertion flow | Azure
-description: Learn how to fetch data from Microsoft Graph without prompting the user for credentials using the SAML bearer assertion flow.
+ Title: Exchange a SAML token issued by Active Directory Federation Services (AD FS) for a Microsoft Graph access token
+
+description: Learn how to fetch data from Microsoft Graph without prompting an AD FS-federated user for credentials by using the SAML bearer assertion flow.
-+ Previously updated : 10/21/2021-- Last updated : 01/11/2022++
-# Microsoft identity platform and OAuth 2.0 SAML bearer assertion flow
-The OAuth 2.0 SAML bearer assertion flow allows you to request an OAuth access token using a SAML assertion when a client needs to use an existing trust relationship. The signature applied to the SAML assertion provides authentication of the authorized app. A SAML assertion is an XML security token issued by an identity provider and consumed by a service provider. The service provider relies on its content to identify the assertionΓÇÖs subject for security-related purposes.
+# Exchange a SAML token issued by AD FS for a Microsoft Graph access token
-The SAML assertion is posted to the OAuth token endpoint. The endpoint processes the assertion and issues an access token based on prior approval of the app. The client isnΓÇÖt required to have or store a refresh token, nor is the client secret required to be passed to the token endpoint.
+To enable single sign-on (SSO) in applications that use SAML tokens issued by Active Directory Federation Services (AD FS) and also require access to Microsoft Graph, follow the steps in this article.
-SAML Bearer Assertion flow is useful when fetching data from Microsoft Graph APIs (which only support delegated permissions) without prompting the user for credentials. In this scenario the client credentials grant, which is preferred for background processes, doesn't work.
+You'll enable the SAML bearer assertion flow to exchange a SAMLv1 token issued by the federated AD FS instance for an OAuth 2.0 access token for Microsoft Graph. When the user's browser is redirected to Azure Active Directory (Azure AD) to authenticate them, the browser picks up the session from the SAML sign-in instead of asking the user to enter their credentials.
-For applications that do interactive browser-based sign-in to get a SAML assertion and add access to an OAuth protected API (such as Microsoft Graph), you can make an OAuth request to get an access token for the API. When the browser is redirected to Azure Active Directory (Azure AD) to authenticate the user, the browser will pick up the session from the SAML sign-in and the user doesn't need to enter their credentials.
+> [!IMPORTANT]
+> This scenario works **only** when AD FS is the federated identity provider that issued the original SAMLv1 token. You **cannot** exchange a SAMLv2 token issued by Azure AD for a Microsoft Graph access token.
-The OAuth SAML Bearer Assertion flow is also supported for users authenticating with identity providers such as Active Directory Federation Services (ADFS) federated to Azure AD. The SAML assertion obtained from ADFS can be used in an OAuth flow to authenticate the user.
+## Prerequisites
-![OAuth flow](./media/v2-saml-bearer-assertion/1.png)
+- AD FS federated as an identity provider for single sign-on; see [Setting up AD FS and Enabling Single Sign-On to Office 365](/archive/blogs/canitpro/step-by-step-setting-up-ad-fs-and-enabling-single-sign-on-to-office-365) for an example.
+- [Postman](https://www.getpostman.com/) for testing requests.
+
+## Scenario overview
+
+The OAuth 2.0 SAML bearer assertion flow allows you to request an OAuth access token using a SAML assertion when a client needs to use an existing trust relationship. The signature applied to the SAML assertion provides authentication of the authorized app. A SAML assertion is an XML security token issued by an identity provider and consumed by a service provider. The service provider relies on its content to identify the assertion's subject for security-related purposes.
-## Call Graph using SAML bearer assertion
-Now let us understand on how we can actually fetch SAML assertion programatically. The programmatic approach is tested with ADFS. However, the approach works with any identity provider that supports the return of SAML assertion programatically. The basic process is: get a SAML assertion, get an access token, and access Microsoft Graph.
+The SAML assertion is posted to the OAuth token endpoint. The endpoint processes the assertion and issues an access token based on prior approval of the app. The client isn't required to have or store a refresh token, nor is the client secret required to be passed to the token endpoint.
+
+![OAuth flow](./media/v2-saml-bearer-assertion/1.png)
-### Prerequisites
+## Register the application with Azure AD
-Establish a trust relationship between the authorization server/environment (Microsoft 365) and the identity provider, or issuer of the SAML 2.0 bearer assertion. To configure ADFS for single sign-on and as an identity provider, see [Setting up AD FS and Enabling Single Sign-On to Office 365](/archive/blogs/canitpro/step-by-step-setting-up-ad-fs-and-enabling-single-sign-on-to-office-365).
+Start by registering the application in the [portal](https://ms.portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade):
-Register the application in the [portal](https://ms.portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade):
-1. Sign in to the [app registration page of the portal](https://ms.portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) (Please note that we are using the v2.0 endpoints for Graph API and hence need to register the application in Azure portal. Otherwise we could have used the registrations in Azure AD).
+1. Sign in to the [app registration page of the portal](https://ms.portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade). (This scenario uses the v2.0 endpoints for the Graph API, so the application must be registered in the Azure portal rather than through the legacy Azure AD registration experience.)
1. Select **New registration**.
-1. When the **Register an application** page appears, enter your application's registration information:
+1. When the **Register an application** page appears, enter your application's registration information:
    1. **Name** - Enter a meaningful application name that will be displayed to users of the app.
    1. **Supported account types** - Select which accounts you would like your application to support.
    1. **Redirect URI (optional)** - Select the type of app you're building, Web, or Public client (mobile & desktop), and then enter the redirect URI (or reply URL) for your application.
1. When finished, select **Register**.
1. Make a note of the application (client) ID.
1. In the left pane, select **Certificates & secrets**. Click **New client secret** in the **Client secrets** section. Copy the new client secret; you won't be able to retrieve it after you leave the page.
-1. In the left pane, select **API permissions** and then **Add a permission**. Select **Microsoft Graph**, then **delegated permissions**, and then select **Tasks.read** since we intend to use the Outlook Graph API.
+1. In the left pane, select **API permissions** and then **Add a permission**. Select **Microsoft Graph**, then **Delegated permissions**, and then select **Tasks.Read**, since this scenario uses the Outlook tasks API through Microsoft Graph.
-Install [Postman](https://www.getpostman.com/), a tool required to test the sample requests. Later, you can convert the requests to code.
+## Get the SAML assertion from AD FS
-### Get the SAML assertion from ADFS
-Create a POST request to the ADFS endpoint using SOAP envelope to fetch the SAML assertion:
+Create a POST request to the AD FS endpoint using a SOAP envelope to fetch the SAML assertion:
![Get SAML assertion](./media/v2-saml-bearer-assertion/2.png)
Header values:
![Header values](./media/v2-saml-bearer-assertion/3.png)
-ADFS request body:
+AD FS request body:
-![ADFS request body](./media/v2-saml-bearer-assertion/4.png)
+![AD FS request body](./media/v2-saml-bearer-assertion/4.png)
-Once the request is posted successfully, you should receive a SAML assertion from ADFS. Only the **SAML:Assertion** tag data is required, convert it to base64 encoding to use in further requests.
+Once the request is posted successfully, you should receive a SAML assertion from AD FS. Only the **SAML:Assertion** tag data is required; convert it to base64 encoding to use in further requests.
-### Get the OAuth2 token using the SAML assertion
+## Get the OAuth 2.0 token using the SAML assertion
-Fetch an OAuth2 token using the ADFS assertion response.
+Fetch an OAuth 2.0 token using the AD FS assertion response.
1. Create a POST request as shown below with the header values:
Fetch an OAuth2 token using the ADFS assertion response.
![Request body](./media/v2-saml-bearer-assertion/6.png)

1. Upon successful request, you'll receive an access token from Azure Active Directory.
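The same token request can also be issued from code. The following is a hedged `HttpClient` sketch: the SAML 1.1 bearer grant type is assumed for the AD FS-issued assertion, and the tenant, client ID, secret, and assertion values are placeholders.

```csharp
// Sketch: exchange the base64-encoded SAML assertion for an OAuth 2.0 access token.
using System.Collections.Generic;
using System.Net.Http;

string samlAssertionBase64 = "<base64-encoded SAML:Assertion from AD FS>"; // placeholder

var http = new HttpClient();
var body = new FormUrlEncodedContent(new Dictionary<string, string>
{
    ["grant_type"] = "urn:ietf:params:oauth:grant-type:saml1_1-bearer",
    ["client_id"] = "{client-id}",
    ["client_secret"] = "{client-secret}",
    ["assertion"] = samlAssertionBase64,
    ["scope"] = "https://graph.microsoft.com/Tasks.Read openid",
});

HttpResponseMessage response = await http.PostAsync(
    "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token", body);
string tokenJson = await response.Content.ReadAsStringAsync(); // contains access_token
```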
-### Get the data with the OAuth2 token
+## Get the data with the OAuth 2.0 token
-After receiving the access token, call the Graph APIs (Outlook tasks in this example).
+After receiving the access token, call the Graph APIs (Outlook tasks in this example).
1. Create a GET request with the access token fetched in the previous step:
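   A hedged sketch of that call with `HttpClient` follows; the Outlook tasks URL shown is the beta Microsoft Graph endpoint (an assumption for this example), and `accessToken` is the token from the previous step.

```csharp
// Sketch: call Microsoft Graph with the access token as a bearer token.
using System.Net.Http;
using System.Net.Http.Headers;

var graph = new HttpClient();
graph.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", accessToken);

// Outlook tasks are assumed to be exposed on the beta Graph endpoint here.
string tasksJson = await graph.GetStringAsync(
    "https://graph.microsoft.com/beta/me/outlook/tasks");
```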
active-directory Web Api Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/web-api-quickstart.md
+
+ Title: "Quickstart: Protect a web API with the Microsoft identity platform | Azure"
+
+description: In this quickstart, you download and modify a code sample that demonstrates how to protect a web API by using the Microsoft identity platform for authorization.
+++++++ Last updated : 01/11/2022++
+zone_pivot_groups: web-api-quickstart
+#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my web app can sign in users of personal accounts, work accounts, and school accounts.
++
+# Quickstart: Protect a web API with the Microsoft identity platform
++
active-directory Direct Federation Adfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/direct-federation-adfs.md
An AD FS server must already be set up and functioning before you begin this pro
### Add the claim description

1. On your AD FS server, select **Tools** > **AD FS management**.
-2. In the navigation pane, select **Service** > **Claim Descriptions**.
-3. Under **Actions**, select **Add Claim Description**.
-4. In the **Add a Claim Description** window, specify the following values:
+1. In the navigation pane, select **Service** > **Claim Descriptions**.
+1. Under **Actions**, select **Add Claim Description**.
+1. In the **Add a Claim Description** window, specify the following values:
   - **Display Name**: Persistent Identifier
   - **Claim identifier**: `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`
   - Select the check box for **Publish this claim description in federation metadata as a claim type that this federation service can accept**.
   - Select the check box for **Publish this claim description in federation metadata as a claim type that this federation service can send**.
-5. Click **Ok**.
+1. Click **Ok**.
-### Add the relying party trust and claim rules
+### Add the relying party trust
1. On the AD FS server, go to **Tools** > **AD FS management**.
-2. In the navigation pane, select **Trust Relationships** > **Relying Party Trusts**.
-3. Under **Actions**, select **Add Relying Party Trust**.
-4. In the add relying party trust wizard for **Select Data Source**, use the option **Import data about the relying party published online or on a local network**. Specify this federation metadata URL- https://nexus.microsoftonline-p.com/federationmetadata/saml20/federationmetadata.xml. Leave other default selections. Select **Close**.
-5. The **Edit Claim Rules** wizard opens.
-6. In the **Edit Claim Rules** wizard, select **Add Rule**. In **Choose Rule Type**, select **Send LDAP Attributes as Claims**. Select **Next**.
-7. In **Configure Claim Rule**, specify the following values:
+1. In the navigation pane, select **Relying Party Trusts**.
+1. Under **Actions**, select **Add Relying Party Trust**.
+1. In the **Add Relying Party Trust** wizard, select **Claims aware**, and then select **Start**.
+1. In the **Select Data Source** section, select the check box for **Import data about the relying party published online or on a local network**. Enter this federation metadata URL: `https://nexus.microsoftonline-p.com/federationmetadata/saml20/federationmetadata.xml`. Select **Next**.
+1. Leave the other settings in their default options. Continue to select **Next**, and finally select **Close** to close the wizard.
+
+### Create claims rules
+
+1. Right-click the relying party trust you created, and then select **Edit Claim Issuance Policy**.
+1. In the **Edit Claim Rules** wizard, select **Add Rule**.
+1. In **Claim rule template**, select **Send LDAP Attributes as Claims**.
+1. In **Configure Claim Rule**, specify the following values:
   - **Claim rule name**: Email claim rule
   - **Attribute store**: Active Directory
   - **LDAP Attribute**: E-Mail-Addresses
   - **Outgoing Claim Type**: E-Mail Address
-8. Select **Finish**.
-9. The **Edit Claim Rules** window will show the new rule. Click **Apply**.
-10. Click **Ok**.
-
-### Create an email transform rule
-1. Go to **Edit Claim Rules** and click **Add Rule**. In **Choose Rule Type**, select **Transform an Incoming Claim** and click **Next**.
-2. In **Configure Claim Rule**, specify the following values:
+1. Select **Finish**.
+1. Select **Add Rule**.
+1. In **Claim rule template**, select **Transform an Incoming Claim**, and then select **Next**.
+1. In **Configure Claim Rule**, specify the following values:
   - **Claim rule name**: Email transform rule
   - **Incoming claim type**: E-mail Address
An AD FS server must already be set up and functioning before you begin this pro
   - **Outgoing name ID format**: Persistent Identifier
   - Select **Pass through all claim values**.
-3. Click **Finish**.
-4. The **Edit Claim Rules** window will show the new rules. Click **Apply**.
-5. Click **OK**. The AD FS server is now configured for federation using the SAML 2.0 protocol.
+1. Select **Finish**.
+1. The **Edit Claim Rules** pane shows the new rules. Select **Apply**.
+1. Select **OK**. The AD FS server is now configured for federation using the SAML 2.0 protocol.
+
+## Configure AD FS for WS-Fed federation
-## Configure AD FS for WS-Fed federation
Azure AD B2B can be configured to federate with IdPs that use the WS-Fed protocol with the specific requirements listed below. Currently, the two WS-Fed providers that have been tested for compatibility with Azure AD are AD FS and Shibboleth. Here, we'll use Active Directory Federation Services (AD FS) as an example of the WS-Fed IdP. For more information about establishing a relying party trust between a WS-Fed-compliant provider and Azure AD, download the Azure AD Identity Provider Compatibility Docs. To set up federation, the following attributes must be received in the WS-Fed message from the IdP. These attributes can be configured by linking to the online security token service XML file or by entering them manually. Step 12 in [Create a test AD FS instance](https://medium.com/in-the-weeds/create-a-test-active-directory-federation-services-3-0-instance-on-an-azure-virtual-machine-9071d978e8ed) describes how to find the AD FS endpoints or how to generate your metadata URL, for example `https://fs.iga.azure-test.net/federationmetadata/2007-06/federationmetadata.xml`.
active-directory Active Directory Get Started Premium https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-get-started-premium.md
Before you sign up for Active Directory Premium 1 or Premium 2, you must first d
Signing up using your Azure subscription with previously purchased and activated Azure AD licenses automatically activates the licenses in the same directory. If that's not the case, you must still activate your license plan and your Azure AD access. For more information about activating your license plan, see [Activate your new license plan](#activate-your-new-license-plan). For more information about activating your Azure AD access, see [Activate your Azure AD access](#activate-your-azure-ad-access).

## Sign up using your existing Azure or Microsoft 365 subscription
-As an Azure or Microsoft 365 subscriber, you can purchase the Azure Active Directory Premium editions online. For detailed steps, see [How to Purchase Azure Active Directory Premium - New Customers](https://channel9.msdn.com/Series/Azure-Active-Directory-Videos-Demos/How-to-Purchase-Azure-Active-Directory-Premium-New-Customers).
+As an Azure or Microsoft 365 subscriber, you can purchase the Azure Active Directory Premium editions online. For detailed steps, see How to Purchase Azure Active Directory Premium - New Customers.
## Sign up using your Enterprise Mobility + Security licensing plan

Enterprise Mobility + Security is a suite composed of Azure AD Premium, Azure Information Protection, and Microsoft Intune. If you already have an EMS license, you can get started with Azure AD, using one of these licensing options:
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
na Previously updated : 07/2/2021 Last updated : 12/15/2021
Follow these steps to view the list of other access packages that have indicated
1. Click on **Incompatible With**.
+## Identifying users who already have incompatible access to another access package
+
+If you are configuring incompatible access settings on an access package that already has users assigned to it, then any of those users who also have an assignment to the incompatible access package or groups will not be able to re-request access.
+
+**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
+
+Follow these steps to view the list of users who have assignments to two access packages.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Click **Azure Active Directory**, and then click **Identity Governance**.
+
+1. In the left menu, click **Access packages** and then open the access package where you will be configuring incompatible assignments.
+
+1. In the left menu, click **Assignments**.
+
+1. In the **Status** field, ensure that **Delivered** status is selected.
+
+1. Click the **Download** button and save the resulting CSV file as the first file with a list of assignments.
+
+1. In the navigation bar, click **Identity Governance**.
+
+1. In the left menu, click **Access packages** and then open the access package which you plan to indicate as incompatible.
+
+1. In the left menu, click **Assignments**.
+
+1. In the **Status** field, ensure that the **Delivered** status is selected.
+
+1. Click the **Download** button and save the resulting CSV file as the second file with a list of assignments.
+
+1. Use a spreadsheet program such as Excel to open the two files.
+
+1. Users who appear in both files already have incompatible assignments.
+
+### Identifying users who already have incompatible access programmatically
+
+You can also query the users who have assignments to an access package with the `Get-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or later.
+
+For example, if you have two access packages, one with ID `29be137f-b006-426c-b46a-0df3d4e25ccd` and the other with ID `cce10272-68d8-4482-8ba3-a5965c86cfe5`, then you could retrieve the users who have assignments to the first access package, and then compare them to the users who have assignments to the second access package. You can also report the users who have assignments delivered to both, using a PowerShell script similar to the following:
+
+```powershell
+# Sign in with permission to read entitlement management assignments.
+$c = Connect-MgGraph -Scopes "EntitlementManagement.Read.All"
+Select-MgProfile -Name "beta"
+# IDs of the two access packages being compared.
+$ap_w_id = "29be137f-b006-426c-b46a-0df3d4e25ccd"
+$ap_e_id = "cce10272-68d8-4482-8ba3-a5965c86cfe5"
+# Retrieve only assignments that are in the Delivered state.
+$apa_w_filter = "accessPackage/id eq '" + $ap_w_id + "' and assignmentState eq 'Delivered'"
+$apa_e_filter = "accessPackage/id eq '" + $ap_e_id + "' and assignmentState eq 'Delivered'"
+$apa_w = Get-MgEntitlementManagementAccessPackageAssignment -Filter $apa_w_filter -ExpandProperty target -All
+$apa_e = Get-MgEntitlementManagementAccessPackageAssignment -Filter $apa_e_filter -ExpandProperty target -All
+# Index the second package's assignments by target user ID, then report the
+# email address of each user who holds assignments to both packages.
+$htt = @{}; foreach ($e in $apa_e) { if ($null -ne $e.Target -and $null -ne $e.Target.Id) {$htt[$e.Target.Id] = $e} }
+foreach ($w in $apa_w) { if ($null -ne $w.Target -and $null -ne $w.Target.Id -and $htt.ContainsKey($w.Target.Id)) { write-output $w.Target.Email } }
+```
+
+## Configuring multiple access packages for override scenarios
+
+If an access package has been configured as incompatible, then a user who has an assignment to that incompatible access package cannot request the access package, nor can an administrator make a new assignment that would be incompatible.
+
+For example, if the **Production environment** access package has marked the **Development environment** package as incompatible, and a user has an assignment to the **Development environment** access package, then the access package manager for **Production environment** cannot create an assignment for that user to the **Production environment**. In order to proceed with that assignment, the user's existing assignment to the **Development environment** access package must first be removed.
+
+If there is an exceptional situation where separation of duties rules might need to be overridden, then configuring an additional access package to capture the users who have overlapping access rights will make the exceptional nature of those assignments clear to approvers, reviewers, and auditors.
+
+For example, if there were a scenario in which some users need access to both the production and development environments at the same time, you could create a new access package, **Production and development environments**. That access package could have as its resource roles some of the resource roles of the **Production environment** access package and some of the resource roles of the **Development environment** access package.
+
+If the motivation for marking access as incompatible is that one resource's roles are particularly problematic, then that resource could be omitted from the combined access package and instead require explicit administrator assignment of a user to the role. If that resource is a third-party application or your own application, you can ensure oversight by monitoring those role assignments using the *Application role assignment activity* workbook described in the next section.
+
+Depending on your governance processes, that combined access package could have as its policy either:
+
+ - a **direct assignments policy**, so that only an access package manager would be interacting with the access package, or
+ - a **users can request access policy**, so that users can request access, potentially with an additional approval stage
+
+This policy could have lifecycle settings with a much shorter expiration period than policies on other access packages, or could require more frequent access reviews, with regular oversight so that users do not retain access longer than necessary.
+
## Monitor and report on access assignments

You can use Azure Monitor workbooks to get insights on how users have been receiving their access.
active-directory Plan Connect Topologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/plan-connect-topologies.md
na Previously updated : 11/27/2018 Last updated : 01/14/2022
We recommend having a single tenant in Azure AD for an organization. Before you
### (Public preview) Sync AD objects to multiple Azure AD tenants
-![Diagram that shows a topology of multiple Azure A D tenants.](./media/plan-connect-topologies/multi-tenant-1.png)
+![Diagram that shows a topology of multiple Azure A D tenants.](./media/plan-connect-topologies/multi-tenant-2.png)
> [!NOTE]
> This topology is currently in Public Preview. As the supported scenarios might still change, we recommend not deploying this topology in a production environment.
active-directory Reference Connect Dirsync Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-dirsync-deprecated.md
If you are running DirSync, there are two ways you can upgrade: In-place upgrade
| [Upgrade from DirSync](how-to-dirsync-upgrade-get-started.md) |<li>If you have an existing DirSync server already running.</li> | | [Upgrade from Azure AD Sync](how-to-upgrade-previous-version.md) |<li>If you are moving from Azure AD Sync.</li> |
-If you want to see how to do an in-place upgrade from DirSync to Azure AD Connect, then see this Channel 9 video:
-
-> [!VIDEO https://channel9.msdn.com/Series/Azure-Active-Directory-Videos-Demos/Azure-Active-Directory-Connect-in-place-upgrade-from-legacy-tools/player]
->
->
## FAQ

**Q: I have received an email notification from the Azure Team and/or a message from the Microsoft 365 message center, but I am using Connect.**
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
Steps 1-4 in the diagram illustrate the front-end pre-authentication exchange be
Whether a direct employee, affiliate, or consumer, most users are already acquainted with the Office 365 login experience, so accessing BIG-IP services via SHA remains largely familiar.
-Users now find their BIG-IP published services consolidated in the [MyApps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) or [O365 launchpads](https://o365pp.blob.core.windows.net/media/Resources/Microsoft%20365%20Business/Launchpad%20Overview_for%20Partners_10292019.pdf) along with self-service capabilities to a broader set of services, no matter the type of device or location. Users can even continue accessing published services directly via the BIG-IPs proprietary Webtop portal, if preferred. When logging off, SHA ensures a usersΓÇÖ session is terminated at both ends, the BIG-IP and Azure AD, ensuring services remain fully protected from unauthorized access.
+Users now find their BIG-IP published services consolidated in the [MyApps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) or [O365 launchpads](https://airhead.io/airbase/launchpads/R3kW-RkDFEedipcU1AFlnA) along with self-service capabilities to a broader set of services, no matter the type of device or location. Users can even continue accessing published services directly via the BIG-IP's proprietary Webtop portal, if preferred. When logging off, SHA ensures a user's session is terminated at both ends, the BIG-IP and Azure AD, ensuring services remain fully protected from unauthorized access.
The screenshots provided are from the Azure AD app portal that users access securely to find their BIG-IP published services and for managing their account properties.
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
The following providers of software-defined perimeter (SDP) solutions connect wi
| **SDP vendor** | **Link** | | | |
-| Datawiza Access Broker | [https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/datawiza-with-azure-ad](./datawiza-with-azure-ad.md) |
+| Datawiza Access Broker | [https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad](./datawiza-with-azure-ad.md) |
| Perimeter 81 | [https://docs.microsoft.com/azure/active-directory/saas-apps/perimeter-81-tutorial](../saas-apps/perimeter-81-tutorial.md) |
-| Silverfort Authentication Platform | [https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/silverfort-azure-ad-integration](./silverfort-azure-ad-integration.md) |
+| Silverfort Authentication Platform | [https://docs.microsoft.com/azure/active-directory/manage-apps/silverfort-azure-ad-integration](./silverfort-azure-ad-integration.md) |
| Strata Maverics Identity Orchestrator | [https://docs.microsoft.com/azure/active-directory/saas-apps/maverics-identity-orchestrator-saml-connector-tutorial](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md) | | Zscaler Private Access | [https://docs.microsoft.com/azure/active-directory/saas-apps/zscalerprivateaccess-tutorial](../saas-apps/zscalerprivateaccess-tutorial.md) |
active-directory Troubleshoot Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
If you experience any of these problems, do the following things:
- Try the manual capture process again. Make sure that the red markers are over the correct fields.
- If the manual capture process seems to stop responding or the sign-in page doesn't respond, try the manual capture process again. But this time, after completing the process, press the F12 key to open your browser's developer console. Select the **console** tab. Type **window.location="*&lt;the sign-in URL that you specified when configuring the app&gt;*"**, and then press Enter. This forces a page redirect that ends the capture process and stores the fields that were captured.
-### I can't add another user to my Password-based SSO app
+### I can't add another user to my password-based SSO app
-Password-based SSO app has a limit of 48 users. Thus, it has a limit of 48 keys for username/password pairs per app.
-If you want to add additional users you can either:
+A user cannot have more than 48 credentials configured across all password SSO apps where the user is directly assigned.
+
+If you want to add more apps with password-based SSO to a user, consider assigning the app to a group the user is a direct member of, and configuring the credential for the group. The credentials configured for the group are available to all members of the group.
+
+### I can't add another group to my password-based SSO app
+
+Each password-based SSO app supports a maximum of 48 groups that are assigned and have credentials configured. If you want to add more groups, you can either:
- Add an additional instance of the app
-- Remove users who are no longer using the app first
+- Remove groups that no longer use the app
## Request support
active-directory Ways Users Get Assigned To Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
This article helps you understand how users get assigned to an application in your tenant.
-## How do users get assigned to an application in Azure AD?
+## How do users get assigned an application in Azure AD?
-For a user to access an application, they must first be assigned to it in some way. Assignment can be performed by an administrator, a business delegate, or sometimes, the user themselves. Below describes the ways users can get assigned to applications:
+There are several ways a user can be assigned an application. Assignment can be performed by an administrator, a business delegate, or sometimes the user themselves. The following describes the ways users can get assigned to applications:
* An administrator [assigns a user](./assign-user-or-group-access-portal.md) to the application directly
* An administrator [assigns a group](./assign-user-or-group-access-portal.md) that the user is a member of to the application, including:
For a user to access an application, they must first be assigned to it in some w
* An administrator enables [Self-service Application Access](./manage-self-service-access.md) to allow a user to add an application using [My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) **Add App** feature, but only **with prior approval from a selected set of business approvers**
* An administrator enables [Self-service Group Management](../enterprise-users/groups-self-service-management.md) to allow a user to join a group that an application is assigned to **without business approval**
* An administrator enables [Self-service Group Management](../enterprise-users/groups-self-service-management.md) to allow a user to join a group that an application is assigned to, but only **with prior approval from a selected set of business approvers**
-* An administrator assigns a license to a user directly for a first party application, like [Microsoft 365](https://products.office.com/)
-* An administrator assigns a license to a group that the user is a member of to a first party application, like [Microsoft 365](https://products.office.com/)
-* An [administrator consents to an application](../develop/howto-convert-app-to-be-multi-tenant.md) to be used by all users and then a user signs in to the application
-* A user [consents to an application](../develop/howto-convert-app-to-be-multi-tenant.md) themselves by signing in to the application
+* An administrator assigns a license to a user directly, for a Microsoft service such as [Microsoft 365](https://products.office.com/)
+* An administrator assigns a license to a group that the user is a member of, for a Microsoft service such as [Microsoft 365](https://products.office.com/)
+* A user [consents to an application](consent-and-permissions-overview.md#user-consent) on their own behalf.
## Next steps
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/overview.md
A common challenge for developers is the management of secrets and credentials u
Take a look at how you can use managed identities</br>
-> [!VIDEO https://channel9.msdn.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/On-NET/Using-Azure-Managed-identities/player?format=ny]
active-directory Admin Units Members Add https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-members-add.md
Previously updated : 12/17/2021 Last updated : 01/14/2022
# Add users or groups to an administrative unit
-In Azure Active Directory (Azure AD), you can add users or groups to an administrative unit to restrict the scope of role permissions.
+In Azure Active Directory (Azure AD), you can add users or groups to an administrative unit to restrict the scope of role permissions. For additional details on what scoped administrators can do, see [Administrative units in Azure Active Directory](administrative-units.md).
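If you prefer to script this step, a minimal sketch of the underlying Microsoft Graph call using `az rest` might look like the following (the administrative unit and user object IDs are placeholders):

```azurecli-interactive
# Add a user to an administrative unit via Microsoft Graph (IDs below are placeholders)
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/\$ref" \
  --body '{"@odata.id": "https://graph.microsoft.com/v1.0/users/{user-id}"}'
```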
## Prerequisites
Example
## Next steps
+- [Administrative units in Azure Active Directory](administrative-units.md)
- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
- [Remove users or groups from an administrative unit](admin-units-members-remove.md)
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/administrative-units.md
Previously updated : 11/04/2020 Last updated : 01/14/2022
A central administrator could:
- Create a role with administrative permissions over only Azure AD users in the School of Business administrative unit.
- Add the business school IT team to the role, along with its scope.
+Administrative units apply scope only to management permissions. They don't prevent members or administrators from using their [default user permissions](../fundamentals/users-default-permissions.md) to browse other users, groups, or resources outside the administrative unit. In the Microsoft 365 admin center, users outside a scoped admin's administrative units are filtered out. But you can browse other users in the Azure portal, PowerShell, and other Microsoft services.
+
## License requirements

Using administrative units requires an Azure AD Premium P1 license for each administrative unit administrator, and Azure AD Free licenses for administrative unit members. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
The following sections describe current support for administrative unit scenario
| Permissions | Graph/PowerShell | Azure portal | Microsoft 365 admin center |
| --- | --- | --- | --- |
-| Administrative unit-scoped management of group properties and members | Supported | Supported | Not supported |
+| Administrative unit-scoped management of group properties and membership | Supported | Supported | Not supported |
| Administrative unit-scoped management of group licensing | Supported | Supported | Not supported |
-Administrative units apply scope only to management permissions. They don't prevent members or administrators from using their [default user permissions](../fundamentals/users-default-permissions.md) to browse other users, groups, or resources outside the administrative unit. In the Microsoft 365 admin center, users outside a scoped admin's administrative units are filtered out. But you can browse other users in the Azure portal, PowerShell, and other Microsoft services.
+> [!NOTE]
+> Adding a group to an administrative unit does not grant scoped group administrators the ability to manage properties for individual members of that group. For example, a scoped group administrator can manage group membership, but they can't manage authentication methods of users who are members of the group added to an administrative unit. To manage authentication methods of users who are members of the group that is added to an administrative unit, the individual group members must be directly added as users of the administrative unit, and the group administrator must also be assigned a role that can manage user authentication methods.
+
+## Constraints
+
+Here are some of the constraints for administrative units.
+
+- Administrative units can't be nested.
+- Administrative unit-scoped user account administrators can't create or delete users.
+- A scoped role assignment doesn't apply to members of groups added to an administrative unit, unless the group members are directly added to the administrative unit. For more information, see [Add members to an administrative unit](admin-units-members-add.md).
+- Administrative units are currently not available in [Azure AD Identity Governance](../governance/identity-governance-overview.md).
## Next steps

- [Create or delete administrative units](admin-units-manage.md)
- [Add users or groups to an administrative unit](admin-units-members-add.md)
- [Assign Azure AD roles with administrative unit scope](admin-units-assign-roles.md)
+- [Administrative unit limits](../enterprise-users/directory-service-limits-restrictions.md?context=%2fazure%2factive-directory%2froles%2fcontext%2fugr-context)
aks Command Invoke https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/command-invoke.md
Title: Use `command invoke` to access a private Azure Kubernetes Service (AKS) c
description: Learn how to use `command invoke` to access a private Azure Kubernetes Service (AKS) cluster Previously updated : 11/30/2021 Last updated : 1/14/2022 # Use `command invoke` to access a private Azure Kubernetes Service (AKS) cluster
-Accessing a private AKS cluster requires that you connect to that cluster either from the cluster virtual network or from a peered network. These approaches require configuring a VPN, Express Route, or deploying a *jumpbox* within the cluster virtual network. Alternatively, you can use `command invoke` to access private clusters without having to configure a VPN or Express Route. Using `command invoke` allows you to remotely invoke commands like `kubectl` and `helm` on your private cluster through the Azure API without directly connecting to the cluster. Permissions for using `command invoke` are controlled through the `Microsoft.ContainerService/managedClusters/runcommand/action` and `Microsoft.ContainerService/managedclusters/commandResults/read` roles.
+Accessing a private AKS cluster requires that you connect to that cluster either from the cluster virtual network, from a peered network, or via a configured private endpoint. These approaches require configuring a VPN, Express Route, deploying a *jumpbox* within the cluster virtual network, or creating a private endpoint inside another virtual network. Alternatively, you can use `command invoke` to access private clusters without having to configure a VPN or Express Route. Using `command invoke` allows you to remotely invoke commands like `kubectl` and `helm` on your private cluster through the Azure API without directly connecting to the cluster. Permissions for using `command invoke` are controlled through the `Microsoft.ContainerService/managedClusters/runcommand/action` and `Microsoft.ContainerService/managedclusters/commandResults/read` roles.
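For example, a minimal sketch of running a `kubectl` command on a private cluster (the resource group and cluster names below are placeholders):

```azurecli-interactive
# Remotely run kubectl on the private cluster through the Azure API
az aks command invoke \
  --resource-group myResourceGroup \
  --name myPrivateAKSCluster \
  --command "kubectl get pods -n kube-system"
```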
## Prerequisites
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-basic.md
Alternatively, you can also:
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure that you are using the latest release of Helm and have access to the *ingress-nginx* Helm repository. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes.
+### [Azure CLI](#tab/azure-cli)
+
This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].

In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
-## Import the images used by the Helm chart into your ACR
+### [Azure PowerShell](#tab/azure-powershell)
+
+This article also requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+++
+## Basic configuration
+
+To create a basic NGINX ingress controller without customizing the defaults, use Helm.
+
+### [Azure CLI](#tab/azure-cli)
+
+```console
+NAMESPACE=ingress-basic
+
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo update
+
+helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace $NAMESPACE
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+```powershell-interactive
+$Namespace = 'ingress-basic'
+
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo update
+
+helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace $Namespace
+```
+++
+Note that the above command uses the out-of-the-box configuration for simplicity. If needed, you can add parameters to customize the deployment, for example, `--set controller.replicaCount=3`. The next section shows a highly customized example of the ingress controller.
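As a minimal illustration (reusing the `$NAMESPACE` variable from the Azure CLI example above), an install with a replica-count override might look like this:

```console
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace --namespace $NAMESPACE \
  --set controller.replicaCount=3
```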
+
+## Customized configuration
As an alternative to the basic configuration presented in the above section, the following steps show how to deploy a customized ingress controller.
-This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+### Import the images used by the Helm chart into your ACR
+
+### [Azure CLI](#tab/azure-cli)
+
+To control image versions, you will want to import them into your own Azure container registry. The [NGINX ingress controller Helm chart][ingress-nginx-helm-chart] relies on three container images. Use `az acr import` to import those images into your ACR.
```azurecli
REGISTRY_NAME=<REGISTRY_NAME>
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATC
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+To control image versions, you will want to import them into your own Azure container registry. The [NGINX ingress controller Helm chart][ingress-nginx-helm-chart] relies on three container images. Use `Import-AzContainerRegistryImage` to import those images into your ACR.
++
+```azurepowershell-interactive
+$RegistryName = "<REGISTRY_NAME>"
+$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName}).ResourceGroupName
+$SourceRegistry = "k8s.gcr.io"
+$ControllerImage = "ingress-nginx/controller"
+$ControllerTag = "v1.0.4"
+$PatchImage = "ingress-nginx/kube-webhook-certgen"
+$PatchTag = "v1.1.1"
+$DefaultBackendImage = "defaultbackend-amd64"
+$DefaultBackendTag = "1.5"
+
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${ControllerImage}:${ControllerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${PatchImage}:${PatchTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${DefaultBackendImage}:${DefaultBackendTag}"
+```
+++
> [!NOTE]
> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
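As a sketch of that alternative (assumes Helm 3.8 or later with OCI support; the chart archive name is a placeholder):

```console
# Log in to the registry, then push a local chart archive (names are placeholders)
helm registry login $REGISTRY_NAME.azurecr.io
helm push <chart-archive>.tgz oci://$REGISTRY_NAME.azurecr.io/helm
```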
The ingress controller also needs to be scheduled on a Linux node. Windows Serve
>
> If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, SSL pass-through will not work.
+### [Azure CLI](#tab/azure-cli)
+
```console
# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set defaultBackend.image.digest="" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+# Add the ingress-nginx repository
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+
+# Set variable for ACR location to use for pulling images
+$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
+
+# Use Helm to deploy an NGINX ingress controller
+helm install nginx-ingress ingress-nginx/ingress-nginx `
+ --namespace ingress-basic --create-namespace `
+ --set controller.replicaCount=2 `
+ --set controller.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.image.registry=$AcrUrl `
+ --set controller.image.image=$ControllerImage `
+ --set controller.image.tag=$ControllerTag `
+ --set controller.image.digest="" `
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
+ --set controller.admissionWebhooks.patch.image.image=$PatchImage `
+ --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
+ --set controller.admissionWebhooks.patch.image.digest="" `
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
+ --set defaultBackend.image.registry=$AcrUrl `
+ --set defaultBackend.image.image=$DefaultBackendImage `
+ --set defaultBackend.image.tag=$DefaultBackendTag `
+ --set defaultBackend.image.digest=""
+```
+++
+## Check the load balancer service
+
When the Kubernetes load balancer service is created for the NGINX ingress controller, a dynamic public IP address is assigned, as shown in the following example output:

```
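# To view the service and the assigned EXTERNAL-IP (assumes the ingress-basic namespace used above):
kubectl get services --namespace ingress-basic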
You can also:
[client-source-ip]: concepts-network.md#ingress-controllers [aks-supported versions]: supported-kubernetes-versions.md [aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
-[acr-helm]: ../container-registry/container-registry-helm-repos.md
+[aks-integrated-acr-ps]: cluster-container-registry-integration.md?tabs=azure-powershell#create-a-new-aks-cluster-with-acr-integration
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
+[azure-powershell-install]: /powershell/azure/install-az-ps
aks Ingress Internal Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-internal-ip.md
You can also:
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure that you are using the latest release of Helm and have access to the *ingress-nginx* Helm repository. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes. For more information on configuring and using Helm, see [Install applications with Helm in Azure Kubernetes Service (AKS)][use-helm].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+
+### [Azure CLI](#tab/azure-cli)
+
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+ This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-In addition, this article assumes you have an existing AKS cluster with an [integrated ACR][aks-integrated-acr].
+### [Azure PowerShell](#tab/azure-powershell)
+
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+
+This article also requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
++

## Import the images used by the Helm chart into your ACR

Often when using an AKS cluster with a private network, it is a requirement to manage the provenance of the container images used within the cluster. See [Best practices for container image management and security in Azure Kubernetes Service (AKS)][aks-container-best-practices] for more information. To support this requirement, and for completeness, the examples in this article rely on importing the three container images used by the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart] into your ACR.
+### [Azure CLI](#tab/azure-cli)
+
Use `az acr import` to import these images into your ACR.

```azurecli
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATC
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use `Import-AzContainerRegistryImage` to import those images into your ACR.
+
+```azurepowershell-interactive
+$RegistryName = "<REGISTRY_NAME>"
+$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName}).ResourceGroupName
+$SourceRegistry = "k8s.gcr.io"
+$ControllerImage = "ingress-nginx/controller"
+$ControllerTag = "v1.0.4"
+$PatchImage = "ingress-nginx/kube-webhook-certgen"
+$PatchTag = "v1.1.1"
+$DefaultBackendImage = "defaultbackend-amd64"
+$DefaultBackendTag = "1.5"
+
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${ControllerImage}:${ControllerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${PatchImage}:${PatchTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${DefaultBackendImage}:${DefaultBackendTag}"
+```
+++
> [!NOTE]
> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
The ingress controller also needs to be scheduled on a Linux node. Windows Serve
> [!TIP]
> If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, TLS pass-through will not work.
+### [Azure CLI](#tab/azure-cli)
+
```console
# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set defaultBackend.image.digest="" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+# Add the ingress-nginx repository
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+
+# Set variable for ACR location to use for pulling images
+$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
+
+# Use Helm to deploy an NGINX ingress controller
+helm install nginx-ingress ingress-nginx/ingress-nginx `
+ --namespace ingress-basic --create-namespace `
+ --set controller.replicaCount=2 `
+ --set controller.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.image.registry=$AcrUrl `
+ --set controller.image.image=$ControllerImage `
+ --set controller.image.tag=$ControllerTag `
+ --set controller.image.digest="" `
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
+ --set controller.admissionWebhooks.patch.image.image=$PatchImage `
+ --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
+ --set controller.admissionWebhooks.patch.image.digest="" `
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
+ --set defaultBackend.image.registry=$AcrUrl `
+ --set defaultBackend.image.image=$DefaultBackendImage `
+ --set defaultBackend.image.tag=$DefaultBackendTag `
+ --set defaultBackend.image.digest=""
+
+```
+++
When the Kubernetes load balancer service is created for the NGINX ingress controller, your internal IP address is assigned. To get the internal IP address, use the `kubectl get service` command.

```console
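# To view the service and the assigned internal IP address (assumes the ingress-basic namespace used above):
kubectl get service --namespace ingress-basic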
You can also:
[aks-http-app-routing]: http-application-routing.md [aks-ingress-own-tls]: ingress-own-tls.md [client-source-ip]: concepts-network.md#ingress-controllers
+[aks-quickstart-cli]: kubernetes-walkthrough.md
+[aks-quickstart-powershell]: kubernetes-walkthrough-powershell.md
+[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
[aks-configure-kubenet-networking]: configure-kubenet.md [aks-configure-advanced-networking]: configure-azure-cni.md [aks-supported versions]: supported-kubernetes-versions.md [ingress-nginx-helm-chart]: https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx [aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[aks-integrated-acr-ps]: cluster-container-registry-integration.md?tabs=azure-powershell#create-a-new-aks-cluster-with-acr-integration
+[azure-powershell-install]: /powershell/azure/install-az-ps
[acr-helm]: ../container-registry/container-registry-helm-repos.md [aks-container-best-practices]: operator-best-practices-container-image-management.md
aks Ingress Own Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-own-tls.md
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [s
For more information on configuring and using Helm, see [Install applications with Helm in Azure Kubernetes Service (AKS)][use-helm].
-This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+### [Azure CLI](#tab/azure-cli)
In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+
+This article also requires that you're running PowerShell 7.2 or newer and Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+++

## Import the images used by the Helm chart into your ACR
-This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use `az acr import` to import those images into your ACR.
```azurecli
REGISTRY_NAME=<REGISTRY_NAME>
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATC
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use `Import-AzContainerRegistryImage` to import those images into your ACR.
+
+```azurepowershell-interactive
+$RegistryName = "<REGISTRY_NAME>"
+$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName}).ResourceGroupName
+$SourceRegistry = "k8s.gcr.io"
+$ControllerImage = "ingress-nginx/controller"
+$ControllerTag = "v1.0.4"
+$PatchImage = "ingress-nginx/kube-webhook-certgen"
+$PatchTag = "v1.1.1"
+$DefaultBackendImage = "defaultbackend-amd64"
+$DefaultBackendTag = "1.5"
+
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${ControllerImage}:${ControllerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${PatchImage}:${PatchTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $SourceRegistry -SourceImage "${DefaultBackendImage}:${DefaultBackendTag}"
+```
+++
> [!NOTE]
> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
The ingress controller also needs to be scheduled on a Linux node. Windows Serve
> [!TIP]
> If you would like to enable [client source IP preservation][client-source-ip] for requests to containers in your cluster, add `--set controller.service.externalTrafficPolicy=Local` to the Helm install command. The client source IP is stored in the request header under *X-Forwarded-For*. When using an ingress controller with client source IP preservation enabled, TLS pass-through will not work.
+### [Azure CLI](#tab/azure-cli)
+
```console
# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set defaultBackend.image.digest="" ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+# Create a namespace for your ingress resources
+kubectl create namespace ingress-basic
+
+# Add the ingress-nginx repository
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+
+# Set variable for ACR location to use for pulling images
+$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
+
+# Use Helm to deploy an NGINX ingress controller
+helm install nginx-ingress ingress-nginx/ingress-nginx `
+ --namespace ingress-basic `
+ --set controller.replicaCount=2 `
+ --set controller.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.image.registry=$AcrUrl `
+ --set controller.image.image=$ControllerImage `
+ --set controller.image.tag=$ControllerTag `
+ --set controller.image.digest="" `
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
+ --set controller.admissionWebhooks.patch.image.image=$PatchImage `
+ --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
+ --set controller.admissionWebhooks.patch.image.digest="" `
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
+ --set defaultBackend.image.registry=$AcrUrl `
+ --set defaultBackend.image.image=$DefaultBackendImage `
+ --set defaultBackend.image.tag=$DefaultBackendTag `
+ --set defaultBackend.image.digest=""
+```
+++
During the installation, an Azure public IP address is created for the ingress controller. This public IP address is static for the lifespan of the ingress controller. If you delete the ingress controller, the public IP address assignment is lost. If you then create an additional ingress controller, a new public IP address is assigned. If you wish to retain the use of the public IP address, you can instead [create an ingress controller with a static public IP address][aks-ingress-static-tls]. To get the public IP address, use the `kubectl get service` command.
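For example, a minimal query (assuming the *ingress-basic* namespace used above):

```console
kubectl get service --namespace ingress-basic
```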
No ingress rules have been created yet. If you browse to the public IP address,
## Generate TLS certificates
+### [Azure CLI](#tab/azure-cli)
+ For this article, let's generate a self-signed certificate with `openssl`. For production use, you should request a trusted, signed certificate through a provider or your own certificate authority (CA). In the next step, you generate a Kubernetes *Secret* using the TLS certificate and private key generated by OpenSSL. The following example generates a 2048-bit RSA X509 certificate valid for 365 days named *aks-ingress-tls.crt*. The private key file is named *aks-ingress-tls.key*. A Kubernetes TLS secret requires both of these files.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -out aks-ingress-tls.crt \
    -keyout aks-ingress-tls.key \
    -subj "/CN=demo.azure.com/O=aks-ingress-tls"
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+For this article, let's generate a self-signed certificate with `New-SelfSignedCertificate`. For production use, you should request a trusted, signed certificate through a provider or your own certificate authority (CA). In the next step, you generate a Kubernetes *Secret* using the TLS certificate and private key you generated.
+
+The following example generates a 2048-bit RSA X509 certificate valid for 365 days named *aks-ingress-tls.crt*. The private key file is named *aks-ingress-tls.key*. A Kubernetes TLS secret requires both of these files.
+
+This article uses the *demo.azure.com* subject common name, which doesn't need to be changed. For production use, specify your own organizational values for the `-Subject` parameter:
+
+```powershell-interactive
+$Certificate = New-SelfSignedCertificate -KeyAlgorithm RSA -KeyLength 2048 -Subject "CN=demo.azure.com,O=aks-ingress-tls" -KeyExportPolicy Exportable -CertStoreLocation Cert:\CurrentUser\My\
+$certificatePem = [System.Security.Cryptography.PemEncoding]::Write("CERTIFICATE", $Certificate.RawData)
+$certificatePem -join '' | Out-File -FilePath aks-ingress-tls.crt
+
+$privKeyBytes = $Certificate.PrivateKey.ExportPkcs8PrivateKey()
+$privKeyPem = [System.Security.Cryptography.PemEncoding]::Write("PRIVATE KEY", $privKeyBytes)
+$privKeyPem -join '' | Out-File -FilePath aks-ingress-tls.key
+
+```
+++
## Create Kubernetes secret for the TLS certificate

To allow Kubernetes to use the TLS certificate and private key for the ingress controller, you create and use a Secret. The secret is defined once, and uses the certificate and key file created in the previous step. You then reference this secret when you define ingress routes. The following example creates a Secret named *aks-ingress-tls*:
+### [Azure CLI](#tab/azure-cli)
+ ```console kubectl create secret tls aks-ingress-tls \ --namespace ingress-basic \
kubectl create secret tls aks-ingress-tls \
--cert aks-ingress-tls.crt ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```powershell-interactive
+kubectl create secret tls aks-ingress-tls `
+ --namespace ingress-basic `
+ --key aks-ingress-tls.key `
+ --cert aks-ingress-tls.crt
+```
+++
## Run demo applications

An ingress controller and a Secret with your certificate have been configured. To see the ingress controller in action, run two demo applications in your AKS cluster. In this example, you use `kubectl apply` to deploy two instances of a simple *Hello world* application.
You can also:
[aks-supported versions]: supported-kubernetes-versions.md [client-source-ip]: concepts-network.md#ingress-controllers [aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[aks-integrated-acr-ps]: cluster-container-registry-integration.md?tabs=azure-powershell#create-a-new-aks-cluster-with-acr-integration
+[azure-powershell-install]: /powershell/azure/install-az-ps
[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Ingress Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-static-ip.md
You can also:
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure that you are using the latest release of Helm and have access to the *ingress-nginx* and *jetstack* Helm repositories. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes. For more information on configuring and using Helm, see [Install applications with Helm in Azure Kubernetes Service (AKS)][use-helm]. For upgrade instructions, see the [Helm install docs][helm-install].
-This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+### [Azure CLI](#tab/azure-cli)
In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+This article also requires that you are running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+In addition, this article assumes you have an existing AKS cluster with an integrated ACR. For more details on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr-ps].
+
+This article also requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+++ ## Import the images used by the Helm chart into your ACR
-This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images. Use `az acr import` to import those images into your ACR.
+This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use `az acr import` to import those images into your ACR.
```azurecli
REGISTRY_NAME=<REGISTRY_NAME>
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGE
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG --image $CERT_MANAGER_IMAGE_CAINJECTOR:$CERT_MANAGER_TAG ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Use `Import-AzContainerRegistryImage` to import those images into your ACR.
+
+```azurepowershell-interactive
+$RegistryName = "<REGISTRY_NAME>"
+$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName}).ResourceGroupName
+$ControllerRegistry = "k8s.gcr.io"
+$ControllerImage = "ingress-nginx/controller"
+$ControllerTag = "v1.0.4"
+$PatchRegistry = "docker.io"
+$PatchImage = "jettech/kube-webhook-certgen"
+$PatchTag = "v1.5.1"
+$DefaultBackendRegistry = "k8s.gcr.io"
+$DefaultBackendImage = "defaultbackend-amd64"
+$DefaultBackendTag = "1.5"
+$CertManagerRegistry = "quay.io"
+$CertManagerTag = "v1.3.1"
+$CertManagerImageController = "jetstack/cert-manager-controller"
+$CertManagerImageWebhook = "jetstack/cert-manager-webhook"
+$CertManagerImageCaInjector = "jetstack/cert-manager-cainjector"
+
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $ControllerRegistry -SourceImage "${ControllerImage}:${ControllerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $PatchRegistry -SourceImage "${PatchImage}:${PatchTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $DefaultBackendRegistry -SourceImage "${DefaultBackendImage}:${DefaultBackendTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageController}:${CertManagerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageWebhook}:${CertManagerTag}"
+Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageCaInjector}:${CertManagerTag}"
+
+```
+++
> [!NOTE]
> In addition to importing container images into your ACR, you can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure container registry][acr-helm].
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGE
By default, an NGINX ingress controller is created with a new public IP address assignment. This public IP address is only static for the life-span of the ingress controller, and is lost if the controller is deleted and re-created. A common configuration requirement is to provide the NGINX ingress controller an existing static public IP address. The static public IP address remains if the ingress controller is deleted. This approach allows you to use existing DNS records and network configurations in a consistent manner throughout the lifecycle of your applications.
+### [Azure CLI](#tab/azure-cli)
+
If you need to create a static public IP address, first get the resource group name of the AKS cluster with the [az aks show][az-aks-show] command:

```azurecli-interactive
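# Sketch, assuming a cluster named myAKSCluster in myResourceGroup: query the node resource group
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv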
Next, create a public IP address with the *static* allocation method using the [
az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+If you need to create a static public IP address, first get the resource group name of the AKS cluster with the [Get-AzAksCluster][get-az-aks-cluster] command:
+
+```azurepowershell-interactive
+(Get-AzAksCluster -ResourceGroupName $ResourceGroup -Name myAKSCluster).NodeResourceGroup
+```
+
+Next, create a public IP address with the *static* allocation method using the [New-AzPublicIpAddress][new-az-public-ip-address] command. The following example creates a public IP address named *myAKSPublicIP* in the AKS cluster resource group obtained in the previous step:
+
+```azurepowershell-interactive
+(New-AzPublicIpAddress -ResourceGroupName MC_myResourceGroup_myAKSCluster_eastus -Name myAKSPublicIP -Sku Standard -AllocationMethod Static -Location eastus).IpAddress
+```
+++
> [!NOTE]
> The above commands create an IP address that will be deleted if you delete your AKS cluster. Alternatively, you can create an IP address in a different resource group, which can be managed separately from your AKS cluster. If you create an IP address in a different resource group, ensure the cluster identity used by the AKS cluster has delegated permissions to the other resource group, such as *Network Contributor*. For more information, see [Use a static public IP address and DNS label with the AKS load balancer][aks-static-ip].
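As an illustrative sketch only (the identity, subscription, and resource group values below are placeholders), granting that delegation might look like:

```azurecli-interactive
# Grant the cluster identity Network Contributor on the resource group that holds the IP address
az role assignment create \
  --assignee <cluster-identity-client-id> \
  --role "Network Contributor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<ip-resource-group>
```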
Update the following script with the **IP address** of your ingress controller a
> [!IMPORTANT]
> You must replace `<STATIC_IP>` and `<DNS_LABEL>` with your own IP address and unique name when running the command. The DNS_LABEL must be unique within the Azure region.
+### [Azure CLI](#tab/azure-cli)
+
```console
# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```powershell-interactive
+# Add the ingress-nginx repository
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+
+# Set variable for ACR location to use for pulling images
+$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
+$StaticIp = "<STATIC_IP>"
+$DnsLabel = "<DNS_LABEL>"
+
+helm install nginx-ingress ingress-nginx/ingress-nginx `
+ --namespace ingress-basic --create-namespace `
+ --set controller.replicaCount=2 `
+ --set controller.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.image.registry=$AcrUrl `
+ --set controller.image.image=$ControllerImage `
+ --set controller.image.tag=$ControllerTag `
+ --set controller.image.digest="" `
+ --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
+ --set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
+ --set controller.admissionWebhooks.patch.image.image=$PatchImage `
+ --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
+ --set controller.admissionWebhooks.patch.image.digest="" `
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
+ --set defaultBackend.image.registry=$AcrUrl `
+ --set defaultBackend.image.image=$DefaultBackendImage `
+ --set defaultBackend.image.tag=$DefaultBackendTag `
+ --set defaultBackend.image.digest="" `
+ --set controller.service.loadBalancerIP=$StaticIp `
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel
+```
++++ When the Kubernetes load balancer service is created for the NGINX ingress controller, your static IP address is assigned, as shown in the following example output: ```
No ingress rules have been created yet, so the NGINX ingress controller's defaul
You can verify that the DNS name label has been applied by querying the FQDN on the public IP address as follows:
+### [Azure CLI](#tab/azure-cli)
+ ```azurecli-interactive az network public-ip list --resource-group MC_myResourceGroup_myAKSCluster_eastus --query "[?name=='myAKSPublicIP'].[dnsSettings.fqdn]" -o tsv ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+(Get-AzPublicIpAddress -ResourceGroupName MC_myResourceGroup_myAKSCluster_eastus -Name myAKSPublicIP).DnsSettings.Fqdn
+```
+++
The ingress controller is now accessible through the IP address or the FQDN.

## Install cert-manager
The NGINX ingress controller supports TLS termination. There are several ways to
To install the cert-manager controller in a Kubernetes RBAC-enabled cluster, use the following `helm install` command:
+### [Azure CLI](#tab/azure-cli)
+
```console
# Label the cert-manager namespace to disable resource validation
kubectl label namespace ingress-basic cert-manager.io/disable-validation=true
helm install cert-manager jetstack/cert-manager \
--set webhook.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_WEBHOOK \ --set webhook.image.tag=$CERT_MANAGER_TAG \ --set cainjector.image.repository=$ACR_URL/$CERT_MANAGER_IMAGE_CAINJECTOR \
- --set cainjector.image.tag=$CERT_MANAGER_TAG
+ --set cainjector.image.tag=$CERT_MANAGER_TAG
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```powershell-interactive
+# Label the cert-manager namespace to disable resource validation
+kubectl label namespace ingress-basic cert-manager.io/disable-validation=true
+
+# Add the Jetstack Helm repository
+helm repo add jetstack https://charts.jetstack.io
+
+# Update your local Helm chart repository cache
+helm repo update
+
+# Install the cert-manager Helm chart
+helm install cert-manager jetstack/cert-manager `
+ --namespace ingress-basic `
+ --version $CertManagerTag `
+ --set installCRDs=true `
+ --set nodeSelector."kubernetes\.io/os"=linux `
+ --set image.repository=$AcrUrl/$CertManagerImageController `
+ --set image.tag=$CertManagerTag `
+ --set webhook.image.repository=$AcrUrl/$CertManagerImageWebhook `
+ --set webhook.image.tag=$CertManagerTag `
+ --set cainjector.image.repository=$AcrUrl/$CertManagerImageCaInjector `
+ --set cainjector.image.tag=$CertManagerTag
+```
+++
For more information on cert-manager configuration, see the [cert-manager project][cert-manager].

## Create a CA cluster issuer
An ingress controller and a certificate management solution have been configured
To see the ingress controller in action, run two demo applications in your AKS cluster. In this example, you use `kubectl apply` to deploy two instances of a simple *Hello world* application.
-Create a *aks-helloworld.yaml* file and copy in the following example YAML:
+Create a *aks-helloworld-one.yaml* file and copy in the following example YAML:
```yml
apiVersion: apps/v1
kind: Deployment
metadata:
- name: aks-helloworld
+ name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
- app: aks-helloworld
+ app: aks-helloworld-one
  template:
    metadata:
      labels:
- app: aks-helloworld
+ app: aks-helloworld-one
    spec:
      containers:
- - name: aks-helloworld
+ - name: aks-helloworld-one
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
spec:
apiVersion: v1
kind: Service
metadata:
- name: aks-helloworld
+ name: aks-helloworld-one
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
- app: aks-helloworld
+ app: aks-helloworld-one
```
-Create a *ingress-demo.yaml* file and copy in the following example YAML:
+Create a *aks-helloworld-two.yaml* file and copy in the following example YAML:
```yml
apiVersion: apps/v1
kind: Deployment
metadata:
- name: ingress-demo
+ name: aks-helloworld-two
spec:
  replicas: 1
  selector:
    matchLabels:
- app: ingress-demo
+ app: aks-helloworld-two
  template:
    metadata:
      labels:
- app: ingress-demo
+ app: aks-helloworld-two
    spec:
      containers:
- - name: ingress-demo
+ - name: aks-helloworld-two
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
spec:
apiVersion: v1
kind: Service
metadata:
- name: ingress-demo
+ name: aks-helloworld-two
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
- app: ingress-demo
+ app: aks-helloworld-two
```

Run the two demo applications using `kubectl apply`:

```console
-kubectl apply -f aks-helloworld.yaml --namespace ingress-basic
-kubectl apply -f ingress-demo.yaml --namespace ingress-basic
+kubectl apply -f aks-helloworld-one.yaml --namespace ingress-basic
+kubectl apply -f aks-helloworld-two.yaml --namespace ingress-basic
```

## Create an ingress route
spec:
        pathType: Prefix
        backend:
          service:
- name: aks-helloworld
+ name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
- name: ingress-demo
+ name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
- name: aks-helloworld
+ name: aks-helloworld-one
            port:
              number: 80
```
release "cert-manager" deleted
Next, remove the two sample applications:

```console
-kubectl delete -f aks-helloworld.yaml --namespace ingress-basic
-kubectl delete -f ingress-demo.yaml --namespace ingress-basic
+kubectl delete -f aks-helloworld-one.yaml --namespace ingress-basic
+kubectl delete -f aks-helloworld-two.yaml --namespace ingress-basic
```

Delete the namespace itself. Use the `kubectl delete` command and specify your namespace name:
kubectl delete namespace ingress-basic
Finally, remove the static public IP address created for the ingress controller. Provide your *MC_* cluster resource group name obtained in the first step of this article, such as *MC_myResourceGroup_myAKSCluster_eastus*:
+### [Azure CLI](#tab/azure-cli)
+ ```azurecli-interactive az network public-ip delete --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Remove-AzPublicIpAddress -ResourceGroupName MC_myResourceGroup_myAKSCluster_eastus -Name myAKSPublicIP
+```
+++
## Next steps

This article included some components external to AKS. To learn more about these components, see the following project pages:
You can also:
[use-helm]: kubernetes-helm.md
[azure-cli-install]: /cli/azure/install-azure-cli
[az-aks-show]: /cli/azure/aks#az_aks_show
+[get-az-aks-cluster]: /powershell/module/az.aks/get-azakscluster
[az-network-public-ip-create]: /cli/azure/network/public-ip#az_network_public_ip_create
+[new-az-public-ip-address]: /powershell/module/az.network/new-azpublicipaddress
[aks-ingress-internal]: ingress-internal-ip.md
[aks-ingress-basic]: ingress-basic.md
[aks-ingress-tls]: ingress-tls.md
[aks-http-app-routing]: http-application-routing.md
[aks-ingress-own-tls]: ingress-own-tls.md
[aks-quickstart-cli]: kubernetes-walkthrough.md
+[aks-quickstart-powershell]: kubernetes-walkthrough-powershell.md
[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
[client-source-ip]: concepts-network.md#ingress-controllers
-[install-azure-cli]: /cli/azure/install-azure-cli
[aks-static-ip]: static-ip.md
[aks-supported versions]: supported-kubernetes-versions.md
[aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
+[aks-integrated-acr-ps]: cluster-container-registry-integration.md?tabs=azure-powershell#create-a-new-aks-cluster-with-acr-integration
+[azure-powershell-install]: /powershell/azure/install-az-ps
[acr-helm]: ../container-registry/container-registry-helm-repos.md
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-tls.md
You can also:
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
This article also assumes you have [a custom domain][custom-domain] with a [DNS Zone][dns-zone] in the same resource group as your AKS cluster.
In addition, this article assumes you have an existing AKS cluster with an integ
This article also requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+
## Import the images used by the Helm chart into your ACR

This article uses the [NGINX ingress controller Helm chart][ingress-nginx-helm-chart], which relies on three container images.
az acr import --name $REGISTRY_NAME --source $CERT_MANAGER_REGISTRY/$CERT_MANAGE
### [Azure PowerShell](#tab/azure-powershell)
+Use `Import-AzContainerRegistryImage` to import those images into your ACR.
+
```azurepowershell
$RegistryName = "<REGISTRY_NAME>"
$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName}).ResourceGroupName
kubectl create namespace ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Set variable for ACR location to use for pulling images
-$AcrUrl = "$RegistryName.azurecr.io"
-
-# Get the SHA256 digest of the controller and patch images
-$ControllerDigest = (Get-AzContainerRegistryTag -RegistryName $RegistryName -RepositoryName $ControllerImage -Name $ControllerTag).Attributes.digest
-$PatchDigest = (Get-AzContainerRegistryTag -RegistryName $RegistryName -RepositoryName $PatchImage -Name $PatchTag).Attributes.digest
+$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
# Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx `
    --set controller.image.registry=$AcrUrl `
    --set controller.image.image=$ControllerImage `
    --set controller.image.tag=$ControllerTag `
- --set controller.image.digest=$ControllerDigest `
+ --set controller.image.digest="" `
    --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux `
    --set controller.admissionWebhooks.patch.image.registry=$AcrUrl `
    --set controller.admissionWebhooks.patch.image.image=$PatchImage `
    --set controller.admissionWebhooks.patch.image.tag=$PatchTag `
- --set controller.admissionWebhooks.patch.image.digest=$PatchDigest `
+ --set controller.admissionWebhooks.patch.image.digest="" `
    --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux `
    --set defaultBackend.image.registry=$AcrUrl `
    --set defaultBackend.image.image=$DefaultBackendImage `
- --set defaultBackend.image.tag=$DefaultBackendTag
+ --set defaultBackend.image.tag=$DefaultBackendTag `
+ --set defaultBackend.image.digest=""
```
New-AzDnsRecordSet -Name "*" `
    -RecordType A `
    -ResourceGroupName <Name of Resource Group for the DNS Zone> `
    -ZoneName <Custom Domain Name> `
- -TTL 3600
+ -TTL 3600 `
    -DnsRecords $Records
```
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service cluster
description: Learn how to create a private Azure Kubernetes Service (AKS) cluster Previously updated : 11/30/2021 Last updated : 01/12/2022
The API server endpoint has no public IP address. To manage the API server, you'
* Use a VM in a separate network and set up [Virtual network peering][virtual-network-peering]. See the section below for more information on this option.
* Use an [Express Route or VPN][express-route-or-VPN] connection.
* Use the [AKS `command invoke` feature][command-invoke].
+* Use a [private endpoint][private-endpoint-service] connection.
Creating a VM in the same VNET as the AKS cluster is the easiest option. Express Route and VPNs add costs and require additional networking complexity. Virtual network peering requires you to plan your network CIDR ranges to ensure there are no overlapping ranges.
## Virtual network peering
As mentioned, virtual network peering is one way to access your private cluster.
> [!NOTE]
> If you are using [Bring Your Own Route Table with kubenet](./configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet) and Bring Your Own DNS with Private Cluster, the cluster creation will fail. To make the creation succeed, you'll need to associate the [RouteTable](./configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet) in the node resource group with the subnet after the cluster creation fails.
+## Using a private endpoint connection
+
+A private endpoint can be set up so that an Azure Virtual Network doesn't need to be peered to communicate with the private cluster. To use a private endpoint, create a new private endpoint in your virtual network, and then create a link between your virtual network and a new private DNS zone.
+
+> [!IMPORTANT]
+> If the virtual network is configured with custom DNS servers, private DNS will need to be set up appropriately for the environment. See the [virtual networks name resolution documentation][virtual-networks-name-resolution] for more details.
+
+1. On the Azure portal menu or from the Home page, select **Create a resource**.
+2. Search for **Private Endpoint** and select **Create > Private Endpoint**.
+3. Select **Create**.
+4. On the **Basics** tab, set up the following options:
+ * **Project details**:
+ * Select an Azure **Subscription**.
+ * Select the Azure **Resource group** where your virtual network is located.
+ * **Instance details**:
+ * Enter a **Name** for the private endpoint, such as *myPrivateEndpoint*.
+ * Select a **Region** for the private endpoint.
+
+ > [!IMPORTANT]
+ > Check that the selected region matches the region of the virtual network you want to connect from; otherwise, you won't see your virtual network in the **Configuration** tab.
+
+5. Select **Next: Resource** when complete.
+6. On the **Resource** tab, set up the following options:
+ * **Connection method**: *Connect to an Azure resource in my directory*
+ * **Subscription**: Select your Azure Subscription where the private cluster is located
+ * **Resource type**: *Microsoft.ContainerService/managedClusters*
+ * **Resource**: *myPrivateAKSCluster*
+ * **Target sub-resource**: *management*
+7. Select **Next: Configuration** when complete.
+8. On the **Configuration** tab, set up the following options:
+ * **Networking**:
+ * **Virtual network**: *myVirtualNetwork*
+ * **Subnet**: *mySubnet*
+9. Select **Next: Tags** when complete.
+10. (Optional) On the **Tags** tab, set up key-values as needed.
+11. Select **Next: Review + create**, and then select **Create** when validation completes.
+
+Record the private IP address of the private endpoint. This private IP address is used in a later step.
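If you prefer scripting, the same private endpoint can be created with the Azure CLI. This is a minimal sketch rather than the documented portal flow; the resource names are hypothetical, and `--group-id management` targets the same management sub-resource selected in the portal steps above:

```azurecli
# Hypothetical names; substitute your own resource group, VNet, subnet, and cluster values.
az network private-endpoint create \
    --name myPrivateEndpoint \
    --resource-group myResourceGroup \
    --vnet-name myVirtualNetwork \
    --subnet mySubnet \
    --private-connection-resource-id $(az aks show --name myPrivateAKSCluster --resource-group myResourceGroup --query id --output tsv) \
    --group-id management \
    --connection-name myAKSConnection
```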
+
+After the private endpoint has been created, create a new private DNS zone with the same name as the private DNS zone that was created by the private cluster.
+
+1. Go to the node resource group in the Azure portal.
+2. Select the private DNS zone and record:
+ * the name of the private DNS zone, which follows the pattern `*.privatelink.<region>.azmk8s.io`
+ * the name of the A record (excluding the private DNS name)
+ * the time-to-live (TTL)
+3. On the Azure portal menu or from the Home page, select **Create a resource**.
+4. Search for **Private DNS zone** and select **Create > Private DNS Zone**.
+5. On the **Basics** tab, set up the following options:
+ * **Project details**:
+ * Select an Azure **Subscription**
+ * Select the Azure **Resource group** where the private endpoint was created
+ * **Instance details**:
+ * Enter the **Name** of the DNS zone retrieved from previous steps
+ * **Region** defaults to the Azure Resource group location
+6. Select **Review + create** when complete and select **Create** when validation completes.
+
+After the private DNS zone is created, create an A record. This record associates the private endpoint to the private cluster.
+
+1. Go to the private DNS zone created in previous steps.
+2. On the **Overview** page, select **+ Record set**.
+3. On the **Add record set** tab, set up the following options:
+ * **Name**: Input the name retrieved from the A record in the private cluster's DNS zone
+ * **Type**: *A - Alias record to IPv4 address*
+ * **TTL**: Input the number to match the record from the A record private cluster's DNS zone
+ * **TTL Unit**: Change the dropdown value to match the A record from the private cluster's DNS zone
+ * **IP address**: Input the IP address of the private endpoint that was created previously
+
+> [!IMPORTANT]
+> When creating the A record, use only the name, and not the fully qualified domain name (FQDN).
+
+Once the A record is created, link the private DNS zone to the virtual network that will access the private cluster.
+
+1. Go to the private DNS zone created in previous steps.
+2. In the left pane, select **Virtual network links**.
+3. Create a new link to add the virtual network to the private DNS zone. It takes a few minutes for the DNS zone link to become available.
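These DNS steps can also be scripted. The following Azure CLI sketch uses placeholders for the zone name, record name, TTL-bearing A record, and private endpoint IP recorded earlier; adjust them before use:

```azurecli
# Placeholder values; substitute what you recorded from the node resource group's private DNS zone.
az network private-dns zone create \
    --resource-group myResourceGroup \
    --name <private-dns-zone-name>

az network private-dns record-set a add-record \
    --resource-group myResourceGroup \
    --zone-name <private-dns-zone-name> \
    --record-set-name <a-record-name> \
    --ipv4-address <private-endpoint-ip>

az network private-dns link vnet create \
    --resource-group myResourceGroup \
    --zone-name <private-dns-zone-name> \
    --name myDNSLink \
    --virtual-network myVirtualNetwork \
    --registration-enabled false
```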
+
+> [!WARNING]
+> If the private cluster is stopped and restarted, the private cluster's original private link service is removed and re-created, which breaks the connection between your private endpoint and the private cluster. To resolve this issue, delete and re-create any user created private endpoints linked to the private cluster. DNS records will also need to be updated if the re-created private endpoints have new IP addresses.
+ ## Limitations
-* IP authorized ranges can't be applied to the private api server endpoint, they only apply to the public API server
+* IP authorized ranges can't be applied to the private API server endpoint; they only apply to the public API server.
* [Azure Private Link service limitations][private-link-service] apply to private clusters.
-* No support for Azure DevOps Microsoft-hosted Agents with private clusters. Consider to use [Self-hosted Agents](/azure/devops/pipelines/agents/agents?tabs=browser).
-* For customers that need to enable Azure Container Registry to work with private AKS, the Container Registry virtual network must be peered with the agent cluster virtual network.
+* No support for Azure DevOps Microsoft-hosted Agents with private clusters. Consider using [Self-hosted Agents](/azure/devops/pipelines/agents/agents?tabs=browser).
+* If you need to enable Azure Container Registry to work with a private AKS cluster, [set up a private link for the container registry in the cluster virtual network][container-registry-private-link] or set up peering between the Container Registry virtual network and the private cluster's virtual network.
* No support for converting existing AKS clusters into private clusters.
* Deleting or modifying the private endpoint in the customer subnet will cause the cluster to stop functioning.
As mentioned, virtual network peering is one way to access your private cluster.
[az-extension-add]: /cli/azure/extension#az_extension_add
[az-extension-update]: /cli/azure/extension#az_extension_update
[private-link-service]: ../private-link/private-link-service-overview.md#limitations
+[private-endpoint-service]: ../private-link/private-endpoint-overview.md
[virtual-network-peering]: ../virtual-network/virtual-network-peering-overview.md
[azure-bastion]: ../bastion/tutorial-create-host-portal.md
[express-route-or-vpn]: ../expressroute/expressroute-about-virtual-network-gateways.md
[devops-agents]: /azure/devops/pipelines/agents/agents
[availability-zones]: availability-zones.md
[command-invoke]: command-invoke.md
+[container-registry-private-link]: ../container-registry/container-registry-private-link.md
+[virtual-networks-name-resolution]: ../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-overview.md
Azure Analysis Services is a fully managed platform as a service (PaaS) that pro
In the Azure portal, you can [create a server](analysis-services-create-server.md) within minutes. And with Azure Resource Manager [templates](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md) and PowerShell, you can create servers using a declarative template. With a single template, you can deploy server resources along with other Azure components such as storage accounts and Azure Functions.
-**Video:** Check out [Automating deployment](https://channel9.msdn.com/series/Azure-Analysis-Services/AzureAnalysisServicesAutomation) to learn more about how you can use Azure Automation to speed server creation.
+**Video:** Check out Automating deployment to learn more about how you can use Azure Automation to speed server creation.
Azure Analysis Services integrates with many Azure services enabling you to build sophisticated analytics solutions. Integration with [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) provides secure, role-based access to your critical data. Integrate with [Azure Data Factory](../data-factory/introduction.md) pipelines by including an activity that loads data into the model. [Azure Automation](../automation/automation-intro.md) and [Azure Functions](../azure-functions/functions-overview.md) can be used for lightweight orchestration of models using custom code.
api-management Api Management Caching Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-caching-policies.md
The `cache-store` policy caches responses according to the specified cache setti
```

#### Example using policy expressions
-This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backed service's `Cache-Control` directive. For a demonstration of configuring and using this policy, see [Cloud Cover Episode 177: More API Management Features with Vlad Vinogradsky](https://channel9.msdn.com/Shows/Cloud+Cover/Episode-177-More-API-Management-Features-with-Vlad-Vinogradsky) and fast-forward to 25:25.
+This example shows how to configure API Management response caching duration that matches the response caching of the backend service as specified by the backend service's `Cache-Control` directive. For a demonstration of configuring and using this policy, see Cloud Cover Episode 177: More API Management Features with Vlad Vinogradsky and fast-forward to 25:25.
```xml
<!-- The following cache policy snippets demonstrate how to control API Management response cache duration with Cache-Control headers sent by the backend service. -->
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-role-based-access-control.md
New-AzRoleAssignment -ObjectId <object ID of the user account> -RoleDefinitionNa
The [Azure Resource Manager resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftapimanagement) article contains the list of permissions that can be granted on the API Management level.
-## Video
--
-> [!VIDEO https://channel9.msdn.com/Blogs/AzureApiMgmt/Role-Based-Access-Control-in-API-Management/player]
->
->
-
## Next steps

To learn more about Role-Based Access Control in Azure, see the following articles:
api-management Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/quickstart-arm-template.md
Title: Quickstart - Create Azure API Management instance by using ARM template
description: Learn how to create an Azure API Management instance in the Developer tier by using an Azure Resource Manager template (ARM template).
+tags: azure-resource-manager
app-service App Service Asp Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-asp-net-migration.md
Azure Migrate recently announced at-scale, agentless discovery, and assessment o
| **Best practices** |
| [Assessment best practices in Azure Migrate Discovery and assessment tool](../migrate/best-practices-assessment.md) |
| **Video** |
-| [At scale discovery and assessment for ASP.NET app migration with Azure Migrate](https://channel9.msdn.com/Shows/Inside-Azure-for-IT/At-scale-discovery-and-assessment-for-ASPNET-app-migration-with-Azure-Migrate) |
+| [At scale discovery and assessment for ASP.NET app migration with Azure Migrate](/Shows/Inside-Azure-for-IT/At-scale-discovery-and-assessment-for-ASPNET-app-migration-with-Azure-Migrate) |
## Migrate from an IIS server
<!-- Intent: discover how to assess and migrate from a single IIS server -->
-You can migrate ASP.NET web apps from single IIS server discovered through Azure Migrate's at-scale discovery experience using [PowerShell scripts](https://github.com/Azure/App-Service-Migration-Assistant/wiki/PowerShell-Scripts) [(download)](https://appmigration.microsoft.com/api/download/psscriptpreview/AppServiceMigrationScripts.zip). Watch the video for [updates on migrating to Azure App Service](https://channel9.msdn.com/Shows/The-Launch-Space/Updates-on-Migrating-to-Azure-App-Service).
+You can migrate ASP.NET web apps from a single IIS server discovered through Azure Migrate's at-scale discovery experience using [PowerShell scripts](https://github.com/Azure/App-Service-Migration-Assistant/wiki/PowerShell-Scripts) [(download)](https://appmigration.microsoft.com/api/download/psscriptpreview/AppServiceMigrationScripts.zip). Watch the video for [updates on migrating to Azure App Service](/Shows/The-Launch-Space/Updates-on-Migrating-to-Azure-App-Service).
## ASP.NET web app migration
<!-- Intent: migrate a single web app -->
app-service App Service Java Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-java-migration.md
Azure App Service provides tools to discover web apps deployed to on-premise web
## Standalone Tomcat Web App Migration (Windows OS)
-Download this [preview tool](https://azure.microsoft.com/services/app-service/migration-assistant/) to migrate a Java web app on Apache Tomcat to App Service on Windows. For more information, see the [video](https://channel9.msdn.com/Shows/The-Launch-Space/Updates-on-Migrating-to-Azure-App-Service) and [how-to](https://github.com/Azure/App-Service-Migration-Assistant/wiki/TOMCAT-Java-Information).
+Download this [preview tool](https://azure.microsoft.com/services/app-service/migration-assistant/) to migrate a Java web app on Apache Tomcat to App Service on Windows. For more information, see the [video](/Shows/The-Launch-Space/Updates-on-Migrating-to-Azure-App-Service) and [how-to](https://github.com/Azure/App-Service-Migration-Assistant/wiki/TOMCAT-Java-Information).
## Containerize standalone Tomcat Web App
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/how-to-migrate.md
+
+ Title: How to migrate App Service Environment v2 to App Service Environment v3
+description: Learn how to migrate your App Service Environment v2 to App Service Environment v3
++ Last updated : 1/17/2022++
+# How to migrate App Service Environment v2 to App Service Environment v3
+
+> [!IMPORTANT]
+> This article describes a feature that is currently in preview. Use this feature with dev environments first, before migrating any production environments, to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
+>
+
+An App Service Environment (ASE) v2 can be migrated to an [App Service Environment v3](overview.md). To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
+
+## Prerequisites
+
+Ensure you understand how migrating to an App Service Environment v3 will affect your applications. Review the [migration process](migrate.md#overview-of-the-migration-process) to understand the process timeline and where and when you'll need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which may answer some questions you currently have.
+
+For the initial preview of the migration feature, follow the steps below in order, as written, since you'll be making Azure REST API calls. The recommended way to make these calls is by using the [Azure CLI](/cli/azure/). For information about other methods, see [Getting Started with Azure REST](/rest/api/azure/).
+
+For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use the [Azure Cloud Shell](https://shell.azure.com/).
+
+## 1. Get your App Service Environment ID
+
+Run these commands to get your App Service Environment ID and store it as an environment variable. Replace the placeholders for name and resource group with your values for the App Service Environment you want to migrate.
+
+```azurecli
+ASE_NAME=<Your-App-Service-Environment-name>
+ASE_RG=<Your-Resource-Group>
+ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --query id --output tsv)
+```
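As a quick, optional sanity check (not part of the documented flow), confirm the variable was populated:

```azurecli
# Optional: should print the full resource ID of your App Service Environment.
echo $ASE_ID
```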
+
+## 2. Delegate your App Service Environment subnet
+
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and update the delegation if needed before migrating. You can update the delegation either by running the following command or by navigating to the subnet in the [Azure portal](https://portal.azure.com).
+
+```azurecli
+az network vnet subnet update -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
+```
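To confirm the delegation is in place before continuing, you can query the subnet. This is an optional check using the same placeholder names as above:

```azurecli
# Optional: prints "Microsoft.Web/hostingEnvironments" once the delegation exists.
az network vnet subnet show -g $ASE_RG -n <subnet-name> --vnet-name <vnet-name> --query "delegations[].serviceName" --output tsv
```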
+
+![subnet delegation sample](./media/migration/subnet-delegation.jpg)
+
+## 3. Validate migration is supported
+
+The following command will check whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. For an estimate of when you can migrate, see the [timeline](migrate.md#preview-limitations). If your environment [won't be supported for migration](migrate.md#migration-feature-limitations) or you want to migrate to ASEv3 without using the migration feature, see [migration alternatives](migration-alternatives.md).
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=validation"
+```
+
+If there are no errors, your migration is supported and you can continue to the next step.
+
+## 4. Generate IP addresses for your new App Service Environment v3
+
+Run the following command to create the new IPs. This step will take about 5 minutes to complete. Don't scale or make changes to your existing App Service Environment during this time.
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=premigration" --verbose
+```
+
+Run the following command to check the status of this step.
+
+```azurecli
+az rest --method get --uri "${ASE_ID}?api-version=2018-11-01" --query properties.status
+```
+
+If it's in progress, you'll get a status of "Migrating". Once you get a status of "Ready", run the following command to get your new IPs. If you don't see the new IPs immediately, wait a few minutes and try again.
+
+```azurecli
+az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2018-11-01"
+```
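If you'd rather not repeat the status check by hand, a small bash loop (an optional convenience, not part of the documented flow) can wait for the "Ready" status described above:

```azurecli
# Optional helper: poll the status once a minute until the environment reports "Ready".
while [ "$(az rest --method get --uri "${ASE_ID}?api-version=2018-11-01" --query properties.status --output tsv)" != "Ready" ]; do
    echo "Still migrating..."
    sleep 60
done
echo "IP generation step complete."
```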
+
+## 5. Update dependent resources with new IPs
+
+Don't move on to full migration immediately after completing the previous step. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates.
+
+## 6. Full migration
+
+Only start this step once you've completed all pre-migration actions listed above and understand the [implications of full migration](migrate.md#full-migration) including what will happen during this time. There will be about one hour of downtime. Don't scale or make changes to your existing App Service Environment during this step.
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=fullmigration" --verbose
+```
+
+Run the following command to check the status of your migration. The status will show as "Migrating" while in progress.
+
+```azurecli
+az rest --method get --uri "${ASE_ID}?api-version=2018-11-01" --query properties.status
+```
+
+Once you get a status of "Ready", migration is done and you have an App Service Environment v3.
+
+Get the details of your new environment by running the following command or by navigating to the [Azure portal](https://portal.azure.com).
+
+```azurecli
+az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Using an App Service Environment v3](using.md)
+
+> [!div class="nextstepaction"]
+> [App Service Environment v3 Networking](networking.md)
app-service Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/intro.md
In ASEv1, you need to manage all of the resources manually. That includes the fr
ASEv1 uses a different pricing model from ASEv2. In ASEv1, you pay for each vCPU allocated. That includes vCPUs used for front ends or workers that aren't hosting any workloads. In ASEv1, the default maximum-scale size of an ASE is 55 total hosts. That includes workers and front ends. One advantage to ASEv1 is that it can be deployed in a classic virtual network and a Resource Manager virtual network. To learn more about ASEv1, see [App Service Environment v1 introduction][ASEv1Intro]. <!--Links-->
-[App Service Environments v2]: https://channel9.msdn.com/Blogs/Azure/Azure-Application-Service-Environments-v2-Private-PaaS-Environments-in-the-Cloud?term=app%20service%20environment
-[Isolated offering]: https://channel9.msdn.com/Shows/Azure-Friday/Security-and-Horsepower-with-App-Service-The-New-Isolated-Offering?term=app%20service%20environment
[Intro]: ./intro.md
[MakeExternalASE]: ./create-external-ase.md
[MakeASEfromTemplate]: ./create-from-template.md
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migrate.md
+
+ Title: Migration to App Service Environment v3
+description: Overview of the migration process to App Service Environment v3
++ Last updated : 1/17/2022+++
+# Migration to App Service Environment v3
+
+> [!IMPORTANT]
+> This article describes a feature that is currently in preview. Use this feature with dev environments first, before migrating any production environments, to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
+>
+
+App Service can now migrate your App Service Environment (ASE) v2 to an [App Service Environment v3](overview.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+
+## Supported scenarios
+
+At this time, App Service Environment migrations to v3 support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:
+
+- West Central US
+- Canada Central
+- Canada East
+- UK South
+- Germany West Central
+- East Asia
+- Australia East
+- Australia Southeast
+
+You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.
+
+### Preview limitations
+
+For this version of the preview, your new App Service Environment will be placed in the existing subnet that was used for your old environment. An internet facing App Service Environment can't be migrated to an ILB App Service Environment v3, and vice versa.
+
+Note that App Service Environment v3 doesn't currently support the following features that you may be using with your current App Service Environment. If you require any of these features, don't migrate until they're supported.
+
+- Sending SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25.
+- Deploying your apps with FTP
+- Using remote debug with your apps
+- Monitoring your traffic with Network Watcher or NSG Flow
+- Configuring an IP-based TLS/SSL binding with your apps
+
+The following scenarios aren't supported in this version of the preview.
+
+- App Service Environment v2 -> Zone Redundant App Service Environment v3
+- App Service Environment v1
+- App Service Environment v1 -> Zone Redundant App Service Environment v3
+- ILB App Service Environment v2 with a custom domain suffix
+- ILB App Service Environment v1 with a custom domain suffix
+- Internet facing App Service Environment v2 with IP SSL addresses
+- Internet facing App Service Environment v1 with IP SSL addresses
+- [Zone pinned](zone-redundancy.md) App Service Environment v2
+- App Service Environment in a region not listed above
+
+The App Service platform will review your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you won't be able to migrate at this time.
+
+## Overview of the migration process
+
+Migration consists of a series of steps that must be followed in order. Key points are given below for a subset of the steps. It's important to understand what will happen during these steps and how your environment and apps will be impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-migrate.md).
+
+> [!NOTE]
+> For this version of the preview, migration must be carried out using Azure REST API calls.
+>
+
+### Delegate your App Service Environment subnet
+
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. If the App Service Environment's subnet isn't delegated or it's delegated to a different resource, migration will fail.
+
+### Generate IP addresses for your new App Service Environment v3
+
+The platform will create the [new inbound IP (if you're migrating an internet facing App Service Environment) and the new outbound IP](networking.md#addresses). While these IPs are being created, activity with your existing App Service Environment won't be interrupted; however, you won't be able to scale or make changes to your existing environment. This process will take about 5 minutes to complete.
+
+When completed, you'll be given the new IPs that will be used by your future App Service Environment v3. These new IPs have no effect on your existing environment. The IPs used by your existing environment will continue to be used up until your existing environment is shut down during the full migration step.
+
+### Update dependent resources with new IPs
+
+Once the new IPs are created, you'll have the new default outbound to the internet public addresses so you can adjust any external firewalls, DNS routing, network security groups, and so on, in preparation for the migration. For public internet facing App Service Environment, you'll also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
+
+### Full migration
+
+After updating all dependent resources with your new IPs, you should continue with full migration as soon as possible. It's recommended that you move on within one week.
+
+During full migration, the following events will occur:
+
+- The existing App Service Environment is shut down and replaced by the new App Service Environment v3
+- All App Service plans in the App Service Environment are converted from Isolated to Isolated v2
+- All of the apps that are on your App Service Environment are temporarily down. You should expect about one hour of downtime.
+ - If you can't support downtime, see [migration-alternatives](migration-alternatives.md#guidance-for-manual-migration)
+- The public addresses that are used by the App Service Environment will change to the IPs identified previously
+
+As in the IP generation step, you won't be able to scale or modify your App Service Environment or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment will be running on the new App Service Environment v3.
+
+> [!NOTE]
+> Due to the conversion of App Service plans from Isolated to Isolated v2, your apps may be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You'll have the opportunity to [scale your environment](../manage-scale-up.md) as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
+>
+
+## Pricing
+
+There's no cost to migrate your App Service Environment. You'll stop being charged for your previous App Service Environment as soon as it shuts down during the full migration process, and you'll begin getting charged for your new App Service Environment v3 as soon as it's deployed. For more information about App Service Environment v3 pricing, see the [pricing details](overview.md#pricing).
+
+## Migration feature limitations
+
+There are no plans for the migration feature to support App Service Environment v1 within a classic VNet. See [migration alternatives](migration-alternatives.md) if your App Service Environment falls into this category. Also, you won't be able to migrate if your App Service Environment is in an unhealthy or suspended state.
+
+## Frequently asked questions
+
+- **What if migrating my App Service Environment is not currently supported?**
+  You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see [migration alternatives](migration-alternatives.md).
+- **Will I experience downtime during the migration?**
+ Yes, you should expect about one hour of downtime during the full migration step so plan accordingly. If downtime isn't an option for you, see [migration alternatives](migration-alternatives.md).
+- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?**
+ No, all of your apps running on the old environment will be automatically migrated to the new environment and run like before. No user input is needed.
+- **What if my App Service Environment has a custom domain suffix?**
+  You won't be able to migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see [migration alternatives](migration-alternatives.md).
+- **What if my App Service Environment is zone pinned?**
+ Zone pinned App Service Environment is currently not a supported scenario for migration. When supported, zone pinned App Service Environments will be migrated to zone redundant App Service Environment v3.
+- **What properties of my App Service Environment will change?**
+ You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+- **What happens if migration fails or there is an unexpected issue during the migration?**
+ If there's an unexpected issue, support teams will be on hand. It's recommended to migrate dev environments before touching any production environments.
+- **What happens to my old App Service Environment?**
+ If you decide to migrate an App Service Environment, the old environment gets shut down and deleted and all of your apps are migrated to a new environment. Your old environment will no longer be accessible.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Migrate App Service Environment v2 to App Service Environment v3](how-to-migrate.md)
+
+> [!div class="nextstepaction"]
+> [Migration Alternatives](migration-alternatives.md)
+
+> [!div class="nextstepaction"]
+> [App Service Environment v3 Networking](networking.md)
+
+> [!div class="nextstepaction"]
+> [Using an App Service Environment v3](using.md)
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/migration-alternatives.md
+
+ Title: Alternative methods for migrating to App Service Environment v3
+description: Migrate to App Service Environment v3 Without Using the Migration Feature
++ Last updated : 1/17/2022++
+# Migrate to App Service Environment v3 without using the migration feature
+
+> [!NOTE]
+> The App Service Environment v3 [migration feature](migrate.md) is now available in preview for a set of supported environment configurations. Consider that feature, which provides an automated migration path to [App Service Environment v3](overview.md).
+>
+
+If you're currently using App Service Environment (ASE) v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios). If your environment isn't currently supported by the migration feature, you can wait for support if your scenario is listed in the [upcoming supported scenarios](migrate.md#preview-limitations). Otherwise, you can choose to use one of the alternative migration options given below.
+
+If your App Service Environment [won't be supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the alternative methods to migrate to App Service Environment v3.
+
+## Prerequisites
+
+Scenario: You have an existing app running on App Service Environment v1 or App Service Environment v2, and you need that app to run on an App Service Environment v3.
+
+For any migration method that doesn't use the [migration feature](migrate.md), you'll need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that will involve new (and for internet-facing environments, additional) IP addresses. You'll need to update any infrastructure that relies on these IPs.
+
+Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) on the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment (15 minutes), create the new App Service Environment v3 (30 minutes), configure any infrastructure and connected resources to work with the new environment (your responsibility), and deploy your apps onto the new environment (application deployment, type, and quantity dependent).
+
+### Checklist before migrating apps
+
+- [Create an App Service Environment v3](creation.md)
+- After creating the new environment, update any networking dependencies with the IP addresses associated with the new environment
+- Plan for downtime (if applicable)
+- Decide on a process for recreating your apps in your new environment
+
+## Isolated v2 App Service plans
+
+App Service Environment v3 uses Isolated v2 App Service plans that are priced and sized differently from Isolated plans. Review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/) to understand how your new environment will need to be sized and scaled to ensure appropriate capacity. There's no difference in how you create App Service plans for App Service Environment v3 compared to previous versions.
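As an illustration only (the resource names are hypothetical, and an existing App Service Environment v3 named *myASEv3* is assumed), creating an Isolated v2 plan with the Azure CLI looks like this:

```azurecli
# Hypothetical names; creates a one-instance I1v2 plan inside an existing App Service Environment v3.
az appservice plan create \
    --resource-group myResourceGroup \
    --name myIsolatedV2Plan \
    --app-service-environment myASEv3 \
    --sku I1v2
```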
+
+## Back up and restore
+
+The [back up](../manage-backup.md) and [restore](../web-sites-restore.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [requirements and restrictions](../manage-backup.md#requirements-and-restrictions) of this feature.
+
+The step-by-step instructions in the current documentation for [back up](../manage-backup.md) and [restore](../web-sites-restore.md) should be sufficient to allow you to use this feature. When restoring, the **Storage** option lets you select any backup ZIP file from any existing Azure Storage account container in your subscription. A sample of a restore configuration is given below.
+
+![back up and restore sample](./media/migration/back-up-restore-sample.png)
+
+|Benefits |Limitations |
+|||
+|Quick - should only take 5-10 minutes per app |Support is limited to [certain database types](../manage-backup.md#what-gets-backed-up) |
+|Multiple apps can be restored at the same time (restoration needs to be configured for each app individually) |Old and new environments as well as supporting resources (for example apps, databases, storage accounts and containers) must all be in the same subscription |
+|In-app MySQL databases are automatically backed up without any configuration |Backups can be up to 10 GB of app and database content, up to 4 GB of which can be the database backup. If the backup size exceeds this limit, you get an error. |
+|Can restore the app to a snapshot of a previous state |Using a [firewall enabled storage account](../../storage/common/storage-network-security.md) as the destination for your backups isn't supported |
+|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Using a [private endpoint enabled storage account](../../storage/common/storage-private-endpoints.md) for backup and restore isn't supported |
+|Can create empty web apps to restore to in your new environment before you start restoring to speed up the process | |
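If you want to script the backup side of this approach, a minimal Azure CLI sketch follows; the app name, resource group, and SAS-addressed container URL are placeholders:

```azurecli
# Hypothetical values; backs up an app to a storage container addressed by a SAS URL.
az webapp config backup create \
    --resource-group myResourceGroup \
    --webapp-name myOldApp \
    --backup-name pre-migration-backup \
    --container-url "https://<account>.blob.core.windows.net/<container>?<sas-token>"
```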
+
+## Clone your app to an App Service Environment v3
+
+[Cloning your apps](../app-service-web-app-cloning.md) is another feature that can be used to get your **Windows** apps onto your App Service Environment v3. There are limitations with cloning apps. These limitations are the same as those for the App Service Backup feature, see [Back up an app in Azure App Service](../manage-backup.md#requirements-and-restrictions).
+
+> [!NOTE]
+> Cloning apps is supported on Windows App Service only.
+>
+
+This solution is recommended for users that are using Windows App Service and can't migrate using the [migration feature](migrate.md). You'll need to set up your new App Service Environment v3 before cloning any apps. Cloning an app can take up to 30 minutes to complete. Cloning can be done using PowerShell as described in the [documentation](../app-service-web-app-cloning.md#cloning-an-existing-app-to-an-app-service-environment) or using the Azure portal as described below.
+
+To clone an app using the [Azure portal](https://www.portal.azure.com), navigate to your existing App Service and select **Clone App** under **Development Tools**. Fill in the required fields using the details for your new App Service Environment v3.
+
+1. Select an existing or create a new **Resource Group**
+1. Give your app a **Name**. This name can be the same as the old app, but note that the site's default URL on the new environment will be different. You'll need to update any custom DNS or connected resources to point to the new URL.
+1. Use your App Service Environment v3 name for **Region**
+1. Choose whether or not to clone your deployment source
+1. You can use an existing Windows **App Service plan** from your new environment if you created one already, or create a new one. The available Windows App Service plans in your new App Service Environment v3, if any, will be listed in the dropdown.
+1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 pricing](overview.md#pricing).
+
+![clone sample](./media/migration/portal-clone-sample.png)
+
+|Benefits |Limitations |
+|||
+|Can be automated using PowerShell |Only supported on Windows App Service |
+|Multiple apps can be cloned at the same time (cloning needs to be configured for each app individually or using a script) |Support is limited to [certain database types](../manage-backup.md#what-gets-backed-up) |
+|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Old and new environments as well as supporting resources (for example apps, databases, storage accounts and containers) must all be in the same subscription |
+
+## Manually create your apps on an App Service Environment v3
+
+If the above features don't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. At this time, all deployment methods except FTP are supported on App Service Environment v3. You don't need to make updates when you deploy your apps to your new environment unless you want to make changes or take advantage of App Service Environment v3's dedicated features.
+
+You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in your new environment. To export a template for just your app, head over to your App Service and go to **Export template** under **Automation**.
+
+![export from toc](./media/migration/export-toc.png)
+
+You can also export templates for multiple resources directly from your resource group by going to your resource group, selecting the resources you want a template for, and then selecting **Export template**.
+
+![export template sample](./media/migration/export-template-sample.png)
+
+The following initial changes to your Azure Resource Manager templates are required to get your apps onto your App Service Environment v3:
+
+- Update SKU parameters for App Service plan to an Isolated v2 plan as shown below if creating a new plan
+
+ ```json
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2021-02-01",
+ "name": "[parameters('serverfarm_name')]",
+ "location": "East US",
+ "sku": {
+ "name": "I1v2",
+ "tier": "IsolatedV2",
+ "size": "I1v2",
+ "family": "Iv2",
+ "capacity": 1
+ },
+ ```
+
+- Update the App Service plan (`serverfarm`) parameter for each app so that it points to the plan associated with your App Service Environment v3
+- Update the hosting environment profile (`hostingEnvironmentProfile`) parameter to the new App Service Environment v3 resource ID
+- An Azure Resource Manager template export includes all properties exposed by the resource providers for the given resources. Remove all non-required properties, such as those that point to the domain of the old app. For example, your `sites` resource could be simplified as follows:
+
+ ```json
+ "type": "Microsoft.Web/sites",
+ "apiVersion": "2021-02-01",
+ "name": "[parameters('site_name')]",
+ "location": "East US",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', parameters('serverfarm_name'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('serverfarm_name'))]",
+ "siteConfig": {
+ "linuxFxVersion": "NODE|14-lts"
+ },
+ "hostingEnvironmentProfile": {
+ "id": "[parameters('hostingEnvironments_externalid')]"
+ }
+ }
+ ```
+
+Other changes may be required depending on how your app is configured.
+
+Azure Resource Manager templates can be [deployed](../deploy-complex-application-predictably.md) using multiple methods, including the Azure portal, Azure CLI, and PowerShell.
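For example, a deployment with the Azure CLI is a single command; the file names here are placeholders for your exported and edited template:

```azurecli
# Hypothetical file names; deploys the edited template into the target resource group.
az deployment group create \
    --resource-group myResourceGroup \
    --template-file template.json \
    --parameters @parameters.json
```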
+
+## Guidance for manual migration
+
+The [migration feature](migrate.md) automates the migration to App Service Environment v3 and at the same time transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If you're in a position where you can't have any downtime, the recommendation is to use one of the manual options to recreate your apps in an App Service Environment v3.
+
+You can distribute traffic between your old and new environment using an [Application Gateway](../networking/app-gateway-with-service-endpoints.md). If you're using an Internal Load Balancer (ILB) App Service Environment, see the [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-ilb-ase) and [create an Azure Application Gateway](integrate-with-application-gateway.md) with an extra backend pool to distribute traffic between your environments. For internet facing App Service Environments, see these [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-external-ase). You can also use services like [Azure Front Door](../../frontdoor/quickstart-create-front-door.md), [Azure Content Delivery Network (CDN)](../../cdn/cdn-add-to-web-app.md), and [Azure Traffic Manager](../../cdn/cdn-traffic-manager.md) to distribute traffic between environments. Using these services allows for testing of your new environment in a controlled manner and allows you to move to your new environment at your own pace.
+
+Once your migration and any testing with your new environment is complete, delete your old App Service Environment, the apps that are on it, and any supporting resources that you no longer need. You'll continue to be charged for any resources that haven't been deleted.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [App Service Environment v3 Networking](networking.md)
+
+> [!div class="nextstepaction"]
+> [Using an App Service Environment v3](using.md)
+
+> [!div class="nextstepaction"]
+> [Integrate your ILB App Service Environment with the Azure Application Gateway](integrate-with-application-gateway.md)
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-domain-ssl-certificates.md
This problem occurs for one of the following reasons:
**Do I have to configure my custom domain for my website once I buy it?**
-When you purchase a domain from the Azure portal, the App Service application is automatically configured to use that custom domain. You don't have to take any additional steps. For more information, watch [Azure App Service Self Help: Add a Custom Domain Name](https://channel9.msdn.com/blogs/Azure-App-Service-Self-Help/Add-a-Custom-Domain-Name) on Channel9.
+When you purchase a domain from the Azure portal, the App Service application is automatically configured to use that custom domain. You don't have to take any additional steps. For more information, watch Azure App Service Self Help: Add a Custom Domain Name on Channel9.
**Can I use a domain purchased in the Azure portal to point to an Azure VM instead?**
app-service Troubleshoot Performance Degradation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-performance-degradation.md
Uptime is monitored using HTTP response codes, and response time is measured in
To set it up, see [Monitor apps in Azure App Service](web-sites-monitor.md).
-Also, see [Keeping Azure Web Sites up plus Endpoint Monitoring - with Stefan Schackow](https://channel9.msdn.com/Shows/Azure-Friday/Keeping-Azure-Web-Sites-up-plus-Endpoint-Monitoring-with-Stefan-Schackow) for a video on endpoint monitoring.
+Also, see Keeping Azure Web Sites up plus Endpoint Monitoring - with Stefan Schackow for a video on endpoint monitoring.
#### Application performance monitoring using Extensions

You can also monitor your application performance by using a *site extension*.
application-gateway Url Route Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/url-route-overview.md
description: This article provides an overview of the Azure Application Gateway
Previously updated : 01/12/2022 Last updated : 01/14/2022
In the following example, Application Gateway is serving traffic for contoso.com
Requests for http\://contoso.com/video/* are routed to VideoServerPool, and http\://contoso.com/images/* are routed to ImageServerPool. DefaultServerPool is selected if none of the path patterns match.

> [!IMPORTANT]
-> For both the v1 and v2 SKUs, rules are processed in the order they are listed in the portal. The best practice when you create path rules is to have the least specific path (the ones with wildcards) at the end. If wildcards are on the top, then they take priority even if there is more specific match in subsequent path rules.
+> For both the v1 and v2 SKUs, rules are processed in the order they are listed in the portal. The best practice when you create path rules is to put the least specific paths (the ones with wildcards) at the end. If wildcards are at the top, they take priority even if there's a more specific match in subsequent path rules.
>
-> If a basic listener is listed first and matches an incoming request, it gets processed by that listener. However, it is highly recommended to configure multi-site listeners first prior to configuring a basic listener. This ensures that traffic gets routed to the right back end.
+> If a basic listener is listed first and matches an incoming request, it gets processed by that listener. However, it's highly recommended to configure multi-site listeners before configuring a basic listener. This ensures that traffic gets routed to the right back end.
## UrlPathMap configuration element
The urlPathMap element is used to specify Path patterns to back-end server pool
### PathPattern
-PathPattern is a list of path patterns to match. Each path must start with / and may use \* as a wildcard character. The string fed to the path matcher does not include any text after the first ? or #, and those chars are not allowed here. Otherwise, any characters allowed in a URL are allowed in PathPattern.
+PathPattern is a list of path patterns to match. Each path must start with / and may use \* as a wildcard character. The string fed to the path matcher doesn't include any text after the first `?` or `#`, and those characters aren't allowed here. Otherwise, any characters allowed in a URL are allowed in PathPattern.
Path rules are case insensitive.
|`/Repos/*/Comments/*` |no|
|`/CurrentUser/Comments/*` |yes|
+#### Examples
+Path-based rule processing when wildcard (*) is used:
-You can check out a [Resource Manager template using URL-based routing](https://azure.microsoft.com/resources/templates/application-gateway-url-path-based-routing) for more information.
+**Example 1:**
+
+`/master-dev* to contoso.com`
+
+`/master-dev/api-core/ to fabrikam.com`
+
+`/master-dev/* to microsoft.com`
+
+Because the wildcard path `/master-dev*` is present above more granular paths, all client requests containing `/master-dev` are routed to contoso.com, including the specific `/master-dev/api-core/`. To ensure that the client requests are routed to the appropriate paths, it's critical to have the granular paths above wildcard paths.
+
+**Example 2:**
+
+`/ (default) to contoso.com`
+
+`/master-dev/api-core/ to fabrikam.com`
+
+`/master-dev/api to bing.com`
+
+`/master-dev/* to microsoft.com`
+
+All client requests with the path pattern `/master-dev/*` are processed in the order listed. If there's no match within the path rules, the request is routed to the default target.
+
+For more information, see [Resource Manager template using URL-based routing](https://azure.microsoft.com/resources/templates/application-gateway-url-path-based-routing).
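
To make the element concrete, here's a rough JSON sketch of a urlPathMap as it might appear in an ARM template, using the pool names from the example above. The names and resource IDs are placeholders, and the exact property set should be verified against the template linked above:

```json
"urlPathMaps": [
  {
    "name": "urlPathMap1",
    "properties": {
      "defaultBackendAddressPool": { "id": "<DefaultServerPool resource ID>" },
      "defaultBackendHttpSettings": { "id": "<HTTP settings resource ID>" },
      "pathRules": [
        {
          "name": "videoPathRule",
          "properties": {
            "paths": [ "/video/*" ],
            "backendAddressPool": { "id": "<VideoServerPool resource ID>" },
            "backendHttpSettings": { "id": "<HTTP settings resource ID>" }
          }
        },
        {
          "name": "imagePathRule",
          "properties": {
            "paths": [ "/images/*" ],
            "backendAddressPool": { "id": "<ImageServerPool resource ID>" },
            "backendHttpSettings": { "id": "<HTTP settings resource ID>" }
          }
        }
      ]
    }
  }
]
```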
## PathBasedRouting rule
applied-ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/compose-custom-models.md
Form Recognizer uses the [Layout](concept-layout.md) API to learn the expected s
[Get started with Train with labels](label-tool.md)
-> [!VIDEO https://channel9.msdn.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
## Create a composed model
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/label-tool.md
keywords: document processing
In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom document processing model with manually labeled data.
-> [!VIDEO https://channel9.msdn.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player]
## Prerequisites
azure-arc Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-overview.md
To learn more about these capabilities, watch these introductory videos.
### Azure Arc-enabled SQL Managed Instance - indirect connected mode
-> [!VIDEO https://channel9.msdn.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-disconnected-mode/player?format=ny]
### Azure Arc-enabled SQL Managed Instance - direct connected mode
-> [!VIDEO https://channel9.msdn.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Inside-Azure-for-IT/Azure-Arcenabled-data-services-in-connected-mode/player?format=ny]
## Next steps
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/overview.md
Currently, the following Azure Arc-enabled data services are available:
For an introduction to how Azure Arc-enabled data services supports your hybrid work environment, see this introductory video:
-> [!VIDEO https://channel9.msdn.com/Shows//Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows//Inside-Azure-for-IT/Choose-the-right-data-solution-for-your-hybrid-environment/player?format=ny]
## Always current
azure-arc What Is Azure Arc Enabled Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/what-is-azure-arc-enabled-postgres-hyperscale.md
With the Direct connectivity mode offered by Azure Arc-enabled data services you
- The scale-out and scale-in operations are not automatic. They are controlled by the users. Users may script these operations and automate the execution of those scripts. Not all workloads can benefit from scaling out. Read further details on this topic as suggested in the "Next steps" section.

**To learn more about these capabilities, you can also refer to this Data Exposed episode:**
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/What-is-Azure-Arc-Enabled-PostgreSQL-Hyperscale--Data-Exposed/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Data-Exposed/What-is-Azure-Arc-Enabled-PostgreSQL-Hyperscale--Data-Exposed/player?format=ny]
## Roles and responsibilities: Azure managed services (Platform as a service (PaaS)) _vs._ Azure Arc-enabled data services
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
Azure Arc-enabled servers depend on the following Azure resource providers in yo
* **Microsoft.HybridCompute**
* **Microsoft.GuestConfiguration**
+* **Microsoft.HybridConnectivity**
If they are not registered, you can register them using the following commands:
Login-AzAccount
Set-AzContext -SubscriptionId [subscription you want to onboard]
Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration
+Register-AzResourceProvider -ProviderNamespace Microsoft.HybridConnectivity
```
Azure CLI:
az account set --subscription "{Your Subscription Name}"
az provider register --namespace 'Microsoft.HybridCompute'
az provider register --namespace 'Microsoft.GuestConfiguration'
+az provider register --namespace 'Microsoft.HybridConnectivity'
```

You can also register the resource providers in the Azure portal by following the steps under [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal).
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
To configure geo-replication between two caches, the following prerequisites mus
- Both caches are in the same Azure subscription.
- The secondary linked cache is either the same cache size or a larger cache size than the primary linked cache.
- Both caches are created and in a running state.
+- Neither cache can have more than one replica.
> [!NOTE]
> Data transfer between Azure regions will be charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
After geo-replication is configured, the following restrictions apply to your li
The primary linked cache remains available for use during the linking process. The secondary linked cache isn't available until the linking process completes.
+> [!NOTE]
+> Geo-replication can be enabled for this cache if you scale it to the 'Premium' pricing tier and disable data persistence. This feature is not available at this time when using extra replicas.
## Remove a geo-replication link

1. To remove the link between two caches and stop geo-replication, click **Unlink caches** from **Geo-replication** on the left.
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
The following list contains answers to commonly asked questions about Azure Cach
- [Can I enable persistence on a previously created cache?](#can-i-enable-persistence-on-a-previously-created-cache)
- [Can I enable AOF and RDB persistence at the same time?](#can-i-enable-aof-and-rdb-persistence-at-the-same-time)
+- [How does persistence work with geo-replication?](#how-does-persistence-work-with-geo-replication)
- [Which persistence model should I choose?](#which-persistence-model-should-i-choose)
- [What happens if I've scaled to a different size and a backup is restored that was made before the scaling operation?](#what-happens-if-ive-scaled-to-a-different-size-and-a-backup-is-restored-that-was-made-before-the-scaling-operation)
- [Can I use the same storage account for persistence across two different caches?](#can-i-use-the-same-storage-account-for-persistence-across-two-different-caches)
Yes, Redis persistence can be configured both at cache creation and on existing
No, you can enable RDB or AOF, but not both at the same time.
+### How does persistence work with geo-replication?
+
+If you enable data persistence, geo-replication cannot be enabled for your premium cache.
### Which persistence model should I choose?

AOF persistence saves every write to a log, which has a significant effect on throughput. Compare that with RDB persistence, which saves backups based on the configured backup interval, with minimal effect on performance. Choose AOF persistence if your primary goal is to minimize data loss and you can handle a lower throughput for your cache. Choose RDB persistence if you wish to maintain optimal throughput on your cache but still want a mechanism for data recovery.
After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the b
Learn more about Azure Cache for Redis features. -- [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
+- [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-cache-for-redis Cache Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-troubleshoot-connectivity.md
In this article, we provide troubleshooting help for connecting your client appl
- [Kubernetes hosted applications](#kubernetes-hosted-applications)
- [Linux-based client application](#linux-based-client-application)
- [Continuous connectivity issues](#continuous-connectivity)
- - [Azure Cache for Redis CLI](#azure-cache-for-redis-cli)
- - [PSPING](#psping)
+ - [Test connectivity using _redis-cli_](#test-connectivity-using-redis-cli)
+ - [Test connectivity using PSPING](#test-connectivity-using-psping)
- [Virtual network configuration](#virtual-network-configuration)
- [Private endpoint configuration](#private-endpoint-configuration)
- [Firewall rules](#third-party-firewall-or-external-proxy)
Using optimistic TCP settings in Linux might cause client applications to experi
## Continuous connectivity
-If your application can't maintain a continuous connection to your Azure Cache for Redis, it's possible some configuration on the cache isn't set up correctly. The following sections offer suggestions on how to make sure your cache is configured correctly.
+If your application can't connect to your Azure Cache for Redis, it's possible some configuration on the cache isn't set up correctly. The following sections offer suggestions on how to make sure your cache is configured correctly.
-### Azure Cache for Redis CLI
+### Test connectivity using _redis-cli_
-Test connectivity using Azure Cache for Redis CLI. For more information on CLI, [Use the Redis command-line tool with Azure Cache for Redis](cache-how-to-redis-cli-tool.md).
+Test connectivity using _redis-cli_. For more information on redis-cli, see [Use the Redis command-line tool with Azure Cache for Redis](cache-how-to-redis-cli-tool.md).
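
As a quick illustration (not taken from the linked article), with a recent redis-cli (6.0 or later, which supports `--tls`) you might test the TLS endpoint on port 6380 like this; the cache name and access key are placeholders:

```console
redis-cli -h contosocache.redis.cache.windows.net -p 6380 -a <access-key> --tls PING
```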
-### PSPING
+### Test connectivity using PSPING
-If Azure Cache for Redis CLI is unable to connect, you can test connectivity using `PSPING` in PowerShell.
+If _redis-cli_ is unable to connect, you can test connectivity using `PSPING` in PowerShell.
```azurepowershell-interactive
psping -q <cache DNS endpoint>:<Port Number>
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-overview.md
Durable Functions is developed in collaboration with Microsoft Research. As a re
The following video highlights the benefits of Durable Functions:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Durable-Functions-in-Azure-Functions/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Durable-Functions-in-Azure-Functions/player]
For a more in-depth discussion of Durable Functions and the underlying technology, see the following video (it's focused on .NET, but the concepts also apply to other supported languages):
-> [!VIDEO https://channel9.msdn.com/Events/dotnetConf/2018/S204/player]
+> [!VIDEO https://docs.microsoft.com/Events/dotnetConf/2018/S204/player]
Because Durable Functions is an advanced extension for [Azure Functions](../functions-overview.md), it isn't appropriate for all applications. For a comparison with other Azure orchestration technologies, see [Compare Azure Functions and Azure Logic Apps](../functions-compare-logic-apps-ms-flow-webjobs.md#compare-azure-functions-and-azure-logic-apps).
azure-functions Durable Functions Perf And Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-perf-and-scale.md
Activity functions have all the same behaviors as regular queue-triggered functi
Entity functions are also executed on a single thread and operations are processed one-at-a-time. However, entity functions do not have any restrictions on the type of code that can be executed.
+## Function timeouts
+
+Activity, orchestrator, and entity functions are subject to the same [function timeouts](../functions-scale.md#timeout) as all Azure Functions. As a general rule, Durable Functions treats function timeouts the same way as unhandled exceptions thrown by the application code. For example, if an activity times out, the function execution is recorded as a failure, and the orchestrator is notified and handles the timeout just like any other exception: retries take place if specified by the call, or an exception handler may be executed.
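+
+For reference, the host-wide timeout is the standard Azure Functions `functionTimeout` setting in host.json. A minimal sketch with an illustrative 10-minute value:
+
+```json
+{
+  "version": "2.0",
+  "functionTimeout": "00:10:00"
+}
+```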
## Concurrency throttles

Azure Functions supports executing multiple functions concurrently within a single app instance. This concurrent execution helps increase parallelism and minimizes the number of "cold starts" that a typical app will experience over time. However, high concurrency can exhaust per-VM system resources such as network connections or available memory. Depending on the needs of the function app, it may be necessary to throttle the per-instance concurrency to avoid the possibility of running out of memory in high-load situations.

Activity, orchestrator, and entity function concurrency limits can be configured in the **host.json** file. The relevant settings are `durableTask/maxConcurrentActivityFunctions` for activity functions and `durableTask/maxConcurrentOrchestratorFunctions` for both orchestrator and entity functions. These settings control the maximum number of orchestrator, entity, or activity functions that can be loaded into memory concurrently.
+> [!NOTE]
+> The concurrency throttles only apply locally, to limit what is currently being processed on one individual machine. Thus, these throttles do not limit the total throughput of the system. Quite to the contrary, they can actually support proper scale out, as they prevent individual machines from taking on too much work at once. If this leads to unprocessed work accumulating in the queues, the autoscaler adds more machines. The total throughput of the system thus scales out as needed.
+
+> [!NOTE]
+> The `durableTask/maxConcurrentOrchestratorFunctions` limit applies only to the act of processing new events or operations. Orchestrations or entities that are idle waiting for events or operations do not count towards the limit.
### Functions 2.0

```json
In all other situations, there is typically no observable performance improvemen
> [!NOTE]
> These settings should only be used after an orchestrator function has been fully developed and tested. The default aggressive replay behavior can be useful for detecting [orchestrator function code constraints](durable-functions-code-constraints.md) violations at development time, and is therefore disabled by default.
-### Entity function unloading
+## Entity operation batching
+
+To improve performance and cost, entity operations are executed in batches. Each batch is billed as a single function execution.
-Entity functions process up to 20 operations in a single batch. As soon as an entity finishes processing a batch of operations, it persists its state and unloads from memory. You can delay the unloading of entities from memory using the extended sessions setting. Entities continue to persist their state changes as before, but remain in memory for the configured period of time to reduce the number of loads from storage. This reduction of loads from storage can improve the overall throughput of frequently accessed entities.
+By default, the maximum batch size is 50 (for consumption plans) and 5000 (for all other plans). The maximum batch size can also be configured in the [host.json](durable-functions-bindings.md#host-json) file. If the maximum batch size is 1, batching is effectively disabled.
+
+> [!NOTE]
+> If individual entity operations take a long time to execute, it may be beneficial to limit the maximum batch size to reduce the risk of [function timeouts](#function-timeouts), in particular on consumption plans.
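+
+A minimal host.json sketch of this setting; the `maxEntityOperationBatchSize` property name is an assumption based on the host.json reference linked above, so verify it there before relying on it:
+
+```json
+{
+  "version": "2.0",
+  "extensions": {
+    "durableTask": {
+      "maxEntityOperationBatchSize": 50
+    }
+  }
+}
+```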
## Performance targets
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
For secured virtual networks, you will want to allow network security groups (NS
|US Gov Virginia|13.72.49.126 </br> 13.72.55.55 </br> 13.72.184.124 </br> 13.72.190.110| 443|
|US Gov Arizona|52.127.3.176 </br> 52.127.3.178| 443|
-For a demo on how to build data-centric solutions on Azure Government using HDInsight, see [Cognitive Services, HDInsight, and Power BI on Azure Government](https://channel9.msdn.com/Blogs/Azure/Cognitive-Services-HDInsight-and-Power-BI-on-Azure-Government).
+For a demo on how to build data-centric solutions on Azure Government using HDInsight, see Cognitive Services, HDInsight, and Power BI on Azure Government.
### [Power BI](/power-bi/service-govus-overview)
-For usage guidance, feature variations, and limitations, see [Power BI for US government customers](/power-bi/admin/service-govus-overview). For a demo on how to build data-centric solutions on Azure Government using Power BI, see [Cognitive Services, HDInsight, and Power BI on Azure Government](https://channel9.msdn.com/Blogs/Azure/Cognitive-Services-HDInsight-and-Power-BI-on-Azure-Government).
+For usage guidance, feature variations, and limitations, see [Power BI for US government customers](/power-bi/admin/service-govus-overview). For a demo on how to build data-centric solutions on Azure Government using Power BI, see Cognitive Services, HDInsight, and Power BI on Azure Government.
### [Power BI Embedded](/azure/power-bi-embedded/)
azure-government Documentation Government Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-developer-guide.md
The [Azure Government video library](https://aka.ms/AzureGovVideos) contains man
## Compliance
-For more information on Azure Government Compliance, refer to the [compliance documentation](./documentation-government-plan-compliance.md) and watch this [video](https://channel9.msdn.com/blogs/Azure-Government/Compliance-on-Azure-Government).
+For more information on Azure Government Compliance, refer to the [compliance documentation](./documentation-government-plan-compliance.md).
### Azure Blueprints
azure-government Documentation Government Overview Jps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-overview-jps.md
When they are properly planned and secured, cloud services can deliver powerful
From devices to the cloud, Microsoft puts privacy and information security first, while increasing productivity for officers in the field and throughout the department. By combining highly secure mobile devices with "anytime-anywhere" access to the cloud, JPS agencies can contribute to ongoing investigations, analyze data, manage evidence, and help protect citizens from threats.
-Other cloud providers treat Criminal Justice Information Systems (CJIS) compliance as a check box, rather than a commitment. At Microsoft, we're committed to providing solutions that meet the applicable CJIS controls, today and in the future. In addition, we extend our commitment to justice and public safety through our <a href="https://news.microsoft.com/presskits/dcu/#sm.0000eqdq0pxj4ex3u272bevclb0uc#KwSv0iLdMkJerFly.97">Digital Crimes Unit</a>, <a href="https://channel9.msdn.com/Blogs/Taste-of-Premier/Satya-Nadella-on-Cybersecurity">Cyber Defense Operations Center</a>, and <a href="https://enterprise.microsoft.com/en-us/industries/government/public-safety/">Worldwide Justice and Public Safety organization</a>.
+Other cloud providers treat Criminal Justice Information Systems (CJIS) compliance as a check box, rather than a commitment. At Microsoft, we're committed to providing solutions that meet the applicable CJIS controls, today and in the future. In addition, we extend our commitment to justice and public safety through our <a href="https://news.microsoft.com/presskits/dcu/#sm.0000eqdq0pxj4ex3u272bevclb0uc#KwSv0iLdMkJerFly.97">Digital Crimes Unit</a>, Cyber Defense Operations Center, and <a href="https://enterprise.microsoft.com/en-us/industries/government/public-safety/">Worldwide Justice and Public Safety organization</a>.
## Next steps

* <a href="https://www.microsoft.com/en-us/TrustCenter/Compliance/CJIS"> Microsoft Trust Center - Criminal Justice Information Services webpage</a>
azure-government Documentation Government Welcome https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-welcome.md
The following video provides a good introduction to Azure Government:
</br>
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Enable-government-missions-in-the-cloud-with-Azure-Government/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Enable-government-missions-in-the-cloud-with-Azure-Government/player]
## Compare Azure Government and global Azure
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/about-azure-maps.md
The following video explains Azure Maps in depth:
</br>
-> [!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Azure-Maps/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps/player?format=ny]
## Map controls
azure-maps Add Heat Map Layer Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/add-heat-map-layer-ios.md
You can use heat maps in many different scenarios, including:
> [!TIP]
> Heat map layers by default render the coordinates of all geometries in a data source. To limit the layer so that it only renders point geometry features, set the `filter` option of the layer to `NSPredicate(format: "%@ == \"Point\"", NSExpression.geometryTypeAZMVariable)`. If you want to include MultiPoint features as well, use `NSCompoundPredicate`.
-[Internet of Things Show - Heat Maps and Image Overlays in Azure Maps](https://channel9.msdn.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny)
+[Internet of Things Show - Heat Maps and Image Overlays in Azure Maps](/shows/internet-of-things-show/heat-maps-and-image-overlays-in-azure-maps/player?format=ny)
## Prerequisites
azure-maps Clustering Point Data Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/clustering-point-data-android-sdk.md
When visualizing many data points on the map, data points may overlap over each
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
## Prerequisites
azure-maps Clustering Point Data Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/clustering-point-data-ios-sdk.md
When visualizing many data points on the map, data points may overlap over each other. The overlap can make the map unreadable and difficult to use. Clustering point data is the process of combining point data that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. When you work with a large number of data points, use clustering to improve your user experience.
-[Internet of Things Show - Clustering point data in Azure Maps](https://channel9.msdn.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny)
+[Internet of Things Show - Clustering point data in Azure Maps](/shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny)
## Prerequisites
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/clustering-point-data-web-sdk.md
When visualizing many data points on the map, data points may overlap over each
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
## Enabling clustering on a data source
azure-maps Data Driven Style Expressions Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/data-driven-style-expressions-android-sdk.md
This video provides an overview of data-driven styling in Azure Maps.
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
## Data expressions
azure-maps Data Driven Style Expressions Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/data-driven-style-expressions-ios-sdk.md
Using this approach can make it easy to reuse style expressions between mobile a
This video provides an overview of data-driven styling in Azure Maps.
->[Internet of Things Show - Data-Driven Styling with Azure Maps](https://channel9.msdn.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny)
+>[Internet of Things Show - Data-Driven Styling with Azure Maps](/shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny)
### Constant values
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/data-driven-style-expressions-web-sdk.md
This video provides an overview of data-driven styling in the Azure Maps Web SDK
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
Expressions are represented as JSON arrays. The first element of an expression in the array is a string that specifies the name of the expression operator. For example, "+" or "case". The next elements (if any) are the arguments to the expression. Each argument is either a literal value (a string, number, boolean, or `null`), or another expression array. The following pseudocode defines the basic structure of an expression.
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-request-weather-data.md
This video provides examples for making REST calls to Azure Maps Weather service
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Azure-Maps-Weather-services-for-developers/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-Maps-Weather-services-for-developers/player?format=ny]
## Prerequisites
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-spatial-io-module.md
This video provides an overview of Spatial IO module in the Azure Maps Web SDK.
</br>
-> [!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Easily-integrate-spatial-data-into-the-Azure-Maps/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Easily-integrate-spatial-data-into-the-Azure-Maps/player?format=ny]
> [!WARNING]
> Only use data and services that are from a source you trust, especially if referencing it from another domain. The spatial IO module does take steps to minimize risk; however, the safest approach is to not allow any dangerous data into your application to begin with.
azure-maps Map Add Heat Map Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-heat-map-layer-android.md
You can use heat maps in many different scenarios, including:
</br>
-> [!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
## Prerequisites
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-heat-map-layer.md
You can use heat maps in many different scenarios, including:
</br>
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Heat-Maps-and-Image-Overlays-in-Azure-Maps/player?format=ny]
## Add a heat map layer
azure-monitor Cloudservices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/cloudservices.md
If you have a client mobile app, use [App Center](../app/mobile-center-quickstar
## Exception "method not found" on running in Azure cloud services Did you build for .NET 4.6? .NET 4.6 is not automatically supported in Azure cloud services roles. [Install .NET 4.6 on each role](../../cloud-services/cloud-services-dotnet-install-dotnet.md) before running your app.
-## Video
-
-> [!VIDEO https://channel9.msdn.com/events/Connect/2016/100/player]
## Next steps

* [Configure sending Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md)
azure-monitor Devops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/devops.md
When an alert is raised, Application Insights can automatically create a work it
* [Pricing](./pricing.md) - You can get started for free, and that continues while you're in low volume.
-## Video
-
-> [!VIDEO https://channel9.msdn.com/events/Connect/2016/112/player]
## Next steps

Getting started with Application Insights is easy. The main options are:
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
}
```
+## HTTP headers
+
+Starting from 3.2.5-BETA, you can capture request and response headers on your server (request) telemetry:
+
+```json
+{
+ "preview": {
+ "captureHttpServerHeaders": {
+ "requestHeaders": [
+ "My-Header-A"
+ ],
+ "responseHeaders": [
+ "My-Header-B"
+ ]
+ }
+ }
+}
+```
+
+The header names are case-insensitive.
+
+The examples above will be captured under property names `http.request.header.my_header_a` and
+`http.response.header.my_header_b`.
+
+Similarly, you can capture request and response headers on your client (dependency) telemetry:
+
+```json
+{
+ "preview": {
+ "captureHttpClientHeaders": {
+ "requestHeaders": [
+ "My-Header-C"
+ ],
+ "responseHeaders": [
+ "My-Header-D"
+ ]
+ }
+ }
+}
+```
+
+Again, the header names are case-insensitive, and the examples above will be captured under property names
+`http.request.header.my_header_c` and `http.response.header.my_header_d`.
+
+## HTTP server 4xx response codes
+
+By default, HTTP server requests that result in 4xx response codes are captured as errors.
+
+Starting from version 3.2.5-BETA, you can change this behavior to capture them as success if you prefer:
+
+```json
+{
+ "preview": {
+ "captureHttpServer4xxAsError": false
+ }
+}
+```
## Suppressing specific auto-collected telemetry

Starting from version 3.0.3, specific auto-collected telemetry can be suppressed using these configuration options:
you can configure Application Insights Java 3.x to use an HTTP proxy:
Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties if those are set (and `http.nonProxyHosts` if needed).
+Starting from 3.2.5-BETA, authenticated proxies are supported. You can add `"user"` and `"password"` under `"proxy"` in
+the json above (or if you are using the system properties above, you can add `https.proxyUser` and `https.proxyPassword`
+system properties).
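+
+For example, a sketch of an authenticated proxy configuration in `applicationinsights.json`, following the `"proxy"` structure described above; the host, port, and credentials are placeholders:
+
+```json
+{
+  "proxy": {
+    "host": "myproxy.example.com",
+    "port": 8080,
+    "user": "myuser",
+    "password": "mypassword"
+  }
+}
+```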
## Self-diagnostics

"Self-diagnostics" refers to internal logging from Application Insights Java 3.x.
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/proactive-diagnostics.md
Configuring email notifications for a specific smart detection rule can be done
Alternatively, you can change the configuration using Azure Resource Manager templates. For more information, see [Manage Application Insights smart detection rules using Azure Resource Manager templates](./proactive-arm-config.md).
-## Video
-
-> [!VIDEO https://channel9.msdn.com/events/Connect/2016/112/player]
## Next steps

These diagnostic tools help you inspect the telemetry from your app:
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
This article describes how to enable [SQL insights](sql-insights-overview.md) to
> To enable SQL insights by creating the monitoring profile and virtual machine using a resource manager template, see [Resource Manager template samples for SQL insights](resource-manager-sql-insights.md).

To learn how to enable SQL Insights, you can also refer to this Data Exposed episode.
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/How-to-Set-up-Azure-Monitor-for-SQL-Insights/player?format=ny]
+> [!VIDEO https://docs.microsoft.com/Shows/Data-Exposed/How-to-Set-up-Azure-Monitor-for-SQL-Insights/player?format=ny]
## Create Log Analytics workspace

SQL insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-and-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-ldap-extended-groups.md
na Previously updated : 01/07/2022 Last updated : 01/14/2022 # Configure ADDS LDAP with extended groups for NFS volume access
This article explains the considerations and steps for enabling LDAP with extend
## Steps
-1. The LDAP with extended groups feature is currently in preview. Before using this feature for the first time, you need to register the feature:
-
- 1. Register the feature:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapExtendedGroups
- ```
-
- 2. Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to`Registered`. Wait until the status is `Registered` before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFLdapExtendedGroups
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
-
-2. LDAP volumes require an Active Directory configuration for LDAP server settings. Follow instructions in [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections) and [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection) to configure Active Directory connections on the Azure portal.
+1. LDAP volumes require an Active Directory configuration for LDAP server settings. Follow instructions in [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections) and [Create an Active Directory connection](create-active-directory-connections.md#create-an-active-directory-connection) to configure Active Directory connections on the Azure portal.
> [!NOTE]
> Ensure that you have configured the Active Directory connection settings. A machine account will be created in the organizational unit (OU) that is specified in the Active Directory connection settings. The settings are used by the LDAP client to authenticate with your Active Directory.
-3. Ensure that the Active Directory LDAP server is up and running on the Active Directory.
+2. Ensure that the Active Directory LDAP server is up and running on the Active Directory.
-4. LDAP NFS users need to have certain POSIX attributes on the LDAP server. Set the attributes for LDAP users and LDAP groups as follows:
+3. LDAP NFS users need to have certain POSIX attributes on the LDAP server. Set the attributes for LDAP users and LDAP groups as follows:
* Required attributes for LDAP users: `uid: Alice`,
This article explains the considerations and steps for enabling LDAP with extend
![Active Directory Attribute Editor](../media/azure-netapp-files/active-directory-attribute-editor.png)
-5. If you want to configure an LDAP-integrated NFSv4.1 Linux client, see [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md).
+4. If you want to configure an LDAP-integrated NFSv4.1 Linux client, see [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md).
-6. If your LDAP-enabled volumes use NFSv4.1, follow instructions in [Configure NFSv4.1 domain](azure-netapp-files-configure-nfsv41-domain.md#configure-nfsv41-domain) to configure the `/etc/idmapd.conf` file.
+5. If your LDAP-enabled volumes use NFSv4.1, follow instructions in [Configure NFSv4.1 domain](azure-netapp-files-configure-nfsv41-domain.md#configure-nfsv41-domain) to configure the `/etc/idmapd.conf` file.
You need to set `Domain` in `/etc/idmapd.conf` to the domain that is configured in the Active Directory Connection on your NetApp account. For instance, if `contoso.com` is the configured domain in the NetApp account, then set `Domain = contoso.com`. Then you need to restart the `rpcbind` service on your host or reboot the host.
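
For instance, if `contoso.com` is the configured domain, the relevant part of `/etc/idmapd.conf` would look like the following sketch (the `[General]` section is the standard location for this key):

```config
[General]
Domain = contoso.com
```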
-7. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create an NFS volume. During the volume creation process, under the **Protocol** tab, enable the **LDAP** option.
+6. Follow steps in [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) to create an NFS volume. During the volume creation process, under the **Protocol** tab, enable the **LDAP** option.
![Screenshot that shows Create a Volume page with LDAP option.](../media/azure-netapp-files/create-nfs-ldap.png)
-8. Optional - You can enable local NFS client users not present on the Windows LDAP server to access an NFS volume that has LDAP with extended groups enabled. To do so, enable the **Allow local NFS users with LDAP** option as follows:
+7. Optional - You can enable local NFS client users not present on the Windows LDAP server to access an NFS volume that has LDAP with extended groups enabled. To do so, enable the **Allow local NFS users with LDAP** option as follows:
    1. Click **Active Directory connections**. On an existing Active Directory connection, click the context menu (the three dots `…`), and select **Edit**.
    2. On the **Edit Active Directory settings** window that appears, select the **Allow local NFS users with LDAP** option.

    ![Screenshot that shows the Allow local NFS users with LDAP option](../media/azure-netapp-files/allow-local-nfs-users-with-ldap.png)
+8. <a name="ldap-search-scope"></a>Optional - If you have large topologies, and you use the Unix security style with a dual-protocol volume or LDAP with extended groups, you can use the **LDAP Search Scope** option to avoid "access denied" errors on Linux clients for Azure NetApp Files.
+
+ The **LDAP Search Scope** option is configured through the **[Active Directory Connections](create-active-directory-connections.md#create-an-active-directory-connection)** page.
+
+ To resolve the users and groups from an LDAP server for large topologies, set the values of the **User DN**, **Group DN**, and **Group Membership Filter** options on the Active Directory Connections page as follows:
+
+ * Specify nested **User DN** and **Group DN** in the format of `OU=subdirectory,OU=directory,DC=domain,DC=com`.
+ * Specify **Group Membership Filter** in the format of `(gidNumber=*)`.
+
+ ![Screenshot that shows options related to LDAP Search Scope](../media/azure-netapp-files/ldap-search-scope.png)
## Next steps

* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 01/07/2022 Last updated : 01/14/2022 # Create and manage Active Directory connections for Azure NetApp Files
This setting is configured in the **Active Directory Connections** under **NetAp
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
+ * **LDAP over TLS**
+ See [Configure ADDS LDAP over TLS](configure-ldap-over-tls.md) for information about this option.
+
+ * **LDAP Search Scope**, **User DN**, **Group DN**, and **Group Membership Filter**
+ See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
+ * **Security privilege users** <!-- SMB CA share feature --> You can grant security privilege (`SeSecurityPrivilege`) to AD users or groups that require elevated privilege to access the Azure NetApp Files volumes. The specified AD users or groups will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
* [Create a dual-protocol volume](create-volumes-dual-protocol.md)
* [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
* [Install a new Active Directory forest using Azure CLI](/windows-server/identity/ad-ds/deploy/virtual-dc/adds-on-azure-vm)
+* [Configure ADDS LDAP over TLS](configure-ldap-over-tls.md)
+* [ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
na Previously updated : 01/07/2022 Last updated : 01/14/2022 # Create a dual-protocol volume for Azure NetApp Files
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
The following table describes the name mappings and security styles:
- | Protocol | Security style | Name mapping direction | Permissions applied |
+ | Protocol | Security style | Name-mapping direction | Permissions applied |
|-|-|-|-|
| SMB | `Unix` | Windows to UNIX | UNIX (mode bits or NFSv4.x ACLs) |
| SMB | `Ntfs` | Windows to UNIX | NTFS ACLs (based on Windows SID accessing share) |
- | NFSv3 | `Unix` | None | UNIX (mode bits or NFSv4.x ACLs) <br><br> Note that NFSv4.x ACLs can be applied using an NFSv4.x administrative client and honored by NFSv3 clients. |
+ | NFSv3 | `Unix` | None | UNIX (mode bits or NFSv4.x ACLs) <br><br> NFSv4.x ACLs can be applied using an NFSv4.x administrative client and honored by NFSv3 clients. |
| NFS | `Ntfs` | UNIX to Windows | NTFS ACLs (based on mapped Windows user SID) |
-* If you have large topologies, and you use the `Unix` security style with a dual-protocol volume or LDAP with extended groups, Azure NetApp Files might not be able to access all servers in your topologies. If this situation occurs, contact your account team for assistance. <!-- NFSAAS-15123 -->
+* The LDAP with extended groups feature supports dual-protocol volumes with both [NFSv3 and SMB] and [NFSv4.1 and SMB] using the Unix security style. See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for more information.
+
+* If you have large topologies, and you use the Unix security style with a dual-protocol volume or LDAP with extended groups, you should use the **LDAP Search Scope** option on the Active Directory Connections page to avoid "access denied" errors on Linux clients for Azure NetApp Files. See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for more information.
* You don't need a server root CA certificate for creating a dual-protocol volume. It is required only if LDAP over TLS is enabled.

## Create a dual-protocol volume
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* **Virtual network** Specify the Azure virtual network (VNet) from which you want to access the volume.
- The Vnet you specify must have a subnet delegated to Azure NetApp Files. The Azure NetApp Files service can be accessed only from the same Vnet or from a Vnet that is in the same region as the volume through Vnet peering. You can also access the volume from your on-premises network through Express Route.
+ The VNet you specify must have a subnet delegated to Azure NetApp Files. Azure NetApp Files can be accessed only from the same VNet or from a VNet that is in the same region as the volume through VNet peering. You can also access the volume from your on-premises network through Express Route.
* **Subnet** Specify the subnet that you want to use for the volume. The subnet you specify must be delegated to Azure NetApp Files.
- If you have not delegated a subnet, you can click **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each Vnet, only one subnet can be delegated to Azure NetApp Files.
+ If you have not delegated a subnet, you can click **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.
![Create a volume](../media/azure-netapp-files/azure-netapp-files-new-volume.png)
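
If you prefer the command line, a rough Azure CLI equivalent for creating a delegated subnet might look like this; the resource names and address prefix are placeholders:

```azurecli-interactive
az network vnet subnet create \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --name anf-subnet \
    --address-prefixes 10.0.1.0/28 \
    --delegations Microsoft.NetApp/volumes
```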
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* If you want to enable SMB3 protocol encryption for the dual-protocol volume, select **Enable SMB3 Protocol Encryption**.
- This feature enables encryption for only in-flight SMB3 data. It does not encrypt NFSv3 in-flight data. SMB clients not using SMB3 encryption will not be able to access this volume. Data at rest is encrypted regardless of this setting. See [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption) for additional information.
+ This feature enables encryption for only in-flight SMB3 data. It does not encrypt NFSv3 in-flight data. SMB clients not using SMB3 encryption will not be able to access this volume. Data at rest is encrypted regardless of this setting. See [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption) for more information.
* If you selected NFSv4.1 and SMB for the dual-protocol volume versions, indicate whether you want to enable **Kerberos** encryption for the volume.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na Previously updated : 12/14/2021 Last updated : 01/14/2022
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## January 2022
+
+* [LDAP search scope](configure-ldap-extended-groups.md#ldap-search-scope)
+
+ You might be using the Unix security style with a dual-protocol volume or LDAP with extended groups features in combination with large LDAP topologies. In this case, you might encounter "access denied" errors on Linux clients when interacting with such Azure NetApp Files volumes. You can now use the **LDAP Search Scope** option to specify the LDAP search scope to avoid "access denied" errors.
+
+* [Active Directory Domain Services (ADDS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) now generally available (GA)
+
+ The ADDS LDAP user-mapping with NFS extended groups feature is now generally available. You no longer need to register the feature before using it.
## December 2021

* [NFS protocol version conversion](convert-nfsv3-nfsv41.md) (Preview)
azure-portal Azure Portal Markdown Tile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-markdown-tile.md
You can add a markdown tile to your Azure dashboards to display custom, static c
1. Select **Dashboard** from the Azure portal menu.

1. In the dashboard view, select the dashboard where the custom markdown tile should appear, then select **Edit**.

   ![Screenshot showing dashboard edit view](./media/azure-portal-markdown-tile/azure-portal-dashboard-edit.png)
azure-portal Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quick-create-template.md
Title: Create an Azure portal dashboard by using an Azure Resource Manager templ
description: Learn how to create an Azure portal dashboard by using an Azure Resource Manager template. Previously updated : 03/15/2021 Last updated : 01/13/2022 # Quickstart: Create a dashboard in the Azure portal by using an ARM template
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites

-- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- An existing VM.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Create a virtual machine

The dashboard you create in the next part of this quickstart requires an existing VM. Create a VM by following these steps.
-1. In the Azure portal, select **Cloud Shell**.
+1. In the Azure portal, select **Cloud Shell** from the global controls at the top of the page.
- ![Select Cloud shell from the Azure portal ribbon](media/quick-create-template/cloud-shell.png)
+ :::image type="content" source="media/quick-create-template/cloud-shell.png" alt-text="Screenshot showing the Cloud Shell option in the Azure portal.":::
1. In the **Cloud Shell** window, select **PowerShell**.
- ![Select PowerShell in the terminal window](media/quick-create-template/powershell.png)
+ :::image type="content" source="media/quick-create-template/powershell.png" alt-text="Screenshot showing the PowerShell option in Cloud Shell.":::
1. Copy the following command and enter it at the command prompt to create a resource group.
The dashboard you create in the next part of this quickstart requires an existin
New-AzResourceGroup -Name SimpleWinVmResourceGroup -Location EastUS
```
- ![Copy a command into the command prompt](media/quick-create-template/command-prompt.png)
-
-1. Copy the following command and enter it at the command prompt to create a VM in the resource group.
+1. Next, copy the following command and enter it at the command prompt to create a VM in your new resource group.
```powershell
New-AzVm `
   -ResourceGroupName "SimpleWinVmResourceGroup" `
- -Name "SimpleWinVm" `
+ -Name "myVM1" `
-Location "East US" ``` 1. Enter a username and password for the VM. This is a new user name and password; it's not, for example, the account you use to sign in to Azure. For more information, see [username requirements](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-) and [password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
- The VM deployment now starts and typically takes a few minutes to complete. After deployment completes, move on to the next section.
+ After the VM has been created, move on to the next section.
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-portal-dashboard/). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/azuredeploy.json). One Azure resource is defined in the template, [Microsoft.Portal/dashboards](/azure/templates/microsoft.portal/dashboards) - Create a dashboard in the Azure portal.
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-portal-dashboard/). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/azuredeploy.json). The template defines one Azure resource, a dashboard that displays data about the VM you created.
## Deploy the template
+This example uses the Azure portal to deploy the template. You can also use other methods to deploy ARM templates, such as [Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md), [Azure CLI](../azure-resource-manager/templates/deploy-cli.md), or [REST API](../azure-resource-manager/templates/deploy-rest.md).
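As a sketch of one such alternative, a PowerShell deployment of the same template could look like the following; the cmdlet prompts for any mandatory template parameters that aren't supplied on the command line:

```powershell
# Deploy the quickstart template to the existing resource group.
# You'll be prompted for any required template parameters not supplied here.
New-AzResourceGroupDeployment `
  -ResourceGroupName "SimpleWinVmResourceGroup" `
  -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.portal/azure-portal-dashboard/azuredeploy.json"
```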
+ 1. Select the following image to sign in to Azure and open a template. [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.portal%2Fazure-portal-dashboard%2Fazuredeploy.json) 1. Select or enter the following values, then select **Review + create**.
- ![ARM template, create dashboard, deploy portal](media/quick-create-template/create-dashboard-using-template-portal.png)
+ :::image type="content" source="media/quick-create-template/create-dashboard-using-template-portal.png" alt-text="Screenshot of the dashboard template deployment screen in the Azure portal.":::
Unless otherwise specified, use the default values to create the dashboard.
- * **Subscription**: select an Azure subscription.
- * **Resource group**: select **SimpleWinVmResourceGroup**.
- * **Location**: select **East US**.
- * **Virtual Machine Name**: enter **SimpleWinVm**.
- * **Virtual Machine Resource Group**: enter **SimpleWinVmResourceGroup**.
-
-1. Select **Create** or **Purchase**. After the dashboard has been deployed successfully, you get a notification:
-
- ![ARM template, create dashboard, deploy portal notification](media/quick-create-template/resource-manager-template-portal-deployment-notification.png)
+ - **Subscription**: select your Azure subscription.
+ - **Resource group**: select **SimpleWinVmResourceGroup**.
+ - **Location**: if not automatically selected, choose **East US**.
+ - **Virtual Machine Name**: enter **myVM1**.
+ - **Virtual Machine Resource Group**: enter **SimpleWinVmResourceGroup**.
-The Azure portal was used to deploy the template. In addition to the Azure portal, you can also use Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md).
+1. Select **Create**. You'll see a notification confirming when the dashboard has been deployed successfully.
## Review deployed resources
If you want to remove the VM and associated dashboard, delete the resource group
1. On the **SimpleWinVmResourceGroup** page, select **Delete resource group**, enter the resource group name to confirm, then select **Delete**.
- ![Delete resource group](media/quick-create-template/delete-resource-group.png)
+> [!CAUTION]
+> Deleting a resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted.
## Next steps
azure-portal Quickstart Portal Dashboard Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-azure-cli.md
Title: Create an Azure portal dashboard with Azure CLI
description: "Quickstart: Learn how to create a dashboard in the Azure portal using the Azure CLI. A dashboard is a focused and organized view of your cloud resources." Previously updated : 12/4/2020 Last updated : 01/13/2022 # Quickstart: Create an Azure portal dashboard with Azure CLI
-A dashboard in the Azure portal is a focused and organized view of your cloud resources. This
-article focuses on the process of using Azure CLI to create a dashboard.
-The dashboard shows the performance of a virtual machine (VM), as well as some static information
-and links.
+A dashboard in the Azure portal is a focused and organized view of your cloud resources. This article shows you how to use Azure CLI to create a dashboard. In this example, the dashboard shows the performance of a virtual machine (VM), as well as some static information and links.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)] - If you have multiple Azure subscriptions, choose the appropriate subscription in which to bill the resources.
-Select a subscription by using the [az account set](/cli/azure/account#az_account_set) command:
+Select a subscription by using the [az account set](/cli/azure/account#az-account-set) command:
```azurecli az account set --subscription 00000000-0000-0000-0000-000000000000 ``` -- Create an [Azure resource group](../azure-resource-manager/management/overview.md) by using the [az group create](/cli/azure/group#az_group_create) command or use an existing resource group:
+- Create an [Azure resource group](../azure-resource-manager/management/overview.md#resource-groups) by using the [az group create](/cli/azure/group#az-group-create) command (or use an existing resource group):
```azurecli az group create --name myResourceGroup --location centralus ```
- A resource group is a logical container in which Azure resources are deployed and managed as a group.
- ## Create a virtual machine
-Create a virtual machine by using the [az vm create](/cli/azure/vm#az_vm_create) command:
+Create a virtual machine by using the [az vm create](/cli/azure/vm#az-vm-create) command:
```azurecli
-az vm create --resource-group myResourceGroup --name SimpleWinVM --image win2016datacenter \
+az vm create --resource-group myResourceGroup --name myVM1 --image win2016datacenter \
--admin-username azureuser --admin-password 1StrongPassword$ ``` > [!Note]
-> The password must be complex.
-> This is a new user name and password.
-> It's not, for example, the account you use to sign in to Azure.
-> For more information, see [username requirements](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-)
+> This is a new username and password (not the account you use to sign in to Azure). The password must be complex. For more information, see [username requirements](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-)
and [password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-). The deployment starts and typically takes a few minutes to complete.
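If you want to confirm the result from the same shell, one option (a minimal sketch reusing the names from the command above) is:

```azurecli
# Show the VM's provisioning and power state once deployment finishes.
az vm show --resource-group myResourceGroup --name myVM1 --show-details \
    --query "{name:name, provisioningState:provisioningState, powerState:powerState}"
```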
-After deployment completes, move on to the next section.
## Download the dashboard template
-Since Azure dashboards are resources, they can be represented as JSON.
-For more information, see [The structure of Azure Dashboards](./azure-portal-dashboards-structure.md).
+Since Azure dashboards are resources, they can be represented as JSON. For more information, see [The structure of Azure dashboards](./azure-portal-dashboards-structure.md).
-Download the following file: [portal-dashboard-template-testvm.json](https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/azure-portal/portal-dashboard-template-testvm.json).
+Download the file [portal-dashboard-template-testvm.json](https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/azure-portal/portal-dashboard-template-testvm.json).
-Customize the downloaded template by changing the following values to your values:
+Then, customize the downloaded template file by replacing the following placeholder values with your own (an example substitution command follows the list):
-* `<subscriptionID>`: Your subscription
-* `<rgName>`: Resource group, for example `myResourceGroup`
-* `<vmName>`: Virtual machine name, for example `SimpleWinVM`
-* `<dashboardTitle>`: Dashboard title, for example `Simple VM Dashboard`
-* `<location>`: Your Azure region, for example, `centralus`
+- `<subscriptionID>`: Your subscription
+- `<rgName>`: Resource group, for example `myResourceGroup`
+- `<vmName>`: Virtual machine name, for example `myVM1`
+- `<dashboardTitle>`: Dashboard title, for example `Simple VM Dashboard`
+- `<location>`: Your Azure region, for example `centralus`
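If you're working in a bash environment such as Cloud Shell, one way to make these substitutions is with `sed`. This is a minimal sketch; the subscription ID below is a placeholder, and the other values are the examples from the list above:

```azurecli
# Replace the template placeholders in place; substitute your own values.
sed -i \
  -e 's|<subscriptionID>|00000000-0000-0000-0000-000000000000|g' \
  -e 's|<rgName>|myResourceGroup|g' \
  -e 's|<vmName>|myVM1|g' \
  -e 's|<dashboardTitle>|Simple VM Dashboard|g' \
  -e 's|<location>|centralus|g' \
  portal-dashboard-template-testvm.json
```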
For more information, see [Microsoft portal dashboards template reference](/azure/templates/microsoft.portal/dashboards).
For more information, see [Microsoft portal dashboards template reference](/azur
You can now deploy the template from within Azure CLI.
-1. Run the [az portal dashboard create](/cli/azure/portal/dashboard#az_portal_dashboard_create) command to deploy the template:
+1. Run the [az portal dashboard create](/cli/azure/portal/dashboard#az-portal-dashboard-create) command to deploy the template:
```azurecli az portal dashboard create --resource-group myResourceGroup --name 'Simple VM Dashboard' \ --input-path portal-dashboard-template-testvm.json --location centralus ```
-1. Check that the dashboard was created successfully by running the [az portal dashboard show](/cli/azure/portal/dashboard#az_portal_dashboard_show) command:
+1. Check that the dashboard was created successfully by running the [az portal dashboard show](/cli/azure/portal/dashboard#az-portal-dashboard-show) command:
```azurecli az portal dashboard show --resource-group myResourceGroup --name 'Simple VM Dashboard' ```
-To see all the dashboards for the current subscription, use [az portal dashboard list](/cli/azure/portal/dashboard#az_portal_dashboard_list):
+To see all the dashboards for the current subscription, use [az portal dashboard list](/cli/azure/portal/dashboard#az-portal-dashboard-list):
```azurecli az portal dashboard list ```
-You can also see all the dashboards for a resource group:
+You can also see all the dashboards for a specific resource group:
```azurecli az portal dashboard list --resource-group myResourceGroup ```
-You can update a dashboard by using the [az portal dashboard update](/cli/azure/portal/dashboard#az_portal_dashboard_update) command:
+To update a dashboard, use the [az portal dashboard update](/cli/azure/portal/dashboard#az-portal-dashboard-update) command:
```azurecli az portal dashboard update --resource-group myResourceGroup --name 'Simple VM Dashboard' \ --input-path portal-dashboard-template-testvm.json --location centralus ```
+## Review deployed resources
+ [!INCLUDE [azure-portal-review-deployed-resources](../../includes/azure-portal-review-deployed-resources.md)] ## Clean up resources
-To remove the virtual machine and associated dashboard, delete the resource group that contains them.
+To remove the virtual machine and associated dashboard that you created, delete the resource group that contains them.
> [!CAUTION]
-> The following example deletes the specified resource group and all resources contained within it.
-> If resources outside the scope of this article exist in the specified resource group, they will also be deleted.
+> Deleting the resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted.
```azurecli az group delete --name myResourceGroup
az portal dashboard delete --resource-group myResourceGroup --name "Simple VM Da
## Next steps
-For more information about Azure CLI support for dashboards, see [az portal dashboard](/cli/azure/portal/dashboard).
+For more information about Azure CLI commands for dashboards, see:
+
+> [!div class="nextstepaction"]
> [Azure CLI: az portal dashboard](/cli/azure/portal/dashboard)
azure-portal Quickstart Portal Dashboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-powershell.md
Title: Create an Azure portal dashboard with PowerShell
description: Learn how to create a dashboard in the Azure portal using Azure PowerShell. Previously updated : 03/25/2021 Last updated : 01/13/2022 # Quickstart: Create an Azure portal dashboard with PowerShell
-A dashboard in the Azure portal is a focused and organized view of your cloud resources. This
-article focuses on the process of using the Az.Portal PowerShell module to create a dashboard.
-The dashboard shows the performance of a virtual machine (VM), as well as some static information
+A dashboard in the Azure portal is a focused and organized view of your cloud resources. This article focuses on the process of using the Az.Portal PowerShell module to create a dashboard. The dashboard shows the performance of a virtual machine (VM), as well as some static information
and links.

## Requirements
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-If you choose to use PowerShell locally, this article requires that you install the Az PowerShell
-module and connect to your Azure account using the
-[Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)
-cmdlet. For more information about installing the Az PowerShell module, see
-[Install Azure PowerShell](/powershell/azure/install-az-ps).
-
-> [!IMPORTANT]
-> While the **Az.Portal** PowerShell module is in preview, you must install it separately from
-> the Az PowerShell module using the `Install-Module` cmdlet. Once this PowerShell module becomes
-> generally available, it becomes part of future Az PowerShell module releases and available
-> natively from within Azure Cloud Shell.
-
-```azurepowershell-interactive
-Install-Module -Name Az.Portal
-```
+- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
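As a minimal sketch of that local setup (assuming the Az module isn't installed yet):

```powershell
# Install the Az PowerShell module for the current user, then sign in.
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
Connect-AzAccount
```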
[!INCLUDE [cloud-shell-try-it](../../includes/cloud-shell-try-it.md)]
$dashboardName = $dashboardTitle -replace '\s'
$subscriptionID = (Get-AzContext).Subscription.Id # Name of test VM
-$vmName = 'SimpleWinVM'
+$vmName = 'myVM1'
``` ## Create a resource group
$Content = $Content -replace '<location>', $location
$Content | Out-File -FilePath $myPortalDashboardTemplatePath -Force ```
-For more information, see [Microsoft portal dashboards template reference](/azure/templates/microsoft.portal/dashboards).
+For more information about the dashboard template structure, see [Microsoft portal dashboards template reference](/azure/templates/microsoft.portal/dashboards).
## Deploy the dashboard template
Get-AzPortalDashboard -Name $dashboardName -ResourceGroupName $resourceGroupName
To remove the VM and associated dashboard, delete the resource group that contains them.

> [!CAUTION]
-> The following example deletes the specified resource group and all resources contained within it.
-> If resources outside the scope of this article exist in the specified resource group, they will
-> also be deleted.
+> Deleting the resource group will delete all of the resources contained within it. If the resource group contains additional resources aside from your virtual machine and dashboard, those resources will also be deleted.
```azurepowershell-interactive Remove-AzResourceGroup -Name $resourceGroupName
Remove-Item -Path "$HOME\portal-dashboard-template-testvm.json"
For more information about the cmdlets contained in the Az.Portal PowerShell module, see: > [!div class="nextstepaction"]
-> [Microsoft Azure PowerShell: Portal Dashboard cmdlets](/powershell/module/Az.Portal/)
+> [Microsoft Azure PowerShell: Portal Dashboard cmdlets](/powershell/module/Az.Portal/#portal)
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following table applies to v1, v2, Standard, and WAF SKUs unless otherwise s
[!INCLUDE [notification-hub-limits](../../../includes/notification-hub-limits.md)]
-## Purview limits
+## Azure Purview limits
The latest values for Azure Purview quotas can be found in the [Azure Purview quota page](../../purview/how-to-manage-quotas.md).
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/overview.md
To implement infrastructure as code for your Azure solutions, use Azure Resource
To learn about how you can get started with ARM templates, see the following video.
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Enablement/How-and-why-to-learn-about-ARM-templates/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Enablement/How-and-why-to-learn-about-ARM-templates/player]
## Why choose ARM templates?
azure-sql-edge Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/overview.md
Azure SQL Edge is an optimized relational database engine geared for IoT and IoT
Azure SQL Edge is built on the latest versions of the [SQL Server Database Engine](/sql/sql-server/sql-server-technical-documentation), which provides industry-leading performance, security and query processing capabilities. Since Azure SQL Edge is built on the same engine as [SQL Server](/sql/sql-server/sql-server-technical-documentation) and [Azure SQL](../azure-sql/index.yml), it provides the same Transact-SQL (T-SQL) programming surface area that makes development of applications or solutions easier and faster, and makes application portability between IoT Edge devices, data centers and the cloud straight forward. What is Azure SQL Edge video on Channel 9:
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/What-is-Azure-SQL-Edge/player]
+> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/What-is-Azure-SQL-Edge/player]
## Deployment Models
azure-sql-edge Tutorial Renewable Energy Demo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql-edge/tutorial-renewable-energy-demo.md
This Azure SQL Edge demo is based on a Contoso Renewable Energy, a wind turbine
This demo walks you through resolving an alert raised because wind turbulence was detected at the device. You'll train a model and deploy it to SQL DB Edge to correct the detected wind wake and ultimately optimize power output. Azure SQL Edge - renewable energy demo video on Channel 9:
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player]
+> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/Azure-SQL-Edge-Demo-Renewable-Energy/player]
## Setting up the demo on your local computer Git will be used to copy all files from the demo to your local computer.
azure-sql Azure Sql Iaas Vs Paas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/azure-sql-iaas-vs-paas-what-is-overview.md
Azure SQL is built upon the familiar SQL Server engine, so you can migrate appli
Learn how each product fits into Microsoft's Azure SQL data platform to match the right option for your business requirements. Whether you prioritize cost savings or minimal administration, this article can help you decide which approach delivers against the business requirements you care about most.
-If you're new to Azure SQL, check out the *What is Azure SQL* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/What-is-Azure-SQL-3-of-61/player]
+If you're new to Azure SQL, check out the *What is Azure SQL* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners/?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/What-is-Azure-SQL-3-of-61/player]
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
You can manage Azure SQL Database auditing using [Azure Resource Manager](../../
## See also -- Data Exposed episode [What's New in Azure SQL Auditing](https://channel9.msdn.com/Shows/Data-Exposed/Whats-New-in-Azure-SQL-Auditing) on Channel 9.
+- Data Exposed episode [What's New in Azure SQL Auditing](/Shows/Data-Exposed/Whats-New-in-Azure-SQL-Auditing) on Channel 9.
- [Auditing for SQL Managed Instance](../managed-instance/auditing-configure.md) - [Auditing for SQL Server](/sql/relational-databases/security/auditing/sql-server-audit-database-engine)
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import.md
You can import a SQL Server database into Azure SQL Database or SQL Managed Inst
Watch this video to see how to import from a BACPAC file in the Azure portal or continue reading below:
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/Its-just-SQL-Restoring-a-database-to-Azure-SQL-DB-from-backup/player?WT.mc_id=dataexposed-c9-niner]
+> [!VIDEO https://docs.microsoft.com/Shows/Data-Exposed/Its-just-SQL-Restoring-a-database-to-Azure-SQL-DB-from-backup/player?WT.mc_id=dataexposed-c9-niner]
The [Azure portal](https://portal.azure.com) *only* supports creating a single database in Azure SQL Database and *only* from a BACPAC file stored in Azure Blob storage.
Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $Se
- Importing to a database in an elastic pool isn't supported. You can import data into a single database and then move the database to an elastic pool.
- Import Export Service does not work when Allow access to Azure services is set to OFF. However, you can work around the problem by manually running sqlpackage.exe from an Azure VM, or by performing the export directly in your code by using the DACFx API.
- Import does not support specifying a backup storage redundancy while creating a new database; the database is created with the default geo-redundant backup storage redundancy. To work around this, first create an empty database with the desired backup storage redundancy using the Azure portal or PowerShell, and then import the BACPAC into this empty database.
+- Storage behind a firewall is currently not supported.
> [!NOTE] > Azure SQL Database Configurable Backup Storage Redundancy is currently available in public preview in Southeast Asia Azure region only.
azure-sql Dynamic Data Masking Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/dynamic-data-masking-overview.md
To learn more about permissions when using dynamic data masking with T-SQL comma
## See also - [Dynamic Data Masking](/sql/relational-databases/security/dynamic-data-masking) for SQL Server.-- Data Exposed episode about [Granular Permissions for Azure SQL Dynamic Data Masking](https://channel9.msdn.com/Shows/Data-Exposed/Granular-Permissions-for-Azure-SQL-Dynamic-Data-Masking) on Channel 9.
+- Data Exposed episode about [Granular Permissions for Azure SQL Dynamic Data Masking](/Shows/Data-Exposed/Granular-Permissions-for-Azure-SQL-Dynamic-Data-Masking) on Channel 9.
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Set these additional parameter values for use in creating an elastic pool.
### Create elastic pool on primary server
-Use this script to create an elastic pool with the [az sql elastic-pool create](/cli/azure/sql/elastic-poolt#az_sql_elastic_pool_create) command.
+Use this script to create an elastic pool with the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="29-31":::
This portion of the tutorial uses the following Azure CLI cmdlets:
| Command | Notes | |||
-| [az sql elastic-pool create](/cli/azure/sql/elastic-poolt#az_sql_elastic_pool_create) | Creates an elastic pool. |
+| [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) | Creates an elastic pool. |
| [az sql db update](/cli/azure/sql/db#az-sql-db-update) | Updates a database|
Use this script to create a secondary server with the [az sql server create](/cl
### Create elastic pool on secondary server
-Use this script to create an elastic pool on the secondary server with the [az sql elastic-pool create](/cli/azure/sql/elastic-poo#az_sql_elastic_pool_create) command.
+Use this script to create an elastic pool on the secondary server with the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command.
:::code language="azurecli" source="~/azure_cli_scripts/sql-database/failover-groups/add-elastic-pool-to-failover-group-az-cli.sh" range="38-40":::
This portion of the tutorial uses the following Azure CLI cmdlets:
| Command | Notes | ||| | [az sql server create](/cli/azure/sql/server#az-sql-server-create) | Creates a server that hosts databases and elastic pools. |
-| [az sql elastic-pool create](/cli/azure/sql/elastic-poo#az_sql_elastic_pool_create) | Creates an elastic pool.|
+| [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) | Creates an elastic pool.|
| [az sql failover-group create](/cli/azure/sql/failover-group#az-sql-failover-group-create) | Creates a failover group. | | [az sql failover-group update](/cli/azure/sql/failover-group#az-sql-failover-group-update) | Updates a failover group.|
azure-sql Network Access Controls Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/network-access-controls-overview.md
You can also allow private access to the database from [virtual networks](../../
See the below video for a high-level explanation of these access controls and what they do:
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/Data-Exposed--SQL-Database-Connectivity-Explained/player?WT.mc_id=dataexposed-c9-niner]
+> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/Data-Exposed--SQL-Database-Connectivity-Explained/player?WT.mc_id=dataexposed-c9-niner]
## Allow Azure services
IP-based firewall is a feature of the logical SQL server in Azure that prevents
In addition to IP rules, the server firewall allows you to define *virtual network rules*. To learn more, see [Virtual network service endpoints and rules for Azure SQL Database](vnet-service-endpoint-rule-overview.md) or watch this video:
-> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/Data-Exposed--Demo--Vnet-Firewall-Rules-for-SQL-Database/player?WT.mc_id=dataexposed-c9-niner]
+> [!VIDEO https://docs.microsoft.com/shows/Data-Exposed/Data-Exposed--Demo--Vnet-Firewall-Rules-for-SQL-Database/player?WT.mc_id=dataexposed-c9-niner]
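For context on the IP rules mentioned above, a server-level IP firewall rule can be created with Azure CLI along these lines (the server name and client IP address are placeholders):

```azurecli
# Allow a single client IP address through the server-level firewall.
az sql server firewall-rule create \
    --resource-group myResourceGroup \
    --server myserver \
    --name AllowMyClientIP \
    --start-ip-address 203.0.113.4 \
    --end-ip-address 203.0.113.4
```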
### Azure Networking terminology
azure-sql Saas Dbpertenant Dr Geo Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/saas-dbpertenant-dr-geo-restore.md
This tutorial uses features of Azure SQL Database and the Azure platform to addr
* [Azure Resource Manager templates](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md), to reserve all needed capacity as quickly as possible. Azure Resource Manager templates are used to provision a mirror image of the original servers and elastic pools in the recovery region. A separate server and pool are also created for provisioning new tenants. * [Elastic Database Client Library](elastic-database-client-library.md) (EDCL), to create and maintain a tenant database catalog. The extended catalog includes periodically refreshed pool and database configuration information. * [Shard management recovery features](elastic-database-recovery-manager.md) of the EDCL, to maintain database location entries in the catalog during recovery and repatriation.
-* [Geo-restore](../../key-vault/general/disaster-recovery-guidance.md), to recover the catalog and tenant databases from automatically maintained geo-redundant backups.
+* [Geo-restore](recovery-using-backups.md#geo-restore), to recover the catalog and tenant databases from automatically maintained geo-redundant backups.
* [Asynchronous restore operations](../../azure-resource-manager/management/async-operations.md), sent in tenant-priority order, are queued for each pool by the system and processed in batches so the pool isn't overloaded. These operations can be canceled before or during execution if necessary. * [Geo-replication](active-geo-replication-overview.md), to repatriate databases to the original region after the outage. There is no data loss and minimal impact on the tenant when you use geo-replication. * [SQL server DNS aliases](./dns-alias-overview.md), to allow the catalog sync process to connect to the active catalog regardless of its location.
azure-sql Saas Tenancy Video Index Wingtip Brk3120 20171011 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/saas-tenancy-video-index-wingtip-brk3120-20171011.md
Clicking any screenshot image takes you to the exact time location in the video.
[video-on-youtube-com-478y]: https://www.youtube.com/watch?v=jjNmcKBVjrc&t=1
-[video-on-channel9-479c]: https://channel9.msdn.com/Events/Ignite/Microsoft-Ignite-Orlando-2017/BRK3120
-- [resource-blog-saas-patterns-app-dev-sql-db-768h]: https://azure.microsoft.com/blog/saas-patterns-accelerate-saas-application-development-on-sql-database/
azure-sql Setup Geodr Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/setup-geodr-failover-group-cli.md
This script uses the following commands. Each command in the table links to comm
| Command | Description | ||| | [az sql failover-group create](/cli/azure/sql/failover-group#az-sql-failover-group-create) | Creates a failover group. |
-| [az sql failover-group set-primary](/cli/azure/sql/failover-groupt#az_sql_failover_group_set_primary) | Set the primary of the failover group by failing over all databases from the current primary server |
+| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Set the primary of the failover group by failing over all databases from the current primary server |
| [az sql failover-group show](/cli/azure/sql/failover-group) | Gets a failover group | | [az sql failover-group delete](/cli/azure/sql/failover-group) | Deletes a failover group |
azure-sql Sql Database Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-paas-overview.md
Azure SQL Database is based on the latest stable version of the [Microsoft SQL S
SQL Database enables you to easily define and scale performance within two different purchasing models: a [vCore-based purchasing model](service-tiers-vcore.md) and a [DTU-based purchasing model](service-tiers-dtu.md). SQL Database is a fully managed service that has built-in high availability, backups, and other common maintenance operations. Microsoft handles all patching and updating of the SQL and operating system code. You don't have to manage the underlying infrastructure.
-If you're new to Azure SQL Database, check out the *Azure SQL Database Overview* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/Azure-SQL-Database-Overview-7-of-61/player]
+If you're new to Azure SQL Database, check out the *Azure SQL Database Overview* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners/?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/Azure-SQL-Database-Overview-7-of-61/player]
azure-sql Understand Resolve Blocking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/understand-resolve-blocking.md
The Waittype, Open_Tran, and Status columns refer to information returned by [sy
## Next steps
-* [Azure SQL Database: Improving Performance Tuning with Automatic Tuning](https://channel9.msdn.com/Shows/Data-Exposed/Azure-SQL-Database-Improving-Performance-Tuning-with-Automatic-Tuning)
+* [Azure SQL Database: Improving Performance Tuning with Automatic Tuning](/Shows/Data-Exposed/Azure-SQL-Database-Improving-Performance-Tuning-with-Automatic-Tuning)
* [Deliver consistent performance with Azure SQL](/learn/modules/azure-sql-performance/) * [Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed Instance](troubleshoot-common-errors-issues.md) * [Transient Fault Handling](/aspnet/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/transient-fault-handling)
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
Last updated 01/14/2021
Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the broadest SQL Server database engine compatibility with all the benefits of a fully managed and evergreen platform as a service. SQL Managed Instance has near 100% compatibility with the latest SQL Server (Enterprise Edition) database engine, providing a native [virtual network (VNet)](../../virtual-network/virtual-networks-overview.md) implementation that addresses common security concerns, and a [business model](https://azure.microsoft.com/pricing/details/sql-database/) favorable for existing SQL Server customers. SQL Managed Instance allows existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal application and database changes. At the same time, SQL Managed Instance preserves all PaaS capabilities (automatic patching and version updates, [automated backups](../database/automated-backups-overview.md), [high availability](../database/high-availability-sla.md)) that drastically reduce management overhead and TCO.
-If you're new to Azure SQL Managed Instance, check out the *Azure SQL Managed Instance* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/Azure-SQL-Managed-Instance-Overview-6-of-61/player]
+If you're new to Azure SQL Managed Instance, check out the *Azure SQL Managed Instance* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners/?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/Azure-SQL-Managed-Instance-Overview-6-of-61/player]
> [!IMPORTANT] > For a list of regions where SQL Managed Instance is currently available, see [Supported regions](resource-limits.md#supported-regions).
azure-sql Sql Server On Linux Vm What Is Iaas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/linux/sql-server-on-linux-vm-what-is-iaas-overview.md
SQL Server on Azure Virtual Machines enables you to use full versions of SQL Ser
Azure virtual machines run in many different [geographic regions](https://azure.microsoft.com/regions/) around the world. They also offer a variety of [machine sizes](../../../virtual-machines/sizes.md). The virtual machine image gallery allows you to create a SQL Server VM with the right version, edition, and operating system. This makes virtual machines a good option for many different SQL Server workloads.
-If you're new to Azure SQL, check out the *SQL Server on Azure VM Overview* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-Overview-4-of-61/player]
+If you're new to Azure SQL, check out the *SQL Server on Azure VM Overview* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-Overview-4-of-61/player]
## <a id="create"></a> Get started with SQL Server VMs
azure-sql Sql Server On Azure Vm Iaas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md
Azure virtual machines run in many different [geographic regions](https://azure.microsoft.com/regions/) around the world. They also offer a variety of [machine sizes](../../../virtual-machines/sizes.md). The virtual machine image gallery allows you to create a SQL Server VM with the right version, edition, and operating system. This makes virtual machines a good option for many different SQL Server workloads.
-If you're new to SQL Server on Azure VMs, check out the *SQL Server on Azure VM Overview* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
-> [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-Overview-4-of-61/player]
+If you're new to SQL Server on Azure VMs, check out the *SQL Server on Azure VM Overview* video from our in-depth [Azure SQL video series](/shows/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner):
+> [!VIDEO https://docs.microsoft.com/shows/Azure-SQL-for-Beginners/SQL-Server-on-Azure-VM-Overview-4-of-61/player]
## Automated updates
azure-video-analyzer Direct Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/edge/direct-methods.md
Following are some of the error codes used at the detail level.
|409| ResourceValidationError| Referenced resource (example: video resource) is not in a valid state.|

## Supported direct methods
-Following are the direct methods exposed by the Video Analyzer edge module. The schema for the direct methods can be found [here](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.1.0/AzureVideoAnalyzerSdkDefinitions.json).
+Following are the direct methods exposed by the Video Analyzer edge module. The schema for the direct methods can be found [here](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.1.0/AzureVideoAnalyzerSdkDefinitions.json).
### pipelineTopologyList
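As a hedged illustration of calling this method: direct methods on the Video Analyzer edge module take a small JSON payload, and for a list-style method such as `pipelineTopologyList` the payload is typically just the API version. The `1.1` value below assumes the 1.1 module is in use; confirm the exact contract against the linked schema.

```json
{
  "@apiVersion": "1.1"
}
```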
azure-video-analyzer Manage Retention Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/manage-retention-policy.md
The retention period is typically set in the properties of a video sink node whe
} ```
-You can also set or update the `retentionPeriod` property of a video resource, using Azure portal, or via the [REST API](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/resource-manager/Microsoft.Media/preview/2021-11-01-preview/Videos.json). Below is an example of setting a 3-day retention policy.
+You can also set or update the `retentionPeriod` property of a video resource, using Azure portal, or via the [REST API](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/videoanalyzer/resource-manager/Microsoft.Media/preview/2021-11-01-preview/Videos.json). Below is an example of setting a 3-day retention policy.
``` "archival":
azure-video-analyzer Viewing Videos How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/viewing-videos-how-to.md
You can also use the Video Analyzer service to create videos using CVR. You can
## Accessing videos
-You can query the ARM API [`Videos`](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/resource-manager/Microsoft.Medi) shows you how.
+You can query the ARM API [`Videos`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/videoanalyzer/resource-manager/Microsoft.Medi) shows you how.
## Determining that a video recording is ready for viewing
When you export a portion of a video recording to an MP4 file, the resulting vid
## Recording and playback latencies
-When using Video Analyzer edge module to record to a video resource, you will specify a [`segmentLength` property](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.0.0/AzureVideoAnalyzer.json) in your pipeline topology which tells the module to aggregate a minimum duration of video (in seconds) before it is written to the cloud. For example, if `segmentLength` is set to 300, then the module will accumulate 5 minutes worth of video before uploading one 5 minutes ΓÇ£chunkΓÇ¥, then go into accumulation mode for the next 5 minutes, and upload again. Increasing the `segmentLength` has the benefit of lowering your Azure Storage transaction costs, as the number of reads and writes will be no more frequent than once every `segmentLength` seconds. If you are using Video Analyzer service, the pipeline topology has the same [`segmentLength` property](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/resource-manager/Microsoft.Media/preview/2021-11-01-preview/PipelineTopologies.json).
+When using the Video Analyzer edge module to record to a video resource, you specify a [`segmentLength` property](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/videoanalyzer/data-plane/VideoAnalyzer.Edge/preview/1.1.0/AzureVideoAnalyzer.json) in your pipeline topology, which tells the module to aggregate a minimum duration of video (in seconds) before it is written to the cloud. For example, if `segmentLength` is set to 300, the module accumulates 5 minutes' worth of video before uploading one 5-minute "chunk", then goes into accumulation mode for the next 5 minutes, and uploads again. Increasing the `segmentLength` has the benefit of lowering your Azure Storage transaction costs, as the number of reads and writes will be no more frequent than once every `segmentLength` seconds. If you are using the Video Analyzer service, the pipeline topology has the same [`segmentLength` property](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/videoanalyzer/resource-manager/Microsoft.Media/preview/2021-11-01-preview/PipelineTopologies.json).
Consequently, streaming of the video from your Video Analyzer account will be delayed by at least that much time.
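To make this concrete, the following is a hypothetical video sink fragment for an edge pipeline topology. It is not a complete or authoritative example: the node and video names are invented, required properties such as `inputs` are omitted, and the `PT5M` value (5 minutes, written in ISO 8601 form) is an assumption about the duration format; verify the details against the linked 1.1.0 schema.

```json
{
  "@type": "#Microsoft.VideoAnalyzer.VideoSink",
  "name": "videoSink",
  "videoName": "sample-cvr-video",
  "videoCreationProperties": {
    "segmentLength": "PT5M"
  }
}
```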
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reserved-instance.md
These requirements apply to buying a reserved dedicated host instance:
### Buy reserved instances for a CSP subscription
-CSPs that want to purchase reserved instances for their customers must use the **Admin On Behalf Of** (AOBO) procedure from the [Partner Center documentation](/partner-center/azure-plan-manage). For more information, view the [Admin on behalf of (AOBO)](https://channel9.msdn.com/Series/cspdev/Module-11-Admin-On-Behalf-Of-AOBO) video.
+CSPs that want to purchase reserved instances for their customers must use the **Admin On Behalf Of** (AOBO) procedure from the [Partner Center documentation](/partner-center/azure-plan-manage). For more information, view the Admin on behalf of (AOBO) video.
1. Sign in to [Partner Center](https://partner.microsoft.com).
backup Automation Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/automation-backup.md
Once you assign an Azure Policy to a scope, all VMs that meet your criteria are
The following video illustrates how Azure Policy works for backup: <br><br>
-> [!VIDEO https://channel9.msdn.com/Shows/IT-Ops-Talk/Configure-backups-at-scale-using-Azure-Policy/player]
+> [!VIDEO https://docs.microsoft.com/shows/IT-Ops-Talk/Configure-backups-at-scale-using-Azure-Policy/player]
### Export backup-operational data
For more information on how to set up this runbook, see [Automatic retry of fail
The following video provides an end-to-end walk-through of the scenario: <br><br>
- > [!VIDEO https://channel9.msdn.com/Shows/IT-Ops-Talk/Automatically-retry-failed-backup-jobs-using-Azure-Resource-Graph-and-Azure-Automation-Runbooks/player]
+ > [!VIDEO https://docs.microsoft.com/shows/IT-Ops-Talk/Automatically-retry-failed-backup-jobs-using-Azure-Resource-Graph-and-Azure-Automation-Runbooks/player]
## Additional resources
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-sql-server-database-azure-vms.md
Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Previously updated : 11/02/2021 Last updated : 01/14/2022
You can also use the following FQDNs to allow access to the required services fr
| Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | 443 | Azure AD | Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online) | As applicable
+#### Allow connectivity for servers behind internal load balancers
+
+When using an internal load balancer, you need to allow outbound connectivity from the virtual machines behind the internal load balancer to perform backups. To do so, you can use a combination of internal and external standard load balancers to provide outbound connectivity. [Learn more](/azure/load-balancer/egress-only) about the configuration to create an _egress only_ setup for VMs in the backend pool of the internal load balancer.
+ #### Use an HTTP proxy server to route traffic When you back up a SQL Server database on an Azure VM, the backup extension on the VM uses the HTTPS APIs to send management commands to Azure Backup and data to Azure Storage. The backup extension also uses Azure AD for authentication. Route the backup extension traffic for these three services through the HTTP proxy. Use the list of IPs and FQDNs mentioned above for allowing access to the required services. Authenticated proxy servers aren't supported.
When you back up a SQL Server database on an Azure VM, the backup extension on t
- Multiple databases on the same SQL instance with casing difference aren't supported. -- Changing the casing of a SQL database isn't supported after configuring protection.
+- Changing the casing of an SQL database isn't supported after configuring protection.
>[!NOTE] >The **Configure Protection** operation for databases with special characters, such as '+' or '&', in their name isn't supported. You can change the database name or enable **Auto Protection**, which can successfully protect these databases.
How to discover databases running on a VM:
1. Azure Backup discovers all SQL Server databases on the VM. During discovery, the following elements occur in the background: * Azure Backup registers the VM with the vault for workload backup. All databases on the registered VM can be backed up to this vault only.
- * Azure Backup installs the AzureBackupWindowsWorkload extension on the VM. No agent is installed on a SQL database.
+ * Azure Backup installs the AzureBackupWindowsWorkload extension on the VM. No agent is installed on an SQL database.
* Azure Backup creates the service account NT Service\AzureWLBackupPluginSvc on the VM. * All backup and restore operations use the service account. * NT Service\AzureWLBackupPluginSvc requires SQL sysadmin permissions. All SQL Server VMs created in the Marketplace come with the SqlIaaSExtension installed. The AzureBackupWindowsWorkload extension uses the SQLIaaSExtension to automatically get the required permissions.
backup Manage Monitor Sql Database Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-monitor-sql-database-backup.md
Title: Manage and monitor SQL Server DBs on an Azure VM description: This article describes how to manage and monitor SQL Server databases that are running on an Azure VM. Previously updated : 11/02/2021 Last updated : 01/14/2022
If you haven't yet configured backups for your SQL Server databases, see [Back u
## Monitor backup jobs in the portal
-Azure Backup shows all scheduled and on-demand operations under **Backup jobs** in **Backup center** in the Azure portal, except the scheduled log backups since they can be very frequent. The jobs you see in this portal include database discovery and registration, configure backup, and backup and restore operations.
+Azure Backup shows all scheduled and on-demand operations under **Backup jobs** in **Backup center** in the Azure portal, except scheduled log backups, since they can be very frequent. The jobs you see in this portal include database discovery and registration, configure backup, and backup and restore operations.
:::image type="content" source="./media/backup-azure-sql-database/backup-operations-in-backup-center-jobs-inline.png" alt-text="Screenshot showing the Backup jobs under Backup jobs." lightbox="./media/backup-azure-sql-database/backup-operations-in-backup-center-jobs-expanded.png":::
You can fix the policy version for all the impacted items in one click:
## Unregister a SQL Server instance
-Unregister a SQL Server instance after you disable protection but before you delete the vault:
+Before you unregister the server, [disable soft delete](/azure/backup/backup-azure-security-feature-cloud#disabling-soft-delete-using-azure-portal), and then delete all backup items.
+
+>[!NOTE]
+>Deleting backup items with soft delete enabled leads to a 14-day retention period, and you'll need to wait until that period elapses before the items are completely removed. However, if you've deleted the backup items with soft delete enabled, you can undelete them, disable soft delete, and then delete them again for immediate removal. [Learn more](/azure/backup/backup-azure-security-feature-cloud#permanently-deleting-soft-deleted-backup-items).
+
+Unregister a SQL Server instance after you disable protection but before you delete the vault.
1. On the vault dashboard, under **Manage**, select **Backup Infrastructure**.
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/multi-user-authorization.md
Now that the Backup admin has the Reader role on the Resource Guard, they can ea
1. Go to the Recovery Services vault. Navigate to **Properties** on the left navigation panel, then to **Multi-User Authorization** and click **Update**.
+ :::image type="content" source="./media/multi-user-authorization/test-vault-properties.png" alt-text="Screenshot showing the Recovery services vault-properties.":::
+ 1. Now you're presented with the option to enable MUA and choose a Resource Guard in one of the following ways: 1. You can specify the URI of the Resource Guard. Make sure you specify the URI of a Resource Guard that you have **Reader** access to and that is in the same region as the vault. You can find the URI (Resource Guard ID) of the Resource Guard on its **Overview** screen:
Depicted below is an illustration of what happens when the Backup admin tries to
1. Select the directory containing the Resource Guard and authenticate yourself. This step may not be required if the Resource Guard is in the same directory as the vault. 1. Proceed to click **Save**. The request fails with an error informing you that you don't have sufficient permissions on the Resource Guard to perform this operation.
+ :::image type="content" source="./media/multi-user-authorization/test-vault-properties-security-settings-inline.png" alt-text="Screenshot showing the Test Vault properties security settings." lightbox="./media/multi-user-authorization/test-vault-properties-security-settings-expanded.png":::
+ ## Authorize critical (protected) operations using Azure AD Privileged Identity Management The following sub-sections discuss authorizing these requests using PIM. There are cases where you may need to perform critical operations on your backups and MUA can help you ensure that these are performed only when the right approvals or permissions exist. As discussed earlier, the Backup admin needs to have a Contributor role on the Resource Guard to perform critical operations that are in the Resource Guard scope. One of the ways to allow just-in-time for such operations is through the use of [Azure Active Directory (Azure AD) Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md).
Once the Backup admin's request for the Contributor role on the Resource Guard
>[!NOTE] > If the access was assigned using a JIT mechanism, the Contributor role is retracted at the end of the approved period. Else, the Security admin manually removes the **Contributor** role assigned to the Backup admin to perform the critical operation.
+The following screenshot shows an example of disabling soft delete for an MUA-enabled vault.
## Disable MUA on a Recovery Services vault

Disabling MUA is a protected operation, and hence, is protected using MUA. This means that the Backup admin must have the required Contributor role in the Resource Guard. Details on obtaining this role are described here. Following is a summary of steps to disable MUA on a vault.
Disabling MUA is a protected operation, and hence, is protected using MUA. This
1. Click **Update** 1. Uncheck the Protect with Resource Guard check box 1. Choose the Directory that contains the Resource Guard and verify access using the Authenticate button (if applicable).
- 1. After **authentication**, click **Save**. With the right access, the request should be successfully completed.
+ 1. After **authentication**, click **Save**. With the right access, the request should be successfully completed.
+
+ :::image type="content" source="./media/multi-user-authorization/disable-mua.png" alt-text="Screenshot showing to disable multi-user authentication.":::
backup Microsoft Azure Recovery Services Powershell All https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/scripts/microsoft-azure-recovery-services-powershell-all.md
$WC = New-Object System.Net.WebClient
$WC.DownloadFile($MarsAURL,'C:\downloads\MARSAgentInstaller.EXE') C:\Downloads\MARSAgentInstaller.EXE /q
-MARSAgentInstaller.exe /q # Please note the commandline install options available here: https://docs.microsoft.com/en-us/azure/backup/backup-client-automation#installation-options
+MARSAgentInstaller.exe /q # Please note the commandline install options available here: https://docs.microsoft.com/azure/backup/backup-client-automation#installation-options
# Registering Windows Server or Windows client machine to a Recovery Services Vault $CredsPath = "C:\downloads"
Set-OBMachineSetting -NoThrottle
# Encryption settings $PassPhrase = ConvertTo-SecureString -String "Complex!123_STRING" -AsPlainText -Force Set-OBMachineSetting -EncryptionPassPhrase $PassPhrase -SecurityPin "<generatedPIN>" #NOTE: You must generate a security pin by selecting Generate, under Settings > Properties > Security PIN in the Recovery Services vault section of the Azure portal.
-# See: https://docs.microsoft.com/en-us/rest/api/backup/securitypins/get
-# See: https://docs.microsoft.com/en-us/powershell/module/azurerm.keyvault/Add-AzureKeyVaultKey?view=azurermps-6.13.0
+# See: https://docs.microsoft.com/rest/api/backup/securitypins/get
+# See: https://docs.microsoft.com/powershell/module/azurerm.keyvault/Add-AzureKeyVaultKey?view=azurermps-6.13.0
# Back up files and folders $NewPolicy = New-OBPolicy
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/configuration-settings.md
Previously updated : 11/29/2021 Last updated : 01/14/2022
The sections in this article discuss the resources and settings for Azure Bastio
A SKU is also known as a Tier. Azure Bastion supports two SKU types: Basic and Standard. The SKU is configured in the Azure portal during the workflow when you configure Bastion. You can [upgrade a Basic SKU to a Standard SKU](#upgradesku).
-* The **Basic SKU** provides base functionality, enabling Azure Bastion to manage RDP/SSH connectivity to Virtual Machines (VMs) without exposing public IP addresses on the target application VMs.
-* The Standard SKU enables premium features that allow Azure Bastion to manage remote connectivity at a larger scale.
+* The **Basic SKU** provides base functionality, enabling Azure Bastion to manage RDP/SSH connectivity to virtual machines (VMs) without exposing public IP addresses on the target application VMs.
+* The **Standard SKU** enables premium features that allow Azure Bastion to manage remote connectivity at a larger scale.
The following table shows features and corresponding SKUs.

[!INCLUDE [Azure Bastion SKUs](../../includes/bastion-sku.md)]
-### Configuration methods
- Currently, you must use the Azure portal if you want to specify the Standard SKU. If you use the Azure CLI or Azure PowerShell to configure Bastion, the SKU can't be specified and defaults to the Basic SKU.

| Method | Value | Links |
Azure Bastion supports upgrading from a Basic to a Standard SKU.
> Downgrading from a Standard SKU to a Basic SKU is not supported. To downgrade, you must delete and recreate Azure Bastion.
>
-#### Configuration methods
- You can configure this setting using the following method:

| Method | Value | Links |
| | | |
| Azure portal | Tier | [Upgrade a SKU](upgrade-sku.md)|
-## <a name="instance"></a>Instances and host scaling
-
-An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Standard SKU, you can specify the number of instances. This is called **host scaling**.
-
-Each instance can support 10 concurrent RDP connections and 50 concurrent SSH connections. The number of connections per instance depends on what actions you take when connected to the client VM. For example, data-intensive actions create a larger load for the instance to process. Once the concurrent session limits are exceeded, an additional scale unit (instance) is required.
-
-Instances are created in the AzureBastionSubnet. To allow for host scaling, the AzureBastionSubnet should be /26 or larger. Using a smaller subnet limits the number of instances you can create. For more information about the AzureBastionSubnet, see the [subnets](#subnet) section in this article.
-
-### Configuration methods
-
-You can configure this setting using the following methods:
-
-| Method | Value | Links |
-| | | |
-| Azure portal |Instance count | [Azure portal steps](configure-host-scaling.md)|
-| Azure PowerShell | ScaleUnit | [PowerShell steps](configure-host-scaling-powershell.md) |
--
## <a name="subnet"></a>Azure Bastion subnet

>[!IMPORTANT]
>For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work, but we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future.
>
-Azure Bastion requires a dedicated subnet: **AzureBastionSubnet**. This subnet needs to be created in the same Virtual Network that Azure Bastion is deployed to. The subnet must have the following configuration:
+Azure Bastion requires a dedicated subnet: **AzureBastionSubnet**. You must create this subnet in the same virtual network that you want to deploy Azure Bastion to. The subnet must have the following configuration:
* Subnet name must be *AzureBastionSubnet*.
* Subnet size must be /26 or larger (/25, /24 etc.).
Azure Bastion requires a dedicated subnet: **AzureBastionSubnet**. This subnet n
* The subnet must be in the same VNet and resource group as the bastion host.
* The subnet cannot contain additional resources.
-### Configuration methods
- You can configure this setting using the following methods:

| Method | Value | Links |
Azure Bastion requires a Public IP address. The Public IP must have the followin
* The Public IP address name is the resource name by which you want to refer to this public IP address. * You can choose to use a public IP address that you already created, as long as it meets the criteria required by Azure Bastion and is not already in use.
-### Configuration methods
- You can configure this setting using the following methods:

| Method | Value | Links |
| | | |
| Azure portal | Public IP address |[Azure portal](https://portal.azure.com)|
| Azure PowerShell | -PublicIpAddress| [cmdlet](/powershell/module/az.network/new-azbastion#parameters) |
-| Azure CLI | --public-ip create |[command](/cli/azure/network/public-ip)
-|
+| Azure CLI | --public-ip create |[command](/cli/azure/network/public-ip) |
+
+## <a name="instance"></a>Instances and host scaling
+
+An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Standard SKU, you can specify the number of instances. This is called **host scaling**.
+
+Each instance can support 10 concurrent RDP connections and 50 concurrent SSH connections. The number of connections per instance depends on what actions you take when connected to the client VM. For example, data-intensive actions create a larger load for the instance to process. Once the concurrent session limits are exceeded, an additional scale unit (instance) is required.
+
+Instances are created in the AzureBastionSubnet. To allow for host scaling, the AzureBastionSubnet should be /26 or larger. Using a smaller subnet limits the number of instances you can create. For more information about the AzureBastionSubnet, see the [subnets](#subnet) section in this article.
+
+You can configure this setting using the following methods:
+
+| Method | Value | Links |
+| | | |
+| Azure portal |Instance count | [Azure portal steps](configure-host-scaling.md)|
+| Azure PowerShell | ScaleUnit | [PowerShell steps](configure-host-scaling-powershell.md) |
+
+## <a name="ports"></a>Custom ports
+
+You can specify the port that you want to use to connect to your VMs. By default, the inbound ports used to connect are 3389 for RDP and 22 for SSH. If you configure a custom port value, you need to specify that value when you connect to the VM.
+
+Custom port values are supported for the Standard SKU only. If your Bastion deployment uses the Basic SKU, you can easily [upgrade a Basic SKU to a Standard SKU](#upgradesku).
## Next steps
batch Batch Js Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-js-get-started.md
Now, let us follow the process step by step to build the JavaScript client:
You can install Azure Batch SDK for JavaScript using the npm install command.
-`npm install azure-batch`
+`npm install @azure/batch`
This command installs the latest version of the Azure Batch JavaScript SDK.
Following code snippet first imports the azure-batch JavaScript module and then
```javascript
// Initializing Azure Batch variables
-var batch = require('azure-batch');
+import { BatchServiceClient, BatchSharedKeyCredentials } from "@azure/batch";
-var accountName = '<azure-batch-account-name>';
+// Replace values below with Batch Account details
+const batchAccountName = '<batch-account-name>';
+const batchAccountKey = '<batch-account-key>';
+const batchEndpoint = '<batch-account-url>';
-var accountKey = '<account-key-downloaded>';
-
-var accountUrl = '<account-url>'
-
-// Create Batch credentials object using account name and account key
-
-var credentials = new batch.SharedKeyCredentials(accountName,accountKey);
-
-// Create Batch service client
-
-var batch_client = new batch.ServiceClient(credentials,accountUrl);
+const credentials = new BatchSharedKeyCredentials(batchAccountName, batchAccountKey);
+const batchClient = new BatchServiceClient(credentials, batchEndpoint);
```
The following code snippet creates the configuration parameter objects.
```javascript
// Creating Image reference configuration for Ubuntu Linux VM
-var imgRef = {publisher:"Canonical",offer:"UbuntuServer",sku:"14.04.2-LTS",version:"latest"}
-
+const imgRef = {
+ publisher: "Canonical",
+ offer: "UbuntuServer",
+ sku: "18.04-LTS",
+ version: "latest"
+}
// Creating the VM configuration object with the SKUID
-var vmconfig = {imageReference:imgRef,nodeAgentSKUId:"batch.node.ubuntu 14.04"}
-
-// Setting the VM size to Standard F4
-var vmSize = "STANDARD_F4"
-
-//Setting number of VMs in the pool to 4
-var numVMs = 4
+const vmConfig = {
+ imageReference: imgRef,
+ nodeAgentSKUId: "batch.node.ubuntu 18.04"
+};
+// Number of VMs to create in a pool
+const numVms = 4;
+
+// Setting the VM size
+const vmSize = "STANDARD_D1_V2";
```

> [!TIP]
The following code snippet creates an Azure Batch pool.
```javascript
// Create a unique Azure Batch pool ID
-var poolid = "pool" + customerDetails.customerid;
-var poolConfig = {id:poolid, displayName:poolid,vmSize:vmSize,virtualMachineConfiguration:vmconfig,targetDedicatedComputeNodes:numVms,enableAutoScale:false };
-// Creating the Pool for the specific customer
-var pool = batch_client.pool.add(poolConfig,function(error,result){
+const now = new Date();
+const poolId = `processcsv_${now.getFullYear()}${now.getMonth() + 1}${now.getDate()}${now.getHours()}${now.getSeconds()}`;
+
+const poolConfig = {
+ id: poolId,
+ displayName: "Processing csv files",
+ vmSize: vmSize,
+ virtualMachineConfiguration: vmConfig,
+ targetDedicatedNodes: numVms,
+ enableAutoScale: false
+};
+
+// Creating the Pool
+const pool = batchClient.pool.add(poolConfig, function (error, result) {
    if (error != null) { console.log(error.response); }
});
```
var pool = batch_client.pool.add(poolConfig,function(error,result){
You can check the status of the pool created and ensure that the state is "active" before going ahead with submission of a job to that pool.

```javascript
-var cloudPool = batch_client.pool.get(poolid,function(error,result,request,response){
+var cloudPool = batchClient.pool.get(poolId,function(error,result,request,response){
if(error == null) {
var cloudPool = batch_client.pool.get(poolid,function(error,result,request,respo
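```

For reference, here's a fuller sketch that polls the pool until allocation settles and the state is "active", reusing the `batchClient` and `poolId` from the earlier snippets (the 30-second retry interval is an arbitrary choice):

```javascript
// Poll the pool until it's active and node allocation has finished.
function waitForPoolActive(poolId, callback) {
    batchClient.pool.get(poolId, function (error, result) {
        if (error !== null) { return callback(error); }
        if (result.state === "active" && result.allocationState === "steady") {
            return callback(null, result);
        }
        // Not ready yet; check again in 30 seconds.
        setTimeout(function () { waitForPoolActive(poolId, callback); }, 30000);
    });
}

waitForPoolActive(poolId, function (error, pool) {
    if (error !== null) { console.log(error.response); }
    else { console.log("Pool " + pool.id + " is active"); }
});
```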
Following is a sample result object returned by the pool.get function.

```
-{ id: 'processcsv_201721152',
- displayName: 'processcsv_201721152',
- url: 'https://<batch-account-name>.centralus.batch.azure.com/pools/processcsv_201721152',
- eTag: '<eTag>',
- lastModified: 2017-03-27T10:28:02.398Z,
- creationTime: 2017-03-27T10:28:02.398Z,
+{
+ id: 'processcsv_2022002321',
+ displayName: 'Processing csv files',
+ url: 'https://<batch-account-name>.westus.batch.azure.com/pools/processcsv_2022002321',
+ eTag: '0x8D9D4088BC56FA1',
+ lastModified: 2022-01-10T07:12:21.943Z,
+ creationTime: 2022-01-10T07:12:21.943Z,
state: 'active',
- stateTransitionTime: 2017-03-27T10:28:02.398Z,
- allocationState: 'resizing',
- allocationStateTransitionTime: 2017-03-27T10:28:02.398Z,
- vmSize: 'standard_a1',
- virtualMachineConfiguration:
- { imageReference:
- { publisher: 'Canonical',
- offer: 'UbuntuServer',
- sku: '14.04.2-LTS',
- version: 'latest' },
- nodeAgentSKUId: 'batch.node.ubuntu 14.04' },
- resizeTimeout:
- { [Number: 900000]
- _milliseconds: 900000,
- _days: 0,
- _months: 0,
- _data:
- { milliseconds: 0,
- seconds: 0,
- minutes: 15,
- hours: 0,
- days: 0,
- months: 0,
- years: 0 },
- _locale:
- Locale {
- _calendar: [Object],
- _longDateFormat: [Object],
- _invalidDate: 'Invalid date',
- ordinal: [Function: ordinal],
- _ordinalParse: /\d{1,2}(th|st|nd|rd)/,
- _relativeTime: [Object],
- _months: [Object],
- _monthsShort: [Object],
- _week: [Object],
- _weekdays: [Object],
- _weekdaysMin: [Object],
- _weekdaysShort: [Object],
- _meridiemParse: /[ap]\.?m?\.?/i,
- _abbr: 'en',
- _config: [Object],
- _ordinalParseLenient: /\d{1,2}(th|st|nd|rd)|\d{1,2}/ } },
- currentDedicated: 0,
- targetDedicated: 4,
+ stateTransitionTime: 2022-01-10T07:12:21.943Z,
+ allocationState: 'steady',
+ allocationStateTransitionTime: 2022-01-10T07:13:35.103Z,
+ vmSize: 'standard_d1_v2',
+ virtualMachineConfiguration: {
+ imageReference: {
+ publisher: 'Canonical',
+ offer: 'UbuntuServer',
+ sku: '18.04-LTS',
+ version: 'latest'
+ },
+ nodeAgentSKUId: 'batch.node.ubuntu 18.04'
+ },
+ resizeTimeout: 'PT15M',
+ currentDedicatedNodes: 4,
+ currentLowPriorityNodes: 0,
+ targetDedicatedNodes: 4,
+ targetLowPriorityNodes: 0,
  enableAutoScale: false,
  enableInterNodeCommunication: false,
  taskSlotsPerNode: 1,
- taskSchedulingPolicy: { nodeFillType: 'Spread' } }
+ taskSchedulingPolicy: { nodeFillType: 'Spread' }}
```

### Step 4: Submit an Azure Batch job
An Azure Batch job is a logical group of similar tasks. In our scenario, it is "
These tasks run in parallel and are deployed across multiple nodes, orchestrated by the Azure Batch service.

> [!TIP]
-> You can use the taskSlotsPerNode property to specify maximum number of tasks that can run concurrently on a single node.
+> You can use the [taskSlotsPerNode](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/batch/arm-batch/src/models/index.ts#L1190-L1191) property to specify the maximum number of tasks that can run concurrently on a single node.
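
For illustration, here's a minimal sketch of a pool configuration that sets this property; the value `4` is an arbitrary example, and the other fields reuse the objects defined earlier:

```javascript
// Example pool configuration allowing up to 4 concurrent tasks per node.
const poolConfigWithSlots = {
    id: poolId,
    vmSize: vmSize,
    virtualMachineConfiguration: vmConfig,
    targetDedicatedNodes: numVms,
    taskSlotsPerNode: 4
};
```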
#### Preparation task

The VM nodes created are blank Ubuntu nodes. Often, you need to install a set of programs as prerequisites. Typically, for Linux nodes you can have a shell script that installs the prerequisites before the actual tasks run. However, it could be any programmable executable.
-The [shell script](https://github.com/shwetams/azure-batchclient-sample-nodejs/blob/master/startup_prereq.sh) in this example installs Python-pip and the Azure Storage SDK for Python.
+The [shell script](https://github.com/Azure-Samples/azure-batch-samples/blob/master/JavaScript/Node.js/startup_prereq.sh) in this example installs Python-pip and the Azure Storage Blob SDK for Python.
You can upload the script to an Azure Storage account and generate a SAS URI to access the script. This process can also be automated using the Azure Storage JavaScript SDK, as in the sketch below.
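A minimal sketch of that automation with the `@azure/storage-blob` package follows; the storage account details and the `scripts` container name are placeholders, not values from this article:

```javascript
import {
    StorageSharedKeyCredential,
    BlobSASPermissions,
    generateBlobSASQueryParameters
} from "@azure/storage-blob";

// Placeholder storage account details; replace with your own.
const storageAccountName = "<storage-account-name>";
const storageAccountKey = "<storage-account-key>";
const credential = new StorageSharedKeyCredential(storageAccountName, storageAccountKey);

// Generate a read-only SAS for the uploaded startup script, valid for 24 hours.
const sasToken = generateBlobSASQueryParameters({
    containerName: "scripts",
    blobName: "startup_prereq.sh",
    permissions: BlobSASPermissions.parse("r"),
    expiresOn: new Date(Date.now() + 24 * 60 * 60 * 1000)
}, credential).toString();

const scriptUrl = `https://${storageAccountName}.blob.core.windows.net/scripts/startup_prereq.sh?${sasToken}`;
```

> [!TIP]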
-> A preparation task for a job runs only on the VM nodes where the specific task needs to run. If you want prerequisites to be installed on all nodes irrespective of the tasks that run on it, you can use the startTask property while adding a pool. You can use the following preparation task definition for reference.
+> A preparation task for a job runs only on the VM nodes where the specific task needs to run. If you want prerequisites to be installed on all nodes irrespective of the tasks that run on it, you can use the [startTask](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/batch/batch/src/models/index.ts#L1432) property while adding a pool. You can use the following preparation task definition for reference.
-A preparation task is specified during the submission of Azure Batch job. Following are the preparation task configuration parameters:
+A preparation task is specified during the submission of an Azure Batch job. Following are some configurable preparation task parameters:
- **ID**: A unique identifier for the preparation task
- **commandLine**: Command line to execute the task executable
- **resourceFiles**: Array of objects that provide details of files needed to be downloaded for this task to run. Following are its options:
- - blobSource: The SAS URI of the file
+ - httpUrl: The URL of the file to download
  - filePath: Local path to download and save the file
  - fileMode: Only applicable for Linux nodes; fileMode is in octal format with a default value of 0770
- **waitForSuccess**: If set to true, tasks don't run on the node if the preparation task fails
A preparation task is specified during the submission of Azure Batch job. Follow
Following code snippet shows the preparation task script configuration sample:

```javascript
-var job_prep_task_config = {id:"installprereq",commandLine:"sudo sh startup_prereq.sh > startup.log",resourceFiles:[{'blobSource':'Blob SAS URI','filePath':'startup_prereq.sh'}],waitForSuccess:true,runElevated:true}
+const jobPrepTaskConfig = {
+    id: "installprereq",
+    commandLine: "sudo sh startup_prereq.sh > startup.log",
+    resourceFiles: [{ 'httpUrl': 'Blob sh url', 'filePath': 'startup_prereq.sh' }],
+    waitForSuccess: true,
+    runElevated: true,
+    userIdentity: { autoUser: { elevationLevel: "admin", scope: "pool" } }
+};
```

If there are no prerequisites to be installed for your tasks to run, you can skip the preparation tasks. Following code creates a job with display name "process csv files."

```javascript
- // Setting up Batch pool configuration
- var pool_config = {poolId:poolid}
- // Setting up Job configuration along with preparation task
- var jobId = "processcsvjob"
- var job_config = {id:jobId,displayName:"process csv files",jobPreparationTask:job_prep_task_config,poolInfo:pool_config}
+ // Setting Batch Pool ID
+const poolInfo = { poolId: poolId };
+// Batch job configuration object
+const jobId = "processcsvjob";
+const jobConfig = {
+ id: jobId,
+ displayName: "process csv files",
+ jobPreparationTask: jobPrepTaskConfig,
+ poolInfo: poolInfo
+};
// Adding Azure batch job to the pool
- var job = batch_client.job.add(job_config,function(error,result){
- if(error != null)
- {
- console.log("Error submitting job : " + error.response);
- }});
+ const job = batchClient.job.add(jobConfig, function (error, result) {
+ if (error !== null) {
+ console.log("An error occurred while creating the job...");
+ console.log(error.response);
+ }
+ });
```

### Step 5: Submit Azure Batch tasks for a job

Now that our process csv job is created, let us create tasks for that job. Assuming we have four containers, we have to create four tasks, one for each container.
-If we look at the [Python script](https://github.com/shwetams/azure-batchclient-sample-nodejs/blob/master/processcsv.py), it accepts two parameters:
+If we look at the [Python script](https://github.com/Azure-Samples/azure-batch-samples/blob/master/JavaScript/Node.js/processcsv.py), it accepts two parameters:
- container name: The Storage container to download files from
- pattern: An optional parameter of file name pattern
-Assuming we have four containers "con1", "con2", "con3","con4" following code shows submitting for tasks to the Azure batch job "process csv" we created earlier.
+Assuming we have four containers "con1", "con2", "con3", and "con4", the following code shows submitting four tasks to the Azure Batch job "process csv" we created earlier.
```javascript
// storing container names in an array
-var container_list = ["con1","con2","con3","con4"]
- container_list.forEach(function(val,index){
-
- var container_name = val;
- var taskID = container_name + "_process";
- var task_config = {id:taskID,displayName:'process csv in ' + container_name,commandLine:'python processcsv.py --container ' + container_name,resourceFiles:[{'blobSource':'<blob SAS URI>','filePath':'processcsv.py'}]}
- var task = batch_client.task.add(poolid,task_config,function(error,result){
- if(error != null)
- {
- console.log(error.response);
- }
- else
- {
- console.log("Task for container : " + container_name + "submitted successfully");
- }
---
- });
-
+const containerList = ["con1", "con2", "con3", "con4"]; //Replace with list of blob containers within storage account
+containerList.forEach(function (val, index) {
+ console.log("Submitting task for container : " + val);
+ const containerName = val;
+ const taskID = containerName + "_process";
+ // Task configuration object
+ const taskConfig = {
+ id: taskID,
+ displayName: 'process csv in ' + containerName,
+ commandLine: 'python processcsv.py --container ' + containerName,
+ resourceFiles: [{ 'httpUrl': 'Blob script url', 'filePath': 'processcsv.py' }]
+ };
+
+ const task = batchClient.task.add(jobId, taskConfig, function (error, result) {
+ if (error !== null) {
+            console.log("Error occurred while creating task for container " + containerName + ". Details : " + error.response);
+ }
+ else {
+ console.log("Task for container : " + containerName + " submitted successfully");
+ }
});
+});
```

The code adds multiple tasks to the pool, and each task is executed on a node in the pool of VMs created. If the number of tasks exceeds the number of VMs in a pool or the taskSlotsPerNode property, the tasks wait until a node is made available. This orchestration is handled by Azure Batch automatically.
-The portal has detailed views on the tasks and job statuses. You can also use the list and get functions in the Azure JavaScript SDK..
+The portal has detailed views on the tasks and job statuses. You can also use the list and get functions in the Azure JavaScript SDK; for details, see the [job operations source](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/batch/batch/src/operations/job.ts#L114-L149). A minimal monitoring sketch follows.
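
For example, here's a minimal sketch that lists the tasks of the job and prints each task's state, reusing the `batchClient` and `jobId` from the earlier snippets:

```javascript
// List all tasks for the job and print their current states.
batchClient.task.list(jobId, function (error, result) {
    if (error !== null) {
        console.log(error.response);
    }
    else {
        result.forEach(function (task) {
            console.log(task.id + " : " + task.state);
        });
    }
});
```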
## Next steps
cloud-services Cloud Services Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-get-started.md
Here are some cloud service sample applications that demonstrate more real-world
For general information about developing for the cloud, see [Building Real-World Cloud Apps with Azure](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/introduction).
-For a video introduction to Azure Storage best practices and patterns, see [Microsoft Azure Storage ΓÇô What's New, Best Practices and Patterns](https://channel9.msdn.com/Events/Build/2014/3-628).
+For a video introduction to Azure Storage best practices and patterns, see Microsoft Azure Storage – What's New, Best Practices and Patterns.
For more information, see the following resources:
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/whats-new.md
We've also added links to some user-generated content. Those items will be marke
## Videos
-* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](https://channel9.msdn.com/Shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detection APIs with Tony Xing and Seth Juarez
-* April 20, 2021 [AI Show Live | Episode 11| New to Anomaly Detector: Multivariate Capabilities](https://channel9.msdn.com/Shows/AI-Show/AI-Show-Live-Episode-11-Whats-new-with-Anomaly-Detector) - AI Show live recording with Tony Xing and Seth Juarez
-* May 18, 2020 [Inside Anomaly Detector](https://channel9.msdn.com/Shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez
+* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](/shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detection APIs with Tony Xing and Seth Juarez
+* April 20, 2021 AI Show Live | Episode 11| New to Anomaly Detector: Multivariate Capabilities - AI Show live recording with Tony Xing and Seth Juarez
+* May 18, 2020 [Inside Anomaly Detector](/shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez
* September 19, 2019 **[UGC]** [Detect Anomalies in Your Data with the Anomaly Detector](https://www.youtube.com/watch?v=gfb63wvjnYQ) - Video by Jon Wood
-* September 3, 2019 [Anomaly detection on streaming data using Azure Databricks](https://channel9.msdn.com/Shows/AI-Show/Anomaly-detection-on-streaming-data-using-Azure-Databricks) - AI Show with Qun Ying
-* August 27, 2019 [Anomaly Detector v1.0 Best Practices](https://channel9.msdn.com/Shows/AI-Show/Anomaly-Detector-v10-Best-Practices) - AI Show on univariate anomaly detection best practices with Qun Ying
-* August 20, 2019 [Bring Anomaly Detector on-premises with containers support](https://channel9.msdn.com/Shows/AI-Show/Bring-Anomaly-Detector-on-premise-with-containers-support) - AI Show with Qun Ying and Seth Juarez
-* August 13, 2019 [Introducing Azure Anomaly Detector](https://channel9.msdn.com/Shows/AI-Show/Introducing-Azure-Anomaly-Detector?WT.mc_id=ai-c9-niner) - AI Show with Qun Ying and Seth Juarez
+* September 3, 2019 [Anomaly detection on streaming data using Azure Databricks](/shows/AI-Show/Anomaly-detection-on-streaming-data-using-Azure-Databricks) - AI Show with Qun Ying
+* August 27, 2019 [Anomaly Detector v1.0 Best Practices](/shows/AI-Show/Anomaly-Detector-v10-Best-Practices) - AI Show on univariate anomaly detection best practices with Qun Ying
+* August 20, 2019 [Bring Anomaly Detector on-premises with containers support](/shows/AI-Show/Bring-Anomaly-Detector-on-premise-with-containers-support) - AI Show with Qun Ying and Seth Juarez
+* August 13, 2019 [Introducing Azure Anomaly Detector](/shows/AI-Show/Introducing-Azure-Anomaly-Detector?WT.mc_id=ai-c9-niner) - AI Show with Qun Ying and Seth Juarez
## Service updates
cognitive-services Facebook Post Moderation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/facebook-post-moderation.md
+
+ Title: "Tutorial: Moderate Facebook content - Content Moderator"
+
+description: In this tutorial, you will learn how to use machine-learning-based Content Moderator to help moderate Facebook posts and comments.
+Last updated : 01/29/2021
+#Customer intent: As the moderator of a Facebook page, I want to use Azure's machine learning technology to automate and streamline the process of post moderation.
++
+# Tutorial: Moderate Facebook posts and comments with Azure Content Moderator
++
+In this tutorial, you will learn how to use Azure Content Moderator to help moderate the posts and comments on a Facebook page. Facebook will send the content posted by visitors to the Content Moderator service. Then your Content Moderator workflows will either publish the content or create reviews within the Review tool, depending on the content scores and thresholds.
+
+> [!IMPORTANT]
+> In 2018, Facebook implemented a more strict vetting policy for Facebook Apps. You will not be able to complete the steps of this tutorial if your app has not been reviewed and approved by the Facebook review team.
+
+This tutorial shows you how to:
+
+> [!div class="checklist"]
+> * Create a Content Moderator team.
+> * Create Azure Functions that listen for HTTP events from Content Moderator and Facebook.
+> * Link a Facebook page to Content Moderator using a Facebook application.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+This diagram illustrates each component of this scenario:
+
+![Diagram of Content Moderator receiving information from Facebook through "FBListener" and sending information through "CMListener"](images/tutorial-facebook-moderation.png)
+
+## Prerequisites
+
+- A Content Moderator subscription key. Follow the instructions in [Create a Cognitive Services account](../cognitive-services-apis-create-account.md) to subscribe to the Content Moderator service and get your key.
+- A [Facebook account](https://www.facebook.com/).
+
+## Create a review team
+
+Refer to the [Try Content Moderator on the web](quick-start.md) quickstart for instructions on how to sign up for the [Content Moderator Review tool](https://contentmoderator.cognitive.microsoft.com/) and create a review team. Take note of the **Team ID** value on the **Credentials** page.
+
+## Configure image moderation workflow
+
+Refer to the [Define, test, and use workflows](review-tool-user-guide/workflows.md) guide to create a custom image workflow. Content Moderator will use this workflow to automatically check images on Facebook and send some to the Review tool. Take note of the workflow **name**.
+
+## Configure text moderation workflow
+
+Again, refer to the [Define, test, and use workflows](review-tool-user-guide/workflows.md) guide; this time, create a custom text workflow. Content Moderator will use this workflow to automatically check text content. Take note of the workflow **name**.
+
+![Configure Text Workflow](images/text-workflow-configure.PNG)
+
+Test your workflow using the **Execute Workflow** button.
+
+![Test Text Workflow](images/text-workflow-test.PNG)
+
+## Create Azure Functions
+
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps:
+
+1. Create an Azure Function App as shown on the [Azure Functions](../../azure-functions/functions-create-function-app-portal.md) page.
+1. Go to the newly created Function App.
+1. Within the App, go to the **Platform features** tab and select **Configuration**. In the **Application settings** section of the next page, select **New application setting** to add the following key/value pairs:
+
+ | App Setting name | value |
+ | -- |-|
+ | `cm:TeamId` | Your Content Moderator TeamId |
+ | `cm:SubscriptionKey` | Your Content Moderator subscription key - See [Credentials](./review-tool-user-guide/configure.md#credentials) |
+ | `cm:Region` | Your Content Moderator region name, without the spaces. You can find this name in the **Location** field of the **Overview** tab of your Azure resource.|
+ | `cm:ImageWorkflow` | Name of the workflow to run on Images |
+ | `cm:TextWorkflow` | Name of the workflow to run on Text |
+ | `cm:CallbackEndpoint` | Url for the CMListener Function App that you will create later in this guide |
+ | `fb:VerificationToken` | A secret token that you create, used to subscribe to the Facebook feed events |
+ | `fb:PageAccessToken` | The Facebook Graph API access token, which does not expire and allows the function to hide/delete posts on your behalf. You will get this token at a later step. |
+
+ Click the **Save** button at the top of the page.
+
+1. Go back to the **Platform features** tab. Use the **+** button on the left pane to bring up the **New function** pane. The function you are about to create will receive events from Facebook.
+
+ ![Azure Functions pane with the Add Function button highlighted.](images/new-function.png)
+
+ 1. Click on the tile that says **Http trigger**.
+ 1. Enter the name **FBListener**. The **Authorization Level** field should be set to **Function**.
+ 1. Click **Create**.
+ 1. Replace the contents of the **run.csx** with the contents from **FbListener/run.csx**
+
+ [!code-csharp[FBListener: csx file](~/samples-fbPageModeration/FbListener/run.csx?range=1-154)]
+
+1. Create a new **Http trigger** function named **CMListener**. This function receives events from Content Moderator. Replace the contents of the **run.csx** with the contents from **CMListener/run.csx**
+
+ [!code-csharp[FBListener: csx file](~/samples-fbPageModeration/CmListener/run.csx?range=1-110)]
+++
+## Configure the Facebook page and App
+
+1. Create a Facebook App.
+
+ ![facebook developer page](images/facebook-developer-app.png)
+
+ 1. Navigate to the [Facebook developer site](https://developers.facebook.com/)
+ 1. Go to **My Apps**.
+ 1. Add a New App.
+ 1. Provide a name
+ 1. Select **Webhooks -> Set Up**
+ 1. Select **Page** in the dropdown menu and select **Subscribe to this object**
+ 1. Provide the **FBListener Url** as the Callback URL and the **Verify Token** you configured under the **Function App Settings**
+ 1. Once subscribed, scroll down to feed and select **subscribe**.
+ 1. Select the **Test** button of the **feed** row to send a test message to your FBListener Azure Function, then hit the **Send to My Server** button. You should see the request being received on your FBListener.
+
+1. Create a Facebook Page.
+
+ > [!IMPORTANT]
+ > In 2018, Facebook implemented a more strict vetting of Facebook apps. You will not be able to execute sections 2, 3 and 4 if your app has not been reviewed and approved by the Facebook review team.
+
+ 1. Navigate to [Facebook](https://www.facebook.com/pages) and create a **new Facebook Page**.
+ 1. Allow the Facebook App to access this page by following these steps:
+ 1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/).
+ 1. Select **Application**.
+ 1. Select **Page Access Token**, and send a **Get** request.
+ 1. Select the **Page ID** in the response.
+ 1. Now append **/subscribed_apps** to the URL and send a **Get** (empty response) request.
+ 1. Submit a **Post** request. You get the response as **success: true**.
+
+3. Create a non-expiring Graph API access token.
+
+ 1. Navigate to the [Graph API Explorer](https://developers.facebook.com/tools/explorer/).
+ 2. Select the **Application** option.
+ 3. Select the **Get User Access Token** option.
+ 4. Under the **Select Permissions**, select **manage_pages** and **publish_pages** options.
+ 5. We will use the **access token** (Short Lived Token) in the next step.
+
+4. We use Postman for the next few steps.
+
+ 1. Open **Postman** (or get it [here](https://www.getpostman.com/)).
+ 2. Import these two files:
+ 1. [Postman Collection](https://github.com/MicrosoftContentModerator/samples-fbPageModeration/blob/master/Facebook%20Permanant%20Page%20Access%20Token.postman_collection.json)
+ 2. [Postman Environment](https://github.com/MicrosoftContentModerator/samples-fbPageModeration/blob/master/FB%20Page%20Access%20Token%20Environment.postman_environment.json)
+ 3. Update these environment variables:
+
+ | Key | Value |
+ | -- |-|
+ | appId | Insert your Facebook App Identifier here |
+ | appSecret | Insert your Facebook App's secret here |
+ | short_lived_token | Insert the short lived user access token you generated in the previous step |
+ 4. Now run the 3 APIs listed in the collection:
+ 1. Select **Generate Long-Lived Access Token** and click **Send**.
+ 2. Select **Get User ID** and click **Send**.
+ 3. Select **Get Permanent Page Access Token** and click **Send**.
+ 5. Copy the **access_token** value from the response and assign it to the App setting, **fb:PageAccessToken**.
+
+The solution sends all images and text posted on your Facebook page to Content Moderator. Then the workflows that you configured earlier are invoked. The content that does not pass your criteria defined in the workflows gets passed to reviews within the review tool. The rest of the content gets published automatically.
+
+## Next steps
+
+In this tutorial, you set up a program to moderate the posts and comments on a Facebook page, routing content that fails your workflow criteria to a review team. Next, learn more about the details of image moderation.
+
+> [!div class="nextstepaction"]
+> [Image moderation](./image-moderation-api.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/whats-new.md
Learn what's new in the service. These items include release notes, videos, blog
* [Continuous integration tools](developer-reference-resource.md#continuous-integration-tools)
* Workshop - learn best practices for [_Natural Language Understanding_ (NLU) using LUIS](developer-reference-resource.md#workshops)
* [Customer managed keys](./encrypt-data-at-rest.md) - encrypt all the data you use in LUIS by using your own key
-* [AI show](https://channel9.msdn.com/Shows/AI-Show/New-Features-in-Language-Understanding) (video) - see the new features in LUIS
+* [AI show](/shows/AI-Show/New-Features-in-Language-Understanding) (video) - see the new features in LUIS
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/whats-new.md
Learn what's new with QnA Maker.
* New version of QnA Maker launched in free Public Preview. Read more [here](https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575).
-> [!VIDEO https://channel9.msdn.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player]
+> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-QnA-managed-Now-in-Public-Preview/player]
* Simplified resource creation
* End to End region support
* Deep learnt ranking model
cognitive-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams.md
Title: Stream codec-compressed audio with the Speech SDK - Speech service
-description: Learn how to stream compressed audio to the Speech service with the Speech SDK. Available for C++, C#, and Java for Linux, Java in Android and Objective-C in iOS.
+description: Learn how to stream compressed audio to the Speech service with the Speech SDK.
++ Previously updated : 03/30/2020 Last updated : 01/13/2022
ms.devlang: cpp, csharp, golang, java, python
zone_pivot_groups: programming-languages-set-twenty-eight
-# Use codec-compressed audio input
+# Stream codec-compressed audio
-The Speech SDK and Speech CLI can accept compressed audio formats using GStreamer. GStreamer decompresses the audio before it's sent over the wire to the Speech service as raw PCM.
+The Speech SDK and Speech CLI use GStreamer to support different kinds of input audio formats. GStreamer decompresses the audio before it's sent over the wire to the Speech service as raw PCM.
++
+## Installing GStreamer
+
+Choose a platform for installation instructions.
Platform | Languages | Supported GStreamer version
: | : | ::
+Android | Java | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/android/1.18.3/)
Linux | C++, C#, Java, Python, Go | [Supported Linux distributions and target architectures](~/articles/cognitive-services/speech-service/speech-sdk.md)
Windows (excluding UWP) | C++, C#, Java, Python | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/windows/1.18.3/msvc/gstreamer-1.0-msvc-x86_64-1.18.3.msi)
-Android | Java | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/android/1.18.3/)
-## Installing GStreamer on Linux
+### [Android](#tab/android)
+
+See [GStreamer configuration by programming language](#gstreamer-configuration) for the details about building libgstreamer_android.so.
+
+For more information, see [Android installation instructions](https://gstreamer.freedesktop.org/documentation/installing/for-android-development.html?gi-language=c).
+
+### [Linux](#tab/linux)
For more information, see [Linux installation instructions](https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c).
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly
```
-## Installing GStreamer on Windows
+### [Windows](#tab/windows)
Make sure that packages of the same platform (x64 or x86) are installed. For example, if you installed the x64 package for Python, then you need to install the x64 GStreamer package. The instructions below are for the x64 packages.
Make sure that packages of the same platform (x64 or x86) are installed. For exa
For more information about GStreamer, see [Windows installation instructions](https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c).
-## Using GStreamer in Android
-Look at the Java tab above for the details about building libgstreamer_android.so
+***
-For more information see [Android installation instructions](https://gstreamer.freedesktop.org/documentation/installing/for-android-development.html?gi-language=c).
-
-## Speech SDK version required for compressed audio input
-* Speech SDK version 1.10.0 or later is required for RHEL 8 and CentOS 8
-* Speech SDK version 1.11.0 or later is required for Windows.
-* Speech SDK version 1.16.0 or later for the latest GStreamer on Windows and Android.
-
+## GStreamer configuration
-## GStreamer required to handle compressed audio
+> [!NOTE]
+> GStreamer configuration requirements vary by programming language. For details, choose your programming language at the top of this page. The contents of this section will be updated.
::: zone pivot="programming-language-csharp"
[!INCLUDE [prerequisites](includes/how-to/compressed-audio-input/csharp/prerequisites.md)]
For more information see [Android installation instructions](https://gstreamer.f
[!INCLUDE [prerequisites](includes/how-to/compressed-audio-input/go/prerequisites.md)]
::: zone-end
-## Example code using codec compressed audio input
+## Example
::: zone pivot="programming-language-csharp"
[!INCLUDE [prerequisites](includes/how-to/compressed-audio-input/csharp/examples.md)]
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
Title: Speech phonetic sets - Speech service
+ Title: Speech phonetic alphabets - Speech service
-description: Learn how to the Speech service phonetic alphabet maps to the International Phonetic Alphabet (IPA), and when to use which set.
+description: Speech service phonetic alphabet and International Phonetic Alphabet (IPA) examples.
-+ Previously updated : 03/04/2020 Last updated : 01/13/2022
-# Speech service phonetic sets
+# SSML phonetic alphabets
-The Speech service defines phonetic alphabets ("phone sets" for short), consisting of seven languages; `en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`. The Speech service phone sets typically map to the <a href="https://en.wikipedia.org/wiki/International_Phonetic_Alphabet" target="_blank">International Phonetic Alphabet (IPA) </a>. Speech service phone sets are used in conjunction with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md), as part of the Text-to-speech service offering. In this article, you'll learn how these phone sets are mapped and when to use which phone set.
+Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to improve pronunciation of Text-to-speech voices. See [Use phonemes to improve pronunciation](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation) to learn when and how to use each alphabet.
-# [en-US](#tab/en-US)
+## Speech service phonetic alphabet
-### English suprasegmentals
+For some locales, the Speech service defines its own phonetic alphabets that typically map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The 7 locales that support `sapi` are: `en-US`, `fr-FR`, `de-DE`, `es-ES`, `ja-JP`, `zh-CN`, and `zh-TW`.
-| Example 1 (Onset for consonant, word initial for vowel) | Example 2 (Intervocalic for consonant, word medial nucleus for vowel) | Example 3 (Coda for consonant, word final for vowel) | Comments |
+You set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
+
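For example, here's a minimal SSML sketch using the `phoneme` element; the voice name is only an illustration, and any Text-to-speech voice can be used:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyNeural">
        <phoneme alphabet="ipa" ph="təˈmeɪtoʊ">tomato</phoneme>
    </voice>
</speak>
```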
+### [en-US](#tab/en-US)
+
+#### English suprasegmentals
+
+|Example 1 (Onset for consonant, word initial for vowel)|Example 2 (Intervocalic for consonant, word medial nucleus for vowel)|Example 3 (Coda for consonant, word final for vowel)|Comments|
|--|--|--|--|
| burger /b er **1** r - g ax r/ | falafel /f ax - l aa **1** - f ax l/ | guitar /g ih - t aa **1** r/ | Speech service phone set puts stress after the vowel of the stressed syllable |
| inopportune /ih **2** - n aa - p ax r - t uw 1 n/ | dissimilarity /d ih - s ih **2**- m ax - l eh 1 - r ax - t iy/ | workforce /w er 1 r k - f ao **2** r s/ | Speech service phone set puts stress after the vowel of the sub-stressed syllable |
-### English vowels
+#### English vowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-||--|--|
The Speech service defines phonetic alphabets ("phone sets" for short), consisti
| y uw | `ju` | **Yu**ma | h**u**man | f**ew** |
| ax | `ə` | **a**go | wom**a**n | are**a** |
-### English R-colored vowels
+#### English R-colored vowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|--|-||
The Speech service defines phonetic alphabets ("phone sets" for short), consisti
| er r | `ɝ` | **ear**th | b**ir**d | f**ur** |
| ax r | `ɚ` | | all**er**gy | supp**er** |
-### English Semivowels
+#### English Semivowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|||--|
| w | `w` | **w**ith, s**ue**de | al**w**ays | |
| y | `j` | **y**ard, f**e**w | on**i**on | |
-### English aspirated oral stops
+#### English aspirated oral stops
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|--|-||
The Speech service defines phonetic alphabets ("phone sets" for short), consisti
| k | `k` | **c**ut | sla**ck**er | Ira**q** |
| g | `g` | **g**o | a**g**o | dra**g** |
-### English Nasal stops
+#### English Nasal stops
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|||-|
The Speech service defines phonetic alphabets ("phone sets" for short), consisti
| n | `n` | **n**o, s**n**ow | te**n**t | chicke**n** |
| ng | `ŋ` | | li**n**k | s**ing** |
-### English fricatives
+#### English fricatives
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|-|||
The Speech service defines phonetic alphabets ("phone sets" for short), consisti
| zh | `ʒ` | **J**acques | plea**s**ure | gara**g**e |
| h | `h` | **h**elp | en**h**ance | a-**h**a! |
-### English affricates
+#### English affricates
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|--|--||
| ch | `tʃ` | **ch**in | fu**t**ure | atta**ch** |
| jh | `dʒ` | **j**oy | ori**g**inal | oran**g**e |
-### English approximants
+#### English approximants
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|--||--|
| l | `l` | **l**id, g**l**ad | pa**l**ace | chi**ll** |
| r | `ɹ` | **r**ed, b**r**ing | bo**rr**ow | ta**r** |
-# [fr-FR](#tab/fr-FR)
+### [fr-FR](#tab/fr-FR)
-### French suprasegmentals
+#### French suprasegmentals
The Speech service phone set puts stress after the vowel of the stressed syllable; however, the `fr-FR` Speech service phone set doesn't support the IPA substress 'ˌ'. If the IPA substress is needed, you should use the IPA directly.
-### French vowels
+#### French vowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-||--|--|
The Speech service phone set puts stress after the vowel of the stressed syllabl
| uw | `u` | **ou**trage | intr**ou**vable | **ou** |
| uy | `y` | **u**ne | p**u**nir | él**u** |
-### French consonants
+#### French consonants
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|-||-|
The Speech service phone set puts stress after the vowel of the stressed syllabl
> [!TIP]
> The `fr-FR` Speech service phone set doesn't support the following French liaisons: `n‿`, `t‿`, and `z‿`. If they are needed, you should consider using the IPA directly.
-# [de-DE](#tab/de-DE)
+### [de-DE](#tab/de-DE)
-### German suprasegmentals
+#### German suprasegmentals
| Example 1 (Onset for consonant, word initial for vowel) | Example 2 (Intervocalic for consonant, word medial nucleus for vowel) | Example 3 (Coda for consonant, word final for vowel) | Comments |
|--|--|--|--|
| anders /a **1** n - d ax r s/ | Multiplikationszeichen /m uh l - t iy - p l iy - k a - ts y ow **1** n s - ts ay - c n/ | Biologie /b iy - ow - l ow - g iy **1**/ | Speech service phone set puts stress after the vowel of the stressed syllable |
| Allgemeinwissen /a **2** l - g ax - m ay 1 n - v ih - s n/ | Abfallentsorgungsfirma /a 1 p - f a l - ^ eh n t - z oh **2** ax r - g uh ng s - f ih ax r - m a/ | Computertomographie /k oh m - p y uw 1 - t ax r - t ow - m ow - g r a - f iy **2**/ | Speech service phone set puts stress after the vowel of the sub-stressed syllable |
-### German vowels
+#### German vowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|--||||
The Speech service phone set puts stress after the vowel of the stressed syllabl
<a id="de-v-2"></a> **2** *Word-intially only in words of foreign origin such as **A**ppointment. Syllable-initially in: 'v**e**rstauen.*
-### German diphthong
+#### German diphthong
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|--|--|--|
The Speech service phone set puts stress after the vowel of the stressed syllabl
| aw | `au` | **au**ßen | abb**au**st | St**au** |
| oy | `ɔy`, `ɔʏ̯` | **Eu**phorie | tr**äu**mt | sch**eu** |
-### German semivowels
+#### German semivowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-|--|--||
| ax r | `ɐ` | | abänd**er**n | lock**er** |
-### German consonants
+#### German consonants
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|--|--|--|--|
The Speech service phone set puts stress after the vowel of the stressed syllabl
<a id="de-c-12"></a> **12** *Word-initially only in words of foreign origin, such as: **J**uan. Syllable-initially also in words like: Ba**ch**erach.*<br>
-### German oral consonants
+#### German oral consonants
| `sapi` | `ipa` | Example 1 |
|--|-|--|
| ^ | `ʔ` | beachtlich /b ax - ^ a 1 x t - l ih c/ |

> [!NOTE]
-> We need to add a [gs\] phone between two distinct vowels, except the two vowels are a genuine diphthong. This oral consonant is a glottal stop, for more information, see <a href="http://en.wikipedia.org/wiki/Glottal_stop" target="_blank">glottal stop <span class="docon docon-navigate-external x-hidden-focus"></a></a>.
+> We need to add a [gs\] phone between two distinct vowels, except the two vowels are a genuine diphthong. This oral consonant is a glottal stop, for more information, see [glottal stop](http://en.wikipedia.org/wiki/Glottal_stop).
-# [es-ES](#tab/es-ES)
+### [es-ES](#tab/es-ES)
-### Spanish vowels
+#### Spanish vowels
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|-||--|--|
The Speech service phone set puts stress after the vowel of the stressed syllabl
| o | `o` | **o**caso | enc**o**ntrar | ocasenc**o** |
| u | `u` | **u**sted | p**u**nta | Juanl**u** |
-### Spanish consonants
+#### Spanish consonants
| `sapi` | `ipa` | Example 1 | Example 2 | Example 3 |
|--|||-|-|
The Speech service phone set puts stress after the vowel of the stressed syllabl
> [!TIP]
> The `es-ES` Speech service phone set doesn't support the following Spanish IPA: `β`, `ð`, and `ɣ`. If they are needed, you should consider using the IPA directly.
-# [zh-CN](#tab/zh-CN)
+### [zh-CN](#tab/zh-CN)
-The Speech service phone set for `zh-CN` is based on the native phone <a href="https://en.wikipedia.org/wiki/Pinyin" target="_blank">Pinyin </a> set.
+The Speech service phone set for `zh-CN` is based on the native phone [Pinyin](https://en.wikipedia.org/wiki/Pinyin).
-### Tone
+#### Tone
| Pinyin tone | `sapi` | Character example |
|-|--|-|
The Speech service phone set for `zh-CN` is based on the native phone <a href="h
| 累进 | lei 3 -jin 4 |
| 西宅巷 | xi 1 - zhai 2 - xiang 4 |
-# [zh-TW](#tab/zh-TW)
+### [zh-TW](#tab/zh-TW)
-The Speech service phone set for `zh-TW` is based on the native phone <a href="https://en.wikipedia.org/wiki/Bopomofo" target="_blank">Bopomofo </a> set.
+The Speech service phone set for `zh-TW` is based on the native phone [Bopomofo](https://en.wikipedia.org/wiki/Bopomofo).
-### Tone
+#### Tone
| Speech service tone | Bopomofo tone | Example (word) | Speech service phones | Bopomofo | Pinyin (拼音) |
|||-|--|-|-|
The Speech service phone set for `zh-TW` is based on the native phone <a href="h
| 然后 | ㄖㄢˊㄏㄡˋ |
| 剪掉 | ㄐㄧㄢˇㄉㄧㄠˋ |
-# [ja-JP](#tab/ja-JP)
+### [ja-JP](#tab/ja-JP)
-The Speech service phone set for `ja-JP` is based on the native phone <a href="https://en.wikipedia.org/wiki/Kana" target="_blank">Kana </a> set.
+The Speech service phone set for `ja-JP` is based on the native phone [Kana](https://en.wikipedia.org/wiki/Kana) set.
-### Stress
+#### Stress
| `sapi` | `ipa` |
|--|-|
The Speech service phone set for `ja-JP` is based on the native phone <a href="h
| 所有者 | ショュ'ウ?ャ | ɕjojɯˈwɯɕja |
| 最適化 | サィテキカ+ | sajitecikaˌ |

***
+## International Phonetic Alphabet
+
+For the locales below, the Speech service uses the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet).
+
+You set `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
+
+These locales all use the same IPA stress and syllables described here.
+
+|`ipa` | Symbol |
+|-|-|
+| `ˈ` | Primary stress |
+| `ˌ` | Secondary stress |
+| `.` | Syllable boundary |
++
+Select a tab for the IPA phonemes specific to each locale.
+
+### [ca-ES](#tab/ca-ES)
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-|-||-|
+| `a` | **a**men | am**a**ro | est**à** |
+| `ɔ` | **o**dre | ofert**o**ri | microt**ò** |
+| `ə` | **e**stan | s**e**ré | aigu**a** |
+| `b` | **b**aba | do**b**la | |
+| `β` | **v**ià | ba**b**a | |
+| `t͡ʃ` | **tx**adià | ma**tx**ucs | fa**ig** |
+| `d̪` | **d**edicada | con**d**uïa | navida**d** |
+| `ð` | **Th**e_Sun | de**d**icada | trinida**d** |
+| `e` | **é**rem | f**e**ta | ser**é** |
+| `ɛ` | **e**cosistema | incorr**e**cta | hav**er** |
+| `f` | **f**acilitades | a**f**ectarà | àgra**f** |
+| `g` | **g**racia | con**g**ratula | |
+| `ɣ` | | ai**g**ua | |
+| `i` | **i**tinerants | it**i**nerants | zomb**i** |
+| `j` | **hi**ena | espla**i**a | cofo**i** |
+| `d͡ʒ` | **dj**akarta | composta**tg**e | geor**ge** |
+| `k` | **c**urós | dode**c**à | doble**c** |
+| `l` | **l**aberint | mio**l**ar | preva**l** |
+| `ʎ` | **ll**igada | mi**ll**orarà | perbu**ll** |
+| `m` | **m**acadàmies | fe**m**ar | subli**m** |
+| `n` | **n**ecessaris | sa**n**itaris | alterame**nt** |
+| `ŋ` | | algo**n**quí | albe**nc** |
+| `ɲ` | **ny**asa | reme**n**jar | alema**ny** |
+| `o` | **o**mbra | ret**o**ndre | omissi**ó** |
+| `p` | **p**egues | este**p**a | ca**p** |
+| `ɾ` | | ca**r**o | càrte**r** |
+| `r` | **r**abada | ca**rr**o | lofòfo**r** |
+| `s` | **c**eri | cur**s**ar | cu**s** |
+| `ʃ` | **x**acar | micro**x**ip | midra**ix** |
+| `t̪` | **t**abacaires | es**t**ratifica | debatu**t** |
+| `θ` | **c**eará | ve**c**inos | Álvare**z** |
+| `u` | **u**niversitaris | candidat**u**res | cron**o** |
+| `w` | **w**estfalià | ina**u**gurar | inscri**u** |
+| `x` | **j**uanita | mu**j**eres | heinri**ch** |
+| `z` | **z**elar | bra**s**ils | alian**ze** |
++
+### [en-GB](#tab/en-GB)
+
+#### Vowels
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||--|-|
+| `ɑː` | | f**a**st | br**a** |
+| `æ` | | f**a**t | |
+| `ʌ` | | b**u**g | |
+| `ɛə` | | | h**air** |
+| `aʊ` | **ou**t | m**ou**th | h**ow** |
+| `ə` | **a** | | driv**er** |
+| `aɪ` | | f**i**ve | |
+| `ɛ` | **e**gg | dr**e**ss | |
+| `ɜː` | **er**nest | sh**ir**t | f**ur** |
+| `eɪ` | **ai**lment | l**a**ke | p**ay** |
+| `ɪ` | | add**i**ng | |
+| `ɪə` | | b**ear**d | h**ear** |
+| `iː` | **ea**t | s**ee**d | s**ee** |
+| `ɒ` | | p**o**d | |
+| `ɔː` | | d**aw**n | |
+| `əʊ` | | c**o**de | pill**ow** |
+| `ɔɪ` | | p**oi**nt | b**oy** |
+| `ʊ` | | l**oo**k | |
+| `ʊə` | | | t**our** |
+| `uː` | | f**oo**d | t**wo** |
+
+#### Consonants
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||--|-|
+| `b ` | **b**ike | ri**bb**on | ri**b** |
+| `tʃ ` | **ch**allenge | na**t**ure | ri**ch** |
+| `d ` | **d**ate | ca**dd**y | sli**d** |
+| `ð` | **th**is | fa**th**er | brea**the** |
+| `f ` | **f**ace | lau**gh**ing | enou**gh** |
+| `g ` | **g**old | bra**gg**ing | be**g** |
+| `h ` | **h**urry | a**h**ead | |
+| `j` | **y**es | | |
+| `dʒ` | **g**in | ba**dg**er | bri**dge** |
+| `k ` | **c**at | lu**ck**y | tru**ck** |
+| `l ` | **l**eft | ga**ll**on | fi**ll** |
+| `m ` | **m**ile | li**m**it | ha**m** |
+| `n ` | **n**ose | pho**n**etic | ti**n** |
+| `ŋ ` | | si**ng**er | lo**ng** |
+| `p ` | **p**rice | su**p**er | ti**p** |
+| `ɹ` | **r**ate | ve**r**y | |
+| `s ` | **s**ay | si**ss**y | pa**ss** |
+| `ʃ ` | **sh**op | ca**sh**ier | lea**sh** |
+| `t ` | **t**op | ki**tt**en | be**t** |
+| `θ` | **th**eatre | ma**the**matics | brea**th** |
+| `v` | **v**ery | li**v**er | ha**ve** |
+| `w ` | **w**ill | | |
+| `z ` | **z**ero | bli**zz**ard | ro**se** |
++
+### [es-MX](#tab/es-MX)
+
+#### Vowels
+
+| `ipa` | Example 1 | Example 2 | Example 3|
+|-||-|-|
+| `ɑ` | **a**zúcar | tom**a**te | rop**a** |
+| `e` | **e**so | rem**e**ro | am**é** |
+| `i` | h**i**lo | liqu**i**do | ol**í** |
+| `o` | h**o**gar | ol**o**te | cas**o** |
+| `u` | **u**no | ning**u**no | tab**ú** |
+
+#### Consonants
+
+| `ipa` | Example 1 | Example 2 | Example 3|
+|-||-|-|
+| `b` | **b**ote | | |
+| `β` | ór**b**ita | envol**v**ente | |
+| `t͡ʃ` | **ch**ico | ha**ch**a | |
+| `d` | **d**átil | | |
+| `ð` | or**d**en | o**d**a | |
+| `f` | **f**oco | o**f**icina | |
+| `g` | **g**ajo | | |
+| `ɣ` | a**g**ua | ho**gu**era | |
+| `j` | **i**odo | cal**i**ente | re**y** |
+| `j͡j` | | o**ll**a | |
+| `k` | **c**asa | á**c**aro | |
+| `l` | **l**oco | a**l**a | |
+| `ʎ` | **ll**ave | en**y**ugo | |
+| `m` | **m**ata | a**m**ar | |
+| `n` | **n**ada | a**n**o | |
+| `ɲ` | **ñ**oño | a**ñ**o | |
+| `p` | **p**apa | pa**p**a | |
+| `ɾ` | | a**r**o | |
+| `r` | **r**ojo | pe**rr**o | |
+| `s` | **s**illa | a**s**a | |
+| `t` | **t**omate | | sof**t** |
+| `w` | h**u**evo | | |
+| `x` | **j**arra | ho**j**a | |
++
+### [it-IT](#tab/it-IT)
+
+#### Vowels
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||--|--|
+| `a` | **a**mo | s**a**no | scort**a** |
+| `ai` | **ai**cs | abb**ai**no | m**ai** |
+| `aʊ` | **au**dio | r**au**co | b**au** |
+| `e` | **e**roico | v**e**nti / numb**e**r | sapor**e** |
+| `ɛ` | **e**lle | avv**e**nto | lacch**è** |
+| `ej` | **ei**ra | em**ai**l | l**ei** |
+| `ɛu` | **eu**ro | n**eu**ro | |
+| `ei` | | as**ei**tà | scultor**ei** |
+| `eu` | **eu**ropeo | f**eu**dale | |
+| `i` | **i**taliano | v**i**no | sol**i** |
+| `u` | **u**nico | l**u**na | zeb**ù** |
+| `o` | **o**besità | stra**o**rdinari | amic**o** |
+| `ɔ` | **o**tto | b**o**tte / str**o**kes | per**ò** |
+| `oj` | | oppi**oi**di | |
+| `oi` | **oi**bò | intellettual**oi**de | Gameb**oy** |
+| `ou` | | sh**ow** | talksh**ow** |
+
+#### Consonants
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||--|--|
+| `b` | **b**ene | e**b**anista | Euroclu**b** |
+| `bː` | | go**bb**a | |
+| `ʧ` | **c**enare | a**c**ido | fren**ch** |
+| `tʃː` | | bra**cc**io | |
+| `kː` | | pa**cc**o | Innsbru**ck** |
+| `d` | **d**ente | a**d**orare | interlan**d** |
+| `dː` | | ca**dd**e | |
+| `ʣ` | **z**ero | or**z**o | |
+| `ʣː` | | me**zz**o | |
+| `f` | **f**ame | a**f**a | ale**f** |
+| `fː` | | be**ff**a | blu**ff** |
+| `ʤ` | **g**ente | a**g**ire | bei**ge** |
+| `ʤː` | | o**gg**i | |
+| `g` | **g**ara | al**gh**e | smo**g** |
+| `gː` | | fu**gg**a | Zue**gg** |
+| `ʎ` | **gl**i | ammira**gl**i | |
+| `ʎː` | | fo**gl**ia | |
+| `ɲː` | | ba**gn**o | |
+| `ɲ` | **gn**occo | padri**gn**o | Montai**gne** |
+| `j` | **i**eri | p**i**ede | freewif**i** |
+| `k` | **c**aro | an**ch**e | ti**c** ta**c** |
+| `l` | **l**ana | a**l**ato | co**l** |
+| `lː` | | co**ll**a | fu**ll** |
+| `m` | **m**ano | a**m**are | Ada**m** |
+| `mː` | | gra**mm**o | |
+| `n` | **n**aso | la**n**a | no**n** |
+| `nː` | | pa**nn**a | |
+| `p` | **p**ane | e**p**ico | sto**p** |
+| `pː` | | co**pp**a | |
+| `ɾ` | **r**ana | moto**r**e | pe**r** |
+| `r.r` | | ca**rr**o | Sta**rr** |
+| `s` | **s**ano | ca**s**cata | lapi**s** |
+| `sː` | | ca**ss**a | cordle**ss** |
+| `ʃ` | **sc**emo | Gram**sc**i | sla**sh** |
+| `ʃː` | | a**sc**ia | fich**es** |
+| `t` | **t**ana | e**t**erno | al**t** |
+| `tː` | | zi**tt**o | |
+| `ʦ` | **ts**unami | turbolen**z**a | subtes**ts** |
+| `ʦː` | | bo**zz**a | |
+| `v` | **v**ento | a**v**aro | Asimo**v** |
+| `vː` | | be**vv**i | |
+| `w` | **u**ovo | d**u**omo | Marlo**we** |
+
+### [pt-BR](#tab/pt-BR)
+
+#### Vowels
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-|--||--|
+| `i` | **i**lha | f**i**car | com**i** |
+| `ĩ` | **in**tacto | p**in**tar | aberd**een** |
+| `ɑ` | **á**gua | d**a**da | m**á** |
+| `ɔ` | **o**ra | p**o**rta | cip**ó** |
+| `u` | **u**fanista | m**u**la | per**u** |
+| `ũ` | **un**s | p**un**gente | k**uhn** |
+| `o` | **o**rtopedista | f**o**fo | av**ô** |
+| `e` | **e**lefante | el**e**fante | voc**ê** |
+| `ɐ̃` | **an**ta | c**an**ta | amanh**ã** |
+| `ɐ` | **a**qui | am**a**ciar | dad**a** |
+| `ɛ` | **e**la | s**e**rra | at**é** |
+| `ẽ` | **en**dorfina | p**en**der | |
+| `õ` | **on**tologia | c**on**to | |
+
+#### Consonants
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-|--||--|
+| `w̃` | | | atualizaçã**o** |
+| `w` | **w**ashington | ág**u**a | uso**u** |
+| `p` | **p**ato | ca**p**ital | |
+| `b` | **b**ola | ca**b**eça | |
+| `t` | **t**ato | ra**t**o | |
+| `d` | **d**ado | ama**d**o | |
+| `g` | **g**ato | mara**g**ato | |
+| `m` | **m**ato | co**m**er | |
+| `n` | **n**o | a**n**o | |
+| `ŋ` | **nh**oque | ni**nh**o | |
+| `f` | **f**aca | a**f**ago | |
+| `v` | **v**aca | ca**v**ar | |
+| `ɹ` | | pa**r**a | ama**r** |
+| `s` | **s**atisfeito | amas**s**ado | casado**s** |
+| `z` | **z**ebra | a**z**ar | |
+| `ʃ` | **ch**eirar | ma**ch**ado | |
+| `ʒ` | **j**aca | in**j**usta | |
+| `x` | **r**ota | ca**rr**eta | |
+| `tʃ` | **t**irar | a**t**irar | |
+| `dʒ` | **d**ia | a**d**iar | |
+| `l` | **l**ata | a**l**eto | |
+| `ʎ` | **lh**ama | ma**lh**ado | |
+| `j̃` | | inabalavelme**n**te | hífe**n** |
+| `j` | | ca**i**xa | sa**i** |
+| `k` | **c**asa | ensa**c**ado | |
++
+### [pt-PT](#tab/pt-PT)
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-|-|--||
+| `a` | **á**bdito | consul**a**r | medir**á** |
+| `ɐ` | **a**bacaxi | dom**a**ção | long**a** |
+| `ɐ͡j` | **ei**dético | dir**ei**ta | detect**ei** |
+| `ɐ̃` | **an**verso | viaj**an**te | af**ã** |
+| `ɐ͡j̃`| **an**gels | viag**en**s | tamb**ém** |
+| `ɐ͡w̃`| **hão** | significaç**ão**zinha | gab**ão** |
+| `ɐ͡w` | | s**au**dar | hell**o** |
+| `a͡j` | **ai**rosa | cultur**ai**s | v**ai** |
+| `ɔ` | **ho**ra | dep**ó**sito | l**ó** |
+| `ɔ͡j` | **ói**s | her**ói**co | d**ói** |
+| `a͡w` | **ou**tlook | inc**au**to | p**au** |
+| `ə` | **e**xtremo | sapr**e**mar | noit**e** |
+| `b` | **b**acalhau | ta**b**aco | clu**b** |
+| `d` | **d**ado | da**d**o | ban**d** |
+| `ɾ` | **r**ename | ve**r**ás | chuta**r** |
+| `e` | **e**clipse | hav**e**r | buff**et** |
+| `ɛ` | **e**co | hib**é**rnios | pat**é** |
+| `ɛ͡w` | | pirin**éu**s | escarc**éu** |
+| `ẽ` | **em**baçado | dirim**en**te | ám**en** |
+| `e͡w` | **eu** | d**eu**s | beb**eu** |
+| `f` | **f**im | e**f**icácia | gol**f** |
+| `g` | **g**adinho | ape**g**o | blo**g** |
+| `i` | **i**greja | aplaud**i**do | escrev**i** |
+| `ĩ` | **im**paciente | esp**in**çar | manequ**im** |
+| `i͡w` | | n**iu**e | garant**iu** |
+| `j` | **i**ode | desassoc**i**ado | substitu**i** |
+| `k` | **k**iwi | trafi**c**ado | sna**ck** |
+| `l` | **l**aborar | pe**l**ada | fu**ll** |
+| `ɫ` | | po**l**vo | brasi**l** |
+| `ʎ` | **lh**anamente | anti**lh**as | |
+| `m` | **m**aça | ama**nh**ã | mode**m** |
+| `n` | **n**utritivo | campa**n**a | sca**n** |
+| `ɲ` | **nh**ambu-grande | toalhi**nh**a | pe**nh** |
+| `o` | **o**fir | consumad**o**r | stacatt**o** |
+| `o͡j` | **oi**rar | n**oi**te | f**oi** |
+| `õ` | **om**brão | barr**on**da | d**om** |
+| `o͡j̃`| | ocupaç**õe**s | exp**õe** |
+| `p` | **p**ai | crá**p**ula | lapto**p** |
+| `ʀ` | **r**ecordar | gue**rr**a | chauffeu**r** |
+| `s` | **s**eco | gro**ss**eira | bo**ss** |
+| `ʃ` | **ch**uva | du**ch**ar | médio**s** |
+| `t` | **t**abaco | pelo**t**a | inpu**t** |
+| `u` | **u**bi | fac**u**ltativo | fad**o** |
+| `u͡j` | **ui**var | arr**ui**vado | f**ui** |
+| `ũ` | **um**bilical | f**un**cionar | fór**um** |
+| `u͡j̃`| | m**ui**to | |
+| `v` | **v**aca | combatí**v**el | pavlo**v** |
+| `w` | **w**affle | restit**u**ir | katofi**o** |
+| `z` | **z**âmbia | pra**z**er | ja**zz** |
++
+### [ru-RU](#tab/ru-RU)
+
+#### Vowels
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||-|-|
+| `a` | **а**дрес | р**а**дость | бед**а** |
+| `ʌ` | **о**блаков | з**а**стенчивость | внучк**а** |
+| `ə` | | ябл**о**чн**о**го | |
+| `ɛ` | **э**пос | б**е**лка | каф**е** |
+| `i` | **и**ней | л**и**ст | соловь**и** |
+| `ɪ` | **и**гра | м**е**дведь | мгновень**е** |
+| `ɨ` | **э**нергия | л**ы**с**ы**й | вес**ы** |
+| `ɔ` | **о**крик | м**о**т | весл**о** |
+| `u` | **у**жин | к**у**ст | пойд**у** |
+
+#### Consonants
+
+| `ipa` | Example 1 | Example 2 | Example 3 |
+|-||-|-|
+| `p` | **п**рофессор | по**п**лавок | укро**п** |
+| `pʲ` | **П**етербург | осле**п**ительно | сте**пь** |
+| `b` | **б**ольшой | со**б**ака | |
+| `bʲ` | **б**елый | у**б**едить | |
+| `t` | **т**айна | с**т**аренький | тви**д** |
+| `tʲ` | **т**епло | учи**т**ель | сине**ть** |
+| `d` | **д**оверчиво | не**д**алеко | |
+| `dʲ` | **д**ядя | е**д**иница | |
+| `k` | **к**рыло | ку**к**уруза | кустарни**к** |
+| `kʲ` | **к**ипяток | неяр**к**ий | |
+| `g` | **г**роза | немно**г**о | |
+| `gʲ` | **г**ерань | помо**г**ите | |
+| `x` | **х**ороший | по**х**од | ду**х** |
+| `xʲ` | **х**илый | хи**х**иканье | |
+| `f` | **ф**антазия | шка**ф**ах | кро**в** |
+| `fʲ` | **ф**естиваль | ко**ф**е | вер**фь** |
+| `v` | **в**нучка | сине**в**а | |
+| `vʲ` | **в**ертеть | с**в**ет | |
+| `s` | **с**казочник | ле**с**ной | карапу**з** |
+| `sʲ` | **с**еять | по**с**ередине | зажгли**сь** |
+| `z` | **з**аяц | зве**з**да | |
+| `zʲ` | **з**емляника | со**з**ерцал | |
+| `ʂ` | **ш**уметь | п**ш**ено | мы**шь** |
+| `ʐ` | **ж**илище | кру**ж**евной | |
+| `t͡s` | **ц**елитель | Вене**ц**ия | незнакоме**ц** |
+| `t͡ɕ` | **ч**асы | о**ч**арование | мя**ч** |
+| `ɕː` | **щ**елчок | о**щ**у**щ**ать | ле**щ** |
+| `m` | **м**олодежь | нес**м**отря | то**м** |
+| `mʲ` | **м**еч | ды**м**ить | се**мь** |
+| `n` | **н**ачало | око**н**це | со**н** |
+| `nʲ` | **н**ебо | ли**н**ялый | тюле**нь** |
+| `l` | **л**ужа | до**л**гожитель | ме**л** |
+| `lʲ` | **л**ицо | неда**л**еко | со**ль** |
+| `r` | **р**адость | со**р**ока | дво**р** |
+| `rʲ` | **р**ябина | набе**р**ежная | две**рь** |
+| `j` | **е**сть | ма**я**к | игрушечны**й** |
+
+***
+
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Phonetic alphabets are composed of phones, which are made up of letters, numbers
| Attribute | Description | Required / Optional | |--|-||
-| `alphabet` | Specifies the phonetic alphabet to use when synthesizing the pronunciation of the string in the `ph` attribute. The string specifying the alphabet must be specified in lowercase letters. The following are the possible alphabets that you can specify.<ul><li>`ipa` &ndash; <a href="https://en.wikipedia.org/wiki/International_Phonetic_Alphabet" target="_blank">International Phonetic Alphabet </a></li><li>`sapi` &ndash; [Speech service phonetic alphabet](speech-ssml-phonetic-sets.md)</li><li>`ups` &ndash;<a href="https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm" target="_blank"> Universal Phone Set</a></li></ul><br>The alphabet applies only to the `phoneme` in the element.| Optional |
+| `alphabet` | Specifies the phonetic alphabet to use when synthesizing the pronunciation of the string in the `ph` attribute. The string specifying the alphabet must be specified in lowercase letters. The following are the possible alphabets that you can specify.<ul><li>`ipa` &ndash; [International Phonetic Alphabet](speech-ssml-phonetic-sets.md#speech-service-phonetic-alphabet)</li><li>`sapi` &ndash; [Speech service phonetic alphabet](speech-ssml-phonetic-sets.md#speech-service-phonetic-alphabet)</li><li>`ups` &ndash; [Universal Phone Set](https://documentation.help/Microsoft-Speech-Platform-SDK-11/17509a49-cae7-41f5-b61d-07beaae872ea.htm)</li></ul><br>The alphabet applies only to the `phoneme` in the element.| Optional |
| `ph` | A string containing phones that specify the pronunciation of the word in the `phoneme` element. If the specified string contains unrecognized phones, the Text-to-Speech (TTS) service rejects the entire SSML document and produces none of the speech output specified in the document. | Required if using phonemes. | **Examples**
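As a hedged illustration of these attributes (the voice name, subscription values, and IPA string below are placeholder assumptions, not taken from this article), a minimal C# sketch that speaks SSML containing a `phoneme` element might look like this:

```csharp
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class PhonemeExample
{
    static async Task Main()
    {
        // Placeholder credentials: substitute your own Speech resource key and region.
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // SSML that overrides the default pronunciation of "tomato" with IPA phones.
        string ssml =
            "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>" +
            "<voice name='en-US-JennyNeural'>" +
            "<phoneme alphabet='ipa' ph='təˈmɑːtoʊ'>tomato</phoneme>" +
            "</voice>" +
            "</speak>";

        using var synthesizer = new SpeechSynthesizer(config);
        // If the ph string contained unrecognized phones, the service would reject the whole document.
        await synthesizer.SpeakSsmlAsync(ssml);
    }
}
```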
To define how multiple entities are read, you can create a custom lexicon, which
</lexicon> ```
-The `lexicon` element contains at least one `lexeme` element. Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `grapheme` element contains text describing the <a href="https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography" target="_blank">orthography </a>. The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term. The `phoneme` element provides text describing how the `lexeme` is pronounced. When `alias` and `phoneme` element are provided with the same `grapheme` element, `alias` has higher priority.
+The `lexicon` element contains at least one `lexeme` element. Each `lexeme` element contains at least one `grapheme` element and one or more `grapheme`, `alias`, and `phoneme` elements. The `grapheme` element contains text describing the [orthography](https://www.w3.org/TR/pronunciation-lexicon/#term-Orthography). The `alias` elements are used to indicate the pronunciation of an acronym or an abbreviated term. The `phoneme` element provides text describing how the `lexeme` is pronounced. When `alias` and `phoneme` elements are provided for the same `grapheme` element, `alias` has higher priority.
> [!IMPORTANT] > The `lexeme` element is case sensitive in the custom lexicon. For example, if you only provide a phoneme for the `lexeme` 'Hello', it will not work for the `lexeme` 'hello'.
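To make the element relationships concrete, here's a minimal custom lexicon sketch (the graphemes, alias, and IPA value are illustrative assumptions): `BTW` is read via an `alias`, while `pecan` gets an explicit `phoneme`.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<lexicon version="1.0"
      xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
      alphabet="ipa" xml:lang="en-US">
  <lexeme>
    <!-- alias: expand an abbreviated term -->
    <grapheme>BTW</grapheme>
    <alias>By the way</alias>
  </lexeme>
  <lexeme>
    <!-- phoneme: spell out the pronunciation in IPA -->
    <grapheme>pecan</grapheme>
    <phoneme>pɪˈkɑːn</phoneme>
  </lexeme>
</lexicon>
```

If a `lexeme` carried both an `alias` and a `phoneme` for the same `grapheme`, the `alias` would win, per the priority rule above.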
You can subscribe to the `BookmarkReached` event in Speech SDK to get the bookma
# [C#](#tab/csharp)
-For more information, see <a href="/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkreached" target="_blank"> `BookmarkReached` </a>.
+For more information, see [`BookmarkReached`](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkreached).
```csharp synthesizer.BookmarkReached += (s, e) =>
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [C++](#tab/cpp)
-For more information, see <a href="/cpp/cognitive-services/speech/speechsynthesizer#bookmarkreached" target="_blank"> `BookmarkReached` </a>.
+For more information, see [`BookmarkReached`](/cpp/cognitive-services/speech/speechsynthesizer#bookmarkreached).
```cpp synthesizer->BookmarkReached += [](const SpeechSynthesisBookmarkEventArgs& e)
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [Java](#tab/java)
-For more information, see <a href="/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkReached#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_BookmarkReached" target="_blank"> `BookmarkReached` </a>.
+For more information, see [`BookmarkReached`](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkReached#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_BookmarkReached).
```java synthesizer.BookmarkReached.addEventListener((o, e) -> {
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [Python](#tab/python)
-For more information, see <a href="/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#bookmark-reached" target="_blank"> `bookmark_reached` </a>.
+For more information, see [`bookmark_reached`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#bookmark-reached).
```python # The unit of evt.audio_offset is tick (1 tick = 100 nanoseconds), divide it by 10,000 to convert to milliseconds.
Bookmark reached, audio offset: 1462.5ms, bookmark text: flower_2.
# [JavaScript](#tab/javascript)
-For more information, see <a href="/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesizer#bookmarkReached" target="_blank"> `bookmarkReached`</a>.
+For more information, see [`bookmarkReached`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesizer#bookmarkReached).
```javascript synthesizer.bookmarkReached = function (s, e) {
For the example SSML above, the `bookmarkReached` event will be triggered twice,
# [Objective-C](#tab/objectivec)
-For more information, see <a href="/objectivec/cognitive-services/speech/spxspeechsynthesizer#addbookmarkreachedeventhandler" target="_blank"> `addBookmarkReachedEventHandler` </a>.
+For more information, see [`addBookmarkReachedEventHandler`](/objectivec/cognitive-services/speech/spxspeechsynthesizer#addbookmarkreachedeventhandler).
```objectivec [synthesizer addBookmarkReachedEventHandler: ^ (SPXSpeechSynthesizer *synthesizer, SPXSpeechSynthesisBookmarkEventArgs *eventArgs) {
Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
# [Swift](#tab/swift)
-For more information, see <a href="/objectivec/cognitive-services/speech/spxspeechsynthesizer" target="_blank"> `addBookmarkReachedEventHandler` </a>.
+For more information, see [`addBookmarkReachedEventHandler`](/objectivec/cognitive-services/speech/spxspeechsynthesizer).
cognitive-services Improve Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/improve-model.md
After you have reviewed your [model's evaluation](view-model-evaluation.md), you
> [!NOTE] > This guide focuses on data from the [validation set](train-model.md#data-split) that was created during training.
-### Review validation set
+### Review test set
Using Language Studio, you can review how your model performs against how you expected it to perform. You can review predicted and tagged classes for each model you have trained.
Using Language Studio, you can review how your model performs against how you ex
2. Select **Improve model** from the left side menu.
-3. Select **Review validation set**.
+3. Select **Review test set**.
4. Choose your trained model from **Model** drop-down menu.
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
curl -X GET -H "Ocp-Apim-Subscription-Key: {API-KEY}" -H "Content-Type: applicat
"value": [ { "displayName": "source1",
- "sourceUri": "https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/overview/overview",
+ "sourceUri": "https://docs.microsoft.com/azure/cognitive-services/qnamaker/overview/overview",
"sourceKind": "url", "lastUpdatedDateTime": "2021-05-01T15:13:22Z" },
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
This documentation contains the following types of articles:
* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways. * The [**conceptual articles**](concepts/health-entity-categories.md) provide in-depth explanations of the service's functionality and features.
-> [!VIDEO https://channel9.msdn.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
+> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
## Features
confidential-computing Confidential Enclave Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
spec:
restartPolicy: Never backoffLimit: 0 ```
+Alternatively, you can target specific confidential computing node pools for your container deployments by using node affinity, as shown below:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: sgx-test
+spec:
+ template:
+ metadata:
+ labels:
+ app: sgx-test
+ spec:
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: agentpool
+ operator: In
+ values:
+ - acc # this is the name of your confidential computing node pool
+ - acc_second # this is the name of your second confidential computing node pool
+ containers:
+ - name: sgx-test
+ image: oeciteam/oe-helloworld:1.0
+ resources:
+ limits:
+ kubernetes.azure.com/sgx_epc_mem_in_MiB: 10
+ requests:
+ kubernetes.azure.com/sgx_epc_mem_in_MiB: 10
+ restartPolicy: "Never"
+ backoffLimit: 0
+```
Now use the `kubectl apply` command to create a sample job that will open in a secure enclave, as shown in the following example output:
confidential-computing Confidential Nodes Aks Addon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-addon.md
Azure Kubernetes Service (AKS) provides a plugin for Azure confidential computin
The SGX Device plugin implements the Kubernetes device plugin interface for Enclave Page Cache (EPC) memory. In effect, this plugin makes EPC memory another resource type in Kubernetes. Users can specify limits on EPC just like other resources. Apart from the scheduling function, the device plugin helps assign SGX device driver permissions to confidential workload containers. [A sample implementation of the EPC memory-based deployment](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/helloworld/helm/templates/helloworld.yaml) (`kubernetes.azure.com/sgx_epc_mem_in_MiB`) is available.
-## PSM with SGX quote helper
+## PSW with SGX quote helper
Enclave applications that do remote attestation need to generate a quote. The quote provides cryptographic proof of the identity and the state of the application, along with the enclave's host environment. Quote generation relies on certain trusted software components from Intel, which are part of the SGX Platform Software Components (PSW/DCAP). This PSW is packaged as a daemon set that runs per node. You can use the PSW when requesting an attestation quote from enclave apps. Using the AKS-provided service helps better maintain the compatibility between the PSW and other software components in the host. Read the feature details below.
Enclave applications that do remote attestation need to generate a quote. The qu
Intel supports two attestation modes to run the quote generation. For how to choose which type, see the [attestation type differences](#attestation-type-differences). -- **in-proc**: hosts the trusted software components inside the enclave application process
+- **in-proc**: hosts the trusted software components inside the enclave application process. This method is useful when you are performing local attestation (between two enclave apps in a single VM node).
-- **out-of-proc**: hosts the trusted software components outside of the enclave application.
+- **out-of-proc**: hosts the trusted software components outside of the enclave application. This is the preferred method when performing remote attestation.
SGX applications built using the Open Enclave SDK use in-proc attestation mode by default. SGX-based applications also allow out-of-proc attestation, which requires extra hosting. These applications expose the required components, such as the Architectural Enclave Service Manager (AESM), externally to the application.
You don't have to check for backward compatibility with PSW and DCAP. The provid
### Out-of-proc attestation for confidential workloads
-The out-of-proc attestation model works for confidential workloads. The quote requestor and quote generation are executed separately, but on the same physical machine. The quote generation happens in a centralized manner and serves requests for QUOTES from all entities. Properly define the interface, and make the interface discoverable for any entity to request quotes.
+The out-of-proc attestation model works for confidential workloads. The quote requestor and quote generation are executed separately, but on the same physical machine. The quote generation happens in a centralized manner and serves requests for quotes from all entities. Properly define the interface and make the interface discoverable for any entity to request quotes.
![Diagram of quote requestor and quote generation interface.](./media/confidential-nodes-out-of-proc-attestation/aesmmanager.png)
Each container needs to opt in to use out-of-proc quote generation by setting th
An application can still use the in-proc attestation as before. However, you can't simultaneously use both in-proc and out-of-proc within an application. The out-of-proc infrastructure is available by default and consumes resources.
+> [!NOTE]
+> If you are using Intel SGX wrapper software (OSS/ISV) to run your unmodified containers, the attestation interaction with the hardware is typically handled for your higher-level apps. Refer to the attestation implementation per provider.
+ ### Sample implementation The Docker file below is a sample for an Open Enclave-based application. Set the `SGX_AESM_ADDR=1` environment variable in the Docker file, or set the variable in the deployment file. Follow this sample for the Docker file and deployment YAML details.
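As a minimal sketch of the deployment-file approach (reusing the sample job and image names from the quickstart above; the EPC limit is an assumption), the opt-in is just an `env` entry:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sgx-test
spec:
  template:
    spec:
      containers:
      - name: sgx-test
        image: oeciteam/oe-helloworld:1.0
        env:
        - name: SGX_AESM_ADDR   # opt this container in to out-of-proc quote generation
          value: "1"
        resources:
          limits:
            kubernetes.azure.com/sgx_epc_mem_in_MiB: 10
      restartPolicy: Never
```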
confidential-computing Confidential Nodes Aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-overview.md
# Confidential computing nodes on Azure Kubernetes Service
-[Azure confidential computing](overview.md) allows you to protect your sensitive data while it's in use. The underlying confidential computing infrastructure protects this data from other applications, administrators, and cloud providers with a hardware backed trusted execution container environments. Adding confidential computing nodes allow you to target container application to run in an isolated, hardware protected and attestable environment.
+[Azure confidential computing](overview.md) allows you to protect your sensitive data while it's in use. The underlying confidential computing infrastructure protects this data from other applications, administrators, and cloud providers with hardware-backed trusted execution container environments. Adding confidential computing nodes allows you to target container applications to run in an isolated, hardware-protected, integrity-protected, and attestable environment.
## Overview
-Azure Kubernetes Service (AKS) supports adding [DCsv2 confidential computing nodes](confidential-computing-enclaves.md) powered by Intel SGX. These nodes allow you to run sensitive workloads within a hardware-based trusted execution environment (TEE). TEEΓÇÖs allow user-level code from containers to allocate private regions of memory to execute the code with CPU directly. These private memory regions that execute directly with CPU are called enclaves. Enclaves help protect the data confidentiality, data integrity and code integrity from other processes running on the same nodes. The Intel SGX execution model also removes the intermediate layers of Guest OS, Host OS and Hypervisor thus reducing the attack surface area. The *hardware based per container isolated execution* model in a node allows applications to directly execute with the CPU, while keeping the special block of memory encrypted per container. Confidential computing nodes with confidential containers are a great addition to your zero trust security planning and defense-in-depth container strategy.
+Azure Kubernetes Service (AKS) supports adding [DCsv2 confidential computing nodes](confidential-computing-enclaves.md) powered by Intel SGX. These nodes allow you to run sensitive workloads within a hardware-based trusted execution environment (TEE). TEEs allow user-level code from containers to allocate private regions of memory to execute the code directly with the CPU. These private memory regions that execute directly with the CPU are called enclaves. Enclaves help protect data confidentiality, data integrity, and code integrity from other processes running on the same nodes, as well as from the Azure operator. The Intel SGX execution model also removes the intermediate layers of the guest OS, host OS, and hypervisor, thus reducing the attack surface area. The *hardware-based, per-container isolated execution* model in a node allows applications to directly execute with the CPU, while keeping the special block of memory encrypted per container. Confidential computing nodes with confidential containers are a great addition to your zero-trust security planning and defense-in-depth container strategy.
:::image type="content" source="./media/confidential-nodes-aks-overview/sgx-aks-node.png" alt-text="Graphic of AKS Confidential Compute Node, showing confidential containers with code and data secured inside.":::
Azure Kubernetes Service (AKS) supports adding [DCsv2 confidential computing nod
- Linux Containers support through Ubuntu 18.04 Gen 2 VM worker nodes ## Confidential Computing add-on for AKS
-The add-on feature enables extra capability on AKS when running confidential computing node pools on the cluster. This add-on enables the features below.
+The add-on feature enables extra capabilities on AKS when running Intel SGX-capable confidential computing node pools on the cluster. The "confcom" add-on on AKS enables the features below.
#### Azure Device Plugin for Intel SGX <a id="sgx-plugin"></a>
-The device plugin implements the Kubernetes device plugin interface for Encrypted Page Cache (EPC) memory and exposes the device drivers from the nodes. Effectively, this plugin makes EPC memory as another resource type in Kubernetes. Users can specify limits on this resource just as other resources. Apart from the scheduling function, the device plugin helps assign Intel SGX device driver permissions to confidential workload containers. With this plugin developer can avoid mounting the Intel SGX driver volumes in the deployment files. A sample implementation of the EPC memory-based deployment (`kubernetes.azure.com/sgx_epc_mem_in_MiB`) sample is [here](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/helloworld/helm/templates/helloworld.yaml)
+The device plugin implements the Kubernetes device plugin interface for Enclave Page Cache (EPC) memory and exposes the device drivers from the nodes. Effectively, this plugin makes EPC memory another resource type in Kubernetes. Users can specify limits on this resource just as on other resources. Apart from the scheduling function, the device plugin helps assign Intel SGX device driver permissions to confidential container deployments. With this plugin, developers can avoid mounting the Intel SGX driver volumes in the deployment files. This add-on runs as a daemonset on each Intel SGX-capable VM node in the AKS cluster. A sample implementation of the EPC memory-based deployment (`kubernetes.azure.com/sgx_epc_mem_in_MiB`) is available [here](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/helloworld/helm/templates/helloworld.yaml).
+#### Intel SGX Quote Helper with Platform Software Components <a id="sgx-plugin"></a>
+
+As part of the plugin, another daemonset is deployed on each Intel SGX-capable VM node in the AKS cluster. This daemonset helps your confidential container apps when a remote out-of-proc attestation request is invoked.
+
+Enclave applications that do remote attestation need to generate a quote. The quote provides cryptographic proof of the identity and the state of the application, along with the enclave's host environment. Quote generation relies on certain trusted software components from Intel, which are part of the SGX Platform Software Components (PSW/DCAP). This PSW is packaged as a daemon set that runs per node. You can use the PSW when requesting an attestation quote from enclave apps. Using the AKS-provided service helps better maintain the compatibility between the PSW and the other software components in the host, including the Intel SGX drivers that are part of the AKS VM nodes. Read more about how your apps can use this daemonset without having to package the attestation primitives as part of your container deployments in [PSW with SGX quote helper](confidential-nodes-aks-addon.md#psw-with-sgx-quote-helper).
## Programming models
Confidential computing nodes on AKS also support containers that are programmed
[Quick starter confidential container samples](https://github.com/Azure-Samples/confidential-container-samples)
-[Intel SGX Confidential VM's - DCsv2 SKU List](../virtual-machines/dcv2-series.md)
+[Intel SGX Confidential VMs - DCsv2 SKU List](../virtual-machines/dcv2-series.md)
-[Intel SGX Confidential VM's - DCsv3 SKU List](../virtual-machines/dcv3-series.md)
+[Intel SGX Confidential VMs - DCsv3 SKU List](../virtual-machines/dcv3-series.md)
[Defense-in-depth with confidential containers webinar session](https://www.youtube.com/watch?reload=9&v=FYZxtHI_Or0&feature=youtu.be)
cosmos-db Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/concepts-limits.md
You can provision throughput at a container-level or a database-level in terms o
| Minimum RU/s required per 1 GB | 10 RU/s<br>**Note:** this minimum can be lowered if your account is eligible to our ["high storage / low throughput" program](set-throughput.md#high-storage-low-throughput-program) | > [!NOTE]
-> To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md).
+> To learn about best practices for managing workloads that have partition keys requiring higher limits for storage or throughput, see [Create a synthetic partition key](synthetic-partition-keys.md). If your workload has already reached the logical partition limit of 20 GB in production, it is recommended to re-architect your application with a different partition key as a long-term solution. To help give time for this, you can request a temporary increase in the logical partition key limit for your existing application. [File an Azure support ticket](create-support-request-quota-increase.md) and select quota type **Temporary increase in container's logical partition key size**. Note this is intended as a temporary mitigation and not recommended as a long-term solution, as SLA guarantees are not honored when the limit is increased. To remove the configuration, file a support ticket and select quota type **Restore container's logical partition key size to default (20 GB)**. This can be done after you have either deleted data to fit the 20 GB logical partition limit or have re-architected your application with a different partition key.
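For example, a minimal sketch of a synthetic partition key (the item type and property names are hypothetical) concatenates two properties so that no single key value accumulates 20 GB:

```csharp
// Hypothetical item type: if DeviceId alone could exceed the 20 GB logical
// partition limit, combine it with a date to spread data across partitions.
public class DeviceReading
{
    public string id { get; set; }
    public string DeviceId { get; set; }
    public string Date { get; set; }            // for example, "2022-01-14"

    // Use "/PartitionKey" as the container's partition key path.
    public string PartitionKey => $"{DeviceId}-{Date}";
}
```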
### Minimum throughput limits
cosmos-db Large Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/large-partition-keys.md
Previously updated : 09/28/2019 Last updated : 12/8/2019
# Create containers with large partition key [!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
-Azure Cosmos DB uses hash-based partitioning scheme to achieve horizontal scaling of data. All Azure Cosmos containers created before May 3 2019 use a hash function that computes hash based on the first 100 bytes of the partition key. If there are multiple partition keys that have the same first 100 bytes, then those logical partitions are considered as the same logical partition by the service. This can lead to issues like partition size quota being incorrect, and unique indexes being applied across the partition keys. Large partition keys are introduced to solve this issue. Azure Cosmos DB now supports large partition keys with values up to 2 KB.
+Azure Cosmos DB uses a hash-based partitioning scheme to achieve horizontal scaling of data. All Azure Cosmos containers created before May 3, 2019 use a hash function that computes the hash based on the first 101 bytes of the partition key. If there are multiple partition keys that have the same first 101 bytes, then those logical partitions are considered as the same logical partition by the service. This can lead to issues like the partition size quota being incorrect, unique indexes being incorrectly applied across the partition keys, and uneven distribution of storage. Large partition keys are introduced to solve this issue. Azure Cosmos DB now supports large partition keys with values up to 2 KB.
-Large partition keys are supported by using the functionality of an enhanced version of the hash function, which can generate a unique hash from large partition keys up to 2 KB. This hash version is also recommended for scenarios with high partition key cardinality irrespective of the size of the partition key. A partition key cardinality is defined as the number of unique logical partitions, for example in the order of ~30000 logical partitions in a container. This article describes how to create a container with a large partition key using the Azure portal and different SDKs.
+Large partition keys are supported by enabling an enhanced version of the hash function, which can generate a unique hash from large partition keys up to 2 KB.
+As a best practice, unless you need support for an [older Cosmos SDK or application that does not support this feature](#supported-sdk-versions), it is always recommended to configure your container with support for large partition keys.
## Create a large partition key (Azure portal)
-To create a large partition key, when you create a new container using the Azure portal, check the **My partition key is larger than 100-bytes** option. Unselect the checkbox if you donΓÇÖt need large partition keys or if you have applications running on SDKs version older than 1.18.
+To create a large partition key, when you create a new container using the Azure portal, check the **My partition key is larger than 101-bytes** option. Unselect the checkbox if you don't need large partition keys or if you have applications running on SDK versions older than 1.18.
:::image type="content" source="./media/large-partition-keys/large-partition-key-with-portal.png" alt-text="Create large partition keys using Azure portal":::
To create a container with large partition key support see,
* [Create an Azure Cosmos container with a large partition key size](manage-with-powershell.md#create-container-big-pk)
-## Create a large partition key (.Net SDK)
+## Create a large partition key (.NET SDK)
To create a container with a large partition key using the .NET SDK, specify the `PartitionKeyDefinitionVersion.V2` property. The following example shows how to specify the Version property within the PartitionKeyDefinition object and set it to PartitionKeyDefinitionVersion.V2.
+> [!NOTE]
+> By default, containers created using the .NET SDK V2 do not support large partition keys, whereas containers created using the .NET SDK V3 do.
+ # [.NET SDK V3](#tab/dotnetv3) ```csharp
The Large partition keys are supported with the following minimum versions of SD
|SDK type | Minimum version | |||
-|.Net | 1.18 |
+|.NET | 1.18 |
|Java sync | 2.4.0 | |Java Async | 2.5.0 | | REST API | version higher than `2017-05-03` by using the `x-ms-version` request header.|
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/create-sql-api-nodejs.md
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
Watch this video for a complete walkthrough of the content in this article.
-> [!VIDEO https://channel9.msdn.com/Shows/Docs-Azure/Quickstart-Use-Nodejs-to-connect-and-query-data-from-Azure-Cosmos-DB-SQL-API-account/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Quickstart-Use-Nodejs-to-connect-and-query-data-from-Azure-Cosmos-DB-SQL-API-account/player]
## Prerequisites
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/manage-with-templates.md
To create any of the Azure Cosmos DB resources below, copy the following example
This template creates an Azure Cosmos account in two regions with options for consistency and failover, with database and container configured for autoscale throughput that has most policy options enabled. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+> [!NOTE]
+> You can use Azure Resource Manager templates to create new autoscale databases/containers and change the autoscale max RU/s setting on an existing database/container that is already configured with autoscale. By design, migrating between manual and autoscale throughput is not supported with Azure Resource Manager templates. To do this programmatically, you can use [Azure CLI](how-to-provision-autoscale-throughput.md#azure-cli) or [PowerShell](how-to-provision-autoscale-throughput.md#azure-powershell).
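As a hedged sketch of that programmatic migration (account, database, container, and RU/s values below are placeholders; verify the exact commands against the linked doc):

```azurecli
# Migrate an existing container from manual to autoscale throughput.
az cosmosdb sql container throughput migrate \
    --account-name myAccount --database-name myDatabase --name myContainer \
    --resource-group myResourceGroup --throughput-type autoscale

# Then adjust the autoscale max RU/s as needed.
az cosmosdb sql container throughput update \
    --account-name myAccount --database-name myDatabase --name myContainer \
    --resource-group myResourceGroup --max-throughput 8000
```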
+ [:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-autoscale%2Fazuredeploy.json) :::code language="json" source="~/quickstart-templates/quickstarts/microsoft.documentdb/cosmosdb-sql-autoscale/azuredeploy.json":::
This template creates an Azure Cosmos account, database, and container with
## Azure Cosmos DB account with Azure AD and RBAC
-This template will create a SQL Cosmos account, a natively maintained Role Definition, and a natively maintained Role Assignment for an AAD identity. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
+This template will create a SQL Cosmos account, a natively maintained Role Definition, and a natively maintained Role Assignment for an Azure AD identity. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
[:::image type="content" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-rbac%2Fazuredeploy.json)
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/migrate-dotnet-v3.md
Previously updated : 10/19/2021 Last updated : 01/13/2022 ms.devlang: csharp
catch (CosmosClientException ex)
### Diagnostics
-Where the v2 SDK had Direct-only diagnostics available through the `ResponseDiagnosticsString` property, the v3 SDK uses `Diagnostics` available in all responses and exceptions, which are richer and not restricted to Direct mode. They include not only the time spent on the SDK for the operation, but also the regions the operation contacted:
+Where the v2 SDK had Direct-only diagnostics available through the `RequestDiagnosticsString` property, the v3 SDK uses `Diagnostics` available in all responses and exceptions, which are richer and not restricted to Direct mode. They include not only the time spent on the SDK for the operation, but also the regions the operation contacted:
```csharp try
cost-management-billing Reporting Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/reporting-get-started.md
+
+ Title: Get started with Cost Management + Billing reporting - Azure
+description: This article helps you to get started with Cost Management + Billing to understand, report on, and analyze your invoiced Microsoft Cloud and AWS costs.
++ Last updated : 01/13/2022++++++
+# Get started with Cost Management + Billing reporting
+
+Cost Management + Billing includes several tools to help you understand, report on, and analyze your invoiced Microsoft Cloud and AWS costs. The following sections describe the major reporting components.
+
+## Cost analysis
+
+Cost analysis should be your first stop in the Azure portal when it comes to understanding what you're spending and where you're spending. Cost analysis helps you:
+
+- Visualize and analyze your organizational costs
+- Share cost views with others using custom alerts
+- View aggregated costs by organization to understand where costs occur over time and identify spending trends
+- View accumulated costs over time to estimate monthly, quarterly, or even yearly cost trends against a budget
+- Create budgets to help ensure adherence to financial constraints
+- Use budgets to view daily or monthly costs and help isolate spending irregularities
+
+Cost analysis is available from every resource group, subscription, management group, and billing account in the Azure portal. If you manage one of these scopes, you can start there and select **Cost analysis** from the menu. If you manage multiple scopes, you may want to start directly within Cost Management:
+
+Sign in to the Azure portal > select **Home** in the menu > scroll down under **Tools** and select **Cost Management** > select a scope at the top of the page > in the left menu, select **Cost analysis**.
++
+For more information about cost analysis, see [Explore and analyze costs with cost analysis](quick-acm-cost-analysis.md).
+
+## Power BI
+
+While cost analysis offers a rich, interactive experience for analyzing and surfacing insights about your costs, there are times when you need to build more extensive dashboards and complex reports or combine costs with internal data. The Cost Management template app for Power BI is a great way to get up and running with Power BI quickly. For more information about the template app, see [Analyze Azure costs with the Power BI App](analyze-cost-data-azure-cost-management-power-bi-template-app.md).
++
+Need to go beyond the basics with Power BI? The Cost Management connector for Power BI lets you choose the data you need to help you seamlessly integrate costs with your own datasets or easily build out more complete dashboards and reports to meet your organization's needs. For more information about the connector, see [Connect to Azure Cost Management data in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
+
+## Usage details and exports
+
+If you're looking for raw data to automate business processes or integrate with other systems, start by exporting data to a storage account. Scheduled exports allow you to automatically publish your raw cost data to a storage account on a daily, weekly, or monthly basis. With special handling for large datasets, scheduled exports are the most scalable option for building first-class cost data integration. For more information, see [Create and manage exported data](tutorial-export-acm-data.md).
+
+If you need more fine-grained control over your data requests, the Usage Details API offers a bit more flexibility to pull raw data the way you need it. For more information, see the [Usage Details REST API](/rest/api/consumption/usage-details/list).
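As a hedged sketch (the scope and `api-version` below are assumptions; check the linked reference for current values), a raw Usage Details request looks like:

```http
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Consumption/usageDetails?api-version=2021-10-01
Authorization: Bearer {token}
```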
++
+## Invoices and credits
+
+Cost analysis is a great tool for reviewing estimated, unbilled charges or for tracking historical cost trends, but it may not show your total billed amount because credits, taxes, and other refunds and charges aren't available in Cost Management. To estimate your projected bill at the end of the month, start in cost analysis to understand your forecasted costs, then review any available credit or prepaid commitment balance from **Credits** or **Payment methods** for your billing account or billing profile within the Azure portal. To review your final billed charges after the invoice is available, see **Invoices** for your billing account or billing profile.
+
+Here's an example that shows credits on the Credits tab on the Credits + Commitments page.
++
+For more information about your invoice, see [View and download your Microsoft Azure invoice](../understand/download-azure-invoice.md).
+
+For more information about credits, see [Track Microsoft Customer Agreement Azure credit balance](../manage/mca-check-azure-credits-balance.md).
+
+## Next steps
+
+- [Explore and analyze costs with cost analysis](quick-acm-cost-analysis.md).
+- [Analyze Azure costs with the Power BI App](analyze-cost-data-azure-cost-management-power-bi-template-app.md).
+- [Connect to Azure Cost Management data in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management).
+- [Create and manage exported data](tutorial-export-acm-data.md).
cost-management-billing Azure Plan Subscription Transfer Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/azure-plan-subscription-transfer-partners.md
Access to existing users, groups, or service principals that were assigned using
Consequently, it's important that you remove Azure RBAC access for the old partner and add access for the new partner. For more information about giving your new partner access, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md) For more information about removing your previous partner's Azure RBAC access, see [Remove Azure role assignments](../../role-based-access-control/role-assignments-remove.md).
-Additionally, your new partner doesn't automatically get [Admin on Behalf Of (AOBO)](https://channel9.msdn.com/Series/cspdev/Module-11-Admin-On-Behalf-Of-AOBO) access to your subscriptions. AOBO is necessary for your partner to manage the Azure subscriptions on your behalf. For more information about Azure privileges, see [Obtain permissions to manage a customer's service or subscription](/partner-center/customers-revoke-admin-privileges).
+Additionally, your new partner doesn't automatically get Admin on Behalf Of (AOBO) access to your subscriptions. AOBO is necessary for your partner to manage the Azure subscriptions on your behalf. For more information about Azure privileges, see [Obtain permissions to manage a customer's service or subscription](/partner-center/customers-revoke-admin-privileges).
## Stop a transfer
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/mpa-request-ownership.md
Azure Reservations don't automatically move with subscriptions. Either you can k
Access for existing users, groups, or service principals that was assigned using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) isn't affected during the transition. The partner won't get any new Azure RBAC access to the subscriptions.
-The partners should work with the customer to get access to subscriptions. The partners need to get either [Admin on Behalf Of - AOBO](https://channel9.msdn.com/Series/cspdev/Module-11-Admin-On-Behalf-Of-AOBO) or [Azure Lighthouse](../../lighthouse/concepts/cloud-solution-provider.md) access open support tickets.
+The partners should work with the customer to get access to subscriptions. The partners need to get either Admin on Behalf Of (AOBO) or [Azure Lighthouse](../../lighthouse/concepts/cloud-solution-provider.md) access to open support tickets.
### Power BI connectivity
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-global-parameters.md
There are two ways to integrate global parameters in your continuous integration
* Include global parameters in the ARM template * Deploy global parameters via a PowerShell script
-For general use cases, it is recommended to include global parameters in the ARM template. This integrates natively with the solution outlined in [the CI/CD doc](continuous-integration-delivery.md). In case of automatic publishing and Purview connection, **PowerShell script** method is required. You can find more about PowerShell script method later. Global parameters will be added as an ARM template parameter by default as they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub.
+For general use cases, it is recommended to include global parameters in the ARM template. This integrates natively with the solution outlined in [the CI/CD doc](continuous-integration-delivery.md). In case of automatic publishing and Azure Purview connection, the **PowerShell script** method is required. You can find more about the PowerShell script method later. Global parameters are added as ARM template parameters by default because they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub.
:::image type="content" source="media/author-global-parameters/include-arm-template.png" alt-text="Include in ARM template"::: > [!NOTE]
-> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode. In case of automatic publishing or Purview connection, do not use Include global parameters method; use PowerShell script method.
+> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode. In case of automatic publishing or Azure Purview connection, do not use the Include global parameters method; use the PowerShell script method.
> [!WARNING] >You cannot use '-' in the parameter name. You will receive an error code "{"code":"BadRequest","message":"ErrorCode=InvalidTemplate,ErrorMessage=The expression >'pipeline().globalParameters.myparam-dbtest-url' is not valid: .....}". But, you can use '_' in the parameter name.
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Following section is not valid because package.json folder is not valid.
``` It should have DataFactory included in customCommand like *'run build validate $(Build.Repository.LocalPath)/DataFactory/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName'*. Make sure the generated YAML file for higher stage should have required JSON artifacts.
-### Git Repository or Purview Connection Disconnected
+### Git Repository or Azure Purview Connection Disconnected
#### Issue When deploying a service instance, the Git repository or Azure Purview connection is disconnected.
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connect-data-factory-to-azure-purview.md
You need to have **Owner** or **Contributor** role on your data factory to conne
To establish the connection on Data Factory authoring UI:
-1. In the ADF authoring UI, go to **Manage** -> **Azure Purview**, and select **Connect to a Purview account**.
+1. In the ADF authoring UI, go to **Manage** -> **Azure Purview**, and select **Connect to an Azure Purview account**.
- :::image type="content" source="./media/data-factory-purview/register-purview-account.png" alt-text="Screenshot for registering a Purview account.":::
+ :::image type="content" source="./media/data-factory-purview/register-purview-account.png" alt-text="Screenshot for registering an Azure Purview account.":::
2. Choose **From Azure subscription** or **Enter manually**. **From Azure subscription**, you can select the account that you have access to.
-3. Once connected, you can see the name of the Purview account in the tab **Purview account**.
+3. Once connected, you can see the name of the Azure Purview account in the tab **Azure Purview account**.
-If your Purview account is protected by firewall, create the managed private endpoints for Purview. Learn more about how to let Data Factory [access a secured Purview account](how-to-access-secured-purview-account.md). You can either do it during the initial connection or edit an existing connection later.
+If your Azure Purview account is protected by firewall, create the managed private endpoints for Azure Purview. Learn more about how to let Data Factory [access a secured Azure Purview account](how-to-access-secured-purview-account.md). You can either do it during the initial connection or edit an existing connection later.
-The Purview connection information is stored in the data factory resource like the following. To establish the connection programmatically, you can update the data factory and add the `purviewConfiguration` settings. When you want to push lineage from SSIS activities, also add `catalogUri` tag additionally.
+The Azure Purview connection information is stored in the data factory resource like the following. To establish the connection programmatically, you can update the data factory and add the `purviewConfiguration` settings. When you want to push lineage from SSIS activities, also add the `catalogUri` tag.
```json {
For how to register Data Factory in Azure Purview, see [How to connect Azure Dat
## Set up authentication
-Data factory's managed identity is used to authenticate lineage push operations from data factory to Purview.
+Data factory's managed identity is used to authenticate lineage push operations from data factory to Azure Purview.
-Grant the data factory's managed identity **Data Curator** role on your Purview **root collection**. Learn more about [Access control in Azure Purview](../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
+Grant the data factory's managed identity **Data Curator** role on your Azure Purview **root collection**. Learn more about [Access control in Azure Purview](../purview/catalog-permissions.md) and [Add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
-When connecting data factory to Purview on authoring UI, ADF tries to add such role assignment automatically. If you have **Collection admins** role on the Purview root collection and have access to Purview account from your network, this operation is done successfully.
+When connecting data factory to Azure Purview in the authoring UI, ADF tries to add such a role assignment automatically. If you have the **Collection admins** role on the Azure Purview root collection and have access to the Azure Purview account from your network, this operation is done successfully.
-## Monitor Purview connection
+## Monitor Azure Purview connection
-Once you connect the data factory to a Purview account, you see the following page with details on the enabled integration capabilities.
+Once you connect the data factory to an Azure Purview account, you see the following page with details on the enabled integration capabilities.
For **Data Lineage - Pipeline**, you may see one of the following statuses:

-- **Connected**: The data factory is successfully connected to the Purview account. Note this indicates data factory is associated with a Purview account and has permission to push lineage to it. If your Purview account is protected by firewall, you also need to make sure the integration runtime used to execute the activities and conduct lineage push can reach the Purview account. Learn more from [Access a secured Azure Purview account from Azure Data Factory](how-to-access-secured-purview-account.md).
-- **Disconnected**: The data factory cannot push lineage to Purview because Purview Data Curator role is not granted to data factory's managed identity. To fix this issue, go to your Purview account to check the role assignments, and manually grant the role as needed. Learn more from [Set up authentication](#set-up-authentication) section.
+- **Connected**: The data factory is successfully connected to the Azure Purview account. Note that this indicates the data factory is associated with an Azure Purview account and has permission to push lineage to it. If your Azure Purview account is protected by a firewall, you also need to make sure the integration runtime used to execute the activities and conduct the lineage push can reach the Azure Purview account. Learn more from [Access a secured Azure Purview account from Azure Data Factory](how-to-access-secured-purview-account.md).
+- **Disconnected**: The data factory cannot push lineage to Azure Purview because the Azure Purview **Data Curator** role is not granted to the data factory's managed identity. To fix this issue, go to your Azure Purview account to check the role assignments, and manually grant the role as needed. Learn more in the [Set up authentication](#set-up-authentication) section.
- **Unknown**: Data Factory cannot check the status. Possible reasons are:
- - Cannot reach the Purview account from your current network because the account is protected by firewall. You can launch the ADF UI from a private network that has connectivity to your Purview account instead.
- - You don't have permission to check role assignments on the Purview account. You can contact the Purview account admin to check the role assignments for you. Learn about the needed Purview role from [Set up authentication](#set-up-authentication) section.
+ - Cannot reach the Azure Purview account from your current network because the account is protected by a firewall. You can launch the ADF UI from a private network that has connectivity to your Azure Purview account instead.
+ - You don't have permission to check role assignments on the Azure Purview account. You can contact the Azure Purview account admin to check the role assignments for you. Learn about the needed Azure Purview role in the [Set up authentication](#set-up-authentication) section.
## Report lineage data to Azure Purview
-Once you connect the data factory to a Purview account, when you execute pipelines, Data Factory push lineage information to the Purview account. For detailed supported capabilities, see [Supported Azure Data Factory activities](../purview/how-to-link-azure-data-factory.md#supported-azure-data-factory-activities). For an end to end walkthrough, refer to [Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md).
+Once you connect the data factory to an Azure Purview account, Data Factory pushes lineage information to the Azure Purview account when you execute pipelines. For detailed supported capabilities, see [Supported Azure Data Factory activities](../purview/how-to-link-azure-data-factory.md#supported-azure-data-factory-activities). For an end-to-end walkthrough, refer to [Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md).
-## Discover and explore data using Purview
+## Discover and explore data using Azure Purview
-Once you connect the data factory to a Purview account, you can use the search bar at the top center of Data Factory authoring UI to search for data and perform actions. Learn more from [Discover and explore data in ADF using Purview](how-to-discover-explore-purview-data.md).
+Once you connect the data factory to an Azure Purview account, you can use the search bar at the top center of the Data Factory authoring UI to search for data and perform actions. Learn more from [Discover and explore data in ADF using Azure Purview](how-to-discover-explore-purview-data.md).
## Next steps

[Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md)
-[Discover and explore data in ADF using Purview](how-to-discover-explore-purview-data.md)
+[Discover and explore data in ADF using Azure Purview](how-to-discover-explore-purview-data.md)
-[Access a secured Purview account](how-to-access-secured-purview-account.md)
+[Access a secured Azure Purview account](how-to-access-secured-purview-account.md)
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
Title: Copy data in Dynamics (Microsoft Dataverse)
+ Title: Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM
-description: Learn how to copy data from Microsoft Dynamics CRM or Microsoft Dynamics 365 (Microsoft Dataverse) to supported sink data stores or from supported source data stores to Dynamics CRM or Dynamics 365 by using a copy activity in an Azure Data Factory or Azure Synapse Analytics pipeline.
+description: Learn how to copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics.
Previously updated : 12/31/2021 Last updated : 01/10/2022
-# Copy data from and to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM
+# Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use a copy activity in Azure Data Factory or Synapse pipelines to copy data from and to Microsoft Dynamics 365 and Microsoft Dynamics CRM. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of a copy activity.
+This article outlines how to use a copy activity in Azure Data Factory or Synapse pipelines to copy data from and to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM, and use a data flow to transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM. To learn more, read the [Azure Data Factory](introduction.md) and the [Azure Synapse Analytics](..\synapse-analytics\overview-what-is.md) introduction articles.
## Supported capabilities

This connector is supported for the following activities:

- [Copy activity](copy-activity-overview.md) with [supported source and sink matrix](copy-activity-overview.md)
+- [Mapping data flow](concepts-data-flow-overview.md)
- [Lookup activity](control-flow-lookup-activity.md)

You can copy data from Dynamics 365 (Microsoft Dataverse) or Dynamics CRM to any supported sink data store. You can also copy data from any supported source data store to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM. For a list of data stores that a copy activity supports as sources and sinks, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
If all of your source records map to the same target entity and your source data
:::image type="content" source="./media/connector-dynamics-crm-office-365/connector-dynamics-add-entity-reference-column.png" alt-text="Dynamics lookup-field adding an entity-reference column":::
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read from and write to tables in Dynamics. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows. You can choose to use a Dynamics dataset or an [inline dataset](data-flow-source.md#inline-datasets) as the source and sink type.
+
+### Source transformation
+
+The below table lists the properties supported by Dynamics. You can edit these properties in the **Source options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Table | If you select Table as input, data flow fetches all the data from the table specified in the dataset. | No | - | tableName |
+| Query | FetchXML is a proprietary query language that is used in Dynamics online and on-premises. See the following example. To learn more, see [Build queries with FetchXML](/previous-versions/dynamicscrm-2016/developers-guide/gg328332(v=crm.8)). | No | String | query |
+| Entity | The logical name of the entity to retrieve. | Yes when using inline mode | - | entity |
+
+> [!Note]
+> If you select **Query** as the input type, the column types can't be retrieved from the table. All columns are treated as strings by default; see the casting sketch after the source example below.
+
+#### Dynamics source script example
+
+When you use Dynamics as source type, the associated data flow script is:
+
+```
+source(
+    output(
+        new_name as string,
+        new_dataflowtestid as string
+    ),
+    store: 'dynamics',
+    format: 'dynamicsformat',
+    baseUrl: $baseUrl,
+    cloudType: 'AzurePublic',
+    servicePrincipalId: $servicePrincipalId,
+    servicePrincipalCredential: $servicePrincipalCredential,
+    entity: 'new_datalowtest',
+    query: '<fetch mapping="logical" count="3" paging-cookie=""><entity name="new_dataflow_crud_test"><attribute name="new_name"/><attribute name="new_releasedate"/></entity></fetch>'
+) ~> movies
+
+```
+
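+Because **Query** input returns all columns as strings (see the note above), one way to restore types is to cast downstream in a Derived column transformation. A minimal sketch on the `movies` stream from the example above, assuming a hypothetical `new_releasedate` string column in the projection:
+
+```
+movies derive(
+    new_releasedate = toDate(new_releasedate)
+) ~> typedMovies
+```
+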
+### Sink transformation
+
+The below table lists the properties supported by Dynamics sink. You can edit these properties in the **Sink options** tab.
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Entity | The logical name of the entity to write to. | Yes when using inline mode | - | entity |
+| Request interval | The interval between API requests, in milliseconds. | No | - | requestInterval |
+| Update method | Specify what operations are allowed on your database destination. The default is to only allow inserts.<br>To update, upsert, or delete rows, an [Alter row transformation](data-flow-alter-row.md) is required to tag rows for those actions; see the sketch after this table. | Yes | `true` or `false` | insertable <br/>updateable<br/>upsertable<br>deletable |
+| Alternate key name | The alternate key name defined on your entity to do an update, upsert, or delete. | No | - | alternateKeyName |
+
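+When the update method allows updates, upserts, or deletes, tag the rows upstream of the sink with an Alter row transformation. A minimal sketch, assuming the `movies` stream from the source example and tagging every row for upsert (the condition is illustrative):
+
+```
+movies alterRow(
+    upsertIf(true())
+) ~> moviesAltered
+```
+
+The sink script example below consumes the resulting `moviesAltered` stream.
+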
+#### Dynamics sink script example
+
+When you use Dynamics as sink type, the associated data flow script is:
+
+```
+moviesAltered sink(
+    input(
+        new_name as string,
+        new_id as string,
+        new_releasedate as string
+    ),
+    store: 'dynamics',
+    format: 'dynamicsformat',
+    baseUrl: $baseUrl,
+    cloudType: 'AzurePublic',
+    servicePrincipalId: $servicePrincipalId,
+    servicePrincipalCredential: $servicePrincipalCredential,
+    updateable: true,
+    upsertable: true,
+    insertable: true,
+    deletable: true,
+    alternateKeyName: 'new_testalternatekey',
+    entity: 'new_dataflow_crud_test',
+    requestInterval: 1000
+) ~> movieDB
+```
## Lookup activity properties

To learn details about the properties, see [Lookup activity](control-flow-lookup-activity.md).
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-overview.md
Previously updated : 10/14/2021 Last updated : 01/10/2022
data-factory Connector Troubleshoot Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-azure-blob-storage.md
This article provides suggestions to troubleshoot common problems with the Azure
## Error code: FIPSModeIsNotSupport

-- **Message**: `Fail to read data form Azure Blob Storage for Azure Blob connector needs MD5 algorithm which can't co-work with FIPS mode. Please change diawp.exe.config in self-hosted integration runtime install directory to disable FIPS policy following https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/runtime/enforcefipspolicy-element.`
+- **Message**: `Fail to read data form Azure Blob Storage for Azure Blob connector needs MD5 algorithm which can't co-work with FIPS mode. Please change diawp.exe.config in self-hosted integration runtime install directory to disable FIPS policy following https://docs.microsoft.com/dotnet/framework/configure-apps/file-schema/runtime/enforcefipspolicy-element.`
- **Cause**: The FIPS policy is enabled on the VM where the self-hosted integration runtime was installed.
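As a sketch of the resolution the message points to, disable FIPS policy enforcement for the `diawp.exe` process by adding the [`enforceFIPSPolicy`](https://docs.microsoft.com/dotnet/framework/configure-apps/file-schema/runtime/enforcefipspolicy-element) element to `diawp.exe.config` in the self-hosted integration runtime install directory (keep the file's existing content and add only this element):

```xml
<configuration>
  <runtime>
    <!-- Disables FIPS policy enforcement for this process only -->
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>
```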
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/control-flow-azure-function-activity.md
The Azure Function activity allows you to run [Azure Functions](../azure-functio
For an eight-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/shows/azure-friday/Run-Azure-Functions-from-Azure-Data-Factory-pipelines/player]
+> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Run-Azure-Functions-from-Azure-Data-Factory-pipelines/player]
## Create an Azure Function activity with UI
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
When the processor and available RAM aren't well utilized, but the execution of
### TLS/SSL certificate requirements
-Here are the requirements for the TLS/SSL certificate that you use to secure communication between integration runtime nodes:
--- The certificate must be a publicly trusted X509 v3 certificate. We recommend that you use certificates that are issued by a public partner certification authority (CA).-- Each integration runtime node must trust this certificate.-- We don't recommend Subject Alternative Name (SAN) certificates because only the last SAN item is used. All other SAN items are ignored. For example, if you have a SAN certificate whose SANs are **node1.domain.contoso.com** and **node2.domain.contoso.com**, you can use this certificate only on a machine whose fully qualified domain name (FQDN) is **node2.domain.contoso.com**.-- The certificate can use any key size supported by Windows Server 2012 R2 for TLS/SSL certificates.-- Certificates that use CNG keys aren't supported.
+If you want to enable remote access from intranet with a TLS/SSL certificate (Advanced) to secure communication between integration runtime nodes, follow the steps in [Enable remote access from intranet with TLS/SSL certificate](tutorial-enable-remote-access-intranet-tls-ssl-certificate.md).
> [!NOTE]
> This certificate is used:
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
You can reuse an existing self-hosted integration runtime infrastructure that yo
To see an introduction and demonstration of this feature, watch the following 12-minute video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Hybrid-data-movement-across-multiple-Azure-Data-Factories/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Hybrid-data-movement-across-multiple-Azure-Data-Factories/player]
### Terminology
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-sink.md
Previously updated : 10/14/2021 Last updated : 01/10/2022 # Sink transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Azure SQL Database](connector-azure-sql-database.md#mapping-data-flow-properties) | | ✓/✓ |
| [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#mapping-data-flow-properties) | | ✓/- |
| [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md#mapping-data-flow-properties) | | ✓/- |
+| [Dataverse](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Dynamics 365](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Dynamics CRM](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
| [Snowflake](connector-snowflake.md) | | ✓/✓ |
| [SQL Server](connector-sql-server.md) | | ✓/✓ |
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-source.md
Previously updated : 12/08/2021 Last updated : 01/10/2022 # Source transformation in mapping data flow
Mapping data flow follows an extract, load, and transform (ELT) approach and wor
| [Azure SQL Database](connector-azure-sql-database.md#mapping-data-flow-properties) | | ✓/✓ |
| [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#mapping-data-flow-properties) | | ✓/✓ |
| [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Dataverse](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Dynamics 365](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
+| [Dynamics CRM](connector-dynamics-crm-office-365.md#mapping-data-flow-properties) | | ✓/✓ |
| [Hive](connector-hive.md#mapping-data-flow-properties) | | -/✓ |
| [Snowflake](connector-snowflake.md) | | ✓/✓ |
| [SQL Server](connector-sql-server.md) | | ✓/✓ |
data-factory How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-access-secured-purview-account.md
This article describes how to access a secured Azure Purview account from Azure
## Azure Purview private endpoint deployment scenarios
-You can use [Azure private endpoints](../private-link/private-endpoint-overview.md) for your Azure Purview accounts to allow secure access from a virtual network (VNet) to the catalog over a Private Link. Purview provides different types of private points for various access need: *account* private endpoint, *portal* private endpoint, and *ingestion* private endpoints. Learn more from [Purview private endpoints conceptual overview](../purview/catalog-private-link.md#conceptual-overview).
+You can use [Azure private endpoints](../private-link/private-endpoint-overview.md) for your Azure Purview accounts to allow secure access from a virtual network (VNet) to the catalog over a Private Link. Azure Purview provides different types of private endpoints for various access needs: *account* private endpoint, *portal* private endpoint, and *ingestion* private endpoints. Learn more from [Azure Purview private endpoints conceptual overview](../purview/catalog-private-link.md#conceptual-overview).
-If your Purview account is protected by firewall and denies public access, make sure you follow below checklist to set up the private endpoints so Data Factory can successfully connect to Purview.
+If your Azure Purview account is protected by a firewall and denies public access, make sure you follow the checklist below to set up the private endpoints so Data Factory can successfully connect to Azure Purview.
-| Scenario | Required Purview private endpoints |
+| Scenario | Required Azure Purview private endpoints |
| | |
-| [Run pipeline and report lineage to Purview](tutorial-push-lineage-to-purview.md) | For Data Factory pipeline to push lineage to Purview, Purview ***account*** and ***ingestion*** private endpoints are required. <br>- When using **Azure Integration Runtime**, follow the steps in [Managed private endpoints for Purview](#managed-private-endpoints-for-purview) section to create managed private endpoints in the Data Factory managed virtual network.<br>- When using **Self-hosted Integration Runtime**, follow the steps in [this section](../purview/catalog-private-link-end-to-end.md#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts) to create the *account* and *ingestion* private endpoints in your integration runtime's virtual network. |
-| [Discover and explore data using Purview on ADF UI](how-to-discover-explore-purview-data.md) | To use the search bar at the top center of Data Factory authoring UI to search for Purview data and perform actions, you need to create Purview ***account*** and ***portal*** private endpoints in the virtual network that you launch the Data Factory Studio. Follow the steps in [Enable *account* and *portal* private endpoint](../purview/catalog-private-link-account-portal.md#option-2enable-account-and-portal-private-endpoint-on-existing-azure-purview-accounts). |
+| [Run pipeline and report lineage to Azure Purview](tutorial-push-lineage-to-purview.md) | For Data Factory pipeline to push lineage to Azure Purview, Azure Purview ***account*** and ***ingestion*** private endpoints are required. <br>- When using **Azure Integration Runtime**, follow the steps in [Managed private endpoints for Azure Purview](#managed-private-endpoints-for-azure-purview) section to create managed private endpoints in the Data Factory managed virtual network.<br>- When using **Self-hosted Integration Runtime**, follow the steps in [this section](../purview/catalog-private-link-end-to-end.md#option-2enable-account-portal-and-ingestion-private-endpoint-on-existing-azure-purview-accounts) to create the *account* and *ingestion* private endpoints in your integration runtime's virtual network. |
+| [Discover and explore data using Azure Purview on ADF UI](how-to-discover-explore-purview-data.md) | To use the search bar at the top center of Data Factory authoring UI to search for Azure Purview data and perform actions, you need to create Azure Purview ***account*** and ***portal*** private endpoints in the virtual network that you launch the Data Factory Studio. Follow the steps in [Enable *account* and *portal* private endpoint](../purview/catalog-private-link-account-portal.md#option-2enable-account-and-portal-private-endpoint-on-existing-azure-purview-accounts). |
-## Managed private endpoints for Purview
+## Managed private endpoints for Azure Purview
-[Managed private endpoints](managed-virtual-network-private-endpoint.md#managed-private-endpoints) are private endpoints created in the Azure Data Factory Managed Virtual Network establishing a private link to Azure resources. When you run pipeline and report lineage to a firewall protected Azure Purview account, create an Azure Integration Runtime with "Virtual network configuration" option enabled, then create the Purview ***account*** and ***ingestion*** managed private endpoints as follows.
+[Managed private endpoints](managed-virtual-network-private-endpoint.md#managed-private-endpoints) are private endpoints created in the Azure Data Factory Managed Virtual Network establishing a private link to Azure resources. When you run a pipeline and report lineage to a firewall-protected Azure Purview account, create an Azure Integration Runtime with the "Virtual network configuration" option enabled, then create the Azure Purview ***account*** and ***ingestion*** managed private endpoints as follows.
### Create managed private endpoints
-To create managed private endpoints for Purview on Data Factory authoring UI:
+To create managed private endpoints for Azure Purview on Data Factory authoring UI:
-1. Go to **Manage** -> **Azure Purview**, and click **Edit** to edit your existing connected Purview account or click **Connect to a Purview account** to connect to a new Purview account.
+1. Go to **Manage** -> **Azure Purview**, and click **Edit** to edit your existing connected Azure Purview account or click **Connect to an Azure Purview account** to connect to a new Azure Purview account.
2. Select **Yes** for **Create managed private endpoints**. You need to have at least one Azure Integration Runtime with "Virtual network configuration" option enabled in the data factory to see this option.
-3. Click **+ Create all** button to batch create the needed Purview private endpoints, including the ***account*** private endpoint and the ***ingestion*** private endpoints for the Purview managed resources - Blob storage, Queue storage, and Event Hubs namespace. You need to have at least **Reader** role on your Purview account for Data Factory to retrieve the Purview managed resources' information.
+3. Click **+ Create all** button to batch create the needed Azure Purview private endpoints, including the ***account*** private endpoint and the ***ingestion*** private endpoints for the Azure Purview managed resources - Blob storage, Queue storage, and Event Hubs namespace. You need to have at least **Reader** role on your Azure Purview account for Data Factory to retrieve the Azure Purview managed resources' information.
- :::image type="content" source="./media/how-to-access-secured-purview-account/purview-create-all-managed-private-endpoints.png" alt-text="Create managed private endpoint for your connected Purview account.":::
+ :::image type="content" source="./media/how-to-access-secured-purview-account/purview-create-all-managed-private-endpoints.png" alt-text="Create managed private endpoint for your connected Azure Purview account.":::
4. On the next page, specify a name for the private endpoint. This name is also used to generate names for the ingestion private endpoints, with suffixes appended.
- :::image type="content" source="./media/how-to-access-secured-purview-account/name-purview-private-endpoints.png" alt-text="Name the managed private endpoints for your connected Purview account.":::
+ :::image type="content" source="./media/how-to-access-secured-purview-account/name-purview-private-endpoints.png" alt-text="Name the managed private endpoints for your connected Azure Purview account.":::
-5. Click **Create** to create the private endpoints. After creation, 4 private endpoint requests will be generated that must [get approved by an owner of Purview](#approve-private-endpoint-connections).
+5. Click **Create** to create the private endpoints. After creation, four private endpoint requests are generated that must be [approved by an owner of Azure Purview](#approve-private-endpoint-connections).
-Such batch managed private endpoint creation is provided on the Purview UI only. If you want to create the managed private endpoints programmatically, you need to create those PEs individually. You can find Purview managed resources' information from Azure portal -> your Purview account -> Managed resources.
+Such batch managed private endpoint creation is provided in the Data Factory UI only. If you want to create the managed private endpoints programmatically, you need to create those private endpoints individually. You can find the Azure Purview managed resources' information in the Azure portal -> your Azure Purview account -> Managed resources.
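For illustration, a programmatically created *account* managed private endpoint is a sub-resource of the factory's managed virtual network. A minimal sketch with placeholder names and IDs (the *ingestion* endpoints target the Azure Purview managed Storage account and Event Hubs namespace with their respective group IDs):

```json
{
  "name": "<private-endpoint-name>",
  "type": "Microsoft.DataFactory/factories/managedVirtualNetworks/managedPrivateEndpoints",
  "properties": {
    "privateLinkResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Purview/accounts/<purview-account-name>",
    "groupId": "account"
  }
}
```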
### Approve private endpoint connections
-After you create the managed private endpoints for Purview, you see "Pending" state first. The Purview owner need to approve the private endpoint connections for each resource.
+After you create the managed private endpoints for Azure Purview, they are in a "Pending" state at first. The Azure Purview owner needs to approve the private endpoint connections for each resource.
-If you have permission to approve the Purview private endpoint connection, from Data Factory UI:
+If you have permission to approve the Azure Purview private endpoint connection, from Data Factory UI:
1. Go to **Manage** -> **Azure Purview** -> **Edit**.
2. In the private endpoint list, click the **Edit** (pencil) button next to each private endpoint name.
If you have permission to approve the Purview private endpoint connection, from
4. On the given resource, go to **Networking** -> **Private endpoint connection** to approve it. The private endpoint is named `data_factory_name.your_defined_private_endpoint_name` with the description "Requested by data_factory_name".
5. Repeat this operation for all private endpoints.
-If you don't have permission to approve the Purview private endpoint connection, ask the Purview account owner to do as follows.
+If you don't have permission to approve the Azure Purview private endpoint connection, ask the Azure Purview account owner to do as follows.
-- For *account* private endpoint, go to Azure portal -> your Purview account -> Networking -> Private endpoint connection to approve.-- For *ingestion* private endpoints, go to Azure portal -> your Purview account -> Managed resources, click into the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
+- For *account* private endpoint, go to Azure portal -> your Azure Purview account -> Networking -> Private endpoint connection to approve.
+- For *ingestion* private endpoints, go to Azure portal -> your Azure Purview account -> Managed resources, click into the Storage account and Event Hubs namespace respectively, and approve the private endpoint connection in Networking -> Private endpoint connection page.
### Monitor managed private endpoints
-You can monitor the created managed private endpoints for Purview at two places:
+You can monitor the created managed private endpoints for Azure Purview at two places:
-- Go to **Manage** -> **Azure Purview** -> **Edit** to open your existing connected Purview account. To see all the relevant private endpoints, you need to have at least **Reader** role on your Purview account for Data Factory to retrieve the Purview managed resources' information. Otherwise, you only see *account* private endpoint with warning.-- Go to **Manage** -> **Managed private endpoints** where you see all the managed private endpoints created under the data factory. If you have at least **Reader** role on your Purview account, you see Purview relevant private endpoints being grouped together. Otherwise, they show up separately in the list.
+- Go to **Manage** -> **Azure Purview** -> **Edit** to open your existing connected Azure Purview account. To see all the relevant private endpoints, you need to have at least **Reader** role on your Azure Purview account for Data Factory to retrieve the Azure Purview managed resources' information. Otherwise, you only see *account* private endpoint with warning.
+- Go to **Manage** -> **Managed private endpoints** where you see all the managed private endpoints created under the data factory. If you have at least **Reader** role on your Azure Purview account, you see Azure Purview relevant private endpoints being grouped together. Otherwise, they show up separately in the list.
## Next steps

- [Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md)
- [Tutorial: Push Data Factory lineage data to Azure Purview](tutorial-push-lineage-to-purview.md)
-- [Discover and explore data in ADF using Purview](how-to-discover-explore-purview-data.md)
+- [Discover and explore data in ADF using Azure Purview](how-to-discover-explore-purview-data.md)
data-factory How To Configure Azure Ssis Ir Enterprise Edition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition.md
Some of these features require you to install additional components to customize
| **Enterprise Features** | **Descriptions** |
|---|---|
| CDC components | The CDC Source, Control Task, and Splitter Transformation are preinstalled on the Azure-SSIS IR Enterprise Edition. To connect to Oracle, you also need to install the CDC Designer and Service on another computer. |
-| Oracle connectors | The Oracle Connection Manager, Source, and Destination are preinstalled on the Azure-SSIS IR Enterprise Edition. You also need to install the Oracle Call Interface (OCI) driver, and if necessary configure the Oracle Transport Network Substrate (TNS), on the Azure-SSIS IR. For more info, see [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md). |
+| Oracle connectors | You need to install the Oracle Connection Manager, Source, and Destination, as well as the Oracle Call Interface (OCI) driver, on the Azure-SSIS IR Enterprise Edition. If necessary, you can also configure the Oracle Transport Network Substrate (TNS) on the Azure-SSIS IR. For more info, see [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md). |
| Teradata connectors | You need to install the Teradata Connection Manager, Source, and Destination, as well as the Teradata Parallel Transporter (TPT) API and Teradata ODBC driver, on the Azure-SSIS IR Enterprise Edition. For more info, see [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md). |
| SAP BW connectors | The SAP BW Connection Manager, Source, and Destination are preinstalled on the Azure-SSIS IR Enterprise Edition. You also need to install the SAP BW driver on the Azure-SSIS IR. These connectors support SAP BW 7.0 or earlier versions. To connect to later versions of SAP BW or other SAP products, you can purchase and install SAP connectors from third-party ISVs on the Azure-SSIS IR. For more info about how to install additional components, see [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md). |
| Analysis Services components | The Data Mining Model Training Destination, the Dimension Processing Destination, and the Partition Processing Destination, as well as the Data Mining Query Transformation, are preinstalled on the Azure-SSIS IR Enterprise Edition. All these components support SQL Server Analysis Services (SSAS), but only the Partition Processing Destination supports Azure Analysis Services (AAS). To connect to SSAS, you also need to [configure Windows Authentication credentials in SSISDB](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth). In addition to these components, the Analysis Services Execute DDL Task, the Analysis Services Processing Task, and the Data Mining Query Task are also preinstalled on the Azure-SSIS IR Standard/Enterprise Edition. |
Some of these features require you to install additional components to customize
- [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md) -- [How to develop paid or licensed custom components for the Azure-SSIS integration runtime](how-to-develop-azure-ssis-ir-licensed-components.md)
+- [How to develop paid or licensed custom components for the Azure-SSIS integration runtime](how-to-develop-azure-ssis-ir-licensed-components.md)
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
Event-driven architecture (EDA) is a common data integration pattern that involv
For a ten-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Event-based-data-integration-with-Azure-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Event-based-data-integration-with-Azure-Data-Factory/player]
> [!NOTE]
> The integration described in this article depends on [Azure Event Grid](https://azure.microsoft.com/services/event-grid/). Make sure that your subscription is registered with the Event Grid resource provider. For more info, see [Resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). You must be able to do the *Microsoft.EventGrid/eventSubscriptions/** action. This action is part of the EventGrid EventSubscription Contributor built-in role.
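If the subscription isn't yet registered with the Event Grid resource provider, one way to register it is through the Azure CLI (a sketch, assuming you're signed in to the target subscription):

```azurecli
az provider register --namespace Microsoft.EventGrid
```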
data-factory How To Discover Explore Purview Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-discover-explore-purview-data.md
Title: Discover and explore data in ADF using Purview
-description: Learn how to discover, explore data in Azure Data Factory using Purview
+ Title: Discover and explore data in ADF using Azure Purview
+description: Learn how to discover, explore data in Azure Data Factory using Azure Purview
Last updated 08/10/2021
-# Discover and explore data in ADF using Purview
+# Discover and explore data in ADF using Azure Purview
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)] In this article, you will register an Azure Purview Account to a Data Factory. That connection allows you to discover Azure Purview assets and interact with them through ADF capabilities. You can perform the following tasks in ADF: -- Use the search box at the top to find Purview assets based on keywords
+- Use the search box at the top to find Azure Purview assets based on keywords
- Understand the data based on metadata, lineage, and annotations
- Connect that data to your data factory with linked services or datasets

## Prerequisites

-- [Azure Purview account](../purview/create-catalog-portal.md)
+- [Azure Purview account](../purview/create-catalog-portal.md)
- [Data Factory](./quickstart-create-data-factory-portal.md)
- [Connect an Azure Purview Account into Data Factory](./connect-data-factory-to-azure-purview.md)

## Using Azure Purview in Data Factory
-The use Azure Purview in Data Factory requires you to have access to that Purview account. Data Factory passes-through your Purview permission. As an example, if you have a curator permission role, you will be able to edit metadata scanned by Azure Purview.
+Using Azure Purview in Data Factory requires you to have access to that Azure Purview account. Data Factory passes through your Azure Purview permissions. For example, if you have a curator role, you will be able to edit metadata scanned by Azure Purview.
### Data discovery: search datasets
data-factory Iterative Development Debugging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/iterative-development-debugging.md
Azure Data Factory and Synapse Analytics supports iterative development and debu
For an eight-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Iterative-development-and-debugging-with-Azure-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Iterative-development-and-debugging-with-Azure-Data-Factory/player]
## Debugging a pipeline
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-using-azure-monitor.md
Cloud applications are complex and have many moving parts. Monitors provide data
Azure Monitor provides base-level infrastructure metrics and logs for most Azure services. Azure diagnostic logs are emitted by a resource and provide rich, frequent data about the operation of that resource. Azure Data Factory (ADF) can write diagnostic logs in Azure Monitor. For a seven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Monitor-Data-Factory-pipelines-using-Operations-Management-Suite-OMS/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Monitor-Data-Factory-pipelines-using-Operations-Management-Suite-OMS/player]
For more information, see [Azure Monitor overview](../azure-monitor/overview.md).
data-factory Monitor Visually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-visually.md
You can raise alerts on supported metrics in Data Factory. Select **Monitor** >
For a seven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/shows/azure-friday/Monitor-your-Azure-Data-Factory-pipelines-proactively-with-alerts/player]
+> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Monitor-your-Azure-Data-Factory-pipelines-proactively-with-alerts/player]
### Create alerts
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/parameterize-linked-services.md
You can use the UI in the Azure portal or a programming interface to parameteriz
For a seven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/shows/azure-friday/Parameterize-connections-to-your-data-stores-in-Azure-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Parameterize-connections-to-your-data-stores-in-Azure-Data-Factory/player]
## Supported linked service types
data-factory Quickstart Create Data Factory Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-portal.md
This quickstart describes how to use the Azure Data Factory UI to create and mon
### Video

Watching this video helps you understand the Data Factory UI:
->[!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Visually-build-pipelines-for-Azure-Data-Factory-v2/Player]
+>[!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Visually-build-pipelines-for-Azure-Data-Factory-v2/Player]
## Create a data factory
data-factory Transform Data Databricks Jar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-databricks-jar.md
The Azure Databricks Jar Activity in a [pipeline](concepts-pipelines-activities.
For an eleven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
## Add a Jar activity for Azure Databricks to a pipeline with UI
For more information, see the [Databricks documentation](/azure/databricks/dev-t
## Next steps
-For an eleven-minute introduction and demonstration of this feature, watch the [video](https://channel9.msdn.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player).
+For an eleven-minute introduction and demonstration of this feature, watch the [video](/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player).
data-factory Transform Data Databricks Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-databricks-python.md
The Azure Databricks Python Activity in a [pipeline](concepts-pipelines-activiti
For an eleven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player]
## Add a Python activity for Azure Databricks to a pipeline with UI
data-factory Transform Data Machine Learning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-machine-learning-service.md
Run your Azure Machine Learning pipelines as a step in your Azure Data Factory a
The below video features a six-minute introduction and demonstration of this feature.
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/How-to-execute-Azure-Machine-Learning-service-pipelines-in-Azure-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/How-to-execute-Azure-Machine-Learning-service-pipelines-in-Azure-Data-Factory/player]
## Create a Machine Learning Execute Pipeline activity with UI
data-factory Transform Data Using Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/transform-data-using-databricks-notebook.md
If you don't have an Azure subscription, create a [free account](https://azure.m
For an eleven-minute introduction and demonstration of this feature, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/ingest-prepare-and-transform-using-azure-databricks-and-data-factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/ingest-prepare-and-transform-using-azure-databricks-and-data-factory/player]
## Prerequisites
data-factory Tumbling Window Trigger Dependency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tumbling-window-trigger-dependency.md
In order to build a dependency chain and make sure that a trigger is executed on
For a demonstration on how to create dependent pipelines using tumbling window trigger, watch the following video:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Create-dependent-pipelines-in-your-Azure-Data-Factory/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Create-dependent-pipelines-in-your-Azure-Data-Factory/player]
## Create a dependency in the UI
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-push-lineage-to-purview.md
Currently, lineage is supported for Copy, Data Flow, and Execute SSIS activities
* **Azure subscription**. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. * **Azure Data Factory**. If you don't have an Azure Data Factory, see [Create an Azure Data Factory](./quickstart-create-data-factory-portal.md).
-* **Azure Purview account**. The Purview account captures all lineage data generated by data factory. If you don't have an Azure Purview account, see [Create an Azure Purview](../purview/create-catalog-portal.md).
+* **Azure Purview account**. The Azure Purview account captures all lineage data generated by the data factory. If you don't have an Azure Purview account, see [Create an Azure Purview account](../purview/create-catalog-portal.md).
## Run pipeline and push lineage data to Azure Purview
-### Step 1: Connect Data Factory to your Purview account
+### Step 1: Connect Data Factory to your Azure Purview account
-You can establish the connection between Data Factory and Purview account by following the steps in [Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md).
+You can establish the connection between Data Factory and Azure Purview account by following the steps in [Connect Data Factory to Azure Purview](connect-data-factory-to-azure-purview.md).
### Step 2: Run pipeline in Data Factory
After you run the pipeline, in the [pipeline monitoring view](monitor-visually.m
:::image type="content" source="./media/data-factory-purview/monitor-lineage-reporting-status.png" alt-text="Monitor the lineage reporting status in pipeline monitoring view.":::
-### Step 4: View lineage information in your Purview account
+### Step 4: View lineage information in your Azure Purview account
-On Purview UI, you can browse assets and choose type "Azure Data Factory". You can also search the Data Catalog using keywords.
+In the Azure Purview UI, you can browse assets and choose the type "Azure Data Factory". You can also search the Data Catalog using keywords.
On the activity asset, click the **Lineage** tab to see all the lineage information.

- Copy activity:
- :::image type="content" source="./media/data-factory-purview/copy-lineage.png" alt-text="Screenshot of the Copy activity lineage in Purview." lightbox="./media/data-factory-purview/copy-lineage.png":::
+ :::image type="content" source="./media/data-factory-purview/copy-lineage.png" alt-text="Screenshot of the Copy activity lineage in Azure Purview." lightbox="./media/data-factory-purview/copy-lineage.png":::
- Data Flow activity:
- :::image type="content" source="./media/data-factory-purview/dataflow-lineage.png" alt-text="Screenshot of the Data Flow lineage in Purview." lightbox="./media/data-factory-purview/dataflow-lineage.png":::
+ :::image type="content" source="./media/data-factory-purview/dataflow-lineage.png" alt-text="Screenshot of the Data Flow lineage in Azure Purview." lightbox="./media/data-factory-purview/dataflow-lineage.png":::
> [!NOTE]
> For the lineage of the Data Flow activity, only source and sink are supported. Lineage for Data Flow transformations is not supported yet.

- Execute SSIS Package activity:
- :::image type="content" source="./media/data-factory-purview/ssis-lineage.png" alt-text="Screenshot of the Execute SSIS lineage in Purview." lightbox="./media/data-factory-purview/ssis-lineage.png":::
+ :::image type="content" source="./media/data-factory-purview/ssis-lineage.png" alt-text="Screenshot of the Execute SSIS lineage in Azure Purview." lightbox="./media/data-factory-purview/ssis-lineage.png":::
> [!NOTE]
> For the lineage of the Execute SSIS Package activity, only source and destination are supported. Lineage for transformations is not supported yet.
data-factory Data Factory Monitor Manage App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-monitor-manage-app.md
This article describes how to use the Monitoring and Management app to monitor,
> [!NOTE]
> The user interface shown in the video may not exactly match what you see in the portal. It's slightly older, but the concepts remain the same.
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Azure-Data-Factory-Monitoring-and-Managing-Big-Data-Piplines/player]
->
## Launch the Monitoring and Management app

To launch the Monitoring and Management app, click the **Monitor & Manage** tile on the **Data Factory** blade for your data factory.
data-lake-analytics Data Lake Analytics Data Lake Tools For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-for-vscode.md
Last updated 02/09/2018
In this article, learn how you can use Azure Data Lake Tools for Visual Studio Code (VS Code) to create, test, and run U-SQL scripts. The information is also covered in the following video:
-[![Video player: Azure Data Lake tools for VS Code](media/data-lake-analytics-data-lake-tools-for-vscode/data-lake-tools-for-vscode-video.png)](https://channel9.msdn.com/Series/AzureDataLake/Azure-Data-Lake-Tools-for-VSCode?term=ADL%20Tools%20for%20VSCode")
+![Video player: Azure Data Lake tools for VS Code](media/data-lake-analytics-data-lake-tools-for-vscode/data-lake-tools-for-vscode-video.png)
## Prerequisites
data-lake-store Data Lake Store Performance Tuning Hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/data-lake-store-performance-tuning-hive.md
Restart all the nodes/service for the config to take effect.
Here are a few blogs that will help tune your Hive queries: * [Optimize Hive queries for Hadoop in HDInsight](../hdinsight/hdinsight-hadoop-optimize-hive-query.md) * [Encoding the Hive query file in Azure HDInsight](/archive/blogs/bigdatasupport/encoding-the-hive-query-file-in-azure-hdinsight)
-* [Ignite talk on optimize Hive on HDInsight](https://channel9.msdn.com/events/Machine-Learning-and-Data-Sciences-Conference/Data-Science-Summit-2016/MSDSS25)
+* Ignite talk on optimizing Hive on HDInsight
data-lake-store Data Lake Store With Data Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/data-lake-store-with-data-catalog.md
Before you begin this tutorial, you must have the following:
## Register Data Lake Storage Gen1 as a source for Data Catalog
-> [!VIDEO https://channel9.msdn.com/Series/AzureDataLake/ADCwithADL/player]
- 1. Go to `https://azure.microsoft.com/services/data-catalog`, and click **Get started**. 1. Log into the Azure Data Catalog portal, and click **Publish data**.
defender-for-cloud Information Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/information-protection.md
Last updated 11/09/2021
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-[Azure Purview](../purview/overview.md), Microsoft's data governance service, provides rich insights into the *sensitivity of your data*. With automated data discovery, sensitive data classification, and end-to-end data lineage, Purview helps organizations manage and govern data in hybrid and multi-cloud environments.
+[Azure Purview](../purview/overview.md), Microsoft's data governance service, provides rich insights into the *sensitivity of your data*. With automated data discovery, sensitive data classification, and end-to-end data lineage, Azure Purview helps organizations manage and govern data in hybrid and multi-cloud environments.
Microsoft Defender for Cloud customers using Azure Purview can benefit from an additional vital layer of metadata in alerts and recommendations: information about any potentially sensitive data involved. This knowledge helps solve the triage challenge and ensures security professionals can focus their attention on threats to sensitive data.
-This page explains the integration of Purview's data sensitivity classification labels within Defender for Cloud.
+This page explains the integration of Azure Purview's data sensitivity classification labels within Defender for Cloud.
## Availability |Aspect|Details|
However, where possible, you'd want to focus the security team's efforts on risk
Azure Purview's data sensitivity classifications and data sensitivity labels provide that knowledge.

## Discover resources with sensitive data
-To provide the vital information about discovered sensitive data, and help ensure you have that information when you need it, Defender for Cloud displays information from Purview in multiple locations.
+To provide the vital information about discovered sensitive data, and help ensure you have that information when you need it, Defender for Cloud displays information from Azure Purview in multiple locations.
> [!TIP]
-> If a resource is scanned by multiple Purview accounts, the information shown in Defender for Cloud relates to the most recent scan.
+> If a resource is scanned by multiple Azure Purview accounts, the information shown in Defender for Cloud relates to the most recent scan.
### Alerts and recommendations pages
This vital additional layer of metadata helps solve the triage challenge and ens
### Inventory filters
-The [asset inventory page](asset-inventory.md) has a collection of powerful filters to group your resources with outstanding alerts and recommendations according to the criteria relevant for any scenario. These filters include **Data sensitivity classifications** and **Data sensitivity labels**. Use these filters to evaluate the security posture of resources on which Purview has discovered sensitive data.
+The [asset inventory page](asset-inventory.md) has a collection of powerful filters to group your resources with outstanding alerts and recommendations according to the criteria relevant for any scenario. These filters include **Data sensitivity classifications** and **Data sensitivity labels**. Use these filters to evaluate the security posture of resources on which Azure Purview has discovered sensitive data.
:::image type="content" source="./media/information-protection/information-protection-inventory-filters.png" alt-text="Screenshot of information protection filters in Microsoft Defender for Cloud's asset inventory page." lightbox="./media/information-protection/information-protection-inventory-filters.png":::
When you select a single resource - whether from an alert, recommendation, or th
The resource health page provides a snapshot view of the overall health of a single resource. You can review detailed information about the resource and all recommendations that apply to that resource. Also, if you're using any of the Microsoft Defender plans, you can see outstanding security alerts for that specific resource too.
-When reviewing the health of a specific resource, you'll see the Purview information on this page and can use it determine what data has been discovered on this resource alongside the Purview account used to scan the resource.
+When reviewing the health of a specific resource, you'll see the Azure Purview information on this page and can use it to determine what data has been discovered on this resource, alongside the Azure Purview account used to scan the resource.
:::image type="content" source="./media/information-protection/information-protection-resource-health.png" alt-text="Screenshot of Defender for Cloud's resource health page showing information protection labels and classifications from Azure Purview." lightbox="./media/information-protection/information-protection-resource-health.png":::
A graph shows the number of recommendations and alerts by classified resource ty
For related information, see: - [What is Azure Purview?](../purview/overview.md)-- [Purview's supported data sources and file types](../purview/sources-and-scans.md) and [supported data stores](../purview/purview-connector-overview.md)
+- [Azure Purview's supported data sources and file types](../purview/sources-and-scans.md) and [supported data stores](../purview/purview-connector-overview.md)
- [Azure Purview deployment best practices](../purview/deployment-best-practices.md)
- [How to label your data in Azure Purview](../purview/how-to-automatically-label-your-content.md)
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/overview-page.md
In the center of the page are the **feature tiles**, each linking to a high prof
- **Regulatory compliance** - Defender for Cloud provides insights into your compliance posture based on continuous assessments of your Azure environment. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md). - **Firewall Manager** - This tile shows the status of your hubs and networks from [Azure Firewall Manager](../firewall-manager/overview.md). - **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for servers, you'll also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).-- **Information protection** - A graph on this tile shows the resource types that have been scanned by [Azure Purview](../purview/overview.md), found to contain sensitive data, and have outstanding recommendations and alerts. Follow the **scan** link to access the Azure Purview accounts and configure new scans, or select any other part of the tile to open the [asset inventory](asset-inventory.md) and view your resources according to your Purview data sensitivity classifications. [Learn more](information-protection.md).
+- **Information protection** - A graph on this tile shows the resource types that have been scanned by [Azure Purview](../purview/overview.md), found to contain sensitive data, and have outstanding recommendations and alerts. Follow the **scan** link to access the Azure Purview accounts and configure new scans, or select any other part of the tile to open the [asset inventory](asset-inventory.md) and view your resources according to your Azure Purview data sensitivity classifications. [Learn more](information-protection.md).
### Insights
defender-for-cloud Security Center Planning And Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/security-center-planning-and-operations-guide.md
This page shows the details regarding the time that the attack took place, the s
Once you identify the compromised system, you can run a [workflow automation](workflow-automation.md) that was previously created. These are a collection of procedures that can be executed from Defender for Cloud once triggered by an alert.
-In the [How to Leverage the Defender for Cloud & Microsoft Operations Management Suite for an Incident Response](https://channel9.msdn.com/Blogs/Taste-of-Premier/ToP1703) video, you can see some demonstrations that show how Defender for Cloud can be used in each one of those stages.
+In the How to Leverage the Defender for Cloud & Microsoft Operations Management Suite for an Incident Response video, you can see some demonstrations that show how Defender for Cloud can be used in each one of those stages.
> [!NOTE] > Read [Managing and responding to security alerts in Defender for Cloud](managing-and-responding-alerts.md) for more information on how to use Defender for Cloud capabilities to assist you during your Incident Response process.
defender-for-cloud Security Center Readiness Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/security-center-readiness-roadmap.md
Articles
- [Protecting Azure SQL service and data in Defender for Cloud](./implement-security-recommendations.md)
-Video
-- [Mitigating Security Issues using Defender for Cloud](https://channel9.msdn.com/Blogs/Azure-Security-Videos/Mitigating-Security-Issues-using-Azure-Security-Center)- ### Defender for Cloud for incident response To reduce costs and damage, it's important to have an incident response plan in place before an attack takes place. You can use Defender for Cloud in different stages of an incident response. Use the following resources to understand how Defender for Cloud can be incorporated in your incident response process. Videos
-* [Defender for Cloud in Incident Response](https://channel9.msdn.com/Blogs/Azure-Security-Videos/Azure-Security-Center-in-Incident-Response)
* [Respond quickly to threats with next-generation security operation, and investigation](https://youtu.be/e8iFCz5RM4g) Articles
devops-project Azure Devops Project Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devops-project/azure-devops-project-sql-database.md
To learn more about the CI/CD pipeline, see:
## Videos
-> [!VIDEO https://channel9.msdn.com/Events/Build/2018/BRK3308/player]
+> [!VIDEO https://docs.microsoft.com/Events/Build/2018/BRK3308/player]
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-devtest-user.md
# Add owners and users in Azure DevTest Labs
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/How-to-set-security-in-your-DevTest-Lab/player]
->
->
Access in Azure DevTest Labs is controlled by [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). Using Azure RBAC, you can segregate duties within your team into *roles* where you grant only the amount of access necessary to users to perform their jobs. Three of these Azure roles are *Owner*, *DevTest Labs User*, and *Contributor*. In this article, you learn what actions can be performed in each of the three main Azure roles. From there, you learn how to add users to a lab - both via the portal and via a PowerShell script, and how to add users at the subscription level.
devtest-labs Image Factory Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/image-factory-create.md
The solution enables the speed of creating virtual machines from custom images w
<br/>
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Custom-Image-Factory-with-Azure-DevTest-Labs/player]
-- ## High-level view of the solution The solution enables the speed of creating virtual machines from custom images while eliminating extra ongoing maintenance costs. With this solution, you can automatically create custom images and distribute them to other DevTest Labs. You use Azure DevOps (formerly Visual Studio Team Services) as the orchestration engine for automating all the operations in the DevTest Labs.
event-grid Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/configure-private-endpoints.md
topicName = "<TOPIC NAME>"
connectionName="<ENDPOINT CONNECTION NAME>" endpointName=<ENDPOINT NAME>
-# resource ID of the topic. replace <SUBSCRIPTION ID>, <RESOURCE GROUP NAME>, and <TOPIC NAME>
-topicResourceID="/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventGrid/topics/<TOPIC NAME>"
+# resource ID of the topic. replace <SUBSCRIPTION ID>, <RESOURCE GROUP NAME>, and <TOPIC NAME>
+topicResourceID="/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventGrid/topics/<TOPIC NAME>"
# select subscription az account set --subscription $subscriptionID
az eventgrid topic show \
--name $topicName # create private endpoint for the topic you created
-az network private-endpoint create
+az network private-endpoint create \
  --resource-group $resourceGroupName \
  --name $endpointName \
  --vnet-name $vNetName \
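  --subnet $subnetName \
  --private-connection-resource-id "$topicResourceID" \
  --group-id topic \
  --connection-name $connectionName
# Assumption: $subnetName is defined alongside the other variables above, and the
# remaining parameters follow the standard `az network private-endpoint create`
# pattern; "topic" is the group ID used for Event Grid topics.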
event-hubs Create Schema Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/create-schema-registry.md
Title: Create an Azure Event Hubs schema registry description: This article shows you how to create a schema registry in an Azure Event Hubs namespace. Previously updated : 06/01/2021 Last updated : 01/13/2022
-# Create an Azure Event Hubs schema registry
-This article shows you how to create a schema group with schemas in a schema registry hosted by Azure Event Hubs. For an overview of the Schema Registry feature of Azure Event Hubs, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md).
+# Quickstart: Create an Azure Event Hubs schema registry using Azure portal
+
+**Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationships between schemas through a grouping construct (schema groups). For more information, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md).
+
+This article shows you how to create a schema group with schemas in a schema registry hosted by Azure Event Hubs.
> [!NOTE] > - The feature isn't available in the **basic** tier.
In this section, you add a schema to the schema group using the Azure portal.
:::image type="content" source="./media/create-schema-registry/new-version.png" alt-text="Image showing the new version of schema"::: 1. Select `1` to see the version 1 of the schema.
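If you'd rather script the registration step than use the portal, the following is a minimal sketch using the Schema Registry SDK for Python. This SDK choice is an assumption, since the quickstart itself is portal-based; the namespace, group name, and Avro definition below are placeholders.

```python
# pip install azure-schemaregistry azure-identity
from azure.identity import DefaultAzureCredential
from azure.schemaregistry import SchemaRegistryClient

# Placeholders: replace with your Event Hubs namespace and schema group.
client = SchemaRegistryClient(
    fully_qualified_namespace="<your-namespace>.servicebus.windows.net",
    credential=DefaultAzureCredential(),
)

definition = """{
  "type": "record",
  "name": "Order",
  "namespace": "com.example",
  "fields": [{ "name": "id", "type": "string" }]
}"""

# Registering the same schema name again with a changed definition creates a
# new version, mirroring the portal steps above.
props = client.register_schema("<your-schema-group>", "Order", definition, "Avro")
print(props.id)
```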
+## Clean up resources
+
+> [!NOTE]
+> Don't clean up resources if you want to continue to the next quick start linked from **Next steps**.
+
+1. Navigate to the **Event Hubs Namespace** page.
+1. Select **Schema Registry** on the left menu.
+1. Select the **schema group** you created in this quickstart.
+1. On the **Schema Group** page, select **Delete** on the toolbar.
+1. On the **Delete Schema Group** page, type the name of the schema group, and select **Delete**.
## Next steps
-For more information about schema registry, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md).
+
+> [!div class="nextstepaction"]
+> [Validate schema when sending and receiving events - AMQP and .NET](schema-registry-dotnet-send-receive-quickstart.md).
event-hubs Dynamically Add Partitions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/dynamically-add-partitions.md
Title: Dynamically add partitions to an event hub in Azure Event Hubs description: This article shows you how to dynamically add partitions to an event hub in Azure Event Hubs. Previously updated : 10/20/2021 Last updated : 01/13/2022
-# Dynamically add partitions to an event hub (Apache Kafka topic) in Azure Event Hubs
+# Dynamically add partitions to an event hub (Apache Kafka topic)
Event Hubs provides message streaming through a partitioned consumer pattern in which each consumer only reads a specific subset, or partition, of the message stream. This pattern enables horizontal scale for event processing and provides other stream-focused features that are unavailable in queues and topics. A partition is an ordered sequence of events that is held in an event hub. As newer events arrive, they're added to the end of this sequence. For more information about partitions in general, see [Partitions](event-hubs-scalability.md#partitions) You can specify the number of partitions at the time of creating an event hub. In some scenarios, you may need to add partitions after the event hub has been created. This article describes how to dynamically add partitions to an existing event hub.
event-hubs Schema Registry Dotnet Send Receive Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/schema-registry-dotnet-send-receive-quickstart.md
Title: Validate schema when sending and receiving events - AMQP and .NET
+ Title: Validate schema when sending or receiving events
description: This article provides a walkthrough to create a .NET Core application that sends/receives events to/from Azure Event Hubs with schema validation using Schema Registry. Previously updated : 11/02/2021 Last updated : 01/12/2022 ms.devlang: csharp
-# Validate schema when sending and receiving events - AMQP and .NET
+# Quickstart: Validate schema when sending and receiving events - AMQP and .NET
+
+**Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationships between schemas through a grouping construct (schema groups). For more information, see [Azure Schema Registry in Event Hubs](schema-registry-overview.md).
+ This quickstart shows how to send events to and receive events from an event hub with schema validation using the **Azure.Messaging.EventHubs** .NET library. ## Prerequisites
This section shows how to write a .NET Core console application that receives ev
## Next steps
-Check out [Azure Schema Registry client library for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/schemaregistry/Azure.Data.SchemaRegistry) for additional information.
+
+> [!div class="nextstepaction"]
+> Check out the [Azure Schema Registry client library for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/schemaregistry/Azure.Data.SchemaRegistry)
event-hubs Schema Registry Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/schema-registry-overview.md
Title: Azure Schema Registry in Azure Event Hubs description: This article provides an overview of Schema Registry support by Azure Event Hubs. Previously updated : 11/02/2021 Last updated : 01/13/2022
In many event streaming and messaging scenarios, the event or message payload co
An event producer uses a schema to serialize event payload and publish it to an event broker such as Event Hubs. Event consumers read event payload from the broker and de-serialize it using the same schema. So, both producers and consumers can validate the integrity of the data with the same schema. ## What is Azure Schema Registry? **Azure Schema Registry** is a feature of Event Hubs, which provides a central repository for schemas for event-driven and messaging-centric applications. It provides the flexibility for your producer and consumer applications to **exchange data without having to manage and share the schema**. It also provides a simple governance framework for reusable schemas and defines relationship between schemas through a grouping construct (schema groups). With schema-driven serialization frameworks like Apache Avro, moving serialization metadata into shared schemas can also help with **reducing the per-message overhead**. That's because each message won't need to have the metadata (type information and field names) as it's the case with tagged formats such as JSON.
The information flow when you use schema registry is the same for all protocols
The following diagram shows how the information flows when event producers and consumers use Schema Registry with the **Kafka** protocol. ### Producer
The following diagram shows how the information flows when event producers and c
An Event Hubs namespace now can host schema groups alongside event hubs (or Kafka topics). It hosts a schema registry and can have multiple schema groups. In spite of being hosted in Azure Event Hubs, the schema registry can be used universally with all Azure messaging services and any other message or events broker. Each of these schema groups is a separately securable repository for a set of schemas. Groups can be aligned with a particular application or an organizational unit. ### Schema groups
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
This quickstart shows you how to create an ExpressRoute circuit using the Azure
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * Review the [prerequisites](expressroute-prerequisites.md) and [workflows](expressroute-workflows.md) before you begin configuration.
-* You can [view a video](https://channel9.msdn.com/Blogs/Azure/Azure-ExpressRoute-How-to-create-an-ExpressRoute-circuit) before beginning to better understand the steps.
+* You can view a video before beginning to better understand the steps.
## <a name="create"></a>Create and provision an ExpressRoute circuit
expressroute Expressroute Howto Erdirect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-erdirect.md
ExpressRoute Direct gives you the ability to directly connect to Microsoft's glo
## Before you begin
-Before using ExpressRoute Direct, you must first enroll your subscription. Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, please do the following via Azure PowerShell:
+Before using ExpressRoute Direct, you must first enroll your subscription. To enroll, please do the following via Azure PowerShell:
1. Sign in to Azure and select the subscription you wish to enroll. ```azurepowershell-interactive
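# The sign-in commands are elided in this digest; a sketch, assuming the
# standard Az PowerShell module pattern:
Connect-AzAccount
Select-AzSubscription -Subscription "<subscription ID>"
```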
expressroute Expressroute Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-prerequisites.md
If you plan to enable Microsoft 365 on ExpressRoute, review the following docume
* [Network and migration planning for Microsoft 365](/microsoft-365/enterprise/network-and-migration-planning) * [Microsoft 365 integration with on-premises environments](/microsoft-365/enterprise/microsoft-365-integration) * [Stay up to date with Office 365 IP Address changes](/microsoft-365/enterprise/microsoft-365-ip-web-service)
-* [ExpressRoute on Office 365 advanced training videos](https://channel9.msdn.com/series/aer/)
+* ExpressRoute on Office 365 advanced training videos
## Next steps * For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md).
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/overview.md
The following limitations exist for certain fields:
## Video overview The following overview of Azure Blueprints is from Azure Fridays. For video download, visit
-[Azure Fridays - An overview of Azure Blueprints](https://channel9.msdn.com/Shows/Azure-Friday/An-overview-of-Azure-Blueprints)
+[Azure Fridays - An overview of Azure Blueprints](/Shows/Azure-Friday/An-overview-of-Azure-Blueprints)
on Channel 9. > [!VIDEO https://www.youtube.com/embed/cQ9D-d6KkMY]
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/overview.md
resource. For more information about making existing resources compliant, see
### Video overview The following overview of Azure Policy is from Build 2018. For slides or video download, visit
-[Govern your Azure environment through Azure Policy](https://channel9.msdn.com/events/Build/2018/THR2030)
+[Govern your Azure environment through Azure Policy](/events/Build/2018/THR2030)
on Channel 9. > [!VIDEO https://www.youtube.com/embed/dxMaYF2GB7o]
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/reference/supported-tables-resources.md
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.powerplatform/enterprisepolicies - microsoft.projectbabylon/accounts - microsoft.providerhubdevtest/regionalstresstests-- Microsoft.Purview/Accounts (Purview accounts)
+- Microsoft.Purview/Accounts (Azure Purview accounts)
- Microsoft.Quantum/Workspaces (Quantum Workspaces) - Microsoft.RecommendationsService/accounts (Intelligent Recommendations Accounts) - Microsoft.RecommendationsService/accounts/modeling (Modeling)
hdinsight Interactive Query Troubleshoot View Time Out https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/interactive-query/interactive-query-troubleshoot-view-time-out.md
This article describes troubleshooting steps and possible resolutions for issues
When running certain queries from the Apache Hive view, the following error may be encountered: ```
-result fetch timed out
+Result fetch timed out
+ java.util.concurrent.TimeoutException: deadline passed
+ at akka.actor.dsl.Inbox$InboxActor$$anonfun$receive$1.applyOrElse(Inbox.scala:117)
+ at scala.PartialFunction$AndThen.applyOrElse(PartialFunction.scala:189)
+ at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
+ at akka.actor.dsl.Inbox$InboxActor.aroundReceive(Inbox.scala:62)
+ at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
+ at akka.actor.ActorCell.invoke(ActorCell.scala:487)
+ at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
+ at akka.dispatch.Mailbox.run(Mailbox.scala:220)
+ at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
+ at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
+ at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
+ at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
+ at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
``` ## Cause
The Hive View default timeout value may not be suitable for the query you are ru
views.request.read.timeout.millis=300000 views.ambari.hive.<HIVE_VIEW_INSTANCE_NAME>.result.fetch.timeout=300000 ```
- The value of `HIVE_VIEW_INSTANCE_NAME` is available at the end of the Hive View URL.
+ The value of `HIVE_VIEW_INSTANCE_NAME` is available by selecting YOUR_USERNAME > Manage Ambari > Views and reading the Names column. Do not use the name that appears in the URL.
2. Restart the active Ambari server by running the following. If you get an error message saying it's not the active Ambari server, just ssh into the next headnode and repeat this step. ```
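# The restart command is elided in this digest; on HDInsight headnodes it is
# typically the following (assumption - verify for your cluster version):
sudo ambari-server restart
```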
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Previously updated : 12/21/2021 Last updated : 01/11/2022
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
+## December 2021
+
+### **Features and enhancements**
+
+|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
+| :- | :- |
+|Added Publisher to `CapabilityStatement.name` |You can now find the publisher in the capability statement at `CapabilityStatement.name`. [#2319](https://github.com/microsoft/fhir-server/pull/2319) |
+|Log `FhirOperation` linked to anonymous calls to Request metrics |We weren't logging operations that didn't require authentication. We extended the ability to get `FhirOperation` type in `RequestMetrics` for anonymous calls. [#2295](https://github.com/microsoft/fhir-server/pull/2295) |
+
+### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | :- |
+|Fixed 500 error when `SearchParameter` Code is null |Fixed an issue where a `SearchParameter` with a null value for Code resulted in a 500 error. Now it results in an `InvalidResourceException`, like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) |
+|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we will return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
+|`_sort` can cause `ChainedSearch` to return incorrect results |Previously, the sort options from the chained search's `SearchOption` object were not cleared, causing them to be passed through to the chained sub-search, where they are not valid. This could return no results when there should be some. This bug is now fixed [#2347](https://github.com/microsoft/fhir-server/pull/2347). It addresses GitHub bug [#2344](https://github.com/microsoft/fhir-server/issues/2344). |
+++ ## November 2021 ### **Features and enhancements**
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information | | :- | : | |Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](../../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. |
-|Added software name and version to capability statement |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Healthcare APIs. The software version will now specify which open source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
+|Added software name and version to capability statement |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Healthcare APIs. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
|Log 500's to `RequestMetric` |Previously, 500s or any unknown/unhandled errors were not getting logged in `RequestMetric`. They're now getting logged [#2240](https://github.com/microsoft/fhir-server/pull/2240). For more information, see [Enable diagnostic settings in Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md) | |Compress continuation tokens |In certain instances, the continuation token was too long to be able to follow the [next link](../../healthcare-apis/azure-api-for-fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). |
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Ser
|Bug fixes |Related information | | :-- | : | |Resolved 500 error when the date was passed with a time zone. |This fixes a 500 error when a date with a time zone was passed into a datetime field [#2270](https://github.com/microsoft/fhir-server/pull/2270). |
-|Resolved issue where posting a bundle with incorrect Media Type returned a 500 error. |Previously when posting a search with a key that contains certain characters, a 500 error was returned. This fixes this issue [#2264](https://github.com/microsoft/fhir-server/pull/2264), and it addresses [#2148](https://github.com/microsoft/fhir-server/issues/2148). |
+|Resolved an issue where posting a bundle with an incorrect Media Type returned a 500 error. |Previously, when posting a search with a key that contains certain characters, a 500 error was returned. This fixes that issue [#2264](https://github.com/microsoft/fhir-server/pull/2264), and it addresses [#2148](https://github.com/microsoft/fhir-server/issues/2148). |
+ ## October 2021
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/release-notes.md
Previously updated : 12/21/2021 Last updated : 01/11/2022
Azure Healthcare APIs is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Healthcare APIs including the different service types (FHIR service, DICOM service, and IoT connector) that seamlessly work with one another.
+## December 2021
+
+### Azure Healthcare APIs
+
+### **Features and enhancements**
+
+|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
+| :- | :- |
+|Quota details for support requests |We've updated the quota details for customer support requests with the latest information. |
+|Local RBAC |We've updated the local RBAC documentation to clarify the use of the secondary tenant and the steps to disable it. |
+|Deploy and configure Healthcare APIs using scripts |We've started the process of providing PowerShell, CLI scripts, and ARM templates to configure app registration and role assignments. Note that scripts for deploying Healthcare APIs will be available after GA. |
+
+### FHIR service
+
+### **Features and enhancements**
+
+|Enhancements | Related information |
+| :- | -: |
+|Added Publisher to `CapabilityStatement.name` |You can now find the publisher in the capability statement at `CapabilityStatement.name`. [#2319](https://github.com/microsoft/fhir-server/pull/2319) |
+|Log `FhirOperation` linked to anonymous calls to Request metrics |We weren't logging operations that didn't require authentication. We extended the ability to get `FhirOperation` type in `RequestMetrics` for anonymous calls. [#2295](https://github.com/microsoft/fhir-server/pull/2295) |
+
+### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | :- |
+|Fixed 500 error when `SearchParameter` Code is null |Fixed an issue where a `SearchParameter` with a null value for Code resulted in a 500 error. Now it results in an `InvalidResourceException`, like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) |
+|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we will return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
+|Handled SQL Timeout issue |If SQL Server timed out, the PUT `/resource{id}` returned a 500 error. Now we handle the 500 error and return a timeout exception with an operation outcome. [#2290](https://github.com/microsoft/fhir-server/pull/2290) |
+ ## November 2021 ### FHIR service
Azure Healthcare APIs is a set of managed API services based on open standards a
| Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Related information | | :- | --: | |Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](./../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. |
-|Added software name and version to capability statement. |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Healthcare APIs. The software version will now specify which open source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
+|Added software name and version to capability statement. |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Healthcare APIs. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
|Compress continuation tokens |In certain instances, the continuation token was too long to be able to follow the [next link](./../healthcare-apis/fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). | |FHIR service autoscale |The [FHIR service autoscale](./fhir/fhir-service-autoscale.md) is designed to provide optimized service scalability automatically to meet customer demands when they perform data transactions in consistent or various workloads at any time. It's available in all regions where the FHIR service is supported. |
Azure Healthcare APIs is a set of managed API services based on open standards a
|Bug fixes | Related information | | :- | -: |
-|Implemented fix to resolve QIDO paging ordering issues | [#989](https://github.com/microsoft/dicom-server/pull/989) |
+|Implemented fix to resolve QIDO paging-ordering issues | [#989](https://github.com/microsoft/dicom-server/pull/989) |
| :- | -: | ### **IoT connector**
iot-fundamentals Iot Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-support-help.md
Here are suggestions for where you can get help when developing your Azure IoT s
## Create an Azure support request <div class='icon is-large'>
- <img alt='Azure support' src='https://docs.microsoft.com/media/logos/logo_azure.svg'>
+ <img alt='Azure support' src='/media/logos/logo_azure.svg'>
</div> Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
If you can't find an answer to your problem using search, submit a new question
## Post a question on Stack Overflow <div class='icon is-large'>
- <img alt='Stack Overflow' src='https://docs.microsoft.com/media/logos/logo_stackoverflow.svg'>
+ <img alt='Stack Overflow' src='/media/logos/logo_stackoverflow.svg'>
</div> For answers on your developer questions from the largest community developer ecosystem, ask your question on Stack Overflow.
If you do submit a new question to Stack Overflow, please use one or more of the
## Stay informed of updates and new releases <div class='icon is-large'>
- <img alt='Stay informed' src='https://docs.microsoft.com/media/common/i_blog.svg'>
+ <img alt='Stay informed' src='/media/common/i_blog.svg'>
</div> Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=iot).
-News and information about Azure IoT is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/internet-of-things/) and on the [Internet of Things Show on Channel 9](https://channel9.msdn.com/Shows/Internet-of-Things-Show).
+News and information about Azure IoT is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/internet-of-things/) and on the [Internet of Things Show on Channel 9](/Shows/Internet-of-Things-Show).
Also, share your experiences, engage and learn from experts in the [Internet of Things Tech Community](https://techcommunity.microsoft.com/t5/Internet-of-Things-IoT/ct-p/IoT).
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-device-twins.md
In the previous example, the `telemetryConfig` device twin desired and reported
}, ```
-2. The device app is notified of the change immediately if connected, or at the first reconnect. The device app then reports the updated configuration (or an error condition using the `status` property). Here is the portion of the reported properties:
+2. The device app is notified of the change immediately if the device is connected. If it's not connected, the device app follows the [device reconnection flow](#device-reconnection-flow) when it connects. The device app then reports the updated configuration (or an error condition using the `status` property). Here is the portion of the reported properties:
```json "reported": {
In the previous example, the `telemetryConfig` device twin desired and reported
> [!NOTE] > The preceding snippets are examples, optimized for readability, of one way to encode a device configuration and its status. IoT Hub does not impose a specific schema for the device twin desired and reported properties in the device twins.
->
+
+> [!IMPORTANT]
+> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot-develop/concepts-convention.md#writable-properties).
You can use twins to synchronize long-running operations such as firmware updates. For more information on how to use properties to synchronize and track a long running operation across devices, see [Use desired properties to configure devices](tutorial-device-twins.md).
IoT Hub does not preserve desired properties update notifications for disconnect
The device app can ignore all notifications with `$version` less than or equal to the version of the full retrieved document. This approach is possible because IoT Hub guarantees that versions always increment.
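To make the flow concrete, here is a minimal sketch using the Python device SDK (azure-iot-device); this SDK choice is an assumption, since the article itself is SDK-neutral, and `apply_desired` is a hypothetical helper:

```python
# pip install azure-iot-device   (assumption: v2 of the Python device SDK)
from azure.iot.device import IoTHubDeviceClient

def apply_desired(properties):
    # Hypothetical helper: apply configuration such as telemetryConfig here.
    print("applying desired properties:", properties)

client = IoTHubDeviceClient.create_from_connection_string("<device connection string>")
client.connect()                                              # 1. connect

baseline_version = 0

def on_patch(patch):
    # Ignore notifications already covered by the full retrieved document.
    if patch["$version"] <= baseline_version:
        return
    apply_desired(patch)

client.on_twin_desired_properties_patch_received = on_patch   # 2. subscribe
twin = client.get_twin()                                      # 3. retrieve full twin
baseline_version = twin["desired"]["$version"]
apply_desired(twin["desired"])
```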
-> [!NOTE]
-> This logic is already implemented in the [Azure IoT device SDKs](iot-hub-devguide-sdks.md). This description is useful only if the device app cannot use any of Azure IoT device SDKs and must program the MQTT interface directly.
->
- ## Additional reference material Other reference topics in the IoT Hub developer guide include:
iot-hub Iot Hub Devguide Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-module-twins.md
In the previous example, the `telemetryConfig` module twin desired and reported
... ```
-2. The module app is notified of the change immediately if connected, or at the first reconnect. The module app then reports the updated configuration (or an error condition using the `status` property). Here is the portion of the reported properties:
+2. The module app is notified of the change immediately if the module is connected. If it's not connected, the module app follows the [module reconnection flow](#module-reconnection-flow) when it connects. The module app then reports the updated configuration (or an error condition using the `status` property). Here is the portion of the reported properties:
```json "reported": {
In the previous example, the `telemetryConfig` module twin desired and reported
> [!NOTE] > The preceding snippets are examples, optimized for readability, of one way to encode a module configuration and its status. IoT Hub does not impose a specific schema for the module twin desired and reported properties in the module twins.
->
->
+
+> [!IMPORTANT]
+> IoT Plug and Play defines a schema that uses several additional properties to synchronize changes to desired and reported properties. If your solution uses IoT Plug and Play, you must follow the Plug and Play conventions when updating twin properties. For more information and an example, see [Writable properties in IoT Plug and Play](../iot-develop/concepts-convention.md#writable-properties).
## Back-end operations The solution back end operates on the module twin using the following atomic operations, exposed through HTTPS:
Module twin desired and reported properties do not have ETags, but have a `$vers
Versions are also useful when an observing agent (such as the module app observing the desired properties) must reconcile races between the result of a retrieve operation and an update notification. The section [Device reconnection flow](iot-hub-devguide-device-twins.md#device-reconnection-flow) provides more information.
+## Module reconnection flow
+
+IoT Hub does not preserve desired properties update notifications for disconnected modules. It follows that a module that is connecting must retrieve the full desired properties document, in addition to subscribing for update notifications. Given the possibility of races between update notifications and full retrieval, the following flow must be ensured:
+
+1. Module app connects to an IoT hub.
+2. Module app subscribes for desired properties update notifications.
+3. Module app retrieves the full document for desired properties.
+
+The module app can ignore all notifications with `$version` less than or equal to the version of the full retrieved document. This approach is possible because IoT Hub guarantees that versions always increment.
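The same guard works for modules; a minimal sketch, assuming the Python device SDK (azure-iot-device) running in an IoT Edge environment:

```python
# pip install azure-iot-device   (assumption: v2 SDK, running as an IoT Edge module)
from azure.iot.device import IoTHubModuleClient

client = IoTHubModuleClient.create_from_edge_environment()
client.connect()                                              # 1. connect

baseline_version = 0

def on_patch(patch):
    # Apply only patches newer than the full document retrieved in step 3.
    if patch["$version"] > baseline_version:
        print("new desired properties:", patch)

client.on_twin_desired_properties_patch_received = on_patch   # 2. subscribe
twin = client.get_twin()                                      # 3. retrieve full document
baseline_version = twin["desired"]["$version"]
```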
+ ## Next steps To try out some of the concepts described in this article, see the following IoT Hub tutorials:
iot-hub Iot Hub Device Sdk C Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-sdk-c-intro.md
There are a broad range of platforms on which the SDK has been tested (see the [
The following video presents an overview of the Azure IoT SDK for C:
->[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Azure-IoT-C-SDK-insights/Player]
+>[!VIDEO https://docs.microsoft.com/Shows/Internet-of-Things-Show/Azure-IoT-C-SDK-insights/Player]
This article introduces you to the architecture of the Azure IoT device SDK for C. It demonstrates how to initialize the device library, send data to IoT Hub, and receive messages from it. The information in this article should be enough to get started using the SDK, but also provides pointers to additional information about the libraries.
lighthouse Cloud Solution Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/cloud-solution-provider.md
# Azure Lighthouse and the Cloud Solution Provider program
-If you're a [CSP (Cloud Solution Provider)](/partner-center/csp-overview) partner, you can already access the Azure subscriptions created for your customers through the CSP program by using the [Administer On Behalf Of (AOBO)](https://channel9.msdn.com/Series/cspdev/Module-11-Admin-On-Behalf-Of-AOBO) functionality. This access allows you to directly support, configure, and manage your customers' subscriptions.
+If you're a [CSP (Cloud Solution Provider)](/partner-center/csp-overview) partner, you can already access the Azure subscriptions created for your customers through the CSP program by using the Administer On Behalf Of (AOBO) functionality. This access allows you to directly support, configure, and manage your customers' subscriptions.
With [Azure Lighthouse](../overview.md), you can use Azure delegated resource management along with AOBO. This helps improve security and reduces unnecessary access by enabling more granular permissions for your users. It also allows for greater efficiency and scalability, as your users can work across multiple customer subscriptions using a single login in your tenant.
load-testing How To Find Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/how-to-find-download-logs.md
Title: Download Apache JMeter logs for troubleshooting
+ Title: Troubleshoot load test errors
-description: Learn how you can troubleshoot Apache JMeter script problems by downloading the Azure Load Testing logs in the Azure portal.
+description: Learn how you can troubleshoot errors during your load test by downloading and analyzing the Apache JMeter logs in the Azure portal.
Previously updated : 11/30/2021 Last updated : 01/14/2022
-# Troubleshoot JMeter problems by downloading Azure Load Testing Preview logs
+# Troubleshoot load test errors by downloading Apache JMeter logs in Azure Load Testing Preview
-In this article, you'll learn how to download the Azure Load Testing Preview logs in the Azure portal to troubleshoot problems with the Apache JMeter script.
+In this article, you'll learn how to download the Apache JMeter logs for Azure Load Testing Preview in the Azure portal. You can use the logging information to troubleshoot problems while the Apache JMeter script runs.
-When you run a load test, the Azure Load Testing test engines execute your Apache JMeter test script. The Apache JMeter log can help you identify both problems in the JMX file and issues that occur during the test execution. For example, the application endpoint might be unavailable, or the JMX file might contain invalid credentials.
+The Apache JMeter log can help you identify problems in your JMX file, or run-time issues that occur while the test is running. For example, the application endpoint might be unavailable, or the JMX file might contain invalid credentials.
+
+When you run a load test, the Azure Load Testing test engines execute your Apache JMeter test script. While your load test is running, Apache JMeter stores detailed logging information in the worker node logs. You can download the JMeter worker node log for your load test run from the Azure portal to help you diagnose load test errors.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
When you run a load test, the Azure Load Testing test engines execute your Apach
## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An Azure Load Testing resource that has a completed test run. If you need to create an Azure Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
+- An Azure load testing resource that has a completed test run. If you need to create an Azure load testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
## Access and download logs for your load test
In this section, you retrieve and download the Azure Load Testing logs from the
1. On the dashboard, select **Download**, and then select **Logs**.
- :::image type="content" source="media/how-to-find-download-logs/logs.png" alt-text="Screenshot that shows how to download the load test logs from the test result page.":::
+ :::image type="content" source="media/how-to-find-download-logs/logs.png" alt-text="Screenshot that shows how to download the load test logs from the test run details page.":::
- The browser should now start downloading the execution logs as a zipped folder.
+ The browser should now start downloading the JMeter worker node log file *worker.log*.
-1. You can use any extraction tool to extract the zipped folder and access the logging information.
+1. You can use a text editor to open the log file.
:::image type="content" source="media/how-to-find-download-logs/jmeter-log.png" alt-text="Screenshot that shows the JMeter log file content.":::
+ The *worker.log* file can help you diagnose the root cause of a failing load test. In the previous screenshot, you can see that the test failed because a file is missing.
+ ## Next steps -- For more information about comparing test results, see [Compare multiple test results](./how-to-compare-multiple-test-runs.md).
+- Learn how to [Monitor server-side application metrics](./how-to-update-rerun-test.md).
+
+- Learn how to [Get detailed insights for Azure App Service based applications](./how-to-appservice-insights.md).
-- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- Learn how to [Compare multiple load test runs](./how-to-compare-multiple-test-runs.md).
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview.md
The following list describes just a few example tasks, business processes, and w
* Monitor tweets, analyze the sentiment, and create alerts or tasks for items that need review.
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Go-serverless-Enterprise-integration-with-Azure-Logic-Apps/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Go-serverless-Enterprise-integration-with-Azure-Logic-Apps/player]
Based on the logic app resource type that you choose and create, your logic apps run in multi-tenant Azure Logic Apps, [single-tenant Azure Logic Apps](single-tenant-overview-compare.md), or a dedicated [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md) when accessing an Azure virtual network. To run logic apps in containers, [create single-tenant based logic apps using Azure Arc enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md). For more information, review [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md) and [Resource type and host environment differences for logic apps](#resource-environment-differences).
You might also want to explore other quickstart guides for Azure Logic Apps:
Learn more about the Azure Logic Apps platform with these introductory videos:
-> [!VIDEO https://channel9.msdn.com/Shows/Azure-Friday/Connect-and-extend-your-mainframe-to-the-cloud-with-Logic-Apps/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Azure-Friday/Connect-and-extend-your-mainframe-to-the-cloud-with-Logic-Apps/player]
## Next steps
machine-learning Deploy With Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-with-resource-manager-template.md
+
+ Title: 'ML Studio (classic): Deploy workspaces with Azure Resource Manager - Azure'
+description: How to deploy a workspace for Machine Learning Studio (classic) using Azure Resource Manager template
++++++++ Last updated : 02/05/2018+
+# Deploy Machine Learning Studio (classic) Workspace Using Azure Resource Manager
+
+**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
++
+Using an Azure Resource Manager deployment template saves you time by giving you a scalable way to deploy interconnected components with a validation and retry mechanism. To set up Machine Learning Studio (classic) Workspaces, for example, you need to first configure an Azure storage account and then deploy your workspace. Imagine doing this manually for hundreds of workspaces. An easier alternative is to use an Azure Resource Manager template to deploy a Studio (classic) Workspace and all its dependencies. This article takes you through this process step-by-step. For a great overview of Azure Resource Manager, see [Azure Resource Manager overview](../../azure-resource-manager/management/overview.md).
++
+## Step-by-step: create a Machine Learning Workspace
+We will create an Azure resource group, then deploy a new Azure storage account and a new Machine Learning Studio (classic) Workspace using a Resource Manager template. Once the deployment is complete, we will print out important information about the workspaces that were created (the primary key, the workspaceID, and the URL to the workspace).
+
+### Create an Azure Resource Manager template
+
+A Machine Learning Workspace requires an Azure storage account to store the dataset linked to it.
+The following template uses the name of the resource group to generate the storage account name and the workspace name. It also uses the storage account name as a property when creating the workspace.
+
+```json
+{
+ "contentVersion": "1.0.0.0",
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "variables": {
+ "namePrefix": "[resourceGroup().name]",
+ "location": "[resourceGroup().location]",
+ "mlVersion": "2016-04-01",
+ "stgVersion": "2015-06-15",
+ "storageAccountName": "[concat(variables('namePrefix'),'stg')]",
+ "mlWorkspaceName": "[concat(variables('namePrefix'),'mlwk')]",
+ "mlResourceId": "[resourceId('Microsoft.MachineLearning/workspaces', variables('mlWorkspaceName'))]",
+ "stgResourceId": "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
+ "storageAccountType": "Standard_LRS"
+ },
+ "resources": [
+ {
+ "apiVersion": "[variables('stgVersion')]",
+ "name": "[variables('storageAccountName')]",
+ "type": "Microsoft.Storage/storageAccounts",
+ "location": "[variables('location')]",
+ "properties": {
+ "accountType": "[variables('storageAccountType')]"
+ }
+ },
+ {
+ "apiVersion": "[variables('mlVersion')]",
+ "type": "Microsoft.MachineLearning/workspaces",
+ "name": "[variables('mlWorkspaceName')]",
+ "location": "[variables('location')]",
+ "dependsOn": ["[variables('stgResourceId')]"],
+ "properties": {
+ "UserStorageAccountId": "[variables('stgResourceId')]"
+ }
+ }
+ ],
+ "outputs": {
+ "mlWorkspaceObject": {"type": "object", "value": "[reference(variables('mlResourceId'), variables('mlVersion'))]"},
+ "mlWorkspaceToken": {"type": "string", "value": "[listWorkspaceKeys(variables('mlResourceId'), variables('mlVersion')).primaryToken]"},
+ "mlWorkspaceWorkspaceID": {"type": "string", "value": "[reference(variables('mlResourceId'), variables('mlVersion')).WorkspaceId]"},
+ "mlWorkspaceWorkspaceLink": {"type": "string", "value": "[concat('https://studio.azureml.net/Home/ViewWorkspace/', reference(variables('mlResourceId'), variables('mlVersion')).WorkspaceId)]"}
+ }
+}
+
+```
+Save this template as a file named mlworkspace.json under c:\temp\.
+
+### Deploy the resource group based on the template
+
+* Open PowerShell
+* Install modules for Azure Resource Manager and Azure Service Management
+
+```powershell
+# Install the Azure Resource Manager modules from the PowerShell Gallery (press "A")
+Install-Module Az -Scope CurrentUser
+
+# Install the Azure Service Management modules from the PowerShell Gallery (press "A")
+Install-Module Azure -Scope CurrentUser
+```
+
+ These steps download and install the modules necessary to complete the remaining steps. This only needs to be done once in the environment where you are executing the PowerShell commands.
+
+* Authenticate to Azure
+
+```powershell
+# Authenticate (enter your credentials in the pop-up window)
+Connect-AzAccount
+```
+This step needs to be repeated for each session. Once authenticated, your subscription information should be displayed.
+
+![Azure Account](/articles/marketplace/media/test-drive/azure-subscriptions.png)
+
+Now that we have access to Azure, we can create the resource group.
+
+* Create a resource group
+
+```powershell
+$rg = New-AzResourceGroup -Name "uniquenamerequired523" -Location "South Central US"
+$rg
+```
+
+Verify that the resource group is correctly provisioned. **ProvisioningState** should be "Succeeded."
+The resource group name is used by the template to generate the storage account name. The storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only.
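+
+As an illustrative aside (not part of the original article), you can sanity-check the generated name before deploying. The regex below encodes the documented constraint, and the sample name is the resource group used in this walkthrough:
+
+```python
+import re
+
+def valid_storage_account_name(name: str) -> bool:
+    # Storage account names: 3 to 24 characters, lowercase letters and digits only.
+    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None
+
+# The template appends 'stg' to the resource group name, so validate the result.
+print(valid_storage_account_name("uniquenamerequired523" + "stg"))  # True
+```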
+
+<!--
+ ![Resource Group](./media/deploy-with-resource-manager-template/resource-groupprovisioning.png)
+-->
+
+* Using the resource group deployment, deploy a new Machine Learning Workspace.
+
+```powershell
+# Deploy the template into the resource group. TemplateFile is the location of the JSON template.
+$rgd = New-AzResourceGroupDeployment -Name "demo" -TemplateFile "C:\temp\mlworkspace.json" -ResourceGroupName $rg.ResourceGroupName
+```
+
+Once the deployment is complete, it is straightforward to access properties of the workspace you deployed. For example, you can access the Primary Key Token.
+
+```powershell
+# Access Machine Learning Studio (classic) Workspace Token after its deployment.
+$rgd.Outputs.mlWorkspaceToken.Value
+```
+
+Another way to retrieve the tokens of an existing workspace is to use the Invoke-AzResourceAction command. For example, you can list the primary and secondary tokens of all workspaces.
+
+```powershell
+# List the primary and secondary tokens of all workspaces
+Get-AzResource |? { $_.ResourceType -Like "*MachineLearning/workspaces*"} |ForEach-Object { Invoke-AzResourceAction -ResourceId $_.ResourceId -Action listworkspacekeys -Force}
+```
+After the workspace is provisioned, you can also automate many Machine Learning Studio (classic) tasks using the [PowerShell Module for Machine Learning Studio (classic)](https://aka.ms/amlps).
+
+## Next steps
+
+* Learn more about [authoring Azure Resource Manager Templates](../../azure-resource-manager/templates/syntax.md).
+* Have a look at the [Azure Quickstart Templates Repository](https://github.com/Azure/azure-quickstart-templates).
+* See the [Resource Manager template reference help](/azure/templates/microsoft.machinelearning/allversions).
+
+<!--Link references-->
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-fpga-web-service.md
converted_model.delete()
+ Learn about FPGA and [Azure Machine Learning pricing and costs](https://azure.microsoft.com/pricing/details/machine-learning/).
-+ [Hyperscale hardware: ML at scale on top of Azure + FPGA: Build 2018 (video)](https://channel9.msdn.com/events/Build/2018/BRK3202)
-
-+ [Microsoft FPGA-based configurable cloud (video)](https://channel9.msdn.com/Events/Build/2017/B8063)
++ [Hyperscale hardware: ML at scale on top of Azure + FPGA: Build 2018 (video)](/events/Build/2018/BRK3202)
++ [Project Brainwave for real-time AI](https://www.microsoft.com/research/project/project-brainwave/)
machine-learning How To Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-deployment.md
After the image is successfully built, the system attempts to start a container
Use the info in the [Inspect the Docker log](how-to-troubleshoot-deployment-local.md#dockerlog) article.
+## Container azureml-fe-aci launch fails
+
+When deploying a service to an Azure Container Instance compute target, Azure Machine Learning attempts to create a front-end container named `azureml-fe-aci` for inference requests. If `azureml-fe-aci` crashes, you can see logs by running `az container logs --name MyContainerGroup --resource-group MyResourceGroup --subscription MySubscription --container-name azureml-fe-aci`. Follow the error message in the logs to fix the issue.
+
+The most common failure for `azureml-fe-aci` is that the provided SSL certificate or key is invalid.
+
## Function fails: get_model_path()

Often, in the `init()` function in the scoring script, the [Model.get_model_path()](/python/api/azureml-core/azureml.core.model.model#get-model-path-model-name--version-noneworkspace-none-) function is called to locate a model file or a folder of model files in the container. If the model file or folder cannot be found, the function fails. The easiest way to debug this error is to run the following Python code in the container shell:
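A minimal sketch of that check (the model name `my-model` is a placeholder for your registered model's name):

```python
import logging
from azureml.core.model import Model

# Verbose logging prints the paths the SDK searches while resolving the model.
logging.basicConfig(level=logging.DEBUG)

# Prints the resolved model path, or raises an error if it cannot be found.
print(Model.get_model_path(model_name="my-model"))
```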
Learn more about deployment:
* [How to deploy and where](how-to-deploy-and-where.md)
* [Tutorial: Train & deploy models](tutorial-train-deploy-notebook.md)
-* [How to run and debug experiments locally](./how-to-debug-visual-studio-code.md)
+* [How to run and debug experiments locally](./how-to-debug-visual-studio-code.md)
marketplace Plan Azure Application Offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-azure-application-offer.md
Review the following resources as you plan your Azure application offer for the
- [Azure PowerShell](../azure-resource-manager/managed-applications/powershell-samples.md)
- [Managed application solutions](../azure-resource-manager/managed-applications/sample-projects.md)
-The video [Building Solution Templates, and Managed Applications for Azure Marketplace](https://channel9.msdn.com/Events/Build/2018/BRK3603) gives a comprehensive introduction to the Azure application offer type:
+The video [Building Solution Templates, and Managed Applications for Azure Marketplace](/Events/Build/2018/BRK3603) gives a comprehensive introduction to the Azure application offer type:
- What offer types are available
- What technical assets are required
media-services Migrate V 2 V 3 Migration Scenario Based Publishing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-publishing.md
Major changes to the way content is published in v3 API. The new publishing mode
See the publishing concepts, tutorials, and how-to guides below for specific steps.
+## Will v2 streaming locators continue to work after February 2024?
+
+Streaming locators created with the v2 API will continue to work after the v2 API is turned off. Once the streaming locator data is created in the Media Services backend database, there is no dependency on the v2 REST API for streaming. We will not remove v2-specific records from the database when v2 is turned off in February 2024.
+
+There are some properties of assets and locators created with v2 that cannot be accessed or updated using the new v3 API. For example, v2 exposes an **Asset Files** API that has no equivalent in the v3 API. This is not a problem for most customers, since it is not a widely used feature, and you can still stream old locators and delete them when they are no longer needed.
+
+After migration, you should avoid making any calls to the v2 API to modify streaming locators or assets.
+
## Publishing concepts, tutorials and how to guides

### Concepts
media-services Samples Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/samples-overview.md
Previously updated : 03/24/2021- Last updated : 01/14/2022+
# Media Services v3 samples

[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-This article contains a list of all the samples available for Media Services organized by method and SDK. Samples include .NET, Node.js (TypeScript), Python, Java, and also REST with Postman.
+This article contains a list of all the samples available for Media Services organized by method and SDK. Samples include .NET, Node.js (TypeScript), Python, Java, and also examples using REST with Postman.
## Samples by SDK

You'll find descriptions and links for the samples you may be looking for in each of the tabs.
+## [Node.JS (Typescript)](#tab/node/)
+
+|Sample|Description|
+|||
+|[Create an account from code](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Account/CreateAccount/create-account.ts)|The sample shows how to create a Media Services account and set the primary storage account, in addition to advanced configuration settings including Key Delivery IP allowlist, Managed Identity, storage auth, and bring your own encryption key.|
+|[Create an account with user assigned managed identity code](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Account/CreateAccount/create-account_with_managed_identity.ts)|The sample shows how to create a Media Services account and set the primary storage account, in addition to advanced configuration settings including Key Delivery IP allowlist, user or system assigned Managed Identity, storage auth, and bring your own encryption key.|
+|[Hello World - list assets](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/HelloWorld-ListAssets/list-assets.ts)|Basic example of how to connect and list assets |
+|[Live streaming with Standard Passthrough](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Standard_Passthrough_Live_Event/index.ts)| Standard passthrough live streaming example. **WARNING**: when using live events, make sure to check in the portal that all resources are cleaned up and no longer billing.|
+|[Live streaming with Standard Passthrough with Event Hubs](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Standard_Passthrough_Live_Event_with_EventHub/index.ts)| Demonstrates how to use Event Hubs to subscribe to events on the live streaming channel. Events include encoder connections, disconnections, heartbeat, latency, discontinuity, and drift issues. **WARNING**: when using live events, make sure to check in the portal that all resources are cleaned up and no longer billing.|
+|[Live streaming with Basic Passthrough](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Basic_Passthrough_Live_Event/index.ts)| Shows how to set up the basic passthrough live event if you only need to broadcast a low-cost UGC channel. **WARNING**: when using live events, make sure to check in the portal that all resources are cleaned up and no longer billing.|
+|[Live streaming with 720P Standard encoding](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/720P_Encoding_Live_Event/index.ts)| Use live encoding in the cloud with the 720P HD adaptive bitrate encoding preset. **WARNING**: when using live events, make sure to check in the portal that all resources are cleaned up and no longer billing.|
+|[Live streaming with 1080P encoding](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/720P_Encoding_Live_Event/index.ts)| Use live encoding in the cloud with the 1080P HD adaptive bitrate encoding preset. **WARNING**: when using live events, make sure to check in the portal that all resources are cleaned up and no longer billing.|
+|[Upload and stream HLS and DASH](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/StreamFilesSample/index.ts)| Basic example for uploading a local file or encoding from a source URL. Sample shows how to use storage SDK to download content, and shows how to stream to a player |
+|[Upload and stream HLS and DASH with PlayReady and Widevine DRM](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/StreamFilesWithDRMSample/index.ts)| Demonstrates how to encode and stream using Widevine and PlayReady DRM |
+|[Upload and use AI to index videos and audio](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoAnalytics/index.ts)| Example of using the Video and Audio Analyzer presets to generate metadata and insights from a video or audio file |
+|[Create Transform, use Job preset overrides (v2-to-v3 API migration)](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/CreateTransform_Job_PresetOverride/index.ts)| If you want to submit custom preset jobs to a single queue, this base sample shows how to create a (mostly) empty Transform and then use the preset override property on each Job to submit custom presets to the same Transform. This lets you treat the v3 AMS API much more like the legacy v2 API job queue.|
+|[Basic Encoding with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264/index.ts)| Shows how to use the standard encoder to encode a source file into H264 format with AAC audio and PNG thumbnails |
+|[Basic Encoding with H264 with Event Hubs/Event Grid](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264%20_with_EventHub/index.ts)| Shows how to use the standard encoder and receive and process Event Grid events from Media Services through Event Hubs. To use this sample, first set up an Event Grid subscription that pushes events into an event hub using the Azure portal or CLI. |
+|[Sprite Thumbnail (VTT) in JPG format](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_Sprite_Thumbnail/index.ts)| Shows how to generate a VTT Sprite Thumbnail in JPG format and how to set the columns and number of images. This also shows a speed encoding mode in H264 for a 720P layer. |
+|[Content Aware encoding with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_ContentAware/index.ts)| Example of using the standard encoder with Content Aware encoding to automatically generate the best quality adaptive bitrate streaming set based on an analysis of the source file's contents|
+|[Content Aware encoding Constrained with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_ContentAware_Constrained/index.ts)| Demonstrates how to control the output settings of the Content Aware encoding preset to make the outputs more deterministic to your encoding needs and costs. This will still auto-generate the best quality adaptive bitrate streaming set based on an analysis of the source file's contents, but constrain the output to your desired ranges.|
+|[Overlay Image](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_OverlayImage/index.ts)| Shows how to upload an image file and overlay on top of video with output to MP4 container|
+|[Rotate Video](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_Rotate90degrees/index.ts)| Shows how to use the rotation filter to rotate a video by 90 degrees. |
+|[Output to Transport Stream format](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_To_TransportStream/index.ts)| Shows how to use the standard encoder to encode a source file and output to MPEG Transport Stream format using H264 format with AAC audio and PNG thumbnail|
+|[Basic Encoding with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC/index.ts)| Shows how to use the standard encoder to encode a source file into HEVC format with AAC audio and PNG thumbnails |
+|[Content Aware encoding with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC_ContentAware/index.ts)| Example of using the standard encoder with Content Aware encoding to automatically generate the best quality HEVC (H.265) adaptive bitrate streaming set based on an analysis of the source file's contents|
+|[Content Aware encoding Constrained with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC_ContentAware_Constrained/index.ts)| Demonstrates how to control the output settings of the Content Aware encoding preset to make the outputs more deterministic to your encoding needs and costs. This will still auto-generate the best quality adaptive bitrate streaming set based on an analysis of the source file's contents, but constrain the output to your desired ranges.|
+|[Bulk encoding from a remote Azure storage account using SAS URLs](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_Bulk_Remote_Storage_Account_SAS/index.ts)| This sample shows how you can point to a remote Azure Storage account using a SAS URL and submit batches of encoding jobs to your account, monitor progress, and continue. You can modify the file extension types to scan for (for example, .mp4 or .mov) and control the batch size submitted. You can also modify the Transform used in the batch operation. This sample demonstrates the use of SAS URLs as ingest sources for a Job input. Make sure to configure the REMOTESTORAGEACCOUNTSAS environment variable in the .env file for this sample to work.|
+| [Video Analytics](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoAnalytics/index.ts)|This sample illustrates how to create a video and audio analyzer transform, upload a video file to an input asset, submit a job with the transform and download the results for verification.|
+| [Audio Analytics basic with per-job language override](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/AudioAnalytics/index.ts)|This sample illustrates how to create an audio analyzer transform using the basic mode. It also shows how you can override the preset language on a per-job basis to avoid creating a transform for every language, and how to upload a media file to an input asset, submit a job with the transform, and download the results for verification.|
+
## [.NET](#tab/net/)

| Sample | Description |
You'll find description and links to the samples you may be looking for in each
| [HighAvailabilityEncodingStreaming](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/HighAvailabilityEncodingStreaming/) | This sample provides guidance and best practices for a production system using on-demand encoding or analytics. Readers should start with the companion article [High Availability with Media Services and VOD](architecture-high-availability-encoding-concept.md). There is a separate solution file provided for the [HighAvailabilityEncodingStreaming](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/HighAvailabilityEncodingStreaming/README.md) sample. | | [Azure Functions for Media Services](https://github.com/xpouyat/media-services-v3-dotnet-core-functions-integration/tree/main/Functions)|This project contains examples of Azure Functions that connect to Azure Media Services v3 for video processing. You can use Visual Studio 2019 or Visual Studio Code to develop and run the functions. An Azure Resource Manager (ARM) template and a GitHub Actions workflow are provided for the deployment of the Function resources and to enable continuous deployment.|
-## [Node.JS](#tab/node/)
-
-|Sample|Description|
-|||
-|[Create an account from code](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Account/create-account.ts)|The sample shows how to create a Media Services account and set the primary storage account, in addition to advanced configuration settings including Key Delivery IP allowlist, Managed Identity, storage auth, and bring your own encryption key.|
-|[Hello World - list assets](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/HelloWorld-ListAssets/list-assets.ts)|Basic example of how to connect and list assets |
-|[Live streaming with Standard Passthrough](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Standard_Passthrough_Live_Event/index.ts)| Standard passthrough live streaming example. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
-|[Live streaming with Standard Passthrough with Event Hubs](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Standard_Passthrough_Live_Event_with_EventHub/index.ts)| Demonstrates how to use Event Hubs to subscribe to events on the live streaming channel. Events include encoder connections, disconnections, heartbeat, latency, discontinuity, and drift issues. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
-|[Live streaming with Basic Passthrough](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/Basic_Passthrough_Live_Event/index.ts)| Shows how to set up the basic passthrough live event if you only need to broadcast a low-cost UGC channel. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
-|[Live streaming with 720P Standard encoding](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/720P_Encoding_Live_Event/index.ts)| Use live encoding in the cloud with the 720P HD adaptive bitrate encoding preset. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
-|[Live streaming with 1080P encoding](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/Live/720P_Encoding_Live_Event/index.ts)| Use live encoding in the cloud with the 1080P HD adaptive bitrate encoding preset. **WARNING**, make sure to check that all resources are cleaned up and no longer billing in portal when using live|
-|[Upload and stream HLS and DASH](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/StreamFilesSample/index.ts)| Basic example for uploading a local file or encoding from a source URL. Sample shows how to use storage SDK to download content, and shows how to stream to a player |
-|[Upload and stream HLS and DASH with PlayReady and Widevine DRM](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/StreamFilesWithDRMSample/index.ts)| Demonstrates how to encode and stream using Widevine and PlayReady DRM |
-|[Upload and use AI to index videos and audio](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoAnalytics/index.ts)| Example of using the Video and Audio Analyzer presets to generate metadata and insights from a video or audio file |
-|[Create Transform, use Job preset overrides (v2-to-v3 API migration)](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/CreateTransform_Job_PresetOverride/index.ts)| If you need a workflow where you desire to submit custom preset jobs to a single queue, you can use this base sample that shows how to create a (mostly) empty Transform, and then you can use the preset override property on the Job to submit custom presets to the same transform. This allows you to treat the v3 AMS API a lot more like the legacy v2 API Job queue if you desire.|
-|[Basic Encoding with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264/index.ts)| Shows how to use the standard encoder to encode a source file into H264 format with AAC audio and PNG thumbnails |
-|[Basic Encoding with H264 with Event Hubs/Event Grid](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264%20_with_EventHub/index.ts)| Shows how to use the standard encoder and receive and process Event Grid events from Media Services through an Event Hubs. First set up an Event Grid subscription that pushes events into an Event Hubs using the Azure portal or CLI to use this sample. |
-|[Content Aware encoding with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_ContentAware/index.ts)| Example of using the standard encoder with Content Aware encoding to automatically generate the best quality adaptive bitrate streaming set based on an analysis of the source files contents|
-|[Content Aware encoding Constrained with H264](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_H264_ContentAware_Constrained/index.ts)| Demonstrates how to control the output settings of the Content Aware encoding preset to make the outputs more deterministic to your encoding needs and costs. This will still auto generate the best quality adaptive bitrate streaming set based on an analysis of the source files contents, but constrain the output to your desired ranges.|
-|[Basic Encoding with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC/index.ts)| Shows how to use the standard encoder to encode a source file into HEVC format with AAC audio and PNG thumbnails |
-|[Content Aware encoding with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC_ContentAware/index.ts)| Example of using the standard encoder with Content Aware encoding to automatically generate the best quality HEVC (H.265) adaptive bitrate streaming set based on an analysis of the source files contents|
-|[Content Aware encoding Constrained with HEVC](https://github.com/Azure-Samples/media-services-v3-node-tutorials/blob/main/VideoEncoding/Encoding_HEVC_ContentAware_Constrained/index.ts)| Demonstrates how to control the output settings of the Content Aware encoding preset to make the outputs more deterministic to your encoding needs and costs. This will still auto generate the best quality adaptive bitrate streaming set based on an analysis of the source files contents, but constrain the output to your desired ranges.|
-
## [Python](#tab/python)

|Sample|Description|
You'll find description and links to the samples you may be looking for in each
## REST Postman collection
-The [REST Postman](https://github.com/Azure-Samples/media-services-v3-rest-postman) samples include a Postman environment and collection for you to import into the Postman client. The Postman collection samples are recommended for getting familiar with the API structure and how it works with Azure Resource Management (ARM), and the structure of calls from the client SDKs.
+The [REST Postman](https://github.com/Azure-Samples/media-services-v3-rest-postman) samples include a Postman environment and collection for you to import into the Postman client. The Postman collection samples are recommended for getting familiar with the API structure and how it works with Azure Resource Management (ARM), and the structure of calls from the client SDKs.
[!INCLUDE [warning-rest-api-retry-policy.md](./includes/warning-rest-api-retry-policy.md)]
media-services Media Services Protect With Aes128 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-protect-with-aes128.md
To take advantage of dynamic encryption, you need to have an asset that contains
This article is useful to developers who work on applications that deliver protected media. The article shows you how to configure the key delivery service with authorization policies so that only authorized clients can receive encryption keys. It also shows how to use dynamic encryption. For information on how to encrypt content with the Advanced Encryption Standard (AES) for delivery to Safari on macOS, see [this blog post](https://azure.microsoft.com/blog/how-to-make-token-authorized-aes-encrypted-hls-stream-working-in-safari/).
-For an overview of how to protect your media content with AES encryption, see [this video](https://channel9.msdn.com/Shows/Azure-Friday/Azure-Media-Services-Protecting-your-Media-Content-with-AES-Encryption).
-
## AES-128 dynamic encryption and key delivery service workflow
media-services Media Services Workflow Designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-workflow-designer.md
Day 1 video covers:
* Basic Workflows – "Hello World"
* Creating multiple output MP4 files for use with Azure Media Services streaming
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Azure-Premium-Encoder-Workflow-Designer-Training-Videos-Day-1/player]
->
->
### Day 2
Day 2 video covers:
Day 2 video covers:
* Workflows with advanced logic
* Graph stages
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Azure-Premium-Encoder-Workflow-Designer-Training-Videos-Day-2/player]
->
->
-
### Day 3
Day 3 video covers:
Day 3 video covers:
* Restrictions with the current Encoder
* Q&A
-> [!VIDEO https://channel9.msdn.com/Blogs/Azure/Azure-Premium-Encoder-Workflow-Designer-Training-Videos-Day-3/player]
->
->
-
## Need help?
You can open a support ticket by navigating to [New support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest)
mysql Videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/videos.md
This page provides video content for learning about Azure Database for MySQL.
## Overview: Azure Database for PostgreSQL and MySQL
->[!VIDEO https://channel9.msdn.com/Events/Connect/2017/T147/player]
-[Open in Channel 9](https://channel9.msdn.com/Events/Connect/2017/T147)
+>[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T147/player]
+[Open in Channel 9](/Events/Connect/2017/T147)
Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and capabilities of a fully managed service—so you can focus on your apps instead of having to manage a database. Tune in to get a quick overview of the advantages of using the service, and see some of the capabilities in action.
Azure Database for PostgreSQL and Azure Database for MySQL are managed services
## Deep dive on managed service capabilities for MySQL and PostgreSQL
->[!VIDEO https://channel9.msdn.com/Events/Connect/2017/T148/player]
-[Open in Channel 9](https://channel9.msdn.com/Events/Connect/2017/T148)
+>[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T148/player]
+[Open in Channel 9](/Events/Connect/2017/T148)
Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and the capabilities of a fully managed service. Tune in to get a deep dive on how these services work—how we ensure high availability and fast scaling (within seconds), so you can meet your customers' needs. You'll also learn about some of the underlying investments in security and worldwide availability.

## How to get started with the new Azure Database for MySQL service
->[!VIDEO https://channel9.msdn.com/Events/Build/2017/B8045/player]
-[Open in Channel 9](https://channel9.msdn.com/events/Build/2017/B8045)
In this video from the May 2017 Microsoft //Build conference, learn about Microsoft's managed MySQL offering in Azure. The video walks through Microsoft's strategy for supporting open-source database systems in Azure. The video discusses what it means to you as a developer to develop or deploy applications that use MySQL in Azure. This video shows an overview of the architecture of the service, and demonstrates how Azure Database for MySQL is integrated with other Azure services such as Web Apps.
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[BT](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)|[Network Transformation Consulting: 1-Hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/bt-americas-inc.network-transformation-consulting);[BT Cloud Connect Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-001?tab=Overview)|[BT Cloud Connect Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-003?tab=Overview)|[BT Cloud Connect Azure VWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-002?tab=Overview)||| |[BUI](https://www.bui.co.za/)|[a2zManaged Cloud Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.a2zmanagement?tab=Overview)||[BUI Managed Azure vWAN using VMware SD-WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_managed_vwan?tab=Overview)||[BUI CyberSoC](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.buicybersoc_msp?tab=Overview)| |[Coevolve](https://www.coevolve.com/services/azure-networking-services/)|||[Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.coevolve-managed-azure-vwan?tab=Overview);[Managed VMware SD-WAN Virtual Edge](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.managed-vmware-sdwan-edge?tab=Overview)|||
-|[Colt](https://www.colt.net/why-colt/partner-hub/microsoft/)|[Network optimization on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)|||||
+|[Colt](https://cloud.telekom.de/de/infrastruktur/microsoft-azure/azure-networking)|[Network optimization on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)|||||
+|[Deutsche Telekom](https://cloud.telekom.de/de/infrastruktur/microsoft-azure/azure-networking)|[Network connectivity to Azure: 2-Hr assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_netzwerkoptimierung_2_stunden?search=telekom&page=1); [Cloud Transformation with Azure: 1-Day Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_cloudtransformation_1_tag?search=telekom&page=1)|[Managed ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_intraselect_cloud_connect_implementation?search=telekom&page=1)|||[Azure Networking and Security: 1-Day Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_netzwerke_und_sicherheit_1_tag?search=telekom&page=1); [Intraselect SecureConnect: 1-Week Implementation](https://appsource.microsoft.com/de-de/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_intraselect_secure_connect_implementation?tab=Overview)|
|[Equinix](https://www.equinix.com/)|Cloud Optimized WAN Workshop|[ExpressRoute Connectivity Strategy Workshop](https://www.equinix.se/resources/data-sheets/expressroute-strategy-workshop); [Equinix Cloud Exchange Fabric](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.equinix_ecx_fabric?tab=Overview)|||| |[Federated Wireless](https://www.federatedwireless.com/caas/)||||[Federated Wireless Connectivity-as-a-Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/federatedwireless1580839623708.fw_caas?tab=Overview)| |[HCL](https://www.hcltech.com/)|[HCL Cloud Network Transformation- One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.clo?tab=Overview)|[1-Hour Briefing of HCL Azure ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazureexpressroute?tab=Overview)|[HCL Azure Virtual WAN Services - One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazurevitualwan?search=vWAN&page=1)|[HCL Azure Private LTE offering - One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclazureprivatelteoffering)|
Use the links in this section for more information about managed cloud networkin
|[Zertia](https://zertia.es/)||[Express Route – Intercloud connectivity](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-inter-conect-of103?tab=Overview)|[Enterprise Connectivity Suite - Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-vw-suite-of101?tab=Overview); [Manage Virtual WAN – SD-WAN Fortinet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-fortinet-of101?tab=Overview); [Manage Virtual WAN – SD-WAN Cisco Meraki](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-cisco-of101?tab=Overview); [Manage Virtual WAN – SD-WAN Citrix](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-citrix-of101?tab=Overview);|||

Azure Marketplace offers for Managed ExpressRoute, Virtual WAN, Security Services and Private Edge Zone Services from the following Azure Networking MSP Partners are on our roadmap:
-[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/cognizant-digital-systems-technology/cloud-enablement-services); [Deutsche Telekom](https://www.telekom.com/en/media/media-information/archive/deutsche-telekom-offers-managed-network-services-for-microsoft-azure-598406); [InterCloud](https://intercloud.com/partners/microsoft-azure/); [KINX](https://www.kinx.net/service/cloud/?lang=en); [Netfosys](https://www.netfosys.com/services/azure-networking-services/); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute);
+[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/cognizant-digital-systems-technology/cloud-enablement-services); [InterCloud](https://intercloud.com/partners/microsoft-azure/); [KINX](https://www.kinx.net/service/cloud/?lang=en); [Netfosys](https://www.netfosys.com/services/azure-networking-services/); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute);
## <a name="expressroute"></a>ExpressRoute partners
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/nodejs-use-node-modules-azure-apps.md
Now that you understand how to use Node.js modules with Azure, learn how to [spe
For more information, see the [Node.js Developer Center](/azure/developer/javascript/).

[specify the Node.js version]: ./app-service/overview.md
-[How to use the Azure Command-Line Interface for Mac and Linux]:cli-install-nodejs.md
-[Custom Website Deployment Scripts with Kudu]: https://channel9.msdn.com/Shows/Azure-Friday/Custom-Web-Site-Deployment-Scripts-with-Kudu-with-David-Ebbo
+[How to use the Azure Command-Line Interface for Mac and Linux]:cli-install-nodejs.md
object-anchors Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/object-anchors/best-practices.md
We recommend trying some of these steps to get the best results.
## Detection
-> [!VIDEO https://channel9.msdn.com/Shows/Docs-Mixed-Reality/Azure-Object-Anchors-Detection-and-Alignment-Best-Practices/player]
+> [!VIDEO https://docs.microsoft.com/Shows/Docs-Mixed-Reality/Azure-Object-Anchors-Detection-and-Alignment-Best-Practices/player]
- The provided runtime SDK requires a user-provided search region to search for and detect the physical object(s). The search region could be a bounding box, a sphere, a view frustum, or any combination of them. To avoid a false detection,
open-datasets Dataset Boston Safety https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-boston-safety.md
Sample not available for this platform/package combination.
```
# This is a package in preview.
-# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/en-us/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
+# You need to pip install azureml-opendatasets in Databricks cluster. https://docs.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
from azureml.opendatasets import BostonSafety
from datetime import datetime
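
# Illustrative continuation (an assumption, not part of the original sample):
# load one year of data and convert it to a pandas DataFrame.
start_date = datetime(2015, 1, 1)
end_date = datetime(2016, 1, 1)
safety = BostonSafety(start_date=start_date, end_date=end_date)
safety_df = safety.to_pandas_dataframe()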
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-high-availability.md
Previously updated : 11/15/2021 Last updated : 01/12/2022
# High availability in Azure Database for PostgreSQL – Hyperscale (Citus)
To take advantage of HA on the coordinator node, database applications need to
detect and retry dropped connections and failed transactions. The newly promoted coordinator will be accessible with the same connection string.
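
As an illustration (a sketch under assumed names, not from the article), a retry loop in Python with `psycopg2` might look like this; the connection string, retry count, and delay are placeholders:

```python
import time
import psycopg2

# Placeholder connection string; it stays the same after a failover.
DSN = "host=mygroup-c.postgres.database.azure.com dbname=citus user=citus sslmode=require"

def run_with_retry(sql, retries=5, delay=2.0):
    # Retry dropped connections and failed transactions, as recommended above.
    for attempt in range(retries):
        try:
            with psycopg2.connect(DSN) as conn:
                with conn.cursor() as cur:
                    cur.execute(sql)
                    return cur.fetchall()
        except psycopg2.OperationalError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)

print(run_with_retry("SELECT 1"))
```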
+## High availability states
+ Recovery can be broken into three stages: detection, failover, and full
-recovery. Hyperscale (Citus) runs periodic health checks on every node, and after four
-failed checks it determines that a node is down. Hyperscale (Citus) then promotes a
-standby to primary node status (failover), and provisions a new standby-to-be.
-Streaming replication begins, bringing the new node up-to-date. When all data
-has been replicated, the node has reached full recovery.
+recovery. Hyperscale (Citus) runs periodic health checks on every node, and
+after four failed checks it determines that a node is down. Hyperscale (Citus)
+then promotes a standby to primary node status (failover), and provisions a new
+standby-to-be. Streaming replication begins, bringing the new node up to date.
+When all data has been replicated, the node has reached full recovery.
+
+Hyperscale (Citus) displays its failover progress state on the Overview page
+for server groups in the Azure portal.
+
+* **Healthy**: HA is enabled and the node is fully replicated to its standby.
+* **Failover in progress**: A failure was detected on the primary node and
+ a failover to standby was initiated. This state will transition into
+ **Creating standby** once failover to the standby node is completed, and the
+ standby becomes the new primary.
+* **Creating standby**: The previous standby was promoted to primary, and a
+  new standby is being created for it. When the new standby is ready, this
+  state will transition into **Replication in progress**.
+* **Replication in progress**: The new standby node is provisioned and data
+ synchronization is in progress. Once all data is replicated to the new
+ standby, synchronous replication will be enabled between the primary and
+ standby nodes, and the nodes' state will transition back to **Healthy**.
+* **No**: HA is not enabled on this node.
-### Next steps
+## Next steps
- Learn how to [enable high availability](howto-high-availability.md) in a Hyperscale (Citus) server
postgresql Concepts Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-server-group.md
+
+ Title: Server group - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: What is a server group in Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 01/13/2022++
+# Hyperscale (Citus) server group
+
+## Nodes
+
+The Azure Database for PostgreSQL - Hyperscale (Citus) deployment option allows
+PostgreSQL servers (called nodes) to coordinate with one another in a "server
+group." The server group's nodes collectively hold more data and use more CPU
+cores than would be possible on a single server. The architecture also allows
+the database to scale by adding more nodes to the server group.
+
+To learn more about the types of Hyperscale (Citus) nodes, see [nodes and
+tables](concepts-nodes.md).
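+
+As a concrete illustration (a hedged sketch with placeholder names, not from this article), an application connects only to the coordinator, and tables can be sharded across worker nodes with the standard Citus function `create_distributed_table`:
+
+```python
+import psycopg2
+
+# Placeholder coordinator connection string for a Hyperscale (Citus) server group.
+conn = psycopg2.connect(
+    "host=mygroup-c.postgres.database.azure.com dbname=citus user=citus sslmode=require"
+)
+conn.autocommit = True
+
+with conn.cursor() as cur:
+    # A hypothetical table, sharded across worker nodes by customer_id.
+    cur.execute("CREATE TABLE IF NOT EXISTS events (customer_id bigint, payload jsonb)")
+    cur.execute("SELECT create_distributed_table('events', 'customer_id')")
+```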
+
+### Node status
+
+Hyperscale (Citus) displays the status of nodes in a server group on the
+Overview page in the Azure portal. Each node can have one of these status
+values:
+
+* **Provisioning**: Initial node provisioning, either as a part of its server
+ group provisioning, or when a worker node is added.
+* **Available**: Node is in a healthy state.
+* **Need attention**: An issue is detected on the node. The node is attempting
+ to self-heal. If self-healing fails, an issue gets put in the queue for our
+ engineers to investigate.
+* **Dropping**: Server group deletion started.
+* **Disabled**: The server group's Azure subscription is in a Disabled
+  state. For more information about subscription states, see [this
+  page](../../cost-management-billing/manage/subscription-states.md).
+
+## Tiers
+
+The basic tier in Azure Database for PostgreSQL - Hyperscale (Citus) is a
+simple way to create a small server group that you can scale later. While
+server groups in the standard tier have a coordinator node and at least two
+worker nodes, the basic tier runs everything in a single database node.
+
+Other than using fewer nodes, the basic tier has all the features of the
+standard tier. Like the standard tier, it supports high availability, read
+replicas, and columnar table storage, among other features.
+
+### Choosing basic vs standard tier
+
+The basic tier can be an economical and convenient deployment option for
+initial development, testing, and continuous integration. It uses a single
+database node and presents the same SQL API as the standard tier. You can test
+applications with the basic tier and later [graduate to the standard
+tier](howto-scale-grow.md#add-worker-nodes) with confidence that the
+interface remains the same.
+
+The basic tier is also appropriate for smaller workloads in production. There
+is room to scale vertically *within* the basic tier by increasing the number of
+server vCores.
+
+When greater scale is required right away, use the standard tier. Its smallest
+allowed server group has one coordinator node and two workers. You can choose
+to use more nodes based on your use-case, as described in our [initial
+sizing](howto-scale-initial.md) how-to.
+
+## Next steps
+
+* Learn to [provision the basic tier](quickstart-create-basic-tier.md)
+* When you're ready, see [how to graduate](howto-scale-grow.md#add-worker-nodes) from the basic tier to the standard tier
+* The [columnar storage](concepts-columnar.md) option is available in both the basic and standard tier
postgresql Videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/videos.md
This page provides video content for learning about Azure Database for PostgreSQ
## Overview: Azure Database for PostgreSQL and MySQL
->[!VIDEO https://channel9.msdn.com/Events/Connect/2017/T147/player]
-[Open in Channel 9](https://channel9.msdn.com/Events/Connect/2017/T147)
+>[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T147/player]
+[Open in Channel 9](/Events/Connect/2017/T147)
Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and capabilities of a fully managed service—so you can focus on your apps instead of having to manage a database. Tune in to get a quick overview of the advantages of using the service, and see some of the capabilities in action.
Azure Database for PostgreSQL and Azure Database for MySQL are managed services
## Deep dive on managed service capabilities for MySQL and PostgreSQL
->[!VIDEO https://channel9.msdn.com/Events/Connect/2017/T148/player]
-[Open in Channel 9](https://channel9.msdn.com/Events/Connect/2017/T148)
+>[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T148/player]
+[Open in Channel 9](/Events/Connect/2017/T148)
Azure Database for PostgreSQL and Azure Database for MySQL bring together community edition database engines and the capabilities of a fully managed service. Tune in to get a deep dive on how these services work—how we ensure high availability and fast scaling (within seconds), so you can meet your customers' needs. You'll also learn about some of the underlying investments in security and worldwide availability.

## Develop an intelligent analytics app with PostgreSQL
->[!VIDEO https://channel9.msdn.com/Events/Connect/2017/T149/player]
-[Open in Channel 9](https://channel9.msdn.com/Events/Connect/2017/T149)
+>[!VIDEO https://docs.microsoft.com/Events/Connect/2017/T149/player]
+[Open in Channel 9](/Events/Connect/2017/T149)
Azure Database for PostgreSQL brings together the community edition database engine and the capabilities of a fully managed service—so you can focus on your apps instead of having to manage a database. Tune in to see in action how easy it is to create new experiences like adding Cognitive Services to your apps by virtue of being on Azure.

## How to get started with the new Azure Database for PostgreSQL service
->[!VIDEO https://channel9.msdn.com/Events/Build/2017/B8046/player]
-[Open in Channel 9](https://channel9.msdn.com/events/Build/2017/B8046)
In this video from the 2017 Microsoft //Build conference, learn from two early-adopting customers how they've used the Azure Database for PostgreSQL service to innovate faster. Learn how they migrated to the service and about their next steps in application development. The video walks through some of the key service features and discusses how you as a developer can migrate your existing applications or develop new applications that use this managed PostgreSQL service in Azure.
purview Abap Functions Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/abap-functions-deployment-guide.md
Last updated 12/20/2021
# SAP ABAP function module deployment guide
-When you scan [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md) sources in Azure Purview, you need to create the dependent ABAP function module in your SAP server. Purview invokes this function module to extract the metadata from your SAP system during scan.
+When you scan [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md) sources in Azure Purview, you need to create the dependent ABAP function module in your SAP server. Azure Purview invokes this function module to extract the metadata from your SAP system during scan.
This document details the steps required to deploy this module.
## Prerequisites
## Prerequisites
-Download the SAP ABAP function module source code from Purview Studio. After you register a source for [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md), you can find a download link on top as follows.
+Download the SAP ABAP function module source code from Azure Purview Studio. After you register a source for [SAP ECC](register-scan-sapecc-source.md) or [SAP S/4HANA](register-scan-saps4hana-source.md), you can find a download link on top as follows.
## Deployment of the Module
When the module has been created, specify the following information:
3. Navigate to the **Source code** tab. There are two ways to deploy code for the function:
- a. From the main menu, upload the text file you downloaded from Purview Studio as described in [Prerequisites](#prerequisites). To do so, select **Utilities**, **More Utilities**, then **Upload/Download**, then **Upload**.
+ a. From the main menu, upload the text file you downloaded from Azure Purview Studio as described in [Prerequisites](#prerequisites). To do so, select **Utilities**, **More Utilities**, then **Upload/Download**, then **Upload**.
   b. Alternatively, open the file, copy its content, and paste it into the **Source code** area.
purview Apply Classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/apply-classifications.md
This article discusses how to apply classifications on assets.
## Introduction
-Classifications can be system or custom types. System classifications are present in Purview by default. Custom classifications can be created based on a regular expression pattern. Classifications can be applied to assets either automatically or manually.
+Classifications can be system or custom types. System classifications are present in Azure Purview by default. Custom classifications can be created based on a regular expression pattern. Classifications can be applied to assets either automatically or manually.
This document explains how to apply classifications to your data.
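
To make the regular-expression idea concrete, here is an illustrative check in Python (the pattern and sample values are invented for this example; they are not a built-in classification):

```python
import re

# Hypothetical custom classification pattern for an internal employee ID.
EMPLOYEE_ID = re.compile(r"\bEMP-\d{6}\b")

for value in ["EMP-004217", "not-an-id", "EMP-99"]:
    matched = bool(EMPLOYEE_ID.search(value))
    print(f"{value!r}: {'classified' if matched else 'no match'}")
```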
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/asset-insights.md
Title: Asset insights on your data in Azure Purview
-description: This how-to guide describes how to view and use Purview Insights asset reporting on your data.
+description: This how-to guide describes how to view and use Azure Purview Insights asset reporting on your data.
Last updated 09/27/2021
# Asset insights on your data in Azure Purview
-This how-to guide describes how to access, view, and filter Purview Asset insight reports for your data.
+This how-to guide describes how to access, view, and filter Azure Purview Asset insight reports for your data.
> [!IMPORTANT] > Azure Purview Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
This how-to guide describes how to access, view, and filter Purview Asset insigh
In this how-to guide, you'll learn how to:

> [!div class="checklist"]
-> * View insights from your Purview account.
+> * View insights from your Azure Purview account.
> * Get a bird's eye view of your data.
> * Drill down for more asset count details.

## Prerequisites
-Before getting started with Purview insights, make sure that you've completed the following steps:
+Before getting started with Azure Purview insights, make sure that you've completed the following steps:
* Set up your Azure resources and populate the account with data.
Before getting started with Purview insights, make sure that you've completed th
For more information, see [Manage data sources in Azure Purview](manage-data-sources.md).
-## Use Purview Asset Insights
+## Use Azure Purview Asset Insights
In Azure Purview, you can register and scan source types. Once the scan is complete, you can view the asset distribution in Asset Insights, which tells you the state of your data estate by classification and resource sets. It also tells you if there is any change in data size.
In Azure Purview, you can register and scan source types. Once the scan is compl
1. Navigate to your Azure Purview resource in the Azure portal.
-1. On the **Overview** page, in the **Get Started** section, select the **Open Purview Studio** tile.
+1. On the **Overview** page, in the **Get Started** section, select the **Open Azure Purview Studio** tile.
- :::image type="content" source="./media/asset-insights/portal-access.png" alt-text="Launch Purview from the Azure portal":::
+ :::image type="content" source="./media/asset-insights/portal-access.png" alt-text="Launch Azure Purview from the Azure portal":::
-1. On the Purview **Home** page, select **Insights** on the left menu.
+1. On the Azure Purview **Home** page, select **Insights** on the left menu.
:::image type="content" source="./media/asset-insights/view-insights.png" alt-text="View your insights in the Azure portal":::
-1. In the **Insights** area, select **Assets** to display the Purview **Asset insights** report.
+1. In the **Insights** area, select **Assets** to display the Azure Purview **Asset insights** report.
### View Asset Insights
purview Catalog Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-asset-details.md
Last updated 09/27/2021
-# View, edit and delete assets in Purview catalog
+# View, edit, and delete assets in Azure Purview catalog
This article discusses how you can view your assets and their relevant details. It also describes how you can edit and delete assets from your catalog. ## Prerequisites - Set up your data sources and scan the assets into your catalog.
-- *Or* Use the Purview Atlas APIs to ingest assets into the catalog.
+- *Or* Use the Azure Purview Atlas APIs to ingest assets into the catalog.
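If you take the API route, a minimal sketch of ingesting one asset through the Atlas V2 entity endpoint that Azure Purview exposes might look like the following. The account name, asset names, and entity type shown are placeholder assumptions, not values from this article:

```azurecli
# Acquire a bearer token for the Azure Purview resource.
token=$(az account get-access-token --resource https://purview.azure.net --query accessToken -o tsv)

# Create (or update) a single entity in the catalog via the Atlas V2 API.
# <account-name> and the qualifiedName below are placeholders.
curl -X POST "https://<account-name>.purview.azure.com/catalog/api/atlas/v2/entity" \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  -d '{
        "entity": {
          "typeName": "azure_blob_path",
          "attributes": {
            "qualifiedName": "https://<storage-account>.blob.core.windows.net/<container>/<path>",
            "name": "<asset-name>"
          }
        }
      }'
```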
## Viewing asset details
-You can discover your assets in Purview by either:
+You can discover your assets in Azure Purview by either:
- [Browsing the Azure Purview Data catalog](how-to-browse-catalog.md) - [Searching the Azure Purview Data Catalog](how-to-search-catalog.md)
If you edit an asset by adding a description, asset level classification, glossa
If you make some column level updates, like adding a description, column level classification, or glossary term, then subsequent scans will also update the asset schema (new columns and classifications will be detected by the scanner in subsequent scan runs).
-Even on edited assets, after a scan Azure Purview will reflect the truth of the source system. For example: if you edit a column and it's deleted from the source, it will be deleted from your asset in Purview.
+Even on edited assets, after a scan Azure Purview will reflect the truth of the source system. For example, if you edit a column and it's deleted from the source, it will be deleted from your asset in Azure Purview.
>[!NOTE] > If you update the **name or data type of a column** in an Azure Purview asset, later scans **will not** update the asset schema. New columns and classifications **will not** be detected.
You can delete an asset by selecting the delete icon under the name of the asset
### Delete behavior explained
-Any asset you delete using the delete button is permanently deleted in Azure Purview. However, if you run a **full scan** on the source from which the asset was ingested into the catalog, then the asset is reingested and you can discover it using the Purview catalog.
+Any asset you delete using the delete button is permanently deleted in Azure Purview. However, if you run a **full scan** on the source from which the asset was ingested into the catalog, then the asset is reingested and you can discover it using the Azure Purview catalog.
-If you have a scheduled scan (weekly or monthly) on the source, the **deleted asset will not get re-ingested** into the catalog unless the asset is modified by an end user since the previous run of the scan. For example, if a SQL table was deleted from Purview, but after the table was deleted a user added a new column to the table in SQL, at the next scan the asset will be rescanned and ingested into the catalog.
+If you have a scheduled scan (weekly or monthly) on the source, the **deleted asset will not get re-ingested** into the catalog unless the asset has been modified by an end user since the previous run of the scan. For example, if a SQL table was deleted from Azure Purview, but after the table was deleted a user added a new column to the table in SQL, at the next scan the asset will be rescanned and ingested into the catalog.
-If you delete an asset, only that asset is deleted. Purview does not currently support cascaded deletes. For example, if you delete a storage account asset in your catalog - the containers, folders and files within them are not deleted.
+If you delete an asset, only that asset is deleted. Azure Purview does not currently support cascaded deletes. For example, if you delete a storage account asset in your catalog, the containers, folders, and files within it are not deleted.
## Next steps
purview Catalog Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-conditional-access.md
+
+ Title: Configure Azure AD Conditional Access for Azure Purview
+description: This article describes how to configure Azure AD Conditional Access for Azure Purview.
+++++ Last updated : 01/14/2022
+# Customer intent: As an identity and security admin, I want to set up Azure Active Directory Conditional Access for Azure Purview, for secure access.
++
+# Conditional Access with Azure Purview
+
+[Azure Purview](overview.md) supports Azure Active Directory (Azure AD) Conditional Access.
+
+The following steps show how to configure Azure Purview to enforce a Conditional Access policy.
+
+## Prerequisites
+
+- When multi-factor authentication is enabled, you must complete multi-factor authentication to sign in to Azure Purview Studio.
+
+## Configure conditional access
+
+1. Sign in to the Azure portal, select **Azure Active Directory**, and then select **Conditional Access**. For more information, see [Azure Active Directory Conditional Access technical reference](../active-directory/conditional-access/concept-conditional-access-conditions.md).
+
+   :::image type="content" source="media/catalog-conditional-access/conditional-access-blade.png" alt-text="Screenshot that shows Conditional Access blade" lightbox="media/catalog-conditional-access/conditional-access-blade.png":::
+
+2. In the **Conditional Access-Policies** blade, click **New policy**, provide a name, and then click **Configure rules**.
+3. Under **Assignments**, select **Users and groups**, check **Select users and groups**, and then select the user or group for Conditional Access. Click **Select**, and then click **Done** to accept your selection.
+
+   :::image type="content" source="media/catalog-conditional-access/select-users-and-groups.png" alt-text="Screenshot that shows User and Group selection" lightbox="media/catalog-conditional-access/select-users-and-groups.png":::
+
+4. Select **Cloud apps**, and then click **Select apps** to see all apps available for Conditional Access. Select **Azure Purview**, click **Select** at the bottom, and then click **Done**.
+
+   :::image type="content" source="media/catalog-conditional-access/select-azure-purview.png" alt-text="Screenshot that shows Applications selection" lightbox="media/catalog-conditional-access/select-azure-purview.png":::
+
+5. Select **Access controls**, select **Grant**, and then check the policy you want to apply. For this example, we select **Require multi-factor authentication**.
+
+   :::image type="content" source="media/catalog-conditional-access/grant-access.png" alt-text="Screenshot that shows Grant access tab" lightbox="media/catalog-conditional-access/grant-access.png":::
+
+6. Set **Enable policy** to **On** and click **Create**.
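+
+If you prefer to script the policy, Conditional Access policies can also be created through the Microsoft Graph API. The following Azure CLI sketch is illustrative only; the group object ID and the Azure Purview cloud app ID are placeholders you would look up in your own tenant:
+
+```azurecli
+# Create a Conditional Access policy that requires MFA for Azure Purview.
+# <group-object-id> and <azure-purview-app-id> are placeholder values.
+az rest --method POST \
+  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
+  --body '{
+    "displayName": "Require MFA for Azure Purview",
+    "state": "enabled",
+    "conditions": {
+      "users": { "includeGroups": ["<group-object-id>"] },
+      "applications": { "includeApplications": ["<azure-purview-app-id>"] }
+    },
+    "grantControls": { "operator": "OR", "builtInControls": ["mfa"] }
+  }'
+```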
+
+## Next steps
+
+- [Use Azure Purview Studio](use-purview-studio.md)
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-lineage-user-guide.md
This article provides an overview of the data lineage features in Azure Purview
One of the platform features of Azure Purview is the ability to show the lineage between datasets created by data processes. Systems like Data Factory, Data Share, and Power BI capture the lineage of data as it moves. Custom lineage reporting is also supported via Atlas hooks and REST API. ## Lineage collection
- Metadata collected in Azure Purview from enterprise data systems are stitched across to show an end to end data lineage. Data systems that collect lineage into Purview are broadly categorized into following three types.
+ Metadata collected in Azure Purview from enterprise data systems is stitched together to show end-to-end data lineage. Data systems that collect lineage into Azure Purview are broadly categorized into the following three types.
### Data processing system
-Data integration and ETL tools can push lineage in to Azure Purview at execution time. Tools such as Data Factory, Data Share, Synapse, Azure Databricks, and so on, belong to this category of data systems. The data processing systems reference datasets as source from different databases and storage solutions to create target datasets. The list of data processing systems currently integrated with Purview for lineage are listed in below table.
+Data integration and ETL tools can push lineage into Azure Purview at execution time. Tools such as Data Factory, Data Share, Synapse, Azure Databricks, and so on, belong to this category of data systems. The data processing systems reference datasets as sources from different databases and storage solutions to create target datasets. The data processing systems currently integrated with Azure Purview for lineage are listed in the following table.
| Data processing system | Supported scope | | - | |
Data integration and ETL tools can push lineage in to Azure Purview at execution
| Azure Data Share | [Share snapshot](how-to-link-azure-data-share.md) | ### Data storage systems
-Databases & storage solutions such as SQL Server, Teradata, and SAP have query engines to transform data using scripting language. Data lineage from stored procedures is collected in to Purview and stitched with lineage from other systems.
+Databases & storage solutions such as SQL Server, Teradata, and SAP have query engines to transform data using scripting languages. Data lineage from stored procedures is collected into Azure Purview and stitched with lineage from other systems.
| Data storage system | Supported scope | | - | |
Data systems like Azure ML and Power BI report lineage into Azure Purview. These
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RWxTAK]
-Lineage in Purview includes datasets and processes. Datasets are also referred to as nodes while processes can be also called edges:
+Lineage in Azure Purview includes datasets and processes. Datasets are also referred to as nodes, while processes can also be called edges:
-* **Dataset (Node)**: A dataset (structured or unstructured) provided as an input to a process. For example, a SQL Table, Azure blob, and files (such as .csv and .xml), are all considered datasets. In the lineage section of Purview, datasets are represented by rectangular boxes.
+* **Dataset (Node)**: A dataset (structured or unstructured) provided as an input to a process. For example, a SQL table, an Azure blob, and files (such as .csv and .xml) are all considered datasets. In the lineage section of Azure Purview, datasets are represented by rectangular boxes.
-* **Process (Edge)**: An activity or transformation performed on a dataset is called a process. For example, ADF Copy activity, Data Share snapshot and so on. In the lineage section of Purview, processes are represented by round-edged boxes.
+* **Process (Edge)**: An activity or transformation performed on a dataset is called a process. For example, an ADF Copy activity, a Data Share snapshot, and so on. In the lineage section of Azure Purview, processes are represented by round-edged boxes.
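As background for custom lineage reporting, a process entity is what carries the input-to-output relationship. Below is a hedged sketch of pushing one through the Atlas V2 entity API that Azure Purview exposes; the qualified names and the source/target entity types are placeholders, not values from this article:

```azurecli
# Acquire a bearer token for the Azure Purview resource.
token=$(az account get-access-token --resource https://purview.azure.net --query accessToken -o tsv)

# Report custom lineage by creating a process entity whose inputs and
# outputs reference existing catalog assets by qualified name.
curl -X POST "https://<account-name>.purview.azure.com/catalog/api/atlas/v2/entity" \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  -d '{
        "entity": {
          "typeName": "Process",
          "attributes": {
            "qualifiedName": "custom://jobs/my-etl-job",
            "name": "my-etl-job",
            "inputs":  [{ "typeName": "azure_blob_path", "uniqueAttributes": { "qualifiedName": "<source-qualified-name>" } }],
            "outputs": [{ "typeName": "azure_sql_table", "uniqueAttributes": { "qualifiedName": "<target-qualified-name>" } }]
          }
        }
      }'
```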
-To access lineage information for an asset in Purview, follow the steps:
+To access lineage information for an asset in Azure Purview, follow these steps:
1. In the Azure portal, go to the [Azure Purview accounts page](https://aka.ms/purviewportal).
-1. Select your Azure Purview account from the list, and then select **Open Purview Studio** from the **Overview** page.
+1. Select your Azure Purview account from the list, and then select **Open Azure Purview Studio** from the **Overview** page.
1. On the Azure Purview Studio **Home** page, search for a dataset name or a process name, such as an ADF Copy or Data Flow activity, and then press Enter.
To see column-level lineage of a dataset, go to the **Lineage** tab of the curre
:::image type="content" source="./media/catalog-lineage-user-guide/use-toggle-to-filter-nodes.png" alt-text="Screenshot showing how to use the toggle to filter the list of nodes on the lineage page." lightbox="./media/catalog-lineage-user-guide/use-toggle-to-filter-nodes.png"::: ## Process column lineage
-Data process can take one or more input datasets to produce one or more outputs. In Purview, column level lineage is available for process nodes.
+A data process can take one or more input datasets and produce one or more outputs. In Azure Purview, column-level lineage is available for process nodes.
1. Switch between input and output datasets from a drop down in the columns panel. 2. Select columns from one or more tables to see the lineage flowing from input dataset to corresponding output dataset.
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-managed-vnet.md
Previously updated : 01/11/2022 Last updated : 01/13/2022
-# Customer intent: As a Purview admin, I want to set up Managed Virtual Network and managed private endpoints for my Purview account.
+# Customer intent: As an Azure Purview admin, I want to set up Managed Virtual Network and managed private endpoints for my Azure Purview account.
# Use a Managed VNet with your Azure Purview account
> [!IMPORTANT] > Currently, Managed Virtual Network and managed private endpoints are available for Azure Purview accounts that are deployed in the following regions:
+> - Australia East
> - Canada Central > - East US 2 > - West Europe
This article describes how to configure Managed Virtual Network and managed priv
### Supported regions Currently, Managed Virtual Network and managed private endpoints are available for Azure Purview accounts that are deployed in the following regions:
+- Australia East
- Canada Central - East US 2 - West Europe
Additionally, you can deploy managed private endpoints for your Azure Key Vault
### Managed Virtual Network
-A Managed Virtual Network in Azure Purview is a virtual network which is deployed and managed by Azure inside the same region as Purview account to allow scanning Azure data sources inside a managed network, without having to deploy and manage any self-hosted integration runtime virtual machines by the customer in Azure.
+A Managed Virtual Network in Azure Purview is a virtual network that is deployed and managed by Azure in the same region as the Azure Purview account. It allows you to scan Azure data sources inside a managed network without having to deploy and manage any self-hosted integration runtime virtual machines in Azure.
You can deploy an Azure Managed Integration Runtime within an Azure Purview Managed Virtual Network. From there, the Managed VNet Runtime will leverage private endpoints to securely connect to and scan supported data sources.
-Creating an Managed VNet Runtime within Managed Virtual Network ensures that data integration process is isolated and secure.
+Creating a Managed VNet Runtime within the Managed Virtual Network ensures that the data integration process is isolated and secure.
Benefits of using Managed Virtual Network:
Benefits of using Managed Virtual Network:
> [!Note] > You cannot switch a global Azure integration runtime or self-hosted integration runtime to a Managed VNet Runtime and vice versa.
-A Managed VNet is created for your Azure Purview account when you create a Managed VNet Runtime for the first time in your Purview account. You can't view or manage the Managed VNets.
+A Managed VNet is created for your Azure Purview account when you create a Managed VNet Runtime for the first time in your Azure Purview account. You can't view or manage the Managed VNets.
### Managed private endpoints
-Managed private endpoints are private endpoints created in the Azure Purview Managed Virtual Network establishing a private link to Purview and Azure resources. Azure Purview manages these private endpoints on your behalf.
+Managed private endpoints are private endpoints created in the Azure Purview Managed Virtual Network establishing a private link to Azure Purview and Azure resources. Azure Purview manages these private endpoints on your behalf.
Azure Purview supports private links. Private link enables you to access Azure (PaaS) services (such as Azure Storage, Azure Cosmos DB, Azure Synapse Analytics).
Private endpoint uses a private IP address in the Managed Virtual Network to eff
> To reduce administrative overhead, it's recommended that you create managed private endpoints to scan all supported Azure data sources. > [!WARNING]
-> If an Azure PaaS data store (Blob, Azure Data Lake Storage Gen2, Azure Synapse Analytics) has a private endpoint already created against it, and even if it allows access from all networks, Purview would only be able to access it using a managed private endpoint. If a private endpoint does not already exist, you must create one in such scenarios.
+> If an Azure PaaS data store (Blob, Azure Data Lake Storage Gen2, Azure Synapse Analytics) already has a private endpoint created against it, Azure Purview can only access it through a managed private endpoint, even if the data store allows access from all networks. If a private endpoint does not already exist, you must create one in such scenarios.
A private endpoint connection is created in a "Pending" state when you create a managed private endpoint in Azure Purview. An approval workflow is initiated. The private link resource owner is responsible for approving or rejecting the connection.
Before deploying a Managed VNet and Managed VNet Runtime for an Azure Purview ac
1. An Azure Purview account deployed in one of the [supported regions](#supported-regions). 2. From Azure Purview roles, you must be a data curator at root collection level in your Azure Purview account.
-3. From Azure RBAC roles, you must be contributor on the Purview account and data source to approve private links.
+3. From Azure RBAC roles, you must be a Contributor on the Azure Purview account and the data source to approve private links.
### Deploy Managed VNet Runtimes > [!NOTE] > The following guide shows how to register and scan an Azure Data Lake Storage Gen 2 using Managed VNet Runtime.
-1. Go to the [Azure portal](https://portal.azure.com), and navigate to the **Purview accounts** page and select your _Purview account_.
+1. Go to the [Azure portal](https://portal.azure.com), navigate to the **Azure Purview accounts** page, and select your _Azure Purview account_.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-azure-portal.png" alt-text="Screenshot that shows the Purview account":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-azure-portal.png" alt-text="Screenshot that shows the Azure Purview account":::
-2. **Open Purview Studio** and navigate to the **Data Map --> Integration runtimes**.
+2. **Open Azure Purview Studio** and navigate to the **Data Map --> Integration runtimes**.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-vnet.png" alt-text="Screenshot that shows Purview Data Map menus":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-vnet.png" alt-text="Screenshot that shows Azure Purview Data Map menus":::
3. From the **Integration runtimes** page, select the **+ New** icon to create a new runtime. Select **Azure**, and then select **Continue**.
Before deploying a Managed VNet and Managed VNet Runtime for an Azure Purview ac
:::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-region.png" alt-text="Screenshot that shows to create a Managed VNet Runtime":::
-5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in Purview Studio for creating managed private endpoints for Azure Purview and its Managed Storage Account. Click on each workflow to approve the private endpoint for the corresponding Azure resource.
+5. Deploying the Managed VNet Runtime for the first time triggers multiple workflows in Azure Purview Studio for creating managed private endpoints for Azure Purview and its Managed Storage Account. Click on each workflow to approve the private endpoint for the corresponding Azure resource.
:::image type="content" source="media/catalog-managed-vnet/purview-managed-ir-workflows.png" alt-text="Screenshot that shows deployment of a Managed VNet Runtime":::
-6. In Azure portal, from your Purview account resource blade, approve the managed private endpoint. From Managed storage account blade approve the managed private endpoints for blob and queue
+6. In the Azure portal, from your Azure Purview account resource blade, approve the managed private endpoint. From the managed storage account blade, approve the managed private endpoints for blob and queue.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Purview":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Azure Purview":::
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview-approved.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Purview - approved":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-purview-approved.png" alt-text="Screenshot that shows how to approve a managed private endpoint for Azure Purview - approved":::
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-managed-storage.png" alt-text="Screenshot that shows how to approve a managed private endpoint for managed storage account":::
Before deploying a Managed VNet and Managed VNet Runtime for an Azure Purview ac
7. From **Management**, select **Managed private endpoints** to validate that all managed private endpoints are successfully deployed and approved. All private endpoints must be approved.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list.png" alt-text="Screenshot that shows managed private endpoints in Purview":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list.png" alt-text="Screenshot that shows managed private endpoints in Azure Purview":::
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-approved.png" alt-text="Screenshot that shows managed private endpoints in Purview - approved ":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-approved.png" alt-text="Screenshot that shows managed private endpoints in Azure Purview - approved ":::
### Deploy managed private endpoints for data sources
For more information, see [Manage data sources in Azure Purview](manage-data-sou
#### Scan data source
-You can use any of the following options to scan data sources using Purview Managed VNet Runtime:
+You can use any of the following options to scan data sources using Azure Purview Managed VNet Runtime:
- [Using Managed Identity](#scan-using-managed-identity) (Recommended) - As soon as the Azure Purview Account is created, a system-assigned managed identity (SAMI) is created automatically in Azure AD tenant. Depending on the type of resource, specific RBAC role assignments are required for the Azure Purview system-assigned managed identity (SAMI) to perform the scans.
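For example, for an Azure Data Lake Storage Gen2 source, the SAMI typically needs the **Storage Blob Data Reader** role on the storage account. A minimal Azure CLI sketch, with all IDs as placeholders:

```azurecli
# Grant the Azure Purview system-assigned managed identity read access
# to the storage account so the Managed VNet Runtime can scan it.
# Look up the SAMI object ID on the Purview account's Identity blade.
az role assignment create \
  --assignee "<purview-sami-object-id>" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```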
You can use any of the following options to scan data sources using Purview Mana
##### Scan using Managed Identity
-To scan a data source using a Managed VNet Runtime and Purview managed identity perform these steps:
+To scan a data source using a Managed VNet Runtime and the Azure Purview managed identity, perform these steps:
-1. Select the **Data Map** tab on the left pane in the Purview Studio.
+1. Select the **Data Map** tab on the left pane in the Azure Purview Studio.
1. Select the data source that you registered.
To scan a data source using a Managed VNet Runtime and Purview managed identity
##### Scan using other authentication options
-You can also use other supported options to scan data sources using Purview Managed Runtime. This requires setting up a private connection to Azure Key Vault where the secret is stored.
+You can also use other supported options to scan data sources using the Azure Purview Managed VNet Runtime. This requires setting up a private connection to the Azure Key Vault where the secret is stored.
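As context for the steps that follow, the credential itself (for example, a storage account key) is stored as a Key Vault secret. A minimal Azure CLI sketch, with placeholder names:

```azurecli
# Store the data source credential as a secret in Azure Key Vault.
# Vault name, secret name, and value are placeholders.
az keyvault secret set \
  --vault-name "<key-vault-name>" \
  --name "<secret-name>" \
  --value "<storage-account-key>"
```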
To set up a scan using Account Key or SQL Authentication follow these steps:
To set up a scan using Account Key or SQL Authentication follow these steps:
6. Provide a name for the managed private endpoint, and select the Azure subscription and the Azure Key Vault from the drop-down lists. Select **Create**.
- :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-create.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault in Purview Studio":::
+ :::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-key-vault-create.png" alt-text="Screenshot that shows how to create a managed private endpoint for Azure Key Vault in Azure Purview Studio":::
7. From the list of managed private endpoints, click the newly created managed private endpoint for your Azure Key Vault, and then click **Manage approvals in the Azure portal** to approve the private endpoint in the Azure portal.
To set up a scan using Account Key or SQL Authentication follow these steps:
:::image type="content" source="media/catalog-managed-vnet/purview-managed-pe-list-3.png" alt-text="Screenshot that shows managed private endpoints including Azure Key Vault in purview studio":::
-10. Select the **Data Map** tab on the left pane in the Purview Studio.
+10. Select the **Data Map** tab on the left pane in the Azure Purview Studio.
11. Select the data source that you registered.
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-permissions.md
Azure Purview uses **Collections** to organize and manage access across its sour
## Collections
-A collection is a tool Azure Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All access to Purview's resources are managed from collections in the Purview account itself.
+A collection is a tool Azure Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All access to Azure Purview's resources is managed from collections in the Azure Purview account itself.
> [!NOTE] > As of November 8th, 2021, ***Insights*** is accessible to Data Curators. Data Readers do not have access to Insights.
Azure Purview uses a set of predefined roles to control who can access what with
|I need to edit the glossary or set up new classification definitions|Data Curator| |I need to view Insights to understand the governance posture of my data estate|Data Curator| |My application's Service Principal needs to push data to Azure Purview|Data Curator|
-|I need to set up scans via the Purview Studio|Data Curator on the collection **or** Data Curator **And** Data Source Administrator where the source is registered|
+|I need to set up scans via the Azure Purview Studio|Data Curator on the collection **or** Data Curator **and** Data Source Administrator where the source is registered|
|I need to enable a Service Principal or group to set up and monitor scans in Azure Purview without allowing them to access the catalog's information |Data Source Admin| |I need to put users into roles in Azure Purview | Collection Admin | ## Understand how to use Azure Purview's roles and collections
-All access control is managed in Purview's collections. Purview's collections can be found in the [Purview Studio](https://web.purview.azure.com/resource/). Open your Purview account in the [Azure portal](https://portal.azure.com) and select the Purview Studio tile on the Overview page. From there, navigate to the data map on the left menu, and then select the 'Collections' tab.
+All access control is managed in Azure Purview's collections, which can be found in the [Azure Purview Studio](https://web.purview.azure.com/resource/). Open your Azure Purview account in the [Azure portal](https://portal.azure.com) and select the Azure Purview Studio tile on the **Overview** page. From there, navigate to the data map on the left menu, and then select the **Collections** tab.
-When an Azure Purview account is created, it starts with a root collection that has the same name as the Purview account itself. The creator of the Purview account is automatically added as a Collection Admin, Data Source Admin, Data Curator, and Data Reader on this root collection, and can edit and manage this collection.
+When an Azure Purview account is created, it starts with a root collection that has the same name as the Azure Purview account itself. The creator of the Azure Purview account is automatically added as a Collection Admin, Data Source Admin, Data Curator, and Data Reader on this root collection, and can edit and manage this collection.
-Sources, assets, and objects can be added directly to this root collection, but so can other collections. Adding collections will give you more control over who has access to data across your Purview account.
+Sources, assets, and objects can be added directly to this root collection, but so can other collections. Adding collections will give you more control over who has access to data across your Azure Purview account.
All other users can only access information within the Azure Purview account if they, or a group they're in, are given one of the above roles. This means, when you create an Azure Purview account, no one but the creator can access or use its APIs until they are [added to one or more of the above roles in a collection](how-to-create-and-manage-collections.md#add-role-assignments). Users can only be added to a collection by a collection admin, or through permissions inheritance. The permissions of a parent collection are automatically inherited by its subcollections. However, you can choose to [restrict permission inheritance](how-to-create-and-manage-collections.md#restrict-inheritance) on any collection. If you do this, its subcollections will no longer inherit permissions from the parent and will need to be added directly, though collection admins that are automatically inherited from a parent collection can't be removed.
-You can assign Purview roles to users, security groups and service principals from your Azure Active Directory which is associated with your purview account's subscription.
+You can assign Azure Purview roles to users, security groups, and service principals from the Azure Active Directory tenant that is associated with your Azure Purview account's subscription.
## Assign permissions to your users After creating an Azure Purview account, the first thing to do is create collections and assign users to roles within those collections. > [!NOTE]
-> If you created your Azure Purview account using a service principal, to be able to access the Purview Studio and assign permissions to users, you will need to grant a user collection admin permissions on the root collection.
+> If you created your Azure Purview account using a service principal, you will need to grant a user collection admin permissions on the root collection before that user can access the Azure Purview Studio and assign permissions to others.
> You can use [this Azure CLI command](/cli/azure/purview/account#az_purview_account_add_root_collection_admin):
>
> ```azurecli
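> # A minimal sketch with placeholder values; see the linked command
> # reference for the full parameter list.
> az purview account add-root-collection-admin \
>   --account-name "<purview-account-name>" \
>   --resource-group "<resource-group-name>" \
>   --object-id "<user-object-id>"
> ```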