Updates from: 04/02/2022 01:09:30
Service | Microsoft Docs article | Related commit history on GitHub | Change details
active-directory B2b Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-fundamentals.md
Previously updated : 03/21/2022 Last updated : 03/31/2022
This article contains recommendations and best practices for business-to-business (B2B) collaboration in Azure Active Directory (Azure AD).
> [!IMPORTANT]
-> **Starting July 2022**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+> We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
## B2B recommendations
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/one-time-passcode.md
Previously updated : 02/14/2022 Last updated : 03/31/2022
The email one-time passcode feature is a way to authenticate B2B collaboration users when they can't be authenticated through other means, such as Azure AD, Microsoft account (MSA), or social identity providers. When a B2B guest user tries to redeem your invitation or sign in to your shared resources, they can request a temporary passcode, which is sent to their email address. Then they enter this passcode to continue signing in.
-You can enable this feature at any time in the Azure portal by configuring the Email one-time passcode identity provider under your tenant's External Identities settings. You can choose to enable the feature, disable it, or wait for automatic enablement starting July 2022.
+You can enable this feature at any time in the Azure portal by configuring the Email one-time passcode identity provider under your tenant's External Identities settings. You can choose to enable the feature, disable it, or wait for automatic enablement.
![Email one-time passcode overview diagram](media/one-time-passcode/email-otp.png)
> [!IMPORTANT]
>
-> - **Starting July 2022**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+> - We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
> - Email one-time passcode settings have moved in the Azure portal from **External collaboration settings** to **All identity providers**.
> [!NOTE]
Guest user teri@gmail.com is invited to Fabrikam, which doesn't have Google fede
1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD global administrator.
-2. In the navigation pane, select **Azure Active Directory**.
+1. In the navigation pane, select **Azure Active Directory**.
-3. Select **External Identities** > **All identity providers**.
+1. Select **External Identities** > **All identity providers**.
-4. Select **Email one-time passcode** to open the configuration pane.
+1. Select **Email one-time passcode** to open the configuration pane.
-5. Under **Email one-time passcode for guests**, select one of the following:
+1. Under **Email one-time passcode for guests**, select one of the following:
- **Automatically enable email one-time passcode for guests starting \<date\>** if you don't want to enable the feature immediately and want to wait for the automatic enablement date.
- **Enable email one-time passcode for guests effective now** to enable the feature now.
Guest user teri@gmail.com is invited to Fabrikam, which doesn't have Google fede
![Email one-time passcode toggle enabled](media/one-time-passcode/enable-email-otp-options.png)
-5. Select **Save**.
+1. Select **Save**.
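If you prefer to script this setting instead of clicking through the portal, here's a minimal sketch using the Microsoft Graph PowerShell SDK against the beta [email authentication method configuration resource type](/graph/api/resources/emailauthenticationmethodconfiguration). The `allowExternalIdToUseEmailOtp` property name, its values, and the `email` configuration ID are assumptions based on the beta schema; verify them against the current Graph reference before relying on this.

```powershell
# Minimal sketch (assumptions noted above): turn the email one-time passcode
# feature on or off for a tenant via Microsoft Graph.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

$uri = "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/email"

Invoke-MgGraphRequest -Method PATCH -Uri $uri -Body @{
    "@odata.type"                = "#microsoft.graph.emailAuthenticationMethodConfiguration"
    allowExternalIdToUseEmailOtp = "enabled"   # assumed values: "enabled", "disabled", "default"
}

# Read the setting back to confirm the change.
Invoke-MgGraphRequest -Method GET -Uri $uri
```

The same call with `"disabled"` would mirror the disable steps in the next section.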
## Disable email one-time passcode
-Starting July 2022, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. At that time, Microsoft will no longer support the redemption of invitations by creating unmanaged ("viral" or "just-in-time") Azure AD accounts and tenants for B2B collaboration scenarios. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, you have the option of disabling this feature if you choose not to use it.
+We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can disable it. Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
> [!NOTE]
>
Starting July 2022, we'll begin rolling out a change to turn on the email one-ti
1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD global administrator.
-2. In the navigation pane, select **Azure Active Directory**.
+1. In the navigation pane, select **Azure Active Directory**.
-3. Select **External Identities** > **All identity providers**.
+1. Select **External Identities** > **All identity providers**.
-4. Select **Email one-time passcode**, and then under **Email one-time passcode for guests**, select **Disable email one-time passcode for guests** (or **No** if the feature was previously enabled, disabled, or opted into during preview).
+1. Select **Email one-time passcode**, and then under **Email one-time passcode for guests**, select **Disable email one-time passcode for guests** (or **No** if the feature was previously enabled, disabled, or opted into during preview).
![Email one-time passcode toggle disabled](media/one-time-passcode/disable-email-otp-options.png)
Starting July 2022, we'll begin rolling out a change to turn on the email one-ti
> Email one-time passcode settings have moved in the Azure portal from **External collaboration settings** to **All identity providers**.
> If you see a toggle instead of the email one-time passcode options, this means you've previously enabled, disabled, or opted into the preview of the feature. Select **No** to disable the feature.
-5. Select **Save**.
+1. Select **Save**.
## Note for public preview customers
-If you've previously opted in to the email one-time passcode public preview, the July 2022 date for automatic feature enablement doesn't apply to you, so your related business processes won't be affected. Additionally, in the Azure portal, under the **Email one-time passcode for guests** properties, you won't see the option to **Automatically enable email one-time passcode for guests starting \<date\>**. Instead, you'll see the following **Yes** or **No** toggle:
+If you've previously opted in to the email one-time passcode public preview, automatic feature enablement doesn't apply to you, so your related business processes won't be affected. Additionally, in the Azure portal, under the **Email one-time passcode for guests** properties, you won't see the option to **Automatically enable email one-time passcode for guests starting \<date\>**. Instead, you'll see the following **Yes** or **No** toggle:
![Email one-time passcode opted in](media/one-time-passcode/enable-email-otp-opted-in.png)
-However, if you'd prefer to opt out of the feature and allow it to be automatically enabled starting July 2022, you can revert to the default settings by using the Microsoft Graph API [email authentication method configuration resource type](/graph/api/resources/emailauthenticationmethodconfiguration). After you revert to the default settings, the following options will be available under **Email one-time passcode for guests**:
+However, if you'd prefer to opt out of the feature and allow it to be automatically enabled, you can revert to the default settings by using the Microsoft Graph API [email authentication method configuration resource type](/graph/api/resources/emailauthenticationmethodconfiguration). After you revert to the default settings, the following options will be available under **Email one-time passcode for guests** (a PowerShell sketch of the revert call follows the options below):
![Enable Email one-time passcode opted in](media/one-time-passcode/email-otp-options.png)
-- **Automatically enable email one-time passcode for guests starting \<date\>**. (Default) If the email one-time passcode feature isn't already enabled for your tenant, it will be automatically turned on starting July 2022. No further action is necessary if you want the feature enabled at that time. If you've already enabled or disabled the feature, this option will be unavailable.
+- **Automatically enable email one-time passcode for guests starting \<date\>**. (Default) If the email one-time passcode feature isn't already enabled for your tenant, it will be automatically turned on. No further action is necessary if you want the feature enabled at that time. If you've already enabled or disabled the feature, this option will be unavailable.
- **Enable email one-time passcode for guests effective now**. Turns on the email one-time passcode feature for your tenant.
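As promised above, here's a sketch of that revert-to-default call with the Microsoft Graph PowerShell SDK; the `allowExternalIdToUseEmailOtp` property and its `default` value are assumptions from the beta schema, so double-check the resource reference linked above.

```powershell
# Minimal sketch: revert email one-time passcode to the default setting
# (opting back in to automatic enablement). Property/value are assumptions.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/email" `
    -Body @{
        "@odata.type"                = "#microsoft.graph.emailAuthenticationMethodConfiguration"
        allowExternalIdToUseEmailOtp = "default"
    }
```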
For more information about current limitations, see [Azure AD B2B in government
**Why do I still see "Automatically enable email one-time passcode for guests starting October 2021" selected in my email one-time passcode settings?**
-Due to our deployment schedules, we'll begin globally rolling out the change to enable email one-time passcode by default starting July 2022. Until then, you might still see "Automatically enable email one-time passcode for guests starting October 2021" selected in your email one-time passcode settings.
+We've begun globally rolling out the change to enable email one-time passcode. In the meantime, you might still see "Automatically enable email one-time passcode for guests starting October 2021" selected in your email one-time passcode settings.
**What happens to my existing guest users if I enable email one-time passcode?**
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
Previously updated : 02/14/2022 Last updated : 03/31/2022
When you add a guest user to your directory, the guest user account has a consen
> [!IMPORTANT]
> - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up for their custom or line-of-business applications, authentication with Google identities won't work until authentications are moved to system web-views. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
> - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
-> - **Starting July 2022**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+> - We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
## Redemption and sign-in through a common endpoint
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Here are some remedies for common problems with Azure Active Directory (Azure AD
>
> - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up for their custom or line-of-business applications, authentication with Google identities won't work until authentications are moved to system web-views. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
> - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
- > - **Starting July 2022**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+ > - We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
## Guest sign-in fails with error code AADSTS50020
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Previously updated : 02/14/2022 Last updated : 03/31/2022
The following table describes B2B collaboration users based on how they authenti
- **Internal member**: These users are generally considered employees of your organization. The user authenticates internally via Azure AD, and the user object created in the resource Azure AD directory has a UserType of Member.
> [!IMPORTANT]
-> **Starting July 2022**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+> We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
## Invitation redemption
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
Previously updated : 02/18/2022 Last updated : 03/31/2022
Developers can use Azure AD business-to-business APIs to customize the invitatio
> [!IMPORTANT]
-> **Starting July 2022**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode)..
+> We've begun rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode). Soon, we'll stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption.
## Collaborate with any partner using their identities
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory external identities"
-description: "New and updated documentation for the Azure Active Directory external identities."
Previously updated : 03/09/2022
+ Title: "What's new in Azure Active Directory External Identities"
+description: "New and updated documentation for the Azure Active Directory External Identities."
Last updated : 03/31/2022
-# Azure Active Directory external identities: What's new
+# Azure Active Directory External Identities: What's new
-Welcome to what's new in Azure Active Directory external identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the external identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
-## February 2022
-
-### Updated articles
-
-- [Add Google as an identity provider for B2B guest users](google-federation.md)
-- [External Identities in Azure Active Directory](external-identities-overview.md)
-- [Overview: Cross-tenant access with Azure AD External Identities (Preview)](cross-tenant-access-overview.md)
-- [B2B collaboration overview](what-is-b2b.md)
-- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
-- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)
-- [Tutorial: Bulk invite Azure AD B2B collaboration users](tutorial-bulk-invite.md)
-- [Azure Active Directory B2B best practices](b2b-fundamentals.md)
-- [Azure Active Directory B2B collaboration FAQs](faq.yml)
-- [Email one-time passcode authentication](one-time-passcode.md)
-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
-- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
-- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
-- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)
-
-## January 2022
-
-### Updated articles
-
-- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
-
-
-## December 2021
-
-### Updated articles
-
-- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
-
-
-## November 2021
-
-### Updated articles
-
-- [Tutorial: Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)
-- [Grant B2B users in Azure AD access to your on-premises applications](hybrid-cloud-to-on-premises.md)
-- [Azure Active Directory external identities: What's new](whats-new-docs.md)
-- [Conditional Access for B2B collaboration users](authentication-conditional-access.md)
-
-
-## October 2021
-
-### Updated articles
-
-- [Email one-time passcode authentication](one-time-passcode.md)
-- [Azure Active Directory B2B collaboration FAQs](faq.yml)
-- [Reset redemption status for a guest user (Preview)](reset-redemption-status.md)
-- [Add Google as an identity provider for B2B guest users](google-federation.md)
-
-## September 2021
-
-### Updated articles
-
-- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
-- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md)
-- [Leave an organization as a guest user](leave-the-organization.md)
-- [Invite internal users to B2B collaboration](invite-internal-users.md)
-
-
-## August 2021
-
-### Updated articles
-
-- [Identity Providers for External Identities](identity-providers.md)
-- [Enable B2B external collaboration and manage who can invite guests](external-collaboration-settings-configure.md)
-- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
-- [Add Google as an identity provider for B2B guest users](google-federation.md)
-- [Azure Active Directory (Azure AD) identity provider for External Identities](azure-ad-account.md)
-- [Microsoft account (MSA) identity provider for External Identities](microsoft-account.md)
-- [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md)
-
-
-## July 2021
+## March 2022

### New articles
-- [Secure your API used an API connector in Azure AD External Identities self-service sign-up user flows](self-service-sign-up-secure-api-connector.md)
+- [B2B direct connect overview (Preview)](b2b-direct-connect-overview.md)
+- [Configure cross-tenant access settings for B2B direct connect (Preview)](cross-tenant-access-settings-b2b-direct-connect.md)

### Updated articles
-- [Identity Providers for External Identities](identity-providers.md)
-- [Microsoft account (MSA) identity provider for External Identities](microsoft-account.md)
-- [Email one-time passcode authentication](one-time-passcode.md)
-- [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md)
-- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
-- [Add Google as an identity provider for B2B guest users](google-federation.md)
-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
-- [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md)
-- [Add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)
-- [What are External Identities in Azure Active Directory?](external-identities-overview.md)
+- [B2B direct connect overview (Preview)](b2b-direct-connect-overview.md)
+- [Configure cross-tenant access settings for B2B direct connect (Preview)](cross-tenant-access-settings-b2b-direct-connect.md)
+- [External Identities documentation](index.yml)
- [Billing model for Azure AD External Identities](external-identities-pricing.md)
-- [Dynamic groups and Azure Active Directory B2B collaboration](use-dynamic-groups.md)
-- [What is guest user access in Azure Active Directory B2B?](what-is-b2b.md)
-- [Use API connectors to customize and extend self-service sign-up](api-connectors-overview.md)
-- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
-- [The elements of the B2B collaboration invitation email - Azure Active Directory](invitation-email-elements.md)
-- [Conditional Access for B2B collaboration users](authentication-conditional-access.md)
-
-
-## June 2021
-
-### New articles
-
-- [Azure Active Directory (Azure AD) identity provider for External Identities](azure-ad-account.md)
-
-### Updated articles
-
-- [Tutorial: Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)
-- [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md)
-- [Quickstart: Add guest users to your directory in the Azure portal](b2b-quickstart-add-guest-users-portal.md)
-- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
-- [Add Google as an identity provider for B2B guest users](google-federation.md)
-- [Leave an organization as a guest user](leave-the-organization.md)
-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
-
-## May 2021
-
-### New articles
-
-- [Azure Active Directory B2B collaboration FAQs](faq.yml)
-
-### Updated articles
-
-- [Azure Active Directory B2B collaboration FAQs](faq.yml)
-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
-- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
-- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
-- [What is guest user access in Azure Active Directory B2B?](what-is-b2b.md)
-- [Enable B2B external collaboration and manage who can invite guests](external-collaboration-settings-configure.md)
-- [Billing model for Azure AD External Identities](external-identities-pricing.md)
-- [Example: Configure SAML/WS-Fed IdP federation with Active Directory Federation Services (AD FS) (preview)](direct-federation-adfs.md)
- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
+- [Azure Active Directory B2B collaboration code and PowerShell samples](code-samples.md)
+- [Overview: Cross-tenant access with Azure AD External Identities (Preview)](cross-tenant-access-overview.md)
- [Add Google as an identity provider for B2B guest users](google-federation.md)
-- [Identity Providers for External Identities](identity-providers.md)
-- [Leave an organization as a guest user](leave-the-organization.md)
-- [Azure Active Directory external identities: What's new](whats-new-docs.md)
-- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md)
- [Invite internal users to B2B collaboration](invite-internal-users.md)
+- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)
+- [Azure Active Directory B2B best practices](b2b-fundamentals.md)
+- [Azure Active Directory B2B collaboration FAQs](faq.yml)
+- [Configure cross-tenant access settings for B2B collaboration (Preview)](cross-tenant-access-settings-b2b-collaboration.md)
+- [External Identities in Azure Active Directory](external-identities-overview.md)
+- [Leave an organization as a B2B collaboration user](leave-the-organization.md)
+- [Configure external collaboration settings](external-collaboration-settings-configure.md)
+- [Reset redemption status for a guest user (Preview)](reset-redemption-status.md)
-
-## April 2021
+## February 2022

### Updated articles
- [Add Google as an identity provider for B2B guest users](google-federation.md)
-- [Example: Direct federation with Active Directory Federation Services (AD FS) (preview)](direct-federation-adfs.md)
-- [Direct federation with AD FS and third-party providers for guest users (preview)](direct-federation.md)
-- [Email one-time passcode authentication](one-time-passcode.md)
-- [Reset redemption status for a guest user (Preview)](reset-redemption-status.md)
-- [The elements of the B2B collaboration invitation email - Azure Active Directory](invitation-email-elements.md)
-- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
+- [External Identities in Azure Active Directory](external-identities-overview.md)
+- [Overview: Cross-tenant access with Azure AD External Identities (Preview)](cross-tenant-access-overview.md)
+- [B2B collaboration overview](what-is-b2b.md)
+- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)
-- [Conditional Access for B2B collaboration users](authentication-conditional-access.md)
-
-## March 2021
-
-### New articles
-
-- [Microsoft Account (MSA) identity provider for External Identities](microsoft-account.md)
-
-### Updated articles
-
-- [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md)
-- [What is guest user access in Azure Active Directory B2B?](what-is-b2b.md)
-- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
- [Tutorial: Bulk invite Azure AD B2B collaboration users](tutorial-bulk-invite.md)
-- [Add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)
-- [Reset redemption status for a guest user](reset-redemption-status.md)
-- [Use API connectors to customize and extend self-service sign-up](api-connectors-overview.md)
+- [Azure Active Directory B2B best practices](b2b-fundamentals.md)
- [Azure Active Directory B2B collaboration FAQs](faq.yml)
-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
-- [Identity Providers for External Identities](identity-providers.md)
-- [Add a self-service sign-up user flow to an app (Preview)](self-service-sign-up-user-flow.md)
- [Email one-time passcode authentication](one-time-passcode.md)
-- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md)
-
-
-## February 2021
-
-### New articles
+- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
+- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
+- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
+- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)
-- [Reset redemption status for a guest user](reset-redemption-status.md)
+## January 2022

### Updated articles
-- [Azure Active Directory B2B best practices](b2b-fundamentals.md)
-- [Enable B2B external collaboration and manage who can invite guests](external-collaboration-settings-configure.md)
-- [Azure Active Directory B2B collaboration FAQs](faq.yml)
-- [Email one-time passcode authentication](one-time-passcode.md)
-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
-- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md)
-- [What is guest user access in Azure Active Directory B2B?](what-is-b2b.md)
-- [Azure Active Directory external identities: What's new](whats-new-docs.md)
-- [Allow or block invitations to B2B users from specific organizations](allow-deny-list.md)
-- [Azure Active Directory B2B collaboration API and customization](customize-invitation-api.md)
-- [Invite internal users to B2B collaboration](invite-internal-users.md)
-- [Microsoft 365 external sharing and Azure Active Directory (Azure AD) B2B collaboration](o365-external-user.md)
-- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
+- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
The What's new in Azure Active Directory? release notes provide information abou
+
+## September 2021
+
+### Limits on the number of configured API permissions for an application registration will be enforced starting in October 2021
+
+**Type:** Plan for change
+**Service category:** Other
+**Product capability:** Developer Experience
+
+Occasionally, application developers configure their apps to require more permissions than it's possible to grant. To prevent this from happening, we're enforcing a limit on the total number of required permissions that can be configured for an app registration.
+
+The total number of required permissions for any single application registration must not exceed 400 permissions, across all APIs. The change to enforce this limit will begin rolling out no sooner than mid-October 2021. Applications exceeding the limit can't increase the number of permissions they're configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and can't exceed 50 APIs.
+
+In the Azure portal, the required permissions are listed under Azure Active Directory > Application registrations > (select an application) > API permissions. Using Microsoft Graph or Microsoft Graph PowerShell, the required permissions are listed in the requiredResourceAccess property of an application entity. [Learn more](../enterprise-users/directory-service-limits-restrictions.md).
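+
To check how close an app registration is to the 400-permission limit, you can total the entries in its `requiredResourceAccess` collection. Here's a minimal sketch with the Microsoft Graph PowerShell SDK; the application object ID is a placeholder.

```powershell
# Minimal sketch: count an app registration's configured (required) permissions
# across all APIs, to compare against the 400-permission / 50-API limits.
Connect-MgGraph -Scopes "Application.Read.All"

# Placeholder object ID of the application registration.
$app = Get-MgApplication -ApplicationId "00000000-0000-0000-0000-000000000000"

$apiCount        = $app.RequiredResourceAccess.Count
$permissionCount = ($app.RequiredResourceAccess |
    ForEach-Object { $_.ResourceAccess.Count } |
    Measure-Object -Sum).Sum

"APIs: $apiCount (limit 50); configured permissions: $permissionCount (limit 400)"
```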
+++
+### My Apps performance improvements
+
+**Type:** Fixed
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+The load time of My Apps has been improved. Users going to myapps.microsoft.com load My Apps directly, rather than being redirected through another service. [Learn more](../user-help/my-apps-portal-end-user-access.md).
+++
+### Single Page Apps using the `spa` redirect URI type must use a CORS enabled browser for auth
+
+**Type:** Known issue
+**Service category:** Authentications (Logins)
+**Product capability:** Developer Experience
+
+The modern Edge browser is now included in the requirement to provide an `Origin` header when redeeming a [single page app authorization code](../develop/v2-oauth2-auth-code-flow.md#redirect-uri-setup-required-for-single-page-apps). A compatibility fix accidentally exempted the modern Edge browser from CORS controls, and that bug is being fixed during October. A subset of applications depended on CORS being disabled in the browser, which has the side effect of removing the `Origin` header from traffic. This is an unsupported configuration for Azure AD, and apps that depended on disabling CORS can no longer use modern Edge as a workaround. All modern browsers must now include the `Origin` header, per the HTTP spec, to ensure CORS is enforced. [Learn more](../develop/reference-breaking-changes.md#the-device-code-flow-ux-will-now-include-an-app-confirmation-prompt).
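+
In practical terms, redeeming an authorization code issued to a `spa` redirect URI must send an `Origin` header, which a CORS-enabled browser attaches automatically. Here's a minimal sketch of that token request made outside the browser, purely to illustrate the protocol; every value is a placeholder, and a production SPA should let a library such as MSAL.js handle this in the browser.

```powershell
# Minimal sketch: the OAuth token request for a "spa" redirect URI must carry
# an Origin header. All values below are placeholders.
$tokenEndpoint = "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token"

$body = @{
    client_id     = "{client-id}"
    grant_type    = "authorization_code"
    code          = "{authorization-code}"
    redirect_uri  = "http://localhost:3000"
    code_verifier = "{pkce-code-verifier}"
    scope         = "openid profile"
}

# Without the Origin header, the request is rejected for spa redirect URIs.
Invoke-RestMethod -Method Post -Uri $tokenEndpoint -Body $body `
    -Headers @{ Origin = "http://localhost:3000" }
```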
+++
+### General availability - On the My Apps portal, users can choose to view their apps in a list
+
+**Type:** New feature
+**Service category:** My Apps
+**Product capability:** End User Experiences
+
+By default, My Apps displays apps in a grid view. Users can now toggle their My Apps view to display apps in a list. [Learn more](../user-help/my-apps-portal-end-user-access.md).
+
++
+### General availability - New and enhanced device-related audit logs
+
+**Type:** New feature
+**Service category:** Audit
+**Product capability:** Device Lifecycle Management
+
+Admins can now see various new and improved device-related audit logs. The new audit logs cover creating and deleting passwordless credentials (Phone sign-in, FIDO2 key, and Windows Hello for Business), registering or unregistering a device, and pre-creating or deleting a pre-created device. Additionally, existing device-related audit logs have been improved to include more device details. [Learn more](../reports-monitoring/concept-audit-logs.md).
+++
+### General availability - Azure AD users can now view and report suspicious sign-ins and manage their accounts within Microsoft Authenticator
+
+**Type:** New feature
+**Service category:** Microsoft Authenticator App
+**Product capability:** Identity Security & Protection
+
+This feature allows Azure AD users to manage their work or school accounts within the Microsoft Authenticator app. The management features will allow users to view sign-in history and sign-in activity. They can report any suspicious or unfamiliar activity based on the sign-in history and activity if necessary. Users also can change their Azure AD account passwords and update the account's security information. [Learn more](../user-help/my-account-portal-sign-ins-page.md).
+
++
+### General availability - New MS Graph APIs for role management
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+New role management APIs on the MS Graph v1.0 endpoint are generally available. Instead of the old [directory roles](/graph/api/resources/directoryrole?view=graph-rest-1.0&preserve-view=true), use [unifiedRoleDefinition](/graph/api/resources/unifiedroledefinition?view=graph-rest-1.0&preserve-view=true) and [unifiedRoleAssignment](/graph/api/resources/unifiedroleassignment?view=graph-rest-1.0&preserve-view=true).
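+
As a quick sketch of querying the generally available endpoints with the Microsoft Graph PowerShell SDK (the display-name loop at the end is just illustrative):

```powershell
# Minimal sketch: query the GA role management APIs on the v1.0 endpoint.
Connect-MgGraph -Scopes "RoleManagement.Read.Directory"

# unifiedRoleDefinition: the directory role definitions.
$defs = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions"

# unifiedRoleAssignment: who holds which role, at which scope.
$assignments = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments"

$defs.value | ForEach-Object { $_.displayName }
```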
+
++
+### General availability - Access Packages can expire after number of hours
+
+**Type:** New feature
+**Service category:** User Access Management
+**Product capability:** Entitlement Management
+
+In entitlement management, it's now possible to configure an access package that expires in a matter of hours, in addition to the previously supported days or specific dates. [Learn more](../governance/entitlement-management-access-package-create.md#lifecycle).
+
++
+### New provisioning connectors in the Azure AD Application Gallery - September 2021
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [BLDNG APP](../saas-apps/bldng-app-provisioning-tutorial.md)
+- [Cato Networks](../saas-apps/cato-networks-provisioning-tutorial.md)
+- [Rouse Sales](../saas-apps/rouse-sales-provisioning-tutorial.md)
+- [SchoolStream ASA](../saas-apps/schoolstream-asa-provisioning-tutorial.md)
+- [Taskize Connect](../saas-apps/taskize-connect-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../manage-apps/user-provisioning.md).
+
++
+### New Federated Apps available in Azure AD Application gallery - September 2021
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+In September 2021, we added the following 44 new applications in our App gallery with Federation support:
+
+[Studybugs](https://studybugs.com/signin), [Yello](https://yello.co/yello-for-microsoft-teams/), [LawVu](../saas-apps/lawvu-tutorial.md), [Formate eVo Mail](https://www.document-genetics.co.uk/formate-evo-erp-output-management), [Revenue Grid](https://app.revenuegrid.com/login), [Orbit for Office 365](https://azuremarketplace.microsoft.com/marketplace/apps/aad.orbitforoffice365?tab=overview), [Upmarket](https://app.upmarket.ai/), [Alinto Protect](https://protect.alinto.net/), [Cloud Concinnity](https://cloudconcinnity.com/), [Matlantis](https://matlantis.com/), [ModelGen for Visio (MG4V)](https://crecy.com.au/model-gen/), [NetRef: Classroom Management](https://oauth.net-ref.com/microsoft/sso), [VergeSense](../saas-apps/vergesense-tutorial.md), [iAuditor](../saas-apps/iauditor-tutorial.md), [Secutraq](https://secutraq.net/login), [Active and Thriving](../saas-apps/active-and-thriving-tutorial.md), [Inova](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=1bacdba3-7a3b-410b-8753-5cc0b8125f81&response_type=code&redirect_uri=https:%2f%2fbroker.partneringplace.com%2fpartner-companion%2f&code_challenge_method=S256&code_challenge=YZabcdefghijklmanopqrstuvwxyz0123456789._-~&scope=1bacdba3-7a3b-410b-8753-5cc0b8125f81/.default), [TerraTrue](../saas-apps/terratrue-tutorial.md), [Facebook Work Accounts](../saas-apps/facebook-work-accounts-tutorial.md), [Beyond Identity Admin Console](../saas-apps/beyond-identity-admin-console-tutorial.md), [Visult](https://app.visult.io/), [ENGAGE TAG](https://app.engagetag.com/), [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-tutorial.md), [CrowdStrike Falcon Platform](../saas-apps/crowdstrike-falcon-platform-tutorial.md), [MY Emergency Control](https://my-emergency.co.uk/app/auth/login), [AlexisHR](../saas-apps/alexishr-tutorial.md), [Teachme Biz](../saas-apps/teachme-biz-tutorial.md), [Zero Networks](../saas-apps/zero-networks-tutorial.md), [Mavim iMprove](https://improve.mavimcloud.com/), [Azumuta](https://app.azumuta.com/login?microsoft=true), [Frankli](https://beta.frankli.io/login), [Amazon Managed Grafana](../saas-apps/amazon-managed-grafana-tutorial.md), [Productive](../saas-apps/productive-tutorial.md), [Create!Webフロー](../saas-apps/createweb-tutorial.md), [Evercate](https://evercate.com/us/sign-up/), [Ezra Coaching](../saas-apps/ezra-coaching-tutorial.md), [Baldwin Safety and Compliance](../saas-apps/baldwin-safety-&-compliance-tutorial.md), [Nulab Pass (Backlog,Cacoo,Typetalk)](../saas-apps/nulab-pass-tutorial.md), [Metatask](../saas-apps/metatask-tutorial.md), [Contrast Security](../saas-apps/contrast-security-tutorial.md), [Animaker](../saas-apps/animaker-tutorial.md), [Traction Guest](../saas-apps/traction-guest-tutorial.md), [True Office Learning - LIO](../saas-apps/true-office-learning-lio-tutorial.md), [Qiita Team](../saas-apps/qiita-team-tutorial.md)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+++
+### Gmail users signing in on Microsoft Teams mobile and desktop clients will sign in with device login flow starting September 30, 2021
+
+**Type:** Changed feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+Starting on September 30, 2021, Azure AD B2B guests and Azure AD B2C customers signing in with their self-service signed-up or redeemed Gmail accounts will have an extra login step. Users will now be prompted to enter a code in a separate browser window to finish signing in on Microsoft Teams mobile and desktop clients. If you haven't already done so, make sure to modify your apps to use the system browser for sign-in. See [Embedded vs System Web UI in the MSAL.NET](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) documentation for more information. All MSAL SDKs use the system web-view by default.
+
+Because the device login flow starts September 30, 2021, it may not be available in your region immediately. If it's not available yet, your end users will see the error screen shown in the doc until the flow is deployed to your region. For more details on the device login flow and on requesting an extension from Google, see [Add Google as an identity provider for B2B guest users](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
+
++
+### Improved Conditional Access Messaging for Non-compliant Device
+
+**Type:** Changed feature
+**Service category:** Conditional Access
+**Product capability:** End User Experiences
+
+The text and design on the Conditional Access blocking screen shown to users when their device is marked as non-compliant has been updated. Users will be blocked until they take the necessary actions to meet their company's device compliance policies. Additionally, we have streamlined the flow for a user to open their device management portal. These improvements apply to all conditional access supported OS platforms. [Learn more](https://support.microsoft.com/account-billing/troubleshooting-the-you-can-t-get-there-from-here-error-message-479a9c42-d9d1-4e44-9e90-24bbad96c251)
+
## August 2021

### New major version of AADConnect available
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Active Directory](whats-new-archive.md).
+## March 2022
+
+
+### Tenant enablement of combined security information registration for Azure Active Directory
+
+**Type:** Plan for change
+**Service category:** MFA
+**Product capability:** Identity Security & Protection
+**Clouds impacted:** Public (Microsoft 365, GCC)
+
+
+In April 2020, we announced the general availability of our combined registration experience, which lets users register security information for multi-factor authentication and self-service password reset at the same time; existing customers could opt in. We're happy to announce that the combined security information registration experience will be enabled for all remaining customers after September 30, 2022. This change doesn't affect tenants created after August 15, 2020, or tenants located in the China region. For more information, see: [Combined security information registration for Azure Active Directory overview](../authentication/concept-registration-mfa-sspr-combined.md).
+
++
+
+
+### Public preview - New provisioning connectors in the Azure AD Application Gallery - March 2022
+
+**Type:** New feature
+**Service category:** App Provisioning
+**Product capability:** 3rd Party Integration
+
+
+
+You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
+
+- [AlexisHR](../saas-apps/alexishr-provisioning-tutorial.md)
+- [embed signage](../saas-apps/embed-signage-provisioning-tutorial.md)
+- [Joyn FSM](../saas-apps/joyn-fsm-provisioning-tutorial.md)
+- [KPN Grip](../saas-apps/kpn-grip-provisioning-tutorial.md)
+- [MURAL Identity](../saas-apps/mural-identity-provisioning-tutorial.md)
+- [Palo Alto Networks SCIM Connector](../saas-apps/palo-alto-networks-scim-connector-provisioning-tutorial.md)
+- [Tap App Security](../saas-apps/tap-app-security-provisioning-tutorial.md)
+- [Yellowbox](../saas-apps/yellowbox-provisioning-tutorial.md)
+
+For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
+
++
+
++
+### Public preview - Azure AD Recommendations
+
+**Type:** New feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+**Clouds impacted:** Public (Microsoft 365,GCC)
+
+
+Azure AD Recommendations is now in public preview. This feature provides personalized insights and actionable guidance to help you identify opportunities to implement Azure AD best practices and optimize the state of your tenant. For more information, see [What is Azure Active Directory recommendations](../reports-monitoring/overview-recommendations.md).
+
++
+
+
+### Public Preview: Dynamic administrative unit membership for users and devices
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+**Clouds impacted:** Public (Microsoft 365,GCC)
+
+
+Administrative units now support dynamic membership rules for user and device members. Instead of manually assigning users and devices to administrative units, tenant admins can set up a query for the administrative unit, and Azure AD automatically maintains the membership. For more information, see [Administrative units in Azure Active Directory](../roles/administrative-units.md).
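+
Here's a minimal sketch of creating a dynamic administrative unit through Microsoft Graph. The `membershipType`, `membershipRule`, and `membershipRuleProcessingState` properties mirror dynamic group membership rules, and the display name and rule below are invented examples; check the beta reference for the exact preview contract.

```powershell
# Minimal sketch: create an administrative unit whose user members are
# maintained dynamically by Azure AD (preview). Names/values are placeholders,
# and the dynamic-membership properties are assumptions from the beta schema.
Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All"

$body = @{
    displayName                   = "Marketing users"               # example name
    membershipType                = "Dynamic"
    membershipRule                = '(user.department -eq "Marketing")'
    membershipRuleProcessingState = "On"
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/administrativeUnits" -Body $body
```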
+
++
+
+
+### Public Preview: Devices in Administrative Units
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** AuthZ/Access Delegation
+**Clouds impacted:** Public (Microsoft 365,GCC)
+
+
+Devices can now be added as members of administrative units. This enables scoped delegation of device permissions to a specific set of devices in the tenant. Built-in and custom roles are also supported. For more information, see: [Administrative units in Azure Active Directory](../roles/administrative-units.md).
+
++
+
+
+### New Federated Apps available in Azure AD Application gallery - March 2022
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** 3rd Party Integration
+
+
+In March 2022, we added the following 29 new applications in our App gallery with Federation support:
+
+[Informatica Platform](../saas-apps/informatica-platform-tutorial.md), [Buttonwood Central SSO](../saas-apps/buttonwood-central-sso-tutorial.md), [Blockbax](../saas-apps/blockbax-tutorial.md), [Datto Workplace Single Sign On](../saas-apps/datto-workplace-tutorial.md), [Atlas by Workland](https://atlas.workland.com/), [Simply.Coach](https://app.simply.coach/signup), [Benevity](https://benevity.com/), [Engage Absence Management](https://engage.honeydew-health.com/users/sign_in), [LitLingo App Authentication](https://www.litlingo.com/litlingo-deployment-guide), [ADP EMEA French HR Portal mon.adp.com](../saas-apps/adp-emea-french-hr-portal-tutorial.md), [Ready Room](https://app.readyroom.net/), [Rainmaker UPSMQDEV](https://upsmqdev.rainmaker.aero/rainmaker.security.web/), [Axway CSOS](../saas-apps/axway-csos-tutorial.md), [Alloy](https://alloyapp.io/), [U.S. Bank Prepaid](../saas-apps/us-bank-prepaid-tutorial.md), [EdApp](https://admin.edapp.com/login), [GoSimplo](https://app.gosimplo.com/External/Microsoft/Signup), [Snow Atlas SSO](https://www.snowsoftware.io/), [Abacus.AI](https://alloyapp.io/), [Culture Shift](../saas-apps/culture-shift-tutorial.md), [StaySafe Hub](https://hub.staysafeapp.net/login), [OpenLearning](../saas-apps/openlearning-tutorial.md), [Draup, Inc](https://draup.com/platformlogin/), [Air](../saas-apps/air-tutorial.md), [Regulatory Lab](https://clientidentification.com/), [SafetyLine](https://slmonitor.com/login), [Zest](../saas-apps/zest-tutorial.md), [iGrafx Platform](../saas-apps/igrafx-platform-tutorial.md), [Tracker Software Technologies](../saas-apps/tracker-software-technologies-tutorial.md)
+
+You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial
+
+To list your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
+++
+
+
+### Public Preview - New APIs for fetching transitive role assignments and role permissions
+
+**Type:** New feature
+**Service category:** RBAC
+**Product capability:** Access Control
+
+
+1. **transitiveRoleAssignments** - Last year, we added the ability to assign Azure AD roles to groups. Previously, fetching all of a user's direct and transitive role assignments took four calls; this new API returns them in a single call. For more information, see:
+[List transitiveRoleAssignment - Microsoft Graph beta | Microsoft Docs](/graph/api/rbacapplication-list-transitiveroleassignments).
+
+2. **unifiedRbacResourceAction** - Developers can use this API to list all role permissions and their descriptions in Azure AD. Think of it as a dictionary that helps you build custom roles without relying on the portal UI. For more information, see:
+[List resourceActions - Microsoft Graph beta | Microsoft Docs](/graph/api/unifiedrbacresourcenamespace-list-resourceactions).
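+
As a sketch of calling the two beta APIs above with the Microsoft Graph PowerShell SDK; the `$count`/`ConsistencyLevel` requirements on the transitive query and the placeholder user ID are assumptions to verify against the beta reference.

```powershell
# Minimal sketch: call the two beta role-management APIs described above.
Connect-MgGraph -Scopes "RoleManagement.Read.Directory"

# 1. All direct and transitive role assignments for one user (placeholder ID).
#    Assumed to require $count=true and the eventual-consistency header.
$userId = "00000000-0000-0000-0000-000000000000"
$uri = "https://graph.microsoft.com/beta/roleManagement/directory/transitiveRoleAssignments" +
       "?`$count=true&`$filter=principalId eq '$userId'"
$transitive = Invoke-MgGraphRequest -Method GET -Uri $uri -Headers @{ ConsistencyLevel = "eventual" }

# 2. The "dictionary" of role permissions (resource actions) in Azure AD.
$actions = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/roleManagement/directory/resourceNamespaces/microsoft.directory/resourceActions"

$actions.value | Select-Object -First 5 | ForEach-Object { $_.name }
```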
+
+
++
+
+
## February 2022
Azure AD Identity Protection is extending its core capabilities of detecting, in
**Clouds impacted:** China;Public (Microsoft 365, GCC);US Gov (GCC-H, DoD)
-Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you'll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multifactor authentication (MFA), device compliance, and hybrid Azure AD joined devices. [Learn more](../external-identities/cross-tenant-access-overview.md)
+Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you'll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices. [Learn more](../external-identities/cross-tenant-access-overview.md)
Use multi-stage reviews to create Azure AD access reviews in sequential stages,
**Product capability:** 3rd Party Integration
-In February 2022 we added the following 20 new applications in our App gallery with Federation support
+In February 2022, we added the following 20 new applications in our App gallery with Federation support:
[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/cirros-sl/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md),[Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [Salus](https://salus.com/login), [XplicitTrust Network Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md).
For more information about how to better secure your organization by using autom
**Service category:** Enterprise Apps
**Product capability:** 3rd Party Integration
-In January 2022, we've added the following 47 new applications in our App gallery with Federation support
+In January 2022, we've added the following 47 new applications in our App gallery with Federation support:
[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://www.healthnote.com/), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), [DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
For more information about how to better secure your organization by using autom
**Service category:** Enterprise Apps
**Product capability:** 3rd Party Integration
-In November 2021, we have added following 32 new applications in our App gallery with Federation support
+In November 2021, we added the following 32 new applications in our App gallery with Federation support:
[Tide - Connector](https://gallery.ctinsuretech-tide.com/), [Virtual Risk Manager - USA](../saas-apps/virtual-risk-manager-usa-tutorial.md), [Xorlia Policy Management](https://app.xoralia.com/), [WorkPatterns](https://app.workpatterns.com/oauth2/login?data_source_type=office_365_account_calendar_workspace_sync&utm_source=azure_sso), [GHAE](../saas-apps/ghae-tutorial.md), [Nodetrax Project](../saas-apps/nodetrax-project-tutorial.md), [Touchstone Benchmarking](https://app.touchstonebenchmarking.com/), [SURFsecureID - Azure MFA](../saas-apps/surfsecureid-azure-mfa-tutorial.md), [AiDEA](https://truebluecorp.com/en/prodotti/aidea-en/),[R and D Tax Credit
Identity Governance Administrator can create and manage Azure AD access reviews
-## September 2021
-
-### Limits on the number of configured API permissions for an application registration will be enforced starting in October 2021
-
-**Type:** Plan for change
-**Service category:** Other
-**Product capability:** Developer Experience
-
-Occasionally, application developers configure their apps to require more permissions than it's possible to grant. To prevent this from happening, we're enforcing a limit on the total number of required permissions that can be configured for an app registration.
-
-The total number of required permissions for any single application registration must not exceed 400 permissions, across all APIs. The change to enforce this limit will begin rolling out no sooner than mid-October 2021. Applications exceeding the limit can't increase the number of permissions they're configured for. The existing limit on the number of distinct APIs for which permissions are required remains unchanged and can't exceed 50 APIs.
-
-In the Azure portal, the required permissions are listed under **Azure Active Directory** > **Application registrations** > (select an application) > **API permissions**. Using Microsoft Graph or Microsoft Graph PowerShell, the required permissions are listed in the `requiredResourceAccess` property of an application entity. [Learn more](../enterprise-users/directory-service-limits-restrictions.md).
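
As a rough check of where an app registration stands against these limits, a sketch like the following (the GUID is a placeholder app ID) counts the configured permissions across all APIs:

```bash
# A sketch, not an official tool: count the required permissions configured
# across all APIs for one app registration. The GUID is a placeholder appId.
# The total must stay at or below 400, across at most 50 distinct APIs.
az ad app show --id 00000000-0000-0000-0000-000000000000 \
  --query "requiredResourceAccess[].resourceAccess[] | length(@)"
```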
---
-### My Apps performance improvements
-
-**Type:** Fixed
-**Service category:** My Apps
-**Product capability:** End User Experiences
-
-The load time of My Apps has been improved. Users going to myapps.microsoft.com load My Apps directly, rather than being redirected through another service. [Learn more](../user-help/my-apps-portal-end-user-access.md).
---
-### Single Page Apps using the `spa` redirect URI type must use a CORS enabled browser for auth
-
-**Type:** Known issue
-**Service category:** Authentications (Logins)
-**Product capability:** Developer Experience
-
-The modern Edge browser is now included in the requirement to provide an `Origin` header when redeeming a [single page app authorization code](../develop/v2-oauth2-auth-code-flow.md#redirect-uri-setup-required-for-single-page-apps). A compatibility fix accidentally exempted the modern Edge browser from CORS controls, and that bug is being fixed during October. A subset of applications depended on CORS being disabled in the browser, which has the side effect of removing the `Origin` header from traffic. This is an unsupported configuration for using Azure AD, and these apps that depended on disabling CORS can no longer use modern Edge as a security workaround. All modern browsers must now include the `Origin` header per HTTP spec, to ensure CORS is enforced. [Learn more](../develop/reference-breaking-changes.md#the-device-code-flow-ux-will-now-include-an-app-confirmation-prompt).
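
For illustration only, the following sketch mimics the code-redemption request a single-page app's browser sends; every value is a placeholder, and the point is the `Origin` header that a CORS-enforcing browser attaches automatically:

```bash
# A sketch only: the shape of a SPA authorization-code redemption. In a real
# SPA the browser issues this request itself and adds the Origin header.
curl -X POST "https://login.microsoftonline.com/common/oauth2/v2.0/token" \
  -H "Origin: https://myspa.example.com" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=00000000-0000-0000-0000-000000000000" \
  -d "grant_type=authorization_code" \
  -d "code=AUTH_CODE_FROM_REDIRECT" \
  -d "redirect_uri=https://myspa.example.com" \
  -d "code_verifier=PKCE_CODE_VERIFIER"
```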
---
-### General availability - On the My Apps portal, users can choose to view their apps in a list
-
-**Type:** New feature
-**Service category:** My Apps
-**Product capability:** End User Experiences
-
-By default, My Apps displays apps in a grid view. Users can now toggle their My Apps view to display apps in a list. [Learn more](../user-help/my-apps-portal-end-user-access.md).
-
--
-### General availability - New and enhanced device-related audit logs
-
-**Type:** New feature
-**Service category:** Audit
-**Product capability:** Device Lifecycle Management
-
-Admins can now see various new and improved device-related audit logs. The new audit logs include create and delete passwordless credentials (phone sign-in, FIDO2 key, and Windows Hello for Business), register/unregister device, and pre-create/delete device events. Additionally, there have been minor improvements to existing device-related audit logs that now include more device details. [Learn more](../reports-monitoring/concept-audit-logs.md).
---
-### General availability - Azure AD users can now view and report suspicious sign-ins and manage their accounts within Microsoft Authenticator
-
-**Type:** New feature
-**Service category:** Microsoft Authenticator App
-**Product capability:** Identity Security & Protection
-
-This feature allows Azure AD users to manage their work or school accounts within the Microsoft Authenticator app. The management features allow users to view their sign-in history and activity, and to report any suspicious or unfamiliar activity. Users can also change their Azure AD account passwords and update the account's security information. [Learn more](../user-help/my-account-portal-sign-ins-page.md).
-
--
-### General availability - New MS Graph APIs for role management
-
-**Type:** New feature
-**Service category:** RBAC
-**Product capability:** Access Control
-
-New role management APIs on the MS Graph v1.0 endpoint are generally available. Instead of the old [directory roles](/graph/api/resources/directoryrole?view=graph-rest-1.0&preserve-view=true), use [unifiedRoleDefinition](/graph/api/resources/unifiedroledefinition?view=graph-rest-1.0&preserve-view=true) and [unifiedRoleAssignment](/graph/api/resources/unifiedroleassignment?view=graph-rest-1.0&preserve-view=true).
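
As a minimal sketch of the new surface (all GUIDs are placeholders; a `directoryScopeId` of `/` scopes the assignment to the whole tenant):

```bash
# A sketch only: create a unifiedRoleAssignment on the GA v1.0 endpoint.
# roleDefinitionId and principalId below are placeholder GUIDs.
az rest --method post \
  --headers "Content-Type=application/json" \
  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" \
  --body '{
    "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
    "roleDefinitionId": "00000000-0000-0000-0000-000000000000",
    "principalId": "11111111-1111-1111-1111-111111111111",
    "directoryScopeId": "/"
  }'
```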
-
--
-### General availability - Access Packages can expire after number of hours
-
-**Type:** New feature
-**Service category:** User Access Management
-**Product capability:** Entitlement Management
-
-It's now possible in entitlement management to configure an access package that expires in a matter of hours, in addition to the previously supported days or specific dates. [Learn more](../governance/entitlement-management-access-package-create.md#lifecycle).
-
--
-### New provisioning connectors in the Azure AD Application Gallery - September 2021
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-
-- [BLDNG APP](../saas-apps/bldng-app-provisioning-tutorial.md)
-- [Cato Networks](../saas-apps/cato-networks-provisioning-tutorial.md)
-- [Rouse Sales](../saas-apps/rouse-sales-provisioning-tutorial.md)
-- [SchoolStream ASA](../saas-apps/schoolstream-asa-provisioning-tutorial.md)
-- [Taskize Connect](../saas-apps/taskize-connect-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../manage-apps/user-provisioning.md).
-
--
-### New Federated Apps available in Azure AD Application gallery - September 2021
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In September 2021, we added the following 44 new applications in our App gallery with Federation support:
-
-[Studybugs](https://studybugs.com/signin), [Yello](https://yello.co/yello-for-microsoft-teams/), [LawVu](../saas-apps/lawvu-tutorial.md), [Formate eVo Mail](https://www.document-genetics.co.uk/formate-evo-erp-output-management), [Revenue Grid](https://app.revenuegrid.com/login), [Orbit for Office 365](https://azuremarketplace.microsoft.com/marketplace/apps/aad.orbitforoffice365?tab=overview), [Upmarket](https://app.upmarket.ai/), [Alinto Protect](https://protect.alinto.net/), [Cloud Concinnity](https://cloudconcinnity.com/), [Matlantis](https://matlantis.com/), [ModelGen for Visio (MG4V)](https://crecy.com.au/model-gen/), [NetRef: Classroom Management](https://oauth.net-ref.com/microsoft/sso), [VergeSense](../saas-apps/vergesense-tutorial.md), [iAuditor](../saas-apps/iauditor-tutorial.md), [Secutraq](https://secutraq.net/login), [Active and Thriving](../saas-apps/active-and-thriving-tutorial.md), [Inova](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=1bacdba3-7a3b-410b-8753-5cc0b8125f81&response_type=code&redirect_uri=https:%2f%2fbroker.partneringplace.com%2fpartner-companion%2f&code_challenge_method=S256&code_challenge=YZabcdefghijklmanopqrstuvwxyz0123456789._-~&scope=1bacdba3-7a3b-410b-8753-5cc0b8125f81/.default), [TerraTrue](../saas-apps/terratrue-tutorial.md), [Facebook Work Accounts](../saas-apps/facebook-work-accounts-tutorial.md), [Beyond Identity Admin Console](../saas-apps/beyond-identity-admin-console-tutorial.md), [Visult](https://app.visult.io/), [ENGAGE TAG](https://app.engagetag.com/), [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-tutorial.md), [CrowdStrike Falcon Platform](../saas-apps/crowdstrike-falcon-platform-tutorial.md), [MY Emergency Control](https://my-emergency.co.uk/app/auth/login), [AlexisHR](../saas-apps/alexishr-tutorial.md), [Teachme Biz](../saas-apps/teachme-biz-tutorial.md), [Zero Networks](../saas-apps/zero-networks-tutorial.md), [Mavim iMprove](https://improve.mavimcloud.com/), [Azumuta](https://app.azumuta.com/login?microsoft=true), [Frankli](https://beta.frankli.io/login), [Amazon Managed Grafana](../saas-apps/amazon-managed-grafana-tutorial.md), [Productive](../saas-apps/productive-tutorial.md), [Create!Webフロー](../saas-apps/createweb-tutorial.md), [Evercate](https://evercate.com/us/sign-up/), [Ezra Coaching](../saas-apps/ezra-coaching-tutorial.md), [Baldwin Safety and Compliance](../saas-apps/baldwin-safety-&-compliance-tutorial.md), [Nulab Pass (Backlog,Cacoo,Typetalk)](../saas-apps/nulab-pass-tutorial.md), [Metatask](../saas-apps/metatask-tutorial.md), [Contrast Security](../saas-apps/contrast-security-tutorial.md), [Animaker](../saas-apps/animaker-tutorial.md), [Traction Guest](../saas-apps/traction-guest-tutorial.md), [True Office Learning - LIO](../saas-apps/true-office-learning-lio-tutorial.md), [Qiita Team](../saas-apps/qiita-team-tutorial.md)
-
-You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
-
-For listing your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest
---
-### Gmail users signing in on Microsoft Teams mobile and desktop clients will sign in with device login flow starting September 30, 2021
-
-**Type:** Changed feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-Starting on September 30, 2021, Azure AD B2B guests and Azure AD B2C customers signing in with their self-service signed-up or redeemed Gmail accounts will have an extra login step. Users will now be prompted to enter a code in a separate browser window to finish signing in on Microsoft Teams mobile and desktop clients. If you haven't already done so, make sure to modify your apps to use the system browser for sign-in. See [Embedded vs System Web UI in the MSAL.NET](../develop/msal-net-web-browsers.md#embedded-vs-system-web-ui) documentation for more information. All MSAL SDKs use the system web-view by default.
-
-As the device login flow starts rolling out on September 30, 2021, it may not be available in your region immediately. If it's not available yet, your end users will see the error screen shown in the doc until it's deployed to your region. For more details on the device login flow and on requesting an extension to Google, see [Add Google as an identity provider for B2B guest users](../external-identities/google-federation.md#deprecation-of-web-view-sign-in-support).
-
--
-### Improved Conditional Access Messaging for Non-compliant Device
-
-**Type:** Changed feature
-**Service category:** Conditional Access
-**Product capability:** End User Experiences
-
-The text and design on the Conditional Access blocking screen shown to users when their device is marked as non-compliant have been updated. Users will be blocked until they take the necessary actions to meet their company's device compliance policies. Additionally, we have streamlined the flow for a user to open their device management portal. These improvements apply to all Conditional Access-supported OS platforms. [Learn more](https://support.microsoft.com/account-billing/troubleshooting-the-you-can-t-get-there-from-here-error-message-479a9c42-d9d1-4e44-9e90-24bbad96c251)
--
active-directory Atlassian Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md
Once you've configured provisioning, use the following resources to monitor your
3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).

## Connector Limitations
-
-* Atlassian Cloud allows provisioning of users only from [verified domains](https://confluence.atlassian.com/cloud/organization-administration-938859734.html).
+* Atlassian Cloud only supports provisioning updates for users with verified domains. Changes made to users from a non-verified domain will not be pushed to Atlassian Cloud. Learn more about Atlassian verified domains [here](https://support.atlassian.com/provisioning-users/docs/understand-user-provisioning/).
* Atlassian Cloud does not support group renames today. This means that any changes to the displayName of a group in Azure AD will not be updated and reflected in Atlassian Cloud.
* The value of the **mail** user attribute in Azure AD is only populated if the user has a Microsoft Exchange Mailbox. If the user does not have one, it is recommended to map a different desired attribute to the **emails** attribute in Atlassian Cloud.
Once you've configured provisioning, use the following resources to monitor your
<!--Image references-->
[1]: ./media/atlassian-cloud-provisioning-tutorial/tutorial-general-01.png
[2]: ./media/atlassian-cloud-provisioning-tutorial/tutorial-general-02.png
-[3]: ./media/atlassian-cloud-provisioning-tutorial/tutorial-general-03.png
+[3]: ./media/atlassian-cloud-provisioning-tutorial/tutorial-general-03.png
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
description: Learn how to create a cluster that distributes nodes across availab
Previously updated : 12/10/2021 Last updated : 03/31/2022
AKS clusters can currently be created using availability zones in the following
* North Europe
* Norway East
* Southeast Asia
+* South Africa North
* South Central US
* Sweden Central
* UK South
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
Title: Create a static volume for pods in Azure Kubernetes Service (AKS)
description: Learn how to manually create a volume with Azure disks for use with a pod in Azure Kubernetes Service (AKS) Previously updated : 03/29/2019 Last updated : 04/01/2019 #Customer intent: As a developer, I want to learn how to manually create and attach storage to a specific pod in AKS.
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
+  storageClassName: managed-csi
  csi:
    driver: disk.csi.azure.com
    readOnly: false
spec:
  resources:
    requests:
      storage: 100Gi
  volumeName: pv-azuredisk
-  storageClassName: ""
+  storageClassName: managed-csi
``` Use the `kubectl` commands to create the *PersistentVolume* and *PersistentVolumeClaim*.
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) drivers for Azure Files on Azure Ku
description: Learn how to use the Container Storage Interface (CSI) drivers for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/24/2021 Last updated : 04/01/2021
A storage class is used to define how an Azure Files share is created. A storage
* **Standard_GRS**: Standard geo-redundant storage
* **Standard_ZRS**: Standard zone-redundant storage
* **Standard_RAGRS**: Standard read-access geo-redundant storage
+* **Standard_RAGZRS**: Standard read-access geo-zone-redundant storage
* **Premium_LRS**: Premium locally redundant storage
* **Premium_ZRS**: Premium zone-redundant storage
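
As a hedged illustration, a custom storage class that selects one of these SKUs might look like the following sketch (the class name `azurefile-premium-zrs` is arbitrary):

```bash
# A sketch only: define a storage class for premium zone-redundant file
# shares. skuName is one of the SKUs listed above; the name is arbitrary.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-premium-zrs
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_ZRS
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
```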
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-volume.md
description: Learn how to manually create a volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 03/9/2022 Last updated : 04/1/2022 #Customer intent: As a developer, I want to learn how to manually create and attach storage using Azure Files to a pod in AKS.
spec:
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
+  storageClassName: azurefile-csi
  csi:
    driver: file.csi.azure.com
    readOnly: false
metadata:
spec:
  accessModes:
    - ReadWriteMany
-  storageClassName: ""
+  storageClassName: azurefile-csi
  volumeName: azurefile
  resources:
    requests:
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Title: Rotate certificates in Azure Kubernetes Service (AKS)
-description: Learn how to rotate your certificates in an Azure Kubernetes Service (AKS) cluster.
+ Title: Certificate Rotation in Azure Kubernetes Service (AKS)
+description: Learn about certificate rotation in an Azure Kubernetes Service (AKS) cluster.
Previously updated : 3/4/2022 Last updated : 3/29/2022
-# Rotate certificates in Azure Kubernetes Service (AKS)
+# Certificate rotation in Azure Kubernetes Service (AKS)
-Azure Kubernetes Service (AKS) uses certificates for authentication with many of its components. Periodically, you may need to rotate those certificates for security or policy reasons. For example, you may have a policy to rotate all your certificates every 90 days.
+Azure Kubernetes Service (AKS) uses certificates for authentication with many of its components. If you have an RBAC-enabled cluster built after March 2022, certificate auto-rotation is enabled. Periodically, you may need to rotate those certificates for security or policy reasons. For example, you may have a policy to rotate all your certificates every 90 days.
-This article shows you how to rotate the certificates in your AKS cluster.
+> [!NOTE]
+> Certificate auto-rotation will not be enabled by default for non-RBAC enabled AKS clusters.
+
+This article shows you how certificate rotation works in your AKS cluster.
## Before you begin
AKS generates and uses the following certificates, Certificate Authorities, and
* The `kubectl` client has a certificate for communicating with the AKS cluster.

> [!NOTE]
-> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019 or any cluster that has its certificates rotated have Cluster CA certificates that expire after 30 years. All other AKS certificates, which use the Cluster CA to for signing, will expire after two years and are automatically rotated during AKS version upgrade happened after 8/1/2021. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
+> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019, or any cluster that has its certificates rotated, has Cluster CA certificates that expire after 30 years. All other AKS certificates, which use the Cluster CA for signing, expire after two years and are automatically rotated during any AKS version upgrade performed after 8/1/2021. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
>
> Additionally, you can check the expiration date of your cluster's certificate. For example, the following bash command displays the client certificate details for the *myAKSCluster* cluster in resource group *rg*:
> ```console
curl https://{apiserver-fqdn} -k -v 2>&1 |grep expire
az vm run-command invoke -g MC_rg_myAKSCluster_region -n vm-name --command-id RunShellScript --query 'value[0].message' -otsv --scripts "openssl x509 -in /etc/kubernetes/certs/apiserver.crt -noout -enddate"
```
-* Check expiration date of certificate on one VMSS agent node
+* Check expiration date of certificate on one virtual machine scale set agent node
```azurecli az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-id 0 --command-id RunShellScript --query 'value[0].message' -otsv --scripts "openssl x509 -in /etc/kubernetes/certs/apiserver.crt -noout -enddate" ``` ## Certificate Auto Rotation
-Azure Kubernetes Service will automatically rotate non-ca certificates on both the control plane and agent nodes before they expire with no downtime for the cluster.
 For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/), which has been enabled by default in all Azure regions.
+> [!Note]
+> If you have an existing cluster you have to upgrade that cluster to enable Certificate Auto-Rotation.
+
+For any AKS cluster created or upgraded after March 2022, Azure Kubernetes Service will automatically rotate non-CA certificates on both the control plane and agent nodes within 80% of the client certificate's valid time, before they expire, with no downtime for the cluster.
+
+#### How to check whether TLS Bootstrapping is enabled on the current agent node pool
+
+To verify whether TLS Bootstrapping is enabled on your cluster, browse to the following paths. On a Linux node: */var/lib/kubelet/bootstrap-kubeconfig*; on a Windows node: *c:\k\bootstrap-config*.
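
One hedged way to check for the bootstrap kubeconfig on a Linux node without SSH is to reuse the run-command pattern shown earlier (the resource group and scale set names are examples):

```bash
# A sketch only: confirm the bootstrap kubeconfig exists on a Linux node.
# Resource group and scale set names follow the placeholder examples above.
az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-id 0 \
  --command-id RunShellScript --query 'value[0].message' -otsv \
  --scripts "ls -l /var/lib/kubelet/bootstrap-kubeconfig"
```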
To verify if TLS Bootstrapping is enabled on your cluster browse to the followin
Auto certificate rotation won't be enabled on non-RBAC clusters.
-
-## Rotate your cluster certificates
+## Manually rotate your cluster certificates
> [!WARNING]
> Rotating your certificates using `az aks rotate-certs` will recreate all of your nodes and their OS Disks and can cause up to 30 minutes of downtime for your AKS cluster.
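
The command itself is a one-liner (resource names are placeholders):

```bash
# Manually rotate all cluster certificates. Expect node recreation and up to
# 30 minutes of downtime, as warned above.
az aks rotate-certs --resource-group myResourceGroup --name myAKSCluster
```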
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Title: Concepts - Storage in Azure Kubernetes Services (AKS)
description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims Previously updated : 03/11/2021 Last updated : 03/30/2022
This article introduces the core concepts that provide storage to your applicati
Kubernetes typically treats individual pods as ephemeral, disposable resources. Applications have different approaches available to them for using and persisting data. A *volume* represents a way to store, retrieve, and persist data across pods and through the application lifecycle.
-Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use Azure Disks or Azure Files.
+Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use: [Azure Disks][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview].
### Azure Disks
-Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disks can use:
-* Azure Premium storage, backed by high-performance SSDs, or
-* Azure Standard storage, backed by regular HDDs.
+Use *Azure Disks* to create a Kubernetes *DataDisk* resource. Disks types include:
+* Ultra Disks
+* Premium SSDs
+* Standard SSDs
+* Standard HDDs
> [!TIP]
->For most production and development workloads, use Premium storage.
+>For most production and development workloads, use Premium SSD.
Since Azure Disks are mounted as *ReadWriteOnce*, they're only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.

### Azure Files
-Use *Azure Files* to mount an SMB 3.0 share backed by an Azure Storage account to pods. Files let you share data across multiple nodes and pods and can use:
-* Azure Premium storage, backed by high-performance SSDs, or
-* Azure Standard storage backed by regular HDDs.
+Use *Azure Files* to mount an SMB 3.1.1 share or NFS 4.1 share backed by an Azure storage account to pods. Files let you share data across multiple nodes and pods and can use:
+* Azure Premium storage backed by high-performance SSDs
+* Azure Standard storage backed by regular HDDs
+
+### Azure NetApp Files
+*Azure NetApp Files* volumes are offered in the following service levels:
+* Ultra Storage
+* Premium Storage
+* Standard Storage
+
+### Azure Blob Storage
+*Azure Blob Storage* volumes support the following storage type:
+* Block Blobs
### Volume types

Kubernetes volumes represent more than just a traditional disk for storing and retrieving information. Kubernetes volumes can also be used as a way to inject data into a pod for use by the containers.
To define different tiers of storage, such as Premium and Standard, you can crea
The StorageClass also defines the *reclaimPolicy*. When you delete the pod and the persistent volume is no longer required, the reclaimPolicy controls the behavior of the underlying Azure storage resource. The underlying storage resource can either be deleted or kept for use with a future pod.
-In AKS, four initial `StorageClasses` are created for cluster using the in-tree storage plugins:
-
-| Permission | Reason |
-|||
-| `default` | Uses Azure StandardSSD storage to create a Managed Disk. The reclaim policy ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. |
-| `managed-premium` | Uses Azure Premium storage to create a Managed Disk. The reclaim policy again ensures that the underlying Azure Disk is deleted when the persistent volume that used it is deleted. |
-| `azurefile` | Uses Azure Standard storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted. |
-| `azurefile-premium` | Uses Azure Premium storage to create an Azure File Share. The reclaim policy ensures that the underlying Azure File Share is deleted when the persistent volume that used it is deleted.|
For clusters using the [Container Storage Interface (CSI) drivers][csi-storage-drivers], the following extra `StorageClasses` are created:

| Permission | Reason |
Unless you specify a StorageClass for a persistent volume, the default StorageCl
You can create a StorageClass for additional needs using `kubectl`. The following example uses Premium Managed Disks and specifies that the underlying Azure Disk should be *retained* when you delete the pod:

```yaml
-kind: StorageClass
apiVersion: storage.k8s.io/v1
+kind: StorageClass
metadata:
  name: managed-premium-retain
-provisioner: kubernetes.io/azure-disk
-reclaimPolicy: Retain
+provisioner: disk.csi.azure.com
parameters:
- storageaccounttype: Premium_LRS
- kind: Managed
+ skuName: Premium_LRS
+reclaimPolicy: Retain
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
```

> [!NOTE]
parameters:
## Persistent volume claims
-A PersistentVolumeClaim requests either Disk or File storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying Azure storage resource if no existing resource can fulfill the claim based on the defined StorageClass.
+A PersistentVolumeClaim requests storage of a particular StorageClass, access mode, and size. The Kubernetes API server can dynamically provision the underlying Azure storage resource if no existing resource can fulfill the claim based on the defined StorageClass.
The pod definition includes the volume mount once the volume has been connected to the pod.
metadata:
spec:
  accessModes:
    - ReadWriteOnce
-  storageClassName: managed-premium
+  storageClassName: managed-premium-retain
  resources:
    requests:
      storage: 5Gi
For mounting a volume in a Windows container, specify the drive letter and path.
For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-To see how to create dynamic and static volumes that use Azure Disks or Azure Files, see the following how-to articles:
+To see how to use CSI drivers, see the following how-to articles:
-- [Create a static volume using Azure Disks][aks-static-disks]
-- [Create a static volume using Azure Files][aks-static-files]
-- [Create a dynamic volume using Azure Disks][aks-dynamic-disks]
-- [Create a dynamic volume using Azure Files][aks-dynamic-files]
+- [Enable Container Storage Interface (CSI) drivers for Azure disks and Azure Files on Azure Kubernetes Service (AKS)][csi-storage-drivers]
+- [Use Azure disk Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)][azure-disk-csi]
+- [Use Azure Files Container Storage Interface (CSI) drivers in Azure Kubernetes Service (AKS)][azure-files-csi]
+- [Integrate Azure NetApp Files with Azure Kubernetes Service][azure-netapp-files]
For more information on core Kubernetes and AKS concepts, see the following articles:
For more information on core Kubernetes and AKS concepts, see the following arti
<!-- EXTERNAL LINKS -->

<!-- INTERNAL LINKS -->
-[aks-static-disks]: azure-disk-volume.md
-[aks-static-files]: azure-files-volume.md
-[aks-dynamic-disks]: azure-disks-dynamic-pv.md
-[aks-dynamic-files]: azure-files-dynamic-pv.md
+[disks-types]: ../virtual-machines/disks-types.md
+[storage-files-planning]: ../storage/files/storage-files-planning.md
+[azure-netapp-files-service-levels]: ../azure-netapp-files/azure-netapp-files-service-levels.md
+[storage-account-overview]: ../storage/common/storage-account-overview.md
+[csi-storage-drivers]: csi-storage-drivers.md
+[azure-disk-csi]: azure-disk-csi.md
+[azure-netapp-files]: azure-netapp-files.md
+[azure-files-csi]: azure-files-csi.md
[aks-concepts-clusters-workloads]: concepts-clusters-workloads.md [aks-concepts-identity]: concepts-identity.md [aks-concepts-scale]: concepts-scale.md
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
spec:
  - host: hello-world-ingress.MY_CUSTOM_DOMAIN
    http:
      paths:
-      - path:
+      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
-      - path: /static(/|$)(.*)
``` Create the ingress resource using the `kubectl apply` command.
aks Kubernetes Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-walkthrough.md
To learn more about AKS, and walk through a complete code to deployment example,
> [!div class="nextstepaction"]
> [AKS tutorial][aks-tutorial]
+This quickstart is for introductory purposes. For guidance on creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance].
+ <!-- LINKS - external --> [azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git [kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
To learn more about AKS, and walk through a complete code to deployment example,
[kubernetes-deployment]: concepts-clusters-workloads.md#deployments-and-yaml-manifests [kubernetes-service]: concepts-network.md#services [windows-container-cli]: windows-container-cli.md
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-aad.md
When deploying an AKS Cluster, local accounts are enabled by default. Even when
> On clusters with Azure AD integration enabled, users belonging to a group specified by `aad-admin-group-object-ids` will still be able to gain access via non-admin credentials. On clusters without Azure AD integration enabled and `properties.disableLocalAccounts` set to true, obtaining both user and admin credentials will fail.

> [!NOTE]
-> After disabling local accounts users on an already existing AKS cluster where users might have used local account/s, admin must [rotate the cluster certificates](certificate-rotation.md#rotate-your-cluster-certificates), in order to revoke the certificates those users might have access to. If this is a new cluster than no action is required.
+> After disabling local accounts on an existing AKS cluster where users might have used local accounts, the admin must [rotate the cluster certificates](certificate-rotation.md) to revoke any certificates those users might have access to. If this is a new cluster, no action is required.
### Create a new cluster without local accounts
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Microsoft manages and monitors the following components through the control pane
* Kubelet or Kubernetes API servers
* Etcd or a compatible key-value store, providing Quality of Service (QoS), scalability, and runtime
* DNS services (for example, kube-dns or CoreDNS)
-* Kubernetes proxy or networking
+* Kubernetes proxy or networking (except when [BYOCNI](use-byo-cni.md) is used)
* Any additional addon or system component running in the kube-system namespace

AKS isn't a Platform-as-a-Service (PaaS) solution. Some components, such as agent nodes, have *shared responsibility*, where users must help maintain the AKS cluster. User input is required, for example, to apply an agent node operating system (OS) security patch.
Microsoft provides technical support for the following examples:
* Connectivity to all Kubernetes components that the Kubernetes service provides and supports, such as the API server.
* Management, uptime, QoS, and operations of Kubernetes control plane services (Kubernetes control plane, API server, etcd, and coreDNS, for example).
* Etcd data store. Support includes automated, transparent backups of all etcd data every 30 minutes for disaster planning and cluster state restoration. These backups aren't directly available to you or any users. They ensure data reliability and consistency. On-demand rollback or restore is not supported as a feature.
-* Any integration points in the Azure cloud provider driver for Kubernetes. These include integrations into other Azure services such as load balancers, persistent volumes, or networking (Kubernetes and Azure CNI).
+* Any integration points in the Azure cloud provider driver for Kubernetes. These include integrations into other Azure services such as load balancers, persistent volumes, or networking (Kubernetes and Azure CNI, except when [BYOCNI](use-byo-cni.md) is in use).
* Questions or issues about customization of control plane components such as the Kubernetes API server, etcd, and coreDNS.
-* Issues about networking, such as Azure CNI, kubenet, or other network access and functionality issues. Issues could include DNS resolution, packet loss, routing, and so on. Microsoft supports various networking scenarios:
+* Issues about networking, such as Azure CNI, kubenet, or other network access and functionality issues, except when [BYOCNI](use-byo-cni.md) is in use. Issues could include DNS resolution, packet loss, routing, and so on. Microsoft supports various networking scenarios:
  * Kubenet and Azure CNI using managed VNETs or with custom (bring your own) subnets.
  * Connectivity to other Azure services and applications
  * Ingress controllers and ingress or load balancer configurations
Microsoft doesn't provide technical support for the following examples:
> Microsoft can provide best-effort support for third-party open-source projects such as Helm. Where the third-party open-source tool integrates with the Kubernetes Azure cloud provider or other AKS-specific bugs, Microsoft supports examples and applications from Microsoft documentation.

* Third-party closed-source software. This software can include security scanning tools and networking devices or software.
* Network customizations other than the ones listed in the [AKS documentation](./index.yml).
+* Custom or 3rd-party CNI plugins used in [BYOCNI](use-byo-cni.md) mode.
## AKS support coverage for agent nodes
aks Use Byo Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-byo-cni.md
+
+ Title: Bring your own Container Network Interface (CNI) plugin (preview)
+description: Learn how to utilize Azure Kubernetes Service with your own Container Network Interface (CNI) plugin
++ Last updated : 3/30/2022++
+# Bring your own Container Network Interface (CNI) plugin with Azure Kubernetes Service (AKS) (PREVIEW)
+
+Kubernetes does not provide a network interface system by default; this functionality is provided by [network plugins][kubernetes-cni]. Azure Kubernetes Service provides several supported CNI plugins. Documentation for supported plugins can be found from the [networking concepts page][aks-network-concepts].
+
+While the supported plugins meet most networking needs in Kubernetes, advanced users of AKS may desire to utilize the same CNI plugin used in on-premises Kubernetes environments or to make use of specific advanced functionality available in other CNI plugins.
+
+This article shows how to deploy an AKS cluster with no CNI plugin pre-installed, which allows for installation of any third-party CNI plugin that works in Azure.
++
+## Support
+
+BYOCNI has support implications: Microsoft support will not be able to assist with CNI-related issues in clusters deployed with BYOCNI. For example, CNI-related issues would cover most east/west (pod to pod) traffic, along with `kubectl proxy` and similar commands. If CNI-related support is desired, a supported AKS network plugin can be used, or support could be procured for the BYOCNI plugin from a third-party vendor.
+
+Support will still be provided for non-CNI-related issues.
+
+## Prerequisites
+
+* For ARM/Bicep, use at least template version 2022-01-02-preview
+* For Azure CLI, use at least version 0.5.55 of the `aks-preview` extension
+* The virtual network for the AKS cluster must allow outbound internet connectivity.
+* AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range, or cluster virtual network address range.
+* The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) permissions on the subnet within your virtual network. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
+ * `Microsoft.Network/virtualNetworks/subnets/join/action`
+ * `Microsoft.Network/virtualNetworks/subnets/read`
+* The subnet assigned to the AKS node pool cannot be a [delegated subnet](../virtual-network/subnet-delegation-overview.md).
+* AKS doesn't apply Network Security Groups (NSGs) to its subnet and will not modify any of the NSGs associated with that subnet. If you provide your own subnet and add NSGs associated with that subnet, you must ensure the security rules in the NSGs allow traffic within the node CIDR range. For more details, see [Network security groups][aks-network-nsg].
+
+## Cluster creation steps
+
+### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Deploy a cluster
+
+# [Azure CLI](#tab/azure-cli)
+
+Deploying a BYOCNI cluster requires passing the `--network-plugin` parameter with the parameter value of `none`.
+
+1. First, create a resource group to create the cluster in:
+ ```azurecli-interactive
+ az group create -l <Region> -n <ResourceGroupName>
+ ```
+
+1. Then create the cluster itself:
+ ```azurecli-interactive
+ az aks create -l <Region> -g <ResourceGroupName> -n <ClusterName> --network-plugin none
+ ```
+
+# [Azure Resource Manager](#tab/azure-resource-manager)
+
+When using an Azure Resource Manager template to deploy, pass `none` to the `networkPlugin` parameter to the `networkProfile` object. See the [Azure Resource Manager template documentation][deploy-arm-template] for help with deploying this template, if needed.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "type": "string",
+ "defaultValue": "aksbyocni"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "kubernetesVersion": {
+ "type": "string",
+ "defaultValue": "1.22"
+ },
+ "nodeCount": {
+ "type": "int",
+ "defaultValue": 3
+ },
+ "nodeSize": {
+ "type": "string",
+ "defaultValue": "Standard_B2ms"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.ContainerService/managedClusters",
+ "apiVersion": "2022-02-02-preview",
+ "name": "[parameters('clusterName')]",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "agentPoolProfiles": [
+ {
+ "name": "nodepool1",
+ "count": "[parameters('nodeCount')]",
+ "mode": "System",
+ "vmSize": "[parameters('nodeSize')]"
+ }
+ ],
+ "dnsPrefix": "[parameters('clusterName')]",
+ "kubernetesVersion": "[parameters('kubernetesVersion')]",
+ "networkProfile": {
+ "networkPlugin": "none"
+ }
+ }
+ }
+ ]
+}
+```
+
+# [Bicep](#tab/bicep)
+
+When using a Bicep template to deploy, pass `none` to the `networkPlugin` parameter to the `networkProfile` object. See the [Bicep template documentation][deploy-bicep-template] for help with deploying this template, if needed.
+
+```bicep
+param clusterName string = 'aksbyocni'
+param location string = resourceGroup().location
+param kubernetesVersion string = '1.22'
+param nodeCount int = 3
+param nodeSize string = 'Standard_B2ms'
+
+resource aksCluster 'Microsoft.ContainerService/managedClusters@2022-02-02-preview' = {
+ name: clusterName
+ location: location
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ agentPoolProfiles: [
+ {
+ name: 'nodepool1'
+ count: nodeCount
+ mode: 'System'
+ vmSize: nodeSize
+ }
+ ]
+ dnsPrefix: clusterName
+ kubernetesVersion: kubernetesVersion
+ networkProfile: {
+ networkPlugin: 'none'
+ }
+ }
+}
+```
+
+### Deploy a CNI plugin
+
+When AKS provisioning completes, the cluster will be online, but all of the nodes will be in a `NotReady` state:
+
+```bash
+$ kubectl get nodes
+NAME STATUS ROLES AGE VERSION
+aks-nodepool1-23902496-vmss000000 NotReady agent 6m9s v1.21.9
+
+$ kubectl get node -o custom-columns='NAME:.metadata.name,STATUS:.status.conditions[?(@.type=="Ready")].message'
+NAME STATUS
+aks-nodepool1-23902496-vmss000000 container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
+```
+
+At this point, the cluster is ready for installation of a CNI plugin.
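
As one hedged example (Cilium is just one possible choice, and chart values vary by plugin and version), a third-party CNI can be installed from its public Helm chart:

```bash
# A sketch only: install a third-party CNI (Cilium here) from its Helm
# repository. Production values differ per plugin and version; consult the
# plugin vendor's documentation for real deployments.
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --namespace kube-system
```

Once the plugin's pods are running, the nodes should transition to `Ready`.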
++
+## Next steps
+
+Learn more about networking in AKS in the following articles:
+
+* [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)
+* [Use an internal load balancer with Azure Container Service (AKS)](internal-lb.md)
+
+* [Create a basic ingress controller with external network connectivity][aks-ingress-basic]
+* [Enable the HTTP application routing add-on][aks-http-app-routing]
+* [Create an ingress controller that uses an internal, private network and IP address][aks-ingress-internal]
+* [Create an ingress controller with a dynamic public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-tls]
+* [Create an ingress controller with a static public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-static-tls]
+
+<!-- LINKS - External -->
+[kubernetes-cni]: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
+[cni-networking]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
+[kubenet]: https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#kubenet
+
+<!-- LINKS - Internal -->
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[aks-ssh]: ssh.md
+[ManagedClusterAgentPoolProfile]: /azure/templates/microsoft.containerservice/managedclusters#managedclusteragentpoolprofile-object
+[aks-network-concepts]: concepts-network.md
+[aks-network-nsg]: concepts-network.md#network-security-groups
+[aks-ingress-basic]: ingress-basic.md
+[aks-ingress-tls]: ingress-tls.md
+[aks-ingress-static-tls]: ingress-static-ip.md
+[aks-http-app-routing]: http-application-routing.md
+[aks-ingress-internal]: ingress-internal-ip.md
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-extension-update]: /cli/azure/extension#az_extension_update
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-feature-list]: /cli/azure/feature#az_feature_list
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[network-policy]: use-network-policies.md
+[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
+[network-comparisons]: concepts-network.md#compare-network-models
+[system-node-pools]: use-system-pools.md
+[prerequisites]: configure-azure-cni.md#prerequisites
api-management Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/soft-delete.md
You can verify that a soft-deleted API Management instance is available to resto
Use the API Management [Get By Name](/rest/api/apimanagement/current-ga/deleted-services/get-by-name) operation, substituting `{subscriptionId}`, `{location}`, and `{serviceName}` with your Azure subscription, resource location, and API Management instance name:

```rest
-GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/locations/{location}/deletedservices/{serviceName}?api-version=2021-08-01-preview
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/locations/{location}/deletedservices/{serviceName}?api-version=2021-08-01
```

If available for undelete, Azure will return a record of the APIM instance showing its `deletionDate` and `scheduledPurgeDate`, for example:
If available for undelete, Azure will return a record of the APIM instance showi
Use the API Management [List By Subscription](/rest/api/apimanagement/current-ga/deleted-services/list-by-subscription) operation, substituting `{subscriptionId}` with your subscription ID:

```rest
-GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/deletedservices?api-version=2021-08-01-preview
+GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/deletedservices?api-version=2021-08-01
```

This will return a list of all soft-deleted services available for undelete under the given subscription, showing the `deletionDate` and `scheduledPurgeDate` for each.
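
The same list can be fetched with the Azure CLI, assuming you're signed in and `$SUBSCRIPTION_ID` holds your subscription GUID:

```bash
# A sketch only: list soft-deleted API Management services via az rest.
# SUBSCRIPTION_ID is a placeholder environment variable.
az rest --method get \
  --url "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/Microsoft.ApiManagement/deletedservices?api-version=2021-08-01"
```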
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{reso
Use the API Management [Purge](/rest/api/apimanagement/current-ga/deleted-services/purge) operation, substituting `{subscriptionId}`, `{location}`, and `{serviceName}` with your Azure subscription, resource location, and API Management name:

```rest
-DELETE https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/locations/{location}/deletedservices/{serviceName}?api-version=2021-08-01-preview
+DELETE https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/locations/{location}/deletedservices/{serviceName}?api-version=2021-08-01
```

This will permanently delete your API Management instance from Azure.
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
If you're using your own authentication system, the Health check path must allow
## Monitoring
-After providing your application's Health check path, you can monitor the health of your site using Azure Monitor. From the **Health check** blade in the Portal, click the **Metrics** in the top toolbar. This will open a new blade where you can see the site's historical health status and create a new alert rule. For more information on monitoring your sites, [see the guide on Azure Monitor](web-sites-monitor.md).
+After providing your application's Health check path, you can monitor the health of your site using Azure Monitor. From the **Health check** blade in the Portal, click **Metrics** in the top toolbar. This will open a new blade where you can see the site's historical health status and an option to create a new alert rule. Health check metrics aggregate the successful pings and display failures only when the instance was deemed unhealthy based on the health check configuration. For more information on monitoring your sites, [see the guide on Azure Monitor](web-sites-monitor.md).
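
As a hedged sketch, you can also create such an alert from the CLI; the resource ID, names, and threshold below are placeholders:

```bash
# A sketch only: fire an alert when the averaged HealthCheckStatus metric
# drops below 1 (healthy). All names and IDs are placeholders.
az monitor metrics alert create --name webapp-health-alert \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/myWebApp" \
  --condition "avg HealthCheckStatus < 1" \
  --description "App Service instances reported unhealthy"
```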
## Limitations
app-service Nat Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/nat-gateway-integration.md
To configure NAT gateway integration with App Service, you need to complete the
Set up NAT gateway through the portal:
-1. Go to the **Networking** UI in the App Service portal and select virtaul network integration in the Outbound Traffic section. Ensure that your app is integrated with a subnet and **Route All** has been enabled.
+1. Go to the **Networking** UI in the App Service portal and select virtual network integration in the Outbound Traffic section. Ensure that your app is integrated with a subnet and **Route All** has been enabled.
   :::image type="content" source="./media/nat-gateway-integration/nat-gateway-route-all-enabled.png" alt-text="Screenshot of Route All enabled for virtual network integration.":::
1. On the Azure portal menu or from the **Home** page, select **Create a resource**. The **New** window appears.
1. Search for "NAT gateway" and select it from the list of results.
Set up NAT gateway through the portal:
   :::image type="content" source="./media/nat-gateway-integration/nat-gateway-create-outbound-ip.png" alt-text="Screenshot of Outbound IP tab in Create NAT gateway.":::
1. In the **Subnet** tab, select the subnet used for virtual network integration.
   :::image type="content" source="./media/nat-gateway-integration/nat-gateway-create-subnet.png" alt-text="Screenshot of Subnet tab in Create NAT gateway.":::
-1. Fill in tags if needed and **Create** the NAT gateway. After the NAT gateway is provisioned, click on the **Go to resource group** and select the new NAT gateway. You can to see the public IP that your app will use for outbound Internet-facing traffic in the Outbound IP blade.
+1. Fill in tags if needed and **Create** the NAT gateway. After the NAT gateway is provisioned, click on the **Go to resource group** and select the new NAT gateway. You can see the public IP that your app will use for outbound Internet-facing traffic in the Outbound IP blade.
   :::image type="content" source="./media/nat-gateway-integration/nat-gateway-public-ip.png" alt-text="Screenshot of Outbound IP blade in the NAT gateway portal.":::

If you prefer using CLI to configure your environment, these are the important commands. As a prerequisite, you should create an app with virtual network integration configured.
az network vnet subnet update --resource-group [myResourceGroup] --vnet-name [my
The same NAT gateway can be used across multiple subnets in the same Virtual Network allowing a NAT gateway to be used across multiple apps and App Service plans.
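
For example, attaching an existing NAT gateway to an additional subnet is a single update (all names below are placeholders):

```bash
# A sketch only: associate an existing NAT gateway with another subnet so
# apps integrated with that subnet share the same outbound IPs.
az network vnet subnet update --resource-group myResourceGroup \
  --vnet-name myVNet --name myOtherSubnet --nat-gateway myNATgateway
```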
-NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,000 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scale-nat) of NAT gateway.
+NAT gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,000 ports (SNAT ports) allowing up to 1M available ports. Learn more in the [Scaling section](../../virtual-network/nat-gateway/nat-gateway-resource.md#scale-nat-gateway) of NAT gateway.
## Next steps
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
To run the application locally:
### [Flask](#tab/flask)
-1. Go to the application folder:
+1. Go to the application folder:
   ```Console
   cd msdocs-python-flask-webapp-quickstart
   ```
To deploy a web app from VS Code, you must have the [Azure Tools extension pack]
| [!INCLUDE [VS Code deploy step 4](<./includes/quickstart-python/deploy-visual-studio-code-4.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-4-240px.png" alt-text="A screenshot of a dialog box in VS Code asking if you want to update your workspace to run build commands." lightbox="./media/quickstart-python/deploy-visual-studio-code-4.png"::: |
| [!INCLUDE [VS Code deploy step 5](<./includes/quickstart-python/deploy-visual-studio-code-5.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-5-240px.png" alt-text="A screenshot showing the confirmation dialog when the app code has been deployed to Azure." lightbox="./media/quickstart-python/deploy-visual-studio-code-5.png"::: |
+### [Deploy using Azure CLI](#tab/azure-cli-deploy)
++ ### [Deploy using Local Git](#tab/local-git-deploy) [!INCLUDE [Deploy Local Git](<./includes/quickstart-python/deploy-local-git.md>)]
application-gateway Ssl Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-overview.md
In this example, requests using TLS1.2 are routed to backend servers in Pool1 us
## End to end TLS and allow listing of certificates
-Application Gateway only communicates with known backend instances that have allow listed their certificate with the application gateway. There are some differences in the end-to-end TLS setup process with respect to the version of Application Gateway used. The following section explains them individually.
+Application Gateway only communicates with those backend servers that have either allow listed their certificate with the Application Gateway or whose certificates are signed by well-known certificate authorities and the certificate's CN matches the host name in the HTTP backend settings. There are some differences in the end-to-end TLS setup process with respect to the version of Application Gateway used. The following section explains them individually.
## End-to-end TLS with the v1 SKU
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
As a service provider, you may have onboarded multiple customer tenants to [Azur
Microsoft offers other capabilities to help you manage updates for your Azure VMs or Azure virtual machine scale sets that you should consider as part of your overall update management strategy. -- If you are interested in automatically assessing and updating your Azure virtual machines to maintain security compliance with *Critical* and *Security* updates released each month, review [Automatic VM guest patching](../../virtual-machines/automatic-vm-guest-patching.md) (preview). This is an alternative update management solution for your Azure VMs to auto-update them during off-peak hours, including VMs within an availability set, compared to managing update deployments to those VMs from Update Management in Azure Automation.
+- If you are interested in automatically assessing and updating your Azure virtual machines to maintain security compliance with *Critical* and *Security* updates released each month, review [Automatic VM guest patching](../../virtual-machines/automatic-vm-guest-patching.md). This is an alternative update management solution for your Azure VMs to auto-update them during off-peak hours, including VMs within an availability set, compared to managing update deployments to those VMs from Update Management in Azure Automation.
- If you manage Azure virtual machine scale sets, review how to perform [automatic OS image upgrades](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) to safely and automatically upgrade the OS disk for all instances in the scale set.
Update Management relies on the locally configured update repository to update s
* Before enabling and using Update Management, review [Plan your Update Management deployment](plan-deployment.md).
-* Review commonly asked questions about Update Management in the [Azure Automation frequently asked questions](../automation-faq.md).
+* Review commonly asked questions about Update Management in the [Azure Automation frequently asked questions](../automation-faq.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
Currently, Azure Arc allows you to manage the following resource types hosted ou
* [Azure data services](dat): Run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. SQL Managed Instance and PostgreSQL Hyperscale (preview) services are currently available. * [SQL Server](/sql/sql-server/azure-arc/overview): Extend Azure services to SQL Server instances hosted outside of Azure.
+* Virtual machines (preview): Provision, resize, delete, and manage virtual machines based on [VMware vSphere](/vmware-vsphere/overview.md) or [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) and enable VM self-service through role-based access.
## Key features and benefits
Some of the key scenarios that Azure Arc supports are:
* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc-enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc-enabled Data Services](./dat).
+* Perform virtual machine lifecycle and management operations for [VMware vSphere](/vmware-vsphere/overview.md) and [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) environments.
+ * A unified experience viewing your Azure Arc-enabled resources, whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or Azure REST API. ## Pricing
For information, see the [Azure pricing page](https://azure.microsoft.com/pricin
* Learn about [Azure Arc-enabled Kubernetes](./kubernetes/overview.md). * Learn about [Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/). * Learn about [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview).
+* Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) and [Azure Arc-enabled Azure Stack HCI](https://docs.microsoft.com/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
* Experience Azure Arc-enabled services by exploring the [Jumpstart proof of concept](https://azurearcjumpstart.io/azure_arc_jumpstart/).
azure-arc Day2 Operations Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/day2-operations-resource-bridge.md
+
+ Title: Perform ongoing administration for Arc-enabled VMware vSphere
+description: Learn how to perform day 2 administrator operations related to Azure Arc-enabled VMware vSphere
+ Last updated : 03/28/2022+++
+# Perform ongoing administration for Arc-enabled VMware vSphere
+
+In this article, you'll learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere (preview):
+
+- Upgrading the Arc resource bridge
+- Updating the credentials
+- Collecting logs from the Arc resource bridge
+
+Each of these operations requires either the SSH key to the resource bridge VM or the kubeconfig that provides access to the Kubernetes cluster on the resource bridge VM.
+
+## Upgrading the Arc resource bridge
+
+Azure Arc-enabled VMware vSphere requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge will be released to include security and feature updates.
+
+> [!NOTE]
+> To upgrade the Arc resource bridge VM to the latest version, you will need to perform the onboarding again with the **same resource IDs**. This will cause some downtime, as operations performed through Arc during this time might fail.
+
+To upgrade to the latest version of the resource bridge, perform the following steps:
+
+1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and vCenter Azure resources.
+
+2. Find and delete the old Arc resource bridge **template** from your vCenter.
+
+3. Download the script from the portal and update the following section in the script:
+
+ ```powershell
+ $location = <Azure region of the resources>
+
+ $applianceSubscriptionId = <subscription-id>
+ $applianceResourceGroupName = <resourcegroup-name>
+ $applianceName = <resource-bridge-name>
+
+ $customLocationSubscriptionId = <subscription-id>
+ $customLocationResourceGroupName = <resourcegroup-name>
+ $customLocationName = <custom-location-name>
+
+ $vCenterSubscriptionId = <subscription-id>
+ $vCenterResourceGroupName = <resourcegroup-name>
+ $vCenterName = <vcenter-name-in-azure>
+ ```
+
+4. [Run the onboarding script](quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `--force` parameter:
+
+ ``` powershell-interactive
+ ./resource-bridge-onboarding-script.ps1 --force
+ ```
+
+5. [Provide the inputs](quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted.
+
+6. Once the onboarding is successfully completed, the resource bridge is upgraded to the latest version.
+
+## Updating the vSphere account credentials
+
+Azure Arc-enabled VMware vSphere uses the vSphere account credentials you provided during the onboarding to communicate with your vCenter server. These credentials are only persisted locally on the Arc resource bridge VM.
+
+As part of your security practices, you might need to rotate credentials for your vCenter accounts. As credentials are rotated, you must also update the credentials provided to Azure Arc to ensure the functioning of Azure Arc-enabled VMware services.
+
+There are two different sets of credentials stored on the Arc resource bridge; you can use the same account credentials for both.
+
+- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrades.
+- **Account for VMware cluster extension**. This account is used to discover inventory and perform all VM operations through Azure Arc-enabled VMware vSphere.
+
+To update the credentials of the account for Arc resource bridge, run the following command from a workstation that can locally access the cluster configuration IP address of the Arc resource bridge:
+
+```azurecli
+az arcappliance setcredential vmware --kubeconfig <kubeconfig>
+```
+
+To update the credentials used by the VMware cluster extension on the resource bridge, run the following command. It can be run from anywhere with the `connectedvmware` CLI extension installed:
+
+```azurecli
+az connectedvmware vcenter connect --custom-location <name of the custom location> --location <Azure region> --name <name of the vCenter resource in Azure> --resource-group <resource group for the vCenter resource> --username <username for the vSphere account> --password <password to the vSphere account>
+```
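For illustration only, a filled-in invocation might look like the following; every name here is hypothetical:

```azurecli
az connectedvmware vcenter connect \
  --custom-location contoso-custom-location \
  --location eastus \
  --name contoso-vcenter \
  --resource-group contoso-rg \
  --username "arc-svc@vsphere.local" \
  --password "<new-vsphere-password>"
```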
+
+## Collecting logs from the Arc resource bridge
+
+For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](https://docs.microsoft.com/cli/azure/arcappliance/logs?#az-arcappliance-logs-vmware) command.
+
+The `az arcappliance logs` command must be run from a workstation that can communicate with the Arc resource bridge either via the cluster configuration IP address or the IP address of the Arc resource bridge VM.
+
+To save the logs to a destination folder, run the following command. This command requires connectivity to the cluster configuration IP address:
+
+```azurecli
+az arcappliance logs <provider> --kubeconfig <path to kubeconfig> --out-dir <path to specified output directory>
+```
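For example, a filled-in invocation for the VMware provider might look like this; the paths are illustrative:

```azurecli
# Collect resource bridge logs via the cluster configuration IP address.
az arcappliance logs vmware --kubeconfig ./kubeconfig --out-dir ./arc-logs
```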
+
+If the Kubernetes cluster on the resource bridge isn't in a functional state, you can use the following command. This command requires connectivity to the IP address of the Azure Arc resource bridge VM via SSH:
+
+```azurecli
+az arcappliance logs <provider> --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX
+```
+
+During initial onboarding, SSH keys are saved to the workstation. If you're running this command from the workstation that was used for onboarding, no other steps are required.
+
+If you're running this command from a different workstation, you must make sure the following files are copied to the new workstation in the same location.
+
+- For a Windows workstation, `C:\ProgramData\kva\.ssh\logkey` and `C:\ProgramData\kva\.ssh\logkey.pub`
+
+- For a Linux workstation, `$HOME/.KVA/.ssh/logkey` and `$HOME/.KVA/.ssh/logkey.pub`
+
+## Next steps
+
+[Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
After the command finishes running, your setup is complete. You can now use the
## Save SSH keys and kubeconfig > [!IMPORTANT]
-> Performing some day 2 operations on the Arc resource bridge will require the SSH key to the resource bridge VM and kubeconfig to the Kubernetes cluster on it. It is important to store them to a secure location as it is not possible to retrieve them if the workstation used for the onboarding is deleted.
+> Performing [day 2 operations on the Arc resource bridge](day2-operations-resource-bridge.md) will require the SSH key to the resource bridge VM and kubeconfig to the Kubernetes cluster on it. It is important to store them in a secure location, as it is not possible to retrieve them if the workstation used for the onboarding is deleted.
You will find the kubeconfig file with the name `kubeconfig` in the folder where the onboarding script is downloaded and run.
azure-arc Remove Vcenter From Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md
+
+ Title: Remove your VMware vCenter environment from Azure Arc
+description: This article explains the steps to cleanly remove your VMware vCenter environment from Azure Arc-enabled VMware vSphere and delete related Azure Arc resources from Azure.
+++ Last updated : 3/28/2022
+# Customer intent: As an infrastructure admin, I want to cleanly remove my VMware vCenter environment from Azure Arc-enabled VMware vSphere.
++
+# Remove your VMware vCenter environment from Azure Arc
+
+In this article, you'll learn how to cleanly remove your VMware vCenter environment from Azure Arc-enabled VMware vSphere. For VMware vSphere environments that you no longer want to manage with Azure Arc-enabled VMware vSphere, follow the steps in the article to:
+
+- Remove guest management from VMware virtual machines
+- Remove VMware vSphere resource from Azure Arc
+- Remove Arc resource bridge related items in your vCenter
+
+## Remove guest management from VMware virtual machines
+
+To prevent continued billing of Azure management services after you remove the vSphere environment from Azure Arc, you must first cleanly remove guest management from all Arc-enabled VMware vSphere virtual machines where it was enabled.
+
+When you enable guest management on Arc-enabled VMware vSphere virtual machines, the Arc connected machine agent is installed on them. Once guest management is enabled, you can install VM extensions and use Azure management services like Log Analytics on those machines.
+
+To cleanly remove guest management, you must follow the steps below to remove any VM extensions from the virtual machine, disconnect the agent, and uninstall the software from your virtual machine. It's important to complete each of the three steps to fully remove all related software components from your virtual machines.
+
+### Step 1: Remove VM extensions
+
+If you have deployed Azure VM extensions to an Azure Arc-enabled VMware vSphere VM, you must uninstall the extensions before disconnecting the agent or uninstalling the software. Uninstalling the Azure Connected Machine agent doesn't automatically remove extensions, and they won't be recognized if you later connect the VM to Azure Arc again.
+Uninstall extensions using the following steps:
+
+1. Go to the [Azure Arc center in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview).
+
+2. Select **VMware vCenters**.
+
+3. Search and select the vCenter you want to remove from Azure Arc.
+
+    ![Browse your VMware Inventory](./media/browse-vmware-inventory.png)
+
+4. Select **Virtual machines** under **vCenter inventory**.
+
+5. Search and select the virtual machine where you have Guest Management enabled.
+
+6. Select **Extensions**.
+
+7. Select the extensions and select **Uninstall**.
+
+### Step 2: Disconnect the agent from Azure Arc
+
+Disconnecting the agent clears the local state of the agent and removes agent information from our systems. To disconnect the agent, sign in and run the following command as an administrator/root account on the virtual machine.
+
+```powershell
+ azcmagent disconnect --force-local-only
+```
+
+### Step 3: Uninstall the agent
+
+#### For Windows virtual machines
+
+To uninstall the Windows agent from the machine, do the following:
+
+1. Sign in to the computer with an account that has administrator permissions.
+2. In Control Panel, select **Programs and Features**.
+3. In **Programs and Features**, select **Azure Connected Machine Agent**, select **Uninstall**, and then select **Yes**.
+4. Delete the `C:\Program Files\AzureConnectedMachineAgent` folder.
+
+#### For Linux virtual machines
+
+To uninstall the Linux agent, the command to use depends on the Linux operating system. You must have `root` access permissions or your account must have elevated rights using sudo.
+
+- For Ubuntu, run the following command:
+
+ ```bash
+ sudo apt purge azcmagent
+ ```
+
+- For RHEL, CentOS, or Oracle Linux, run the following command:
+
+ ```bash
+ sudo yum remove azcmagent
+ ```
+
+- For SLES, run the following command:
+
+ ```bash
+ sudo zypper remove azcmagent
+ ```
+
+## Remove VMware vSphere resources from Azure
+
+When you enable VMware vSphere resources in Azure, an Azure resource representing them is created. Before you can delete the vCenter resource in Azure, you must delete all the Azure resources that represent your related vSphere resources.
+
+1. Go to the [Azure Arc center in the Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/overview).
+
+2. Select **VMware vCenters**.
+
+3. Search and select the vCenter you want to remove from Azure Arc.
+
+4. Select **Virtual machines** under **vCenter inventory**.
+
+5. Select all the VMs that have the **Azure Enabled** value set to **Yes**.
+
+6. Select **Remove from Azure**.
+
+    This action will only remove these resource representations from Azure. The resources remain in your vCenter.
+
+7. Repeat steps 4, 5, and 6 for **Resource pools/clusters/hosts**, **Templates**, **Networks**, and **Datastores**.
+
+8. Once the deletion is complete, select **Overview**.
+
+9. Note the **Custom location** and the **Azure Arc Resource bridge** resource in the **Essentials** section.
+
+10. Select **Remove from Azure** to remove the vCenter resource from Azure.
+
+11. Go to the **Custom location** resource and select **Delete**.
+
+12. Go to the **Azure Arc Resource bridge** resource and select **Delete**.
+
+At this point, all your Arc-enabled VMware vSphere resources are removed from Azure.
+
+## Remove Arc resource bridge related items in your vCenter
+
+During onboarding, to create a connection between your VMware vCenter and Azure, an Azure Arc resource bridge is deployed into your VMware vSphere environment. As the last step, you must delete the resource bridge VM as well as the VM template created during the onboarding.
+
+You can find both the virtual machine and the template on the resource pool/cluster/host that you provided during [Azure Arc-enabled VMware vSphere onboarding](quick-start-connect-vcenter-to-arc-using-script.md).
+
+## Next steps
+
+- [Connect the vCenter to Azure Arc again](quick-start-connect-vcenter-to-arc-using-script.md)
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-high-availability.md
description: Learn about Azure Cache for Redis high availability features and op
Previously updated : 03/16/2022 Last updated : 03/29/2022
-# High availability for Azure Cache for Redis
+# High availability and disaster recovery
-Azure Cache for Redis has built-in high availability. A high availability architecture is used to ensure your managed Redis instance is functioning even when outages affect the underlying virtual machines (VMs), both planned and unplanned outages. It delivers much greater percentage rates than what's attainable by hosting Redis on a single VM.
+As with any cloud system, unplanned outages can occur that result in a virtual machine (VM) instance, an Availability Zone, or a complete Azure region going down. We recommend that customers have a plan in place to handle zone or regional outages.
-Azure Cache for Redis implements high availability by using multiple VMs, called *nodes*, for a cache. The nodes are configured such that data replication and failover happen in coordinated manners. High availability also aids in maintenance operations such as Redis software patching. Various high availability options are available in the Standard, Premium, and Enterprise tiers:
+This article presents information to help you create a _business continuity and disaster recovery plan_ for your Azure Cache for Redis or Azure Cache for Redis Enterprise implementation.
+
+Various high availability options are available in the Standard, Premium, and Enterprise tiers:
| Option | Description | Availability | Standard | Premium | Enterprise |
| - | - | - | :: | :: | :: |
-| [Standard replication](#standard-replication)| Dual-node replicated configuration in a single data center with automatic failover | 99.9% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |✔|✔|-|
-| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across Availability Zones, with automatic failover | 99.9% in Premium; 99.99% in Enterprise (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|✔|✔|
+| [Standard replication](#standard-replication-for-high-availability)| Dual-node replicated configuration in a single data center with automatic failover | 99.9% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |✔|✔|✔|
+| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across Availability Zones, with automatic failover | 99.9% in Premium; 99.99% in Enterprise (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|✔|✔|
| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | Premium; Enterprise (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|Passive|Active|
+| [Import/Export](#importexport) | Point-in-time snapshot of data in cache. | 99.9% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|✔|✔|
+| [Persistence](#persistence) | Periodic data saving to storage account. | 99.9% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|✔|-|
+
+## Standard replication for high availability
+
+Applicable tiers: **Standard**, **Premium**, **Enterprise**, **Enterprise Flash**
+
+Azure Cache for Redis, in the Standard or Premium tier, has a high availability architecture that ensures your managed instance is functioning, even when outages affect the underlying virtual machines (VMs). Whether the outage is planned or unplanned, Azure Cache for Redis delivers much greater percentage availability rates than what's attainable by hosting Redis on a single VM.
-## Standard replication
+An Azure Cache for Redis in the Standard or Premium tier runs on a pair of Redis servers by default. The two servers are hosted on dedicated VMs. Open-source Redis allows only one server to handle data write requests.
-An Azure Cache for Redis in the Standard or Premium tier runs on a pair of Redis servers by default. The two servers are hosted on dedicated VMs. Open-source Redis allows only one server to handle data write requests. This server is the *primary* node, while the other *replica*. After it provisions the server nodes, Azure Cache for Redis assigns primary and replica roles to them. The primary node usually is responsible for servicing write and read requests from Redis clients. On a write operation, it commits a new key and a key update to its internal memory and replies immediately to the client. It forwards the operation to the replica asynchronously.
+With Azure Cache for Redis, one server is the *primary* node, while the other is the *replica*. After it provisions the server nodes, Azure Cache for Redis assigns primary and replica roles to them. The primary node usually is responsible for servicing write and read requests from clients. On a write operation, it commits a new key and a key update to its internal memory and replies immediately to the client. It forwards the operation to the *replica* asynchronously.
:::image type="content" source="media/cache-high-availability/replication.png" alt-text="Data replication setup"::: >[!NOTE]
->Normally, a Redis client communicates with the primary node in a Redis cache for all read and write requests. Certain Redis clients can be configured to read from the replica node.
+>Normally, an Azure Cache for Redis client application communicates with the primary node in a cache for all read and write requests. Certain clients can be configured to read from the replica node.
> >
-If the primary node in a Redis cache is unavailable, the replica promotes itself to become the new primary automatically. This process is called a *failover*. The replica will wait for a sufficiently long time before taking over in case that the primary node recovers quickly. When a failover happens, Azure Cache for Redis provisions a new VM and joins it to the cache as the replica node. The replica does a full data synchronization with the primary so that it has another copy of the cache data.
+If the *primary* node in a cache is unavailable, the *replica* promotes itself to become the new primary automatically. This process is called a *failover*. The replica waits for a sufficiently long time before taking over, in case the primary node recovers quickly. When a failover happens, Azure Cache for Redis provisions a new VM and joins it to the cache as the replica node. The replica does a full data synchronization with the primary so that it has another copy of the cache data.
-A primary node can go out of service as part of a planned maintenance activity, such as Redis software or operating system update. It also can stop working because of unplanned events such as failures in underlying hardware, software, or network. [Failover and patching for Azure Cache for Redis](cache-failover.md) provides a detailed explanation on types of Redis failovers. An Azure Cache for Redis goes through many failovers during its lifetime. The design of the high availability architecture makes these changes inside a cache as transparent to its clients as possible.
+A primary node can go out of service as part of a planned maintenance activity, such as an update to Redis software or the operating system. It also can stop working because of unplanned events such as failures in underlying hardware, software, or network. [Failover and patching for Azure Cache for Redis](cache-failover.md) provides a detailed explanation on types of failovers. An Azure Cache for Redis goes through many failovers during its lifetime. The design of the high availability architecture makes these changes inside a cache as transparent to its clients as possible.
-Also, Azure Cache for Redis provides more replica nodes in the Premium tier. A [multi-replica cache](cache-how-to-multi-replicas.md) can be configured with up to three replica nodes. Having more replicas generally improves resiliency because you have nodes backing up the primary. Even with more replicas, an Azure Cache for Redis instance still can be severely impacted by a data center- or AZ-level outage. You can increase cache availability by using multiple replicas with [zone redundancy](#zone-redundancy).
+Also, Azure Cache for Redis provides more replica nodes in the Premium tier. A [multi-replica cache](cache-how-to-multi-replicas.md) can be configured with up to three replica nodes. Having more replicas generally improves resiliency because you have nodes backing up the primary. Even with more replicas, an Azure Cache for Redis instance still can be severely impacted by a data center or Availability Zone outage. You can increase cache availability by using multiple replicas with [zone redundancy](#zone-redundancy).
## Zone redundancy
-Azure Cache for Redis supports zone redundant configurations in the Premium and Enterprise tiers. A [zone redundant cache](cache-how-to-zone-redundancy.md) can place its nodes across different [Azure Availability Zones](../availability-zones/az-overview.md) in the same region. It eliminates data center or AZ outage as a single point of failure and increases the overall availability of your cache.
+Applicable tiers: **Premium**, **Enterprise**, **Enterprise Flash**
+
+Azure Cache for Redis supports zone redundant configurations in the Premium and Enterprise tiers. A [zone redundant cache](cache-how-to-zone-redundancy.md) can place its nodes across different [Azure Availability Zones](../availability-zones/az-overview.md) in the same region. It eliminates data center or Availability Zone outage as a single point of failure and increases the overall availability of your cache. See [Enable zone redundancy for Azure Cache for Redis](cache-how-to-zone-redundancy.md) for information on how to set it up.
+
+If a cache is configured to use two or more zones as described above, the cache nodes are created in different zones. When a zone goes down, cache nodes in other zones are available to keep the cache functioning as usual.
### Premium tier
The following diagram illustrates the zone redundant configuration for the Premi
:::image type="content" source="media/cache-high-availability/zone-redundancy.png" alt-text="Zone redundancy setup":::
-Azure Cache for Redis distributes nodes in a zone redundant cache in a round-robin manner over the AZs you've selected. It also determines which node will serve as the primary initially.
+Azure Cache for Redis distributes nodes in a zone redundant cache in a round-robin manner over the selected Availability Zones. It also determines which node will serve as the primary initially.
+
+A zone redundant cache provides automatic failover. When the current primary node is unavailable, one of the replicas will take over. Your application may experience higher cache response time if the new primary node is located in a different Availability Zone. Availability Zones are geographically separated. Switching from one Availability Zone to another alters the physical distance between where your application and cache are hosted. This change impacts round-trip network latencies from your application to the cache. The extra latency is expected to fall within an acceptable range for most applications. We recommend you test your application to ensure it does well with a zone-redundant cache.
+
+### Enterprise and Enterprise Flash tiers
+
+A cache in either Enterprise tier runs on a Redis Enterprise *cluster*. It always requires an odd number of server nodes to form a quorum. By default, it has three nodes, each hosted on a dedicated VM.
+
+- An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*.
+- An Enterprise Flash cache has three same-sized data nodes.
+
+The Enterprise cluster divides Azure Cache for Redis data into partitions internally. Each partition has a *primary* and at least one *replica*. Each data node holds one or more partitions. The Enterprise cluster ensures that the primary and replica(s) of any partition are never collocated on the same data node. Partitions replicate data asynchronously from primaries to their corresponding replicas.
+
+When a data node becomes unavailable or a network split happens, a failover similar to the one described in [Standard replication](#standard-replication-for-high-availability) takes place. The Enterprise cluster uses a quorum-based model to determine which surviving nodes participate in a new quorum. It also promotes replica partitions within these nodes to primaries as needed.
+
+## Persistence
-A zone redundant cache provides automatic failover. When the current primary node is unavailable, one of the replicas will take over. Your application may experience higher cache response time if the new primary node is located in a different AZ. AZs are geographically separated. Switching from one AZ to another alters the physical distance between where your application and cache are hosted. This change impacts round-trip network latencies from your application to the cache. The extra latency is expected to fall within an acceptable range for most applications. We recommend you test your application to ensure it does well with a zone-redundant cache.
+Applicable tiers: **Premium**
-### Enterprise tier
+Because your cache data is stored in memory, a rare and unplanned failure of multiple nodes can cause all the data to be dropped. To avoid losing data completely, [Redis persistence](https://redis.io/topics/persistence) allows you to take periodic snapshots of in-memory data and store them in your storage account. If you experience a failure across multiple nodes causing data loss, your cache loads the snapshot from the storage account. For more information, see [Configure data persistence for a Premium Azure Cache for Redis instance](cache-how-to-premium-persistence.md).
-A cache in either Enterprise tier runs on a Redis Enterprise cluster. It always requires an odd number of server nodes to form a quorum. By default, it has three nodes, each hosted on a dedicated VM. An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*. An Enterprise Flash cache has three same-sized data nodes. The Enterprise cluster divides Redis data into partitions internally. Each partition has a *primary* and at least one *replica*. Each data node holds one or more partitions. The Enterprise cluster ensures that the primary and replica(s) of any partition are never collocated on the same data node. Partitions replicate data asynchronously from primaries to their corresponding replicas.
+### Storage account for persistence
-When a data node becomes unavailable or a network split happens, a failover similar to the one described in [Standard replication](#standard-replication) takes place. The Enterprise cluster uses a quorum-based model to determine which surviving nodes will participate in a new quorum. It also promotes replica partitions within these nodes to primaries as needed.
+Consider choosing a geo-redundant storage account to ensure high availability of persisted data. For more information, see [Azure Storage redundancy](/azure/storage/common/storage-redundancy?toc=/azure/storage/blobs/toc.json).
+
+## Import/Export
+
+Applicable tiers: **Premium**, **Enterprise**, **Enterprise Flash**
+
+Azure Cache for Redis supports the option to import and export Redis Database (RDB) files to provide data portability. It allows you to import data into Azure Cache for Redis or export data from Azure Cache for Redis by using an RDB snapshot. The RDB snapshot from a premium cache is exported to a blob in an Azure Storage account. You can create a script to trigger export periodically to your storage account. For more information, see [Import and Export data in Azure Cache for Redis](cache-how-to-import-export-data.md).
+
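A minimal sketch of a scripted export, assuming hypothetical cache and container names (the SAS URL is a placeholder):

```azurecli
az redis export \
  --name contoso-cache \
  --resource-group contoso-rg \
  --prefix nightly-backup \
  --container "<blob-container-SAS-URL>"
```
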
+### Storage account for export
+
+Consider choosing a geo-redundant storage account to ensure high availability of your exported data. For more information, see [Azure Storage redundancy](/azure/storage/common/storage-redundancy?toc=/azure/storage/blobs/toc.json).
## Geo-replication
-[Geo-replication](cache-how-to-geo-replication.md) is a mechanism for linking two or more Azure Cache for Redis instances, typically spanning two Azure regions.
+Applicable tiers: **Premium**
-### Premium tier geo-replication
+[Geo-replication](cache-how-to-geo-replication.md) is a mechanism for linking two or more Azure Cache for Redis instances, typically spanning two Azure regions. Geo-replication is designed mainly for disaster recovery. Two Premium tier cache instances are connected through geo-replication in a way that provides reads and writes to your primary cache, with the data replicated to the secondary cache.
+For more information on how to set it up, see [Configure geo-replication for Premium Azure Cache for Redis instances](/azure/azure-cache-for-redis/cache-how-to-geo-replication).
->[!NOTE]
->Geo-replication in the Premium tier is designed mainly for disaster recovery.
->
->
+If the region hosting the primary cache goes down, you'll need to start the failover yourself: first, unlink the secondary cache, and then update your application to point to the secondary cache for reads and writes.
+
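The unlink step can be scripted with the Azure CLI; a sketch with hypothetical names, run against the surviving secondary cache:

```azurecli
az redis linked-server delete \
  --name contoso-cache-secondary \
  --resource-group contoso-rg \
  --linked-server-name contoso-cache-primary
```
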
+## Active geo-replication
+
+Applicable tiers: **Enterprise**, **Enterprise Flash**
+
+The Enterprise tiers support a more advanced form of geo-replication called [active geo-replication](cache-how-to-active-geo-replication.md). The Azure Cache for Redis Enterprise software uses conflict-free replicated data types to support writes to multiple cache instances, merges changes, and resolves conflicts. You can join up to five Enterprise tier cache instances in different Azure regions to form a geo-replication group.
+
+An application using such a cache can read and write to any of the geo-distributed cache instances through their corresponding endpoints. The application should use the cache instance closest to each application instance, giving you the lowest latency. For more information, see [Configure active geo-replication for Enterprise Azure Cache for Redis instances](cache-how-to-active-geo-replication.md).
+
+If a region of one of the caches in your replication group goes down, your application needs to switch to another region that is available.
-Two Premium tier cache instances can be connected through [geo-replication](cache-how-to-geo-replication.md) so that you can back up your cache data to a different region. Once linked together, one instance is named the primary linked cache and the other the secondary linked cache. Only the primary cache accepts read and write requests. Data written to the primary cache is replicated to the secondary cache.
+When a cache in your replication group is unavailable, we recommend monitoring memory usage for other caches in the same replication group. While one of the caches is down, all other caches in the replication group start saving metadata that they couldn't share with the cache that is down. If the memory usage for the available caches starts growing at a high rate after one of the caches goes down, consider unlinking the cache that is unavailable from the replication group.
-An application accesses the cache through separate endpoints for the primary and secondary caches. The application must send all write requests to the primary cache when it's deployed in multiple Azure regions. It can read from either the primary or secondary cache. In general, you want to your application's compute instances to read from the closest caches to reduce latency. Data transfer between the two cache instances is secured by TLS.
+For more information on force-unlinking, see [Force-unlink if there's a region outage](cache-how-to-active-geo-replication.md#force-unlink-if-theres-a-region-outage).
-Geo-replication doesn't provide automatic failover because of concerns over added network roundtrip time between regions if the rest of your application remains in the primary region. You'll need to manage and start the failover by unlinking the secondary cache. Unlinking promotes it to be the new primary instance.
+## Delete and recreate cache
-### Enterprise tier geo-replication
+Applicable tiers: **Standard**, **Premium**, **Enterprise**, **Enterprise Flash**
-The Enterprise tiers support a more advanced form of geo-replication. We call it [active geo-replication](cache-how-to-active-geo-replication.md). Using conflict-free replicated data types, the Redis Enterprise software supports writes to multiple cache instances and takes care of merging of changes and resolving conflicts. You can join two or more Enterprise tier cache instances in different Azure regions to form an active geo-replicated cache.
+If you experience a regional outage, consider recreating your cache in a different region and updating your application to connect to the new cache instead. It's important to understand that data will be lost during a regional outage. Your application code should be resilient to data loss.
-An application using such a cache can read and write to the geo-distributed cache instances through corresponding endpoints. The cache should use what is the closest to each compute instance, giving you the lowest latency. The application also needs to monitor the cache instances and switch to another region when one of the instances becomes unavailable. For more information on how active geo-replication works, see [Active-Active Geo-Distribution (CRDTs-Based)](https://redislabs.com/redis-enterprise/technology/active-active-geo-distribution/).
+Once the affected region is restored, your unavailable Azure Cache for Redis is automatically restored and available for use again. For more strategies for moving your cache to a different region, see [Move Azure Cache for Redis instances to different regions](/azure/azure-cache-for-redis/cache-moving-resources).
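As a sketch of the recreate step described above, assuming a Premium cache and hypothetical names (match the SKU and size to your original cache):

```azurecli
az redis create \
  --name contoso-cache-westus2 \
  --resource-group contoso-rg \
  --location westus2 \
  --sku Premium \
  --vm-size p1
```
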
## Next steps Learn more about how to configure Azure Cache for Redis high-availability options.
-* [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
-* [Add replicas to Azure Cache for Redis](cache-how-to-multi-replicas.md)
-* [Enable zone redundancy for Azure Cache for Redis](cache-how-to-zone-redundancy.md)
-* [Set up geo-replication for Azure Cache for Redis](cache-how-to-geo-replication.md)
+- [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
+- [Add replicas to Azure Cache for Redis](cache-how-to-multi-replicas.md)
+- [Enable zone redundancy for Azure Cache for Redis](cache-how-to-zone-redundancy.md)
+- [Set up geo-replication for Azure Cache for Redis](cache-how-to-geo-replication.md)
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
Active geo-replication groups up to five Enterprise Azure Cache for Redis instan
:::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-configured.png" alt-text="Active geo-replication configured":::
-1. Wait for the first cache to be created successfully. Repeat the above steps for each additional cache instance in the geo-replication group.
+1. Wait for the first cache to be created successfully. Repeat the above steps for each cache instance in the geo-replication group.
## Remove from an active geo-replication group To remove a cache instance from an active geo-replication group, you just delete the instance. The remaining instances will reconfigure themselves automatically.
+## Force-unlink if there's a region outage
+
+In case one of the caches in your replication group is unavailable due to a region outage, you can forcefully remove the unavailable cache from the replication group.
+
+You should remove the unavailable cache because the remaining caches in the replication group start storing the metadata that hasn't been shared with the unavailable cache. When this happens, the available caches in your replication group might run out of memory.
+
+1. Go to the Azure portal and select one of the caches in the replication group that is still available.
+
+1. Select **Active geo-replication** in the Resource menu on the left to see the settings in the working pane.
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-group.png" alt-text="screenshot of active geo-replication group":::
+
+1. Select the cache that you need to force-unlink by checking the box.
+
+1. Select **Force unlink** and then **OK** to confirm.
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-cache-active-geo-replication-unlink.png" alt-text="screenshot of unlinking in active geo-replication":::
+
+1. Once the affected region's availability is restored, you need to delete the affected cache and recreate it to add it back to your replication group.
+ ## Next steps Learn more about Azure Cache for Redis features.
azure-functions Functions Create Private Site Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-private-site-access.md
Last updated 06/17/2020
# Tutorial: Establish Azure Functions private site access
-This tutorial shows you how to enable [private site access](./functions-networking-options.md#private-endpoint-connections) with Azure Functions. By using private site access, you can require that your function code is only triggered from a specific virtual network.
+This tutorial shows you how to enable [private site access](./functions-networking-options.md#private-endpoints) with Azure Functions. By using private site access, you can require that your function code is only triggered from a specific virtual network.
Private site access is useful in scenarios when access to the function app needs to be limited to a specific virtual network. For example, the function app may be applicable to only employees of a specific organization, or services which are within the specified virtual network (such as another Azure Function, Azure Virtual Machine, or an AKS cluster).
Sign in to the [Azure portal](https://portal.azure.com).
## Create a virtual machine
-The first step in this tutorial is to create a new virtual machine inside a virtual network. The virtual machine will be used to access your function once you've restricted it's access to only be available from within the virtual network.
+The first step in this tutorial is to create a new virtual machine inside a virtual network. The virtual machine will be used to access your function once you've restricted its access to only be available from within the virtual network.
1. Select the **Create a resource** button.
The next step is to create a function app in Azure using the [Consumption plan](
The next step is to configure [access restrictions](../app-service/app-service-ip-restrictions.md) to ensure only resources on the virtual network can invoke the function.
-[Private site](functions-networking-options.md#private-endpoint-connections) access is enabled by creating an Azure Virtual Network [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) between the function app and the specified virtual network. Access restrictions are implemented via service endpoints. Service endpoints ensure only traffic originating from within the specified virtual network can access the designated resource. In this case, the designated resource is the Azure Function.
+[Private site](functions-networking-options.md#private-endpoints) access is enabled by creating an Azure Virtual Network [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) between the function app and the specified virtual network. Access restrictions are implemented via service endpoints. Service endpoints ensure only traffic originating from within the specified virtual network can access the designated resource. In this case, the designated resource is the Azure Function.
1. Within the function app, select the **Networking** link under the _Settings_ section header. 1. The _Networking_ page is the starting point to configure Azure Front Door, the Azure CDN, and also Access Restrictions.
Accessing the function via a web browser (by using the Azure Bastion service) on
> [!div class="nextstepaction"]
-> [Learn more about the networking options in Functions](./functions-networking-options.md)
+> [Learn more about the networking options in Functions](./functions-networking-options.md)
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
Title: Azure Functions networking options
description: An overview of all networking options available in Azure Functions. Previously updated : 03/04/2022 Last updated : 3/28/2022
You can host function apps in a couple of ways:
[!INCLUDE [functions-networking-features](../../includes/functions-networking-features.md)]
-## Inbound access restrictions
+## Quick start resources
+
+Use the following resources to quickly get started with Azure Functions networking scenarios. These resources are referenced throughout the article.
+
+* ARM templates:
+ * [Function App with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints).
+ * [Azure Function App with Virtual Network Integration](https://github.com/Azure-Samples/function-app-arm-templates/tree/main/function-app-vnet-integration).
+* Tutorials:
+ * [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network).
+ * [Control Azure Functions outbound IP with an Azure virtual network NAT gateway](functions-how-to-use-nat-gateway.md).
+
+## Inbound networking features
+
+The following features let you filter inbound requests to your function app.
+
+### Inbound access restrictions
You can use access restrictions to define a priority-ordered list of IP addresses that are allowed or denied access to your app. The list can include IPv4 and IPv6 addresses, or specific virtual network subnets using [service endpoints](#use-service-endpoints). When there are one or more entries, an implicit "deny all" exists at the end of the list. IP restrictions work with all function-hosting options.
Access restrictions are available in the [Premium](functions-premium-plan.md), [
To learn more, see [Azure App Service static access restrictions](../app-service/app-service-ip-restrictions.md).
-## Private endpoint connections
+### Private endpoints
[!INCLUDE [functions-private-site-access](../../includes/functions-private-site-access.md)]
-To call other services that have a private endpoint connection, such as storage or service bus, be sure to configure your app to make [outbound calls to private endpoints](#private-endpoints).
+To call other services that have a private endpoint connection, such as storage or service bus, be sure to configure your app to make [outbound calls to private endpoints](#private-endpoints). For more details on using private endpoints with the storage account for your function app, visit [restrict your storage account to a virtual network](#restrict-your-storage-account-to-a-virtual-network).
+
+### Service endpoints
+
+Using service endpoints, you can restrict a number of Azure services to selected virtual network subnets to provide a higher level of security. Regional virtual network integration enables your function app to reach Azure services that are secured with service endpoints. This configuration is supported on all [plans](functions-scale.md#networking-features) that support virtual network integration. To access a service endpoint-secured service, you must do the following:
+
+1. Configure regional virtual network integration with your function app to connect to a specific subnet.
+1. Go to the destination service and configure service endpoints against the integration subnet. A CLI sketch of this step follows below.
+
+To learn more, see [Virtual network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
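The second step above can also be scripted; below is a minimal sketch that enables the `Microsoft.Web` service endpoint on an integration subnet, with hypothetical resource names:

```azurecli
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name integrationSubnet \
  --service-endpoints Microsoft.Web
```
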
-### Use service endpoints
+#### Use service endpoints
-By using service endpoints, you can restrict access to selected Azure virtual network subnets. To restrict access to a specific subnet, create a restriction rule with a **Virtual Network** type. You can then select the subscription, virtual network, and subnet that you want to allow or deny access to.
+To restrict access to a specific subnet, create a restriction rule with a **Virtual Network** type. You can then select the subscription, virtual network, and subnet that you want to allow or deny access to.
-If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they'll be automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
+If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they'll be automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app will be configured for service endpoints in anticipation of having them enabled later on the subnet.
Virtual network integration in Azure Functions uses shared infrastructure with A
To learn how to set up virtual network integration, see [Enable virtual network integration](#enable-virtual-network-integration).
-## Enable virtual network integration
+### Enable virtual network integration
1. Go to the **Networking** blade in the Function App portal. Under **VNet Integration**, select **Click here to configure**.
During the integration, your app is restarted. When integration is finished, you
If you wish for only your private traffic ([RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) to be routed, please follow the steps in the [app service documentation](../app-service/overview-vnet-integration.md#application-routing).
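A scripted equivalent of the integration step above, as a sketch with assumed app and network names:

```azurecli
az functionapp vnet-integration add \
  --name contoso-function-app \
  --resource-group contoso-rg \
  --vnet myVnet \
  --subnet integrationSubnet
```
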
-## Regional virtual network integration
+### Regional virtual network integration
Using regional virtual network integration enables your app to access:
There are some limitations with using virtual network:
* You can have only one regional virtual network integration per App Service plan. Multiple apps in the same App Service plan can use the same integration subnet. * You can't change the subscription of an app or a plan while there's an app that's using regional virtual network integration.
-## Subnets
+### Subnets
Virtual network integration depends on a dedicated subnet. When you provision a subnet, the Azure subnet loses five IPs from the start. One address is used from the integration subnet for each plan instance. When you scale your app to four instances, then four addresses are used.
When you want your apps in another plan to reach a virtual network that's alread
The feature is fully supported for both Windows and Linux apps, including [custom containers](../app-service/configure-custom-container.md). All of the behaviors act the same between Windows apps and Linux apps.
-### Service endpoints
-
-To provide a higher level of security, you can restrict a number of Azure services to a virtual network by using service endpoints. Regional virtual network integration enables your function app to reach Azure services that are secured with service endpoints. This configuration is supported on all [plans](functions-scale.md#networking-features) that support virtual network integration. To access a service endpoint-secured service, you must do the following:
-
-1. Configure regional virtual network integration with your function app to connect to a specific subnet.
-1. Go to the destination service and configure service endpoints against the integration subnet.
-
-To learn more, see [Virtual network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
- ### Network security groups You can use network security groups to block inbound and outbound traffic to resources in a virtual network. An app that uses regional virtual network integration can use a [network security group][VNETnsg] to block outbound traffic to resources in your virtual network or the internet. To block traffic to public addresses, you must have virtual network integration with Route All enabled. The inbound rules in an NSG don't apply to your app because virtual network integration affects only outbound traffic from your app.
Border Gateway Protocol (BGP) routes also affect your app traffic. If you have B
After your app integrates with your virtual network, it uses the same DNS server that your virtual network is configured with and will work with the Azure DNS private zones linked to the virtual network.
-### Private Endpoints
-
-If you want to make calls to [Private Endpoints][privateendpoints], then you must make sure that your DNS lookups resolve to the private endpoint. You can enforce this behavior in one of the following ways:
-
-* Integrate with Azure DNS private zones. When your virtual network doesn't have a custom DNS server, this is done automatically.
-* Manage the private endpoint in the DNS server used by your app. To do this you must know the private endpoint address and then point the endpoint you are trying to reach to that address using an A record.
-* Configure your own DNS server to forward to [Azure DNS private zones](#azure-dns-private-zones).
- ## Restrict your storage account to a virtual network
+> [!NOTE]
+> To quickly deploy a function app with private endpoints enabled on the storage account, see the following template: [Function App with Azure Storage private endpoints](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints).
+ When you create a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. You can replace this storage account with one that is secured with service endpoints or private endpoints. This feature is supported for all Windows and Linux virtual network-supported SKUs in the Dedicated (App Service) plan and for the Premium plans. The Consumption plan isn't supported. To learn how to set up a function with a storage account restricted to a private network, see [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network).
azure-functions Functions Recover Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-recover-storage-account.md
Your function app might be unreachable for either of the following reasons:
* Your function app is hosted in an [internally load balanced App Service Environment](../app-service/environment/create-ilb-ase.md) and it's configured to block inbound internet traffic.
-* Your function app has [inbound IP restrictions](functions-networking-options.md#inbound-access-restrictions) that are configured to block internet access.
+* Your function app has [inbound IP restrictions](functions-networking-options.md#inbound-networking-features) that are configured to block internet access.
The Azure portal makes calls directly to the running app to fetch the list of functions, and it makes HTTP calls to the Kudu endpoint. Platform-level settings under the **Platform Features** tab are still available.
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Title: Work with indoor maps in Azure Maps Creator
description: This article introduces concepts that apply to Azure Maps Creator services Previously updated : 01/26/2022 Last updated : 04/01/2022
Creator services create, store, and use various data types that are defined and
Creator collects indoor map data by converting an uploaded Drawing package. The Drawing package represents a constructed or remodeled facility. For information about Drawing package requirements, see [Drawing package requirements](drawing-requirements.md).
-Use the [Azure Maps Data Upload API](/rest/api/maps/data-v2/update-preview) to upload a Drawing package. After the Drawing packing is uploaded, the Data Upload API returns a user data identifier (`udid`). The `udid` can then be used to convert the uploaded package into indoor map data.
+Use the [Azure Maps Data Upload API](/rest/api/maps/data-v2/update) to upload a Drawing package. After the Drawing package is uploaded, the Data Upload API returns a user data identifier (`udid`). The `udid` can then be used to convert the uploaded package into indoor map data.
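For reference, here's a minimal C# sketch of that upload call, assuming a DWG ZIP package on disk and the v2 `mapData` endpoint shown later in this digest; the key and file path are placeholders, and the exact response shape may vary by region:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class DrawingPackageUpload
{
    static async Task Main()
    {
        // Placeholders: substitute your own key and package path.
        string key = "<Azure-Maps-Primary-Subscription-key>";
        string url = "https://us.atlas.microsoft.com/mapData?api-version=2.0" +
                     $"&dataFormat=dwgzippackage&subscription-key={key}";

        using var http = new HttpClient();
        using var body = new ByteArrayContent(File.ReadAllBytes("drawing-package.zip"));
        body.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        // The upload is a long-running operation: expect a 202 response with an
        // Operation-Location header to poll; the completed status exposes the udid.
        HttpResponseMessage response = await http.PostAsync(url, body);
        Console.WriteLine($"Status: {(int)response.StatusCode}");
        if (response.Headers.TryGetValues("Operation-Location", out var poll))
        {
            Console.WriteLine($"Poll: {string.Join(", ", poll)}");
        }
    }
}
```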
## Convert a Drawing package
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md
To upload pins and path data:
``` >[!TIP]
->To obtain your own path and pin location information, use the [Data Upload API](/rest/api/maps/data-v2/upload-preview).
+>To obtain your own path and pin location information, use the [Data Upload API](/rest/api/maps/data-v2/upload).
### Check pins and path data upload status
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
This tutorial uses the [Postman](https://www.postman.com/) application, but you
## Upload a Drawing package
-Use the [Data Upload API](/rest/api/maps/data-v2/upload-preview) to upload the Drawing package to Azure Maps resources.
+Use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload the Drawing package to Azure Maps resources.
The Data Upload API is a long-running transaction that implements the pattern defined in [Creator Long-Running Operation API V2](creator-long-running-operation-v2.md).
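In practice, that pattern means the initial POST returns `202 Accepted` with an `Operation-Location` header, which you poll until the operation leaves the running state. A hedged C# sketch of the polling loop (the `status` field name and values are assumptions based on the v2 long-running-operation pattern, not taken from this tutorial):

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

static class LongRunningOperation
{
    // Polls an Operation-Location URL until the operation leaves the
    // running state; returns the terminal status string.
    public static async Task<string> WaitForCompletionAsync(
        HttpClient http, string operationUrl)
    {
        while (true)
        {
            string body = await http.GetStringAsync(operationUrl);
            using JsonDocument doc = JsonDocument.Parse(body);
            string status = doc.RootElement.GetProperty("status").GetString();

            if (status != "Running" && status != "NotStarted")
                return status; // for example, "Succeeded" or "Failed"

            await Task.Delay(TimeSpan.FromSeconds(5)); // simple fixed backoff
        }
    }
}
```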
To upload the Drawing package:
4. Select the **POST** HTTP method.
-5. Enter the following URL to the [Data Upload API](/rest/api/maps/data-v2/upload-preview) The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key)::
+5. Enter the following URL to the [Data Upload API](/rest/api/maps/data-v2/upload). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
```http https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=dwgzippackage&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
for loc in range(len(searchPolyResponse["results"])):
## Upload the reachable range and charging points to Azure Maps Data service
-On a map, you'll want to visualize the charging stations and the boundary for the maximum reachable range of the electric vehicle. To do so, upload the boundary data and charging stations data as geojson objects to Azure Maps Data service. Use the [Data Upload API](/rest/api/maps/data-v2/upload-preview).
+On a map, you'll want to visualize the charging stations and the boundary for the maximum reachable range of the electric vehicle. To do so, upload the boundary data and charging stations data as geojson objects to Azure Maps Data service. Use the [Data Upload API](/rest/api/maps/data-v2/upload).
To upload the boundary and charging point data to Azure Maps Data service, run the following two cells:
routeData = {
## Visualize the route
-To help visualize the route, you first upload the route data as a geojson object to Azure Maps Data service . To do so, use the Azure Maps [Data Upload API](/rest/api/maps/data-v2/upload-preview). Then, call the rendering service, [Get Map Image API](/rest/api/maps/render/getmapimage), to render the route on the map, and visualize it.
+To help visualize the route, you first upload the route data as a geojson object to Azure Maps Data service. To do so, use the Azure Maps [Data Upload API](/rest/api/maps/data-v2/upload). Then, call the rendering service, [Get Map Image API](/rest/api/maps/render/getmapimage), to render the route on the map, and visualize it.
To get an image for the rendered route on the map, run the following script:
To explore the Azure Maps APIs that are used in this tutorial, see:
* [Get Route Range](/rest/api/maps/route/getrouterange)
* [Post Search Inside Geometry](/rest/api/maps/search/postsearchinsidegeometry)
-* [Data Upload](/rest/api/maps/data-v2/upload-preview)
+* [Data Upload](/rest/api/maps/data-v2/upload)
* [Render - Get Map Image](/rest/api/maps/render/getmapimage)
* [Post Route Matrix](/rest/api/maps/route/postroutematrix)
* [Get Route Directions](/rest/api/maps/route/getroutedirections)
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md
Azure Maps provides a number of services to support the tracking of equipment en
> [!div class="checklist"]
>
> * Create an Azure Maps account with a global region.
-> * Upload [Geofencing GeoJSON data](geofence-geojson.md) that defines the construction site areas you want to monitor. You'll use the [Data Upload API](/rest/api/maps/data-v2/upload-preview) to upload geofences as polygon coordinates to your Azure Maps account.
+> * Upload [Geofencing GeoJSON data](geofence-geojson.md) that defines the construction site areas you want to monitor. You'll use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload geofences as polygon coordinates to your Azure Maps account.
> * Set up two [logic apps](../event-grid/handler-webhooks.md#logic-apps) that, when triggered, send email notifications to the construction site operations manager when equipment enters and exits the geofence area.
> * Use [Azure Event Grid](../event-grid/overview.md) to subscribe to enter and exit events for your Azure Maps geofence. You set up two webhook event subscriptions that call the HTTP endpoints defined in your two logic apps. The logic apps then send the appropriate email notifications of equipment moving beyond or entering the geofence.
> * Use [Search Geofence Get API](/rest/api/maps/spatial/getgeofence) to receive notifications when a piece of equipment exits and enters the geofence areas.
The Azure CLI command [az maps account create](/cli/azure/maps/account?view=azur
In this tutorial, you'll upload geofencing GeoJSON data that contains a `FeatureCollection`. The `FeatureCollection` contains two geofences that define polygonal areas within the construction site. The first geofence has no time expiration or restrictions. The second can only be queried against during business hours (9:00 AM-5:00 PM in the Pacific Time zone), and will no longer be valid after January 1, 2022. For more information on the GeoJSON format, see [Geofencing GeoJSON data](geofence-geojson.md). >[!TIP]
->You can update your geofencing data at any time. For more information, see [Data Upload API](/rest/api/maps/data-v2/upload-preview).
+>You can update your geofencing data at any time. For more information, see [Data Upload API](/rest/api/maps/data-v2/upload).
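As a rough illustration (not the tutorial's Postman steps), the same call can be made from code; the key and the empty `FeatureCollection` below are placeholder assumptions:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class GeofenceUpload
{
    static async Task Main()
    {
        // Placeholders: substitute your own key and geofencing FeatureCollection.
        string key = "<Your-Azure-Maps-Primary-Subscription-key>";
        string url = "https://us.atlas.microsoft.com/mapData?api-version=2.0" +
                     $"&dataFormat=geojson&subscription-key={key}";
        string geojson = "{\"type\":\"FeatureCollection\",\"features\":[]}";

        using var http = new HttpClient();
        var body = new StringContent(geojson, Encoding.UTF8, "application/json");

        // As with other Data Upload calls, poll the Operation-Location header
        // returned with the 202 response until the upload completes.
        HttpResponseMessage response = await http.PostAsync(url, body);
        Console.WriteLine((int)response.StatusCode);
    }
}
```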
To upload the geofencing GeoJSON data:
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
Title: Install Log Analytics agent on Linux computers description: This article describes how to connect Linux computers hosted in other clouds or on-premises to Azure Monitor with the Log Analytics agent for Linux. -- Previously updated : 02/07/2022 Last updated : 03/31/2022
azure-monitor Agent Windows Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows-troubleshoot.md
Title: Troubleshoot issues with Log Analytics agent for Windows description: Describe the symptoms, causes, and resolution for the most common issues with the Log Analytics agent for Windows in Azure Monitor. -- Previously updated : 10/21/2021 Last updated : 03/31/2022
azure-monitor Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md
Title: Install Log Analytics agent on Windows computers description: This article describes how to connect Windows computers hosted in other clouds or on-premises to Azure Monitor with the Log Analytics agent for Windows. -- Previously updated : 12/16/2021 Last updated : 03/31/2022
azure-monitor Data Sources Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-collectd.md
Title: Collect data from CollectD in Azure Monitor | Microsoft Docs description: CollectD is an open source Linux daemon that periodically collects data from applications and system level information. This article provides information on collecting data from CollectD in Azure Monitor. -- Previously updated : 11/27/2018 Last updated : 03/31/2022
azure-monitor Data Sources Iis Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-iis-logs.md
Title: Collect IIS logs with Log Analytics agent in Azure Monitor description: Internet Information Services (IIS) stores user activity in log files that can be collected by Azure Monitor. This article describes how to configure collection of IIS logs and details of the records they create in Azure Monitor. -- Previously updated : 02/26/2021 Last updated : 03/31/2022
azure-monitor Data Sources Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-json.md
Title: Collecting custom JSON data sources with the Log Analytics agent for Linux in Azure Monitor description: Custom JSON data sources can be collected into Azure Monitor using the Log Analytics Agent for Linux. These custom data sources can be simple scripts returning JSON such as curl or one of FluentD's 300+ plugins. This article describes the configuration required for this data collection. -- Previously updated : 11/28/2018 Last updated : 03/31/2022
azure-monitor Data Sources Linux Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-linux-applications.md
Title: Collect Linux application performance in Azure Monitor | Microsoft Docs description: This article provides details for configuring the Log Analytics agent for Linux to collect performance counters for MySQL and Apache HTTP Server. -- Previously updated : 05/04/2017 Last updated : 03/31/2022
azure-monitor Diagnostics Extension To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-to-application-insights.md
Title: Send Azure Diagnostics data to Application Insights description: Update the Azure Diagnostics public configuration to send data to Application Insights. -- Previously updated : 01/20/2022 Last updated : 03/31/2022
azure-monitor Diagnostics Extension Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-troubleshooting.md
Title: Troubleshooting Azure Diagnostics extension description: Troubleshoot problems when using Azure diagnostics in Azure Virtual Machines, Service Fabric, or Cloud Services. -- Previously updated : 05/08/2019 Last updated : 03/31/2022
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
Title: Connect Operations Manager to Azure Monitor | Microsoft Docs description: To maintain your existing investment in System Center Operations Manager and use extended capabilities with Log Analytics, you can integrate Operations Manager with your workspace. -- Previously updated : 07/24/2020 Last updated : 03/31/2022 # Connect Operations Manager to Azure Monitor - To maintain your existing investment in [System Center Operations Manager](/system-center/scom/key-concepts) and use extended capabilities with Azure Monitor, you can integrate Operations Manager with your Log Analytics workspace. This allows you to take advantage of logs in Azure Monitor while continuing to use Operations Manager to: * Monitor the health of your IT services with Operations Manager
azure-monitor Asp Net Troubleshoot No Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-troubleshoot-no-data.md
Last updated 05/21/2020
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] - ## Some of my telemetry is missing *In Application Insights, I only see a fraction of the events that are being generated by my app.*
-* If you're consistently seeing the same fraction, it's probably because of adaptive [sampling](../../azure-monitor/app/sampling.md). To confirm this, open Search (from the overview blade) and look at an instance of a Request or other event. To see the full property details, select the ellipsis (**...**) at the bottom of the **Properties** section. If Request Count > 1, sampling is in operation.
+* If you're consistently seeing the same fraction, it's probably because of adaptive [sampling](../../azure-monitor/app/sampling.md). To confirm this, open Search (from the **Overview** pane on the left side of the portal) and look at an instance of a Request or other event. To see the full property details, select the ellipsis (**...**) at the bottom of the **Properties** section. If Request Count > 1, sampling is in operation.
* It's possible that you're hitting a [data rate limit](../../azure-monitor/app/pricing.md#limits-summary) for your pricing plan. These limits are applied per minute. *I'm randomly experiencing data loss.*
Last updated 05/21/2020
* The SDK channel keeps telemetry in a buffer and sends it in batches. If the application is shutting down, you might need to explicitly call [Flush()](api-custom-events-metrics.md#flushing-data), as sketched below. The behavior of `Flush()` depends on the actual [channel](telemetry-channels.md#built-in-telemetry-channels) used.
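A minimal shutdown sequence, assuming an existing `TelemetryClient`; the five-second delay is an arbitrary illustration, not a documented requirement:

```csharp
using System;
using System.Threading;
using Microsoft.ApplicationInsights;

static class TelemetryShutdown
{
    public static void FlushBeforeExit(TelemetryClient telemetryClient)
    {
        // Push any buffered telemetry to the channel.
        telemetryClient.Flush();

        // ServerTelemetryChannel sends asynchronously, so allow time for the
        // transmission to finish; InMemoryChannel flushes synchronously.
        Thread.Sleep(TimeSpan.FromSeconds(5));
    }
}
```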
-## Request count collected by Application Insights SDK does not match the IIS log count for my application
+## Request count collected by Application Insights SDK doesn't match the IIS log count for my application
-Internet Information Services (IIS) logs counts of all request reaching IIS and inherently could differ from the total request reaching an application. Due to this it is not guaranteed that the request count collected by the SDKs will match the total IIS log count.
+Internet Information Services (IIS) logs the count of all requests reaching IIS, which can inherently differ from the total requests reaching an application. Because of this, it isn't guaranteed that the request count collected by the SDKs will match the total IIS log count.
## No data from my server * I installed my app on my web server, and now I don't see any telemetry from it. It worked OK on my dev machine.*
-* This is probably a firewall issue. [Set firewall exceptions for Application Insights to send data](../../azure-monitor/app/ip-addresses.md).
+* A firewall issue is most likely the cause. [Set firewall exceptions for Application Insights to send data](../../azure-monitor/app/ip-addresses.md).
* IIS Server might be missing some prerequisites, like .NET Extensibility 4.5 or ASP.NET 4.5. *I [installed Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) on my web server to monitor existing apps. I don't see any results.* * See [Troubleshooting Status Monitor](./status-monitor-v2-troubleshoot.md).
-> [!IMPORTANT]
-> [Connection Strings](./sdk-connection-string.md?tabs=net) are recommended over instrumentation keys. New Azure regions **require** the use of connection strings instead of instrumentation keys. Connection string identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource will use as a destination for your telemetry. You will need to copy the connection string and add it to your application's code or to an environment variable.
-- ## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
-If you have an ASP.NET application that it is hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service due to a missing SSL security protocol.
+If you have an ASP.NET application hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service due to a missing SSL security protocol.
[The Snapshot Debugger endpoint requires TLS version 1.2](snapshot-debugger-upgrade.md). The set of SSL security protocols is one of the quirks enabled by the httpRuntime targetFramework value in the system.web section of web.config. If the httpRuntime targetFramework is 4.5.2 or lower, then TLS 1.2 isn't included by default.
To check the setting, open your web.config file and find the system.web section.
> If the targetFramework is 4.7 or above, then Windows determines the available protocols. In Azure App Service, TLS 1.2 is available. However, if you're using your own virtual machine, you may need to enable TLS 1.2 in the OS.
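If you can't raise the `httpRuntime` targetFramework, one code-level workaround (standard .NET Framework behavior, not taken from this article) is to opt in to TLS 1.2 explicitly at application startup, for example in `Application_Start` in Global.asax:

```csharp
using System.Net;

// Opt in to TLS 1.2 without clearing the protocols that are already enabled.
ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
```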
-## FileNotFoundException: Could not load file or assembly 'Microsoft.AspNet TelemetryCorrelation
+## FileNotFoundException: "Could not load file or assembly Microsoft.AspNet TelemetryCorrelation"
-For more information on this error see [GitHub issue 1610 ]
+For more information on this error, see [GitHub issue 1610](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1610).
-When upgrading from SDKs older than (2.4) you need to make sure the following changes applied to `web.config` and `ApplicationInsights.config`:
+When upgrading from SDKs older than 2.4, make sure the following changes are applied to `web.config` and `ApplicationInsights.config`:
-1. Two http modules instead of one. In `web.config` you should have two http modules. Order is important for some scenarios:
+1. Two HTTP modules instead of one. In `web.config`, you should have two HTTP modules. Order is important for some scenarios:
``` xml <system.webServer>
When upgrading from SDKs older than (2.4) you need to make sure the following ch
* Not all types of .NET project are supported by the tools. Web and WCF projects are supported. For other project types such as desktop or service applications, you can still [add an Application Insights SDK to your project manually](./windows-desktop.md). * Make sure you have [Visual Studio 2013 Update 3 or later](/visualstudio/releasenotes/vs2013-update3-rtm-vs). It comes pre-installed with Developer Analytics tools, which provide the Application Insights SDK.
-* Select **Tools**, **Extensions and Updates** and check that **Developer Analytics Tools** is installed and enabled. If so, click **Updates** to see if there's an update available.
-* Open the New Project dialog and choose ASP.NET Web application. If you see the Application Insights option there, then the tools are installed. If not, try uninstalling and then re-installing the Developer Analytics Tools.
+* Select **Tools**, **Extensions and Updates** and check that **Developer Analytics Tools** is installed and enabled. If so, select **Updates** to see if there's an update available.
+* Open the New Project dialog and choose ASP.NET Web application. If you see the Application Insights option there, then the tools are installed. If not, try uninstalling and then reinstalling the Developer Analytics Tools.
## <a name="q02"></a>Adding Application Insights failed *When I try to add Application Insights to an existing project, I see an error message.*
When upgrading from SDKs older than (2.4) you need to make sure the following ch
Likely causes: * Communication with the Application Insights portal failed; or
-* There is some problem with your Azure account;
+* There's a problem with your Azure account;
* You only have [read access to the subscription or group where you were trying to create the new resource](./resources-roles-access-control.md). Fix: * Check that you provided sign-in credentials for the right Azure account.
-* In your browser, check that you have access to the [Azure portal](https://portal.azure.com). Open Settings and see if there is any restriction.
-* [Add Application Insights to your existing project](./asp-net.md): In Solution Explorer, right click your project and choose "Add Application Insights."
-
-## <a name="emptykey"></a>I get an error "Instrumentation key cannot be empty"
-Looks like something went wrong while you were installing Application Insights or maybe a logging adapter.
-
-In Solution Explorer, right-click your project and choose **Application Insights > Configure Application Insights**. You'll get a dialog that invites you to sign in to Azure and either create an Application Insights resource, or re-use an existing one.
+* In your browser, check that you have access to the [Azure portal](https://portal.azure.com). Open Settings and see if there's any restriction.
+* [Add Application Insights to your existing project](./asp-net.md): In Solution Explorer, right-click your project and choose "Add Application Insights."
## <a name="NuGetBuild"></a> "NuGet package(s) are missing" on my build server *Everything builds OK when I'm debugging on my development machine, but I get a NuGet error on the build server.*
-Please see [NuGet Package Restore](https://docs.nuget.org/Consume/Package-Restore)
+See [NuGet Package Restore](https://docs.nuget.org/Consume/Package-Restore)
and [Automatic Package Restore](https://docs.nuget.org/Consume/package-restore/migrating-to-automatic-package-restore). ## Missing menu command to open Application Insights from Visual Studio
and [Automatic Package Restore](https://docs.nuget.org/Consume/package-restore/m
Likely causes:
-* If you created the Application Insights resource manually, or if the project is of a type that isn't supported by the Application Insights tools.
+* You created the Application Insights resource manually.
+* The project is of a type that isn't supported by the Application Insights tools.
* The Developer Analytics tools are disabled in your Visual Studio. * Your Visual Studio is older than 2013 Update 3. Fix: * Make sure your Visual Studio version is 2013 update 3 or later.
-* Select **Tools**, **Extensions and Updates** and check that **Developer Analytics tools** is installed and enabled. If so, click **Updates** to see if there's an update available.
+* Select **Tools**, **Extensions and Updates** and check that **Developer Analytics tools** is installed and enabled. If so, select **Updates** to see if there's an update available.
* Right-click your project in Solution Explorer. If you see the command **Application Insights > Configure Application Insights**, use it to connect your project to the resource in the Application Insights service. Otherwise, your project type isn't directly supported by the Developer Analytics tools. To see your telemetry, sign in to the [Azure portal](https://portal.azure.com), choose Application Insights on the left navigation bar, and select your application.
Otherwise, your project type isn't directly supported by the Developer Analytics
The Microsoft sign-in that you last used on your default browser doesn't have access to [the resource that was created when Application Insights was added to this app](./asp-net.md). There are two likely reasons:
-* You have more than one Microsoft account - maybe a work and a personal Microsoft account? The sign-in that you last used on your default browser was for a different account than the one that has access to [add Application Insights to the project](./asp-net.md).
- * Fix: Click your name at top right of the browser window, and sign out. Then sign in with the account that has access. Then on the left navigation bar, click Application Insights and select your app.
+* You have more than one Microsoft account, maybe a work account and a personal one. The sign-in that you last used on your default browser was for a different account than the one that has access to [add Application Insights to the project](./asp-net.md).
+ * Fix: Select your name at the top right of the browser window, and sign out. Then sign in with the account that has access. On the left navigation bar, select Application Insights, and then select your app.
* Someone else added Application Insights to the project, and they forgot to give you [access to the resource group](./resources-roles-access-control.md) in which it was created. * Fix: If they used an organizational account, they can add you to the team; or they can grant you individual access to the resource group.
The Microsoft sign-in that you last used on your default browser doesn't have ac
Likely causes: * The Application Insights resource for your application has been deleted; or
-* The instrumentation key was set or changed in ApplicationInsights.config by editing it directly, without updating the project file.
+* The [connection string](./sdk-connection-string.md) was set or changed in ApplicationInsights.config by editing it directly, without updating the project file.
-The instrumentation key in ApplicationInsights.config controls where the telemetry is sent. A line in the project file controls which resource is opened when you use the command in Visual Studio.
+The [connection string](./sdk-connection-string.md) in ApplicationInsights.config controls where the telemetry is sent. A line in the project file controls which resource is opened when you use the command in Visual Studio.
Fix: * In Solution Explorer, right-click the project and choose Application Insights, Configure Application Insights. In the dialog, you can either choose to send telemetry to an existing resource, or create a new one. Or:
-* Open the resource directly. Sign in to [the Azure portal](https://portal.azure.com), click Application Insights on the left navigation bar, and then select your app.
+* Open the resource directly. Sign in to [the Azure portal](https://portal.azure.com), select Application Insights on the left navigation bar, and then select your app.
## Where do I find my telemetry? *I signed in to the [Microsoft Azure portal](https://portal.azure.com), and I'm looking at the Azure home dashboard. So where do I find my Application Insights data?*
-* On the left navigation bar, click Application Insights, then your app name. If you don't have any projects there, you need to [add or configure Application Insights in your web project](./asp-net.md).
- There you'll see some summary charts. You can click through them to see more detail.
-* In Visual Studio, while you're debugging your app, click the Application Insights button.
+* On the left navigation bar, select Application Insights, then your app name. If you don't have any projects there, you need to [add or configure Application Insights in your web project](./asp-net.md).
+ There you'll see some summary charts. You can select any of them to see more detail.
+* In Visual Studio, while you're debugging your app, select the Application Insights button.
## <a name="q03"></a> No server data (or no data at all) *I ran my app and then opened the Application Insights service in Microsoft Azure, but all the charts show 'Learn how to collect...' or 'Not configured.'* Or, *only Page View and user data, but no server data.*
Fix:
* Run your application in debug mode in Visual Studio (F5). Use the application to generate some telemetry. Check that you can see events logged in the Visual Studio output window. ![Screenshot that shows running your application in debug mode in Visual Studio.](./media/asp-net-troubleshoot-no-data/output-window.png)
* In the Application Insights portal, open [Diagnostic Search](./diagnostic-search.md). Data usually appears here first.
-* Click the Refresh button. The blade refreshes itself periodically, but you can also do it manually. The refresh interval is longer for larger time ranges.
-* Check the instrumentation keys match. On the main blade for your app in the Application Insights portal, in the **Essentials** drop-down, look at **Instrumentation key**. Then, in your project in Visual Studio, open ApplicationInsights.config and find the `<instrumentationkey>`. Check that the two keys are equal. If not:
- * In the portal, click Application Insights and look for the app resource with the right key; or
+* Select the Refresh button. The blade refreshes itself periodically, but you can also do it manually. The refresh interval is longer for larger time ranges.
+* Verify the [connection strings](./sdk-connection-string.md) match. On the main blade for your app in the Application Insights portal, in the **Essentials** drop-down, look at **Connection string**. Then, in your project in Visual Studio, open ApplicationInsights.config and find the `<ConnectionString>`. Check that the two strings are equal. If not:
+ * In the portal, select Application Insights and look for the app resource with the right string; or
* In Visual Studio Solution Explorer, right-click the project and choose Application Insights, Configure. Reset the app to send telemetry to the right resource.
- * If you can't find the matching keys, check that you are using the same sign-in credentials in Visual Studio as in to the portal.
-* In the [Microsoft Azure home dashboard](https://portal.azure.com), look at the Service Health map. If there are some alert indications, wait until they have returned to OK and then close and re-open your Application Insights application blade.
+ * If you can't find the matching strings, check that you're using the same sign-in credentials in Visual Studio as in the portal.
+* In the [Microsoft Azure home dashboard](https://portal.azure.com), look at the Service Health map. If there are some alert indications, wait until they've returned to OK and then close and reopen your Application Insights application blade.
* Check also [our status blog](https://techcommunity.microsoft.com/t5/azure-monitor-status/bg-p/AzureMonitorStatusBlog).
-* Did you write any code for the [server-side SDK](./api-custom-events-metrics.md) that might change the instrumentation key in `TelemetryClient` instances or in `TelemetryContext`? Or did you write a [filter or sampling configuration](./api-filtering-sampling.md) that might be filtering out too much?
-* If you edited ApplicationInsights.config, carefully check the configuration of [TelemetryInitializers and TelemetryProcessors](./api-filtering-sampling.md). An incorrectly-named type or parameter can cause the SDK to send no data.
+* Did you write any code for the [server-side SDK](./api-custom-events-metrics.md) that might change the [connection string](./sdk-connection-string.md) in `TelemetryClient` instances or in `TelemetryContext`? Or did you write a [filter or sampling configuration](./api-filtering-sampling.md) that might be filtering out too much?
+* If you edited ApplicationInsights.config, carefully check the configuration of [TelemetryInitializers and TelemetryProcessors](./api-filtering-sampling.md). An incorrectly named type or parameter can cause the SDK to send no data.
## <a name="q04"></a>No data on Page Views, Browsers, Usage *I see data in Server Response Time and Server Requests charts, but no data in Page View Load time, or in the Browser or Usage blades.*
See [dependency telemetry](./asp-net-dependencies.md) and [exception telemetry](
Performance data (CPU, IO rate, and so on) is available for [Java web services](java-2x-collectd.md), [Windows desktop apps](./windows-desktop.md), [IIS web apps and services if you install Application Insights Agent](./status-monitor-v2-overview.md), and [Azure Cloud Services](./app-insights-overview.md). You'll find it under Settings, Servers. ## No (server) data since I published the app to my server
-* Check that you actually copied all the Microsoft. ApplicationInsights DLLs to the server, together with Microsoft.Diagnostics.Instrumentation.Extensions.Intercept.dll
+* Check that you copied all the Microsoft.ApplicationInsights DLLs to the server, together with Microsoft.Diagnostics.Instrumentation.Extensions.Intercept.dll
* In your firewall, you might have to [open some TCP ports](./ip-addresses.md). * If you have to use a proxy to send out of your corporate network, set [defaultProxy](/previous-versions/dotnet/netframework-1.1/aa903360(v=vs.71)) in Web.config
-* Windows Server 2008: Make sure you have installed the following updates: [KB2468871](https://support.microsoft.com/kb/2468871), [KB2533523](https://support.microsoft.com/kb/2533523), [KB2600217](https://www.microsoft.com/download/details.aspx?id=28936).
+* Windows Server 2008: Make sure you've installed the following updates: [KB2468871](https://support.microsoft.com/kb/2468871), [KB2533523](https://support.microsoft.com/kb/2533523), [KB2600217](https://www.microsoft.com/download/details.aspx?id=28936).
## I used to see data, but it has stopped
-* Have you hit your monthly quota of data points? Open the Settings/Quota and Pricing to find out. If so, you can upgrade your plan, or pay for additional capacity. See the [pricing scheme](https://azure.microsoft.com/pricing/details/application-insights/).
+* Have you hit your monthly quota of data points? Open the Settings/Quota and Pricing to find out. If so, you can upgrade your plan, or pay for more capacity. See the [pricing scheme](https://azure.microsoft.com/pricing/details/application-insights/).
## I don't see all the data I'm expecting
-If your application sends a lot of data and you are using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the [adaptive sampling](./sampling.md) feature may operate and send only a percentage of your telemetry.
+If your application sends a lot of data and you're using the Application Insights SDK for ASP.NET version 2.0.0-beta3 or later, the [adaptive sampling](./sampling.md) feature may operate and send only a percentage of your telemetry.
-You can disable it, but this is not recommended. Sampling is designed so that related telemetry is correctly transmitted, for diagnostic purposes.
+You can disable it, but doing so isn't recommended. Sampling is designed so that related telemetry is correctly transmitted for diagnostic purposes.
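If you do need to adjust sampling in code rather than in `ApplicationInsights.config`, here's a hedged sketch using the telemetry processor chain builder from the `Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel` package; the 10 percent rate is an arbitrary example value:

```csharp
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;

var configuration = TelemetryConfiguration.CreateDefault();
var builder = configuration.DefaultTelemetrySink.TelemetryProcessorChainBuilder;

// Replace adaptive sampling with fixed-rate sampling. Related telemetry
// items are still transmitted together for diagnostic purposes.
builder.UseSampling(10.0);
builder.Build();
```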
## Client IP address is 0.0.0.0
-On February 5 2018, we announced that we removed logging of the Client IP address. This does not affect Geo Location.
+On February 5, 2018, we announced that we removed logging of the Client IP address. This doesn't affect Geo Location.
> [!NOTE] > If you need the first 3 octets of the IP address, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to add a custom attribute.
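A hedged sketch of such an initializer; the property name and the assumption that `Location.Ip` is still populated when initializers run are illustrative, not taken from this article:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class ClientIpPrefixInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Copy the first three octets into a custom property before the
        // platform scrubs the client IP to 0.0.0.0.
        string ip = telemetry.Context.Location.Ip;
        if (!string.IsNullOrEmpty(ip) && telemetry is ISupportProperties props)
        {
            string[] octets = ip.Split('.');
            if (octets.Length == 4)
            {
                props.Properties["client-ip-prefix"] =
                    $"{octets[0]}.{octets[1]}.{octets[2]}.0";
            }
        }
    }
}
```

You'd still need to register the initializer, for example in `ApplicationInsights.config`.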
On February 5 2018, we announced that we removed logging of the Client IP addres
The city, region, and country dimensions are derived from IP addresses and aren't always accurate. These IP addresses are processed for location first and then changed to 0.0.0.0 to be stored. ## Exception "method not found" on running in Azure Cloud Services
-Did you build for .NET 4.6? 4.6 is not automatically supported in Azure Cloud Services roles. [Install 4.6 on each role](../../cloud-services/cloud-services-dotnet-install-dotnet.md) before running your app.
+Did you build for .NET 4.6? 4.6 isn't automatically supported in Azure Cloud Services roles. [Install 4.6 on each role](../../cloud-services/cloud-services-dotnet-install-dotnet.md) before running your app.
## Troubleshooting Logs
Follow these instructions to capture troubleshooting logs for your framework.
2. Restart the process so that the new settings are picked up by the SDK
-3. Revert these changes when you are finished.
+3. Revert these changes when you're finished.
### .NET Core
Follow these instructions to capture troubleshooting logs for your framework.
3. Restart the process so that the new settings are picked up by the SDK
-4. Revert these changes when you are finished.
+4. Revert these changes when you're finished.
## <a name="PerfView"></a> Collect logs with PerfView
-[PerfView](https://github.com/Microsoft/perfview) is a free diagnostics and performance-analysis tool that help isolate CPU, memory, and other issues by collecting and visualizing diagnostics information from many sources.
+[PerfView](https://github.com/Microsoft/perfview) is a free tool that helps isolate CPU, memory, and other issues.
The Application Insights SDK emits EventSource self-troubleshooting logs that can be captured by PerfView.
For more information,
## Collect logs with dotnet-trace
-Alternatively, customers can also use a cross-platform .NET Core tool, [`dotnet-trace`](/dotnet/core/diagnostics/dotnet-trace) for collecting logs that can further help in troubleshooting. This may be particularly helpful for linux-based environments.
+Alternatively, you can use the cross-platform .NET Core tool [`dotnet-trace`](/dotnet/core/diagnostics/dotnet-trace) to collect logs that can further help in troubleshooting. This may be helpful for Linux-based environments.
After installation of [`dotnet-trace`](/dotnet/core/diagnostics/dotnet-trace), execute the command below in bash.
azure-monitor Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md
You need a subscription with [Microsoft Azure](https://azure.com). Sign in with
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] - ## Getting started
-> [!IMPORTANT]
-> [Connection Strings](./sdk-connection-string.md?tabs=net) are recommended over instrumentation keys. New Azure regions **require** the use of connection strings instead of instrumentation keys. Connection string identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource will use as a destination for your telemetry. You will need to copy the connection string and add it to your application's code or to an environment variable.
- * In the [Azure portal](https://portal.azure.com), [create an Application Insights resource](./create-new-resource.md). For application type, choose **General**.
-* Take a copy of the Instrumentation Key. Find the key in the **Essentials** drop-down of the new resource you created.
+* Take a copy of the connection string. Find the connection string in the **Essentials** drop-down of the new resource you created.
* Install latest [Microsoft.ApplicationInsights](https://www.nuget.org/packages/Microsoft.ApplicationInsights) package.
-* Set the instrumentation key in your code before tracking any telemetry (or set APPINSIGHTS_INSTRUMENTATIONKEY environment variable). After that, you should be able to manually track telemetry and see it on the Azure portal
+* Set the connection string in your code before tracking any telemetry (or set the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable). After that, you should be able to manually track telemetry and see it on the Azure portal.
```csharp // you may use different options to create configuration as shown later in this article TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault();
-configuration.InstrumentationKey = " *your key* ";
+configuration.ConnectionString = "<Copy connection string from Application Insights Resource Overview>";
var telemetryClient = new TelemetryClient(configuration); telemetryClient.TrackTrace("Hello World!"); ``` > [!NOTE]
-> Telemetry is not sent instantly. Telemetry items are batched and sent by the ApplicationInsights SDK. In Console apps, which exits right after calling `Track()` methods, telemetry may not be sent unless `Flush()` and `Sleep`/`Delay` is done before the app exits as shown in [full example](#full-example) later in this article. `Sleep` is not required if you are using `InMemoryChannel`. There is an active issue regarding the need for `Sleep` which is tracked here: [ApplicationInsights-dotnet/issues/407](https://github.com/microsoft/ApplicationInsights-dotnet/issues/407)
+> Telemetry is not sent instantly. Telemetry items are batched and sent by the ApplicationInsights SDK. In Console apps, which exit right after calling `Track()` methods, telemetry may not be sent unless `Flush()` and `Sleep`/`Delay` are done before the app exits, as shown in the [full example](#full-example) later in this article. `Sleep` is not required if you are using `InMemoryChannel`. There is an active issue regarding the need for `Sleep` which is tracked here: [ApplicationInsights-dotnet/issues/407](https://github.com/microsoft/ApplicationInsights-dotnet/issues/407)
* Install latest version of [Microsoft.ApplicationInsights.DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector) package - it automatically tracks HTTP, SQL, or some other external dependency calls.
var telemetryClient = new TelemetryClient(configuration);
For more information, see [configuration file reference](configuration-with-applicationinsights-config.md).
-You may get a full example of the config file by installing latest version of [Microsoft.ApplicationInsights.WindowsServer](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer) package. Here is the **minimal** configuration for dependency collection that is equivalent to the code example.
+You can get a full example of the config file by installing the latest version of the [Microsoft.ApplicationInsights.WindowsServer](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer) package. Here's the **minimal** configuration for dependency collection that's equivalent to the code example.
```xml <?xml version="1.0" encoding="utf-8"?> <ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
- <InstrumentationKey>Your Key</InstrumentationKey>
+ <ConnectionString>Copy connection string from Application Insights Resource Overview</ConnectionString>
<TelemetryInitializers> <Add Type="Microsoft.ApplicationInsights.DependencyCollector.HttpDependenciesParsingTelemetryInitializer, Microsoft.AI.DependencyCollector"/> </TelemetryInitializers>
You may get a full example of the config file by installing latest version of [M
> [!NOTE] > Reading a config file isn't supported on .NET Core. Consider using the [Application Insights SDK for ASP.NET Core](./asp-net-core.md)
-* During application start-up create and configure `DependencyTrackingTelemetryModule` instance - it must be singleton and must be preserved for application lifetime.
+* During application start-up, create and configure a `DependencyTrackingTelemetryModule` instance. It must be a singleton and must be preserved for the application lifetime.
```csharp var module = new DependencyTrackingTelemetryModule();
namespace ConsoleApp
{ TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault();
- configuration.InstrumentationKey = "removed";
+ configuration.ConnectionString = "removed";
configuration.TelemetryInitializers.Add(new HttpDependenciesParsingTelemetryInitializer()); var telemetryClient = new TelemetryClient(configuration);
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Java offering. After you finish the instructions in this article, you'll be able to use Azure Monitor Application Insights to monitor your application. + ## Get started Java auto-instrumentation can be enabled without any code changes.
Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to your applicatio
- You can set an environment variable: ```console
- APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=...
+ APPLICATIONINSIGHTS_CONNECTION_STRING=<Copy connection string from Application Insights Resource Overview>
``` - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.10.jar` with the following content: ```json {
- "connectionString": "InstrumentationKey=..."
+ "connectionString": "Copy connection string from Application Insights Resource Overview"
} ``` 1. Find the connection string on your Application Insights resource.
- :::image type="content" source="media/java-ipa/connection-string.png" alt-text="Screenshot that shows the Application Insights connection string.":::
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot displaying Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
#### Confirm data is flowing
Run your application and open your **Application Insights Resource** tab in the
> [!IMPORTANT] > If you have two or more services that emit telemetry to the same Application Insights resource, you're required to [set cloud role names](java-standalone-config.md#cloud-role-name) to represent them properly on the application map.
-As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You have the option to disable nonessential data collection. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md).
+As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. You can disable nonessential data collection. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md).
## Configuration options
Autocollected dependencies without downstream distributed trace propagation:
### Azure SDKs
-Telemetry emitted by these Azure SDKs is autocollected by default:
+Telemetry emitted by these Azure SDKs is automatically collected by default:
* [Azure App Configuration](/java/api/overview/azure/data-appconfiguration-readme) 1.1.10+ * [Azure Cognitive Search](/java/api/overview/azure/search-documents-readme) 11.3.0+
Telemetry emitted by these Azure SDKs is autocollected by default:
* [Azure Text Analytics](/java/api/overview/azure/ai-textanalytics-readme) 5.0.4+ [//]: # "the above names and links scraped from https://azure.github.io/azure-sdk/releases/latest/java.html"
-[//]: # "and version sync'd manually against the oldest version in maven central built on azure-core 1.14.0"
+[//]: # "and version synced manually against the oldest version in Maven Central built on azure-core 1.14.0"
[//]: # "" [//]: # "var table = document.querySelector('#tg-sb-content > div > table')" [//]: # "var str = ''"
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Application Insights can be used with any web pages - you just add a short piece
## Adding the JavaScript SDK
-> [!IMPORTANT]
-> [Connection Strings](./sdk-connection-string.md?tabs=js) are recommended over instrumentation keys. New Azure regions **require** the use of connection strings instead of instrumentation keys. Connection string identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource will use as a destination for your telemetry. You will need to copy the connection string and add it to your application's code or to an environment variable.
-
-1. First you need an Application Insights resource. If you don't already have a resource and instrumentation key, follow the [create a new resource instructions](create-new-resource.md).
-2. Copy the _instrumentation key_ (also known as "iKey") or [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1.) You'll add it to the `instrumentationKey` or `connectionString` setting of the Application Insights JavaScript SDK.
+1. First you need an Application Insights resource. If you don't already have a resource and connection string, follow the [create a new resource instructions](create-new-resource.md).
+2. Copy the [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1). You'll add it to the `connectionString` setting of the Application Insights JavaScript SDK.
3. Add the Application Insights JavaScript SDK to your web page or app via one of the following two options:
   * [npm Setup](#npm-based-setup)
   * [JavaScript Snippet](#snippet-based-setup)
npm i --save @microsoft/applicationinsights-web
import { ApplicationInsights } from '@microsoft/applicationinsights-web' const appInsights = new ApplicationInsights({ config: {
- instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE'
+ connectionString: 'Copy connection string from Application Insights Resource Overview'
/* ...Other Configuration Options... */ } }); appInsights.loadAppInsights();
The current Snippet (listed below) is version "5", the version is encoded in the
```html <script type="text/javascript">
-!function(T,l,y){var S=T.location,k="script",D="instrumentationKey",C="ingestionendpoint",I="disableExceptionTracking",E="ai.device.",b="toLowerCase",w="crossOrigin",N="POST",e="appInsightsSDK",t=y.name||"appInsights";(y.name||T[e])&&(T[e]=t);var n=T[t]||function(d){var g=!1,f=!1,m={initialize:!0,queue:[],sv:"5",version:2,config:d};function v(e,t){var n={},a="Browser";return n[E+"id"]=a[b](),n[E+"type"]=a,n["ai.operation.name"]=S&&S.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(m.sv||m.version),{time:function(){var e=new Date;function t(e){var t=""+e;return 1===t.length&&(t="0"+t),t}return e.getUTCFullYear()+"-"+t(1+e.getUTCMonth())+"-"+t(e.getUTCDate())+"T"+t(e.getUTCHours())+":"+t(e.getUTCMinutes())+":"+t(e.getUTCSeconds())+"."+((e.getUTCMilliseconds()/1e3).toFixed(3)+"").slice(2,5)+"Z"}(),iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}}}}var h=d.url||y.src;if(h){function a(e){var t,n,a,i,r,o,s,c,u,p,l;g=!0,m.queue=[],f||(f=!0,t=h,s=function(){var e={},t=d.connectionString;if(t)for(var n=t.split(";"),a=0;a<n.length;a++){var i=n[a].split("=");2===i.length&&(e[i[0][b]()]=i[1])}if(!e[C]){var r=e.endpointsuffix,o=r?e.location:null;e[C]="https://"+(o?o+".":"")+"dc."+(r||"services.visualstudio.com")}return e}(),c=s[D]||d[D]||"",u=s[C],p=u?u+"/v2/track":d.endpointUrl,(l=[]).push((n="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",a=t,i=p,(o=(r=v(c,"Exception")).data).baseType="ExceptionData",o.baseData.exceptions=[{typeName:"SDKLoadFailed",message:n.replace(/\./g,"-"),hasFullStack:!1,stack:n+"\nSnippet failed to load ["+a+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(S&&S.pathname||"_unknown_")+"\nEndpoint: "+i,parsedStack:[]}],r)),l.push(function(e,t,n,a){var i=v(c,"Message"),r=i.data;r.baseType="MessageData";var o=r.baseData;return o.message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+n+")").replace(/\"/g,"")+'"',o.properties={endpoint:a},i}(0,0,t,p)),function(e,t){if(JSON){var n=T.fetch;if(n&&!y.useXhr)n(t,{method:N,body:JSON.stringify(e),mode:"cors"});else if(XMLHttpRequest){var a=new XMLHttpRequest;a.open(N,t),a.setRequestHeader("Content-type","application/json"),a.send(JSON.stringify(e))}}}(l,p))}function i(e,t){f||setTimeout(function(){!t&&m.core||a()},500)}var e=function(){var n=l.createElement(k);n.src=h;var e=y[w];return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=i,n.onerror=a,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||i(0,t)},n}();y.ld<0?l.getElementsByTagName("head")[0].appendChild(e):setTimeout(function(){l.getElementsByTagName(k)[0].parentNode.appendChild(e)},y.ld||0)}try{m.cookie=l.cookie}catch(p){}function t(e){for(;e.length;)!function(t){m[t]=function(){var e=arguments;g||m.queue.push(function(){m[t].apply(m,e)})}}(e.pop())}var n="track",r="TrackPage",o="TrackEvent";t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+r,"stop"+r,"start"+o,"stop"+o,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),m.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4};var s=(d.extensionConfig||{}).ApplicationInsightsAnalytics||{};if(!0!==d[I]&&!0!==s[I]){var c="onerror";t(["_"+c]);var u=T[c];T[c]=function(e,t,n,a,i){var 
r=u&&u(e,t,n,a,i);return!0!==r&&m["_"+c]({message:e,url:t,lineNumber:n,columnNumber:a,error:i}),r},d.autoExceptionInstrumented=!0}return m}(y.cfg);function a(){y.onInit&&y.onInit(n)}(T[t]=n).queue&&0===n.queue.length?(n.queue.push(a),n.trackPageView({})):a()}(window,document,{
+!function(T,l,y){var S=T.location,k="script",D="connectionString",C="ingestionendpoint",I="disableExceptionTracking",E="ai.device.",b="toLowerCase",w="crossOrigin",N="POST",e="appInsightsSDK",t=y.name||"appInsights";(y.name||T[e])&&(T[e]=t);var n=T[t]||function(d){var g=!1,f=!1,m={initialize:!0,queue:[],sv:"5",version:2,config:d};function v(e,t){var n={},a="Browser";return n[E+"id"]=a[b](),n[E+"type"]=a,n["ai.operation.name"]=S&&S.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(m.sv||m.version),{time:function(){var e=new Date;function t(e){var t=""+e;return 1===t.length&&(t="0"+t),t}return e.getUTCFullYear()+"-"+t(1+e.getUTCMonth())+"-"+t(e.getUTCDate())+"T"+t(e.getUTCHours())+":"+t(e.getUTCMinutes())+":"+t(e.getUTCSeconds())+"."+((e.getUTCMilliseconds()/1e3).toFixed(3)+"").slice(2,5)+"Z"}(),name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}}}}var h=d.url||y.src;if(h){function a(e){var t,n,a,i,r,o,s,c,u,p,l;g=!0,m.queue=[],f||(f=!0,t=h,s=function(){var e={},t=d.connectionString;if(t)for(var n=t.split(";"),a=0;a<n.length;a++){var i=n[a].split("=");2===i.length&&(e[i[0][b]()]=i[1])}if(!e[C]){var r=e.endpointsuffix,o=r?e.location:null;e[C]="https://"+(o?o+".":"")+"dc."+(r||"services.visualstudio.com")}return e}(),c=s[D]||d[D]||"",u=s[C],p=u?u+"/v2/track":d.endpointUrl,(l=[]).push((n="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",a=t,i=p,(o=(r=v(c,"Exception")).data).baseType="ExceptionData",o.baseData.exceptions=[{typeName:"SDKLoadFailed",message:n.replace(/\./g,"-"),hasFullStack:!1,stack:n+"\nSnippet failed to load ["+a+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(S&&S.pathname||"_unknown_")+"\nEndpoint: "+i,parsedStack:[]}],r)),l.push(function(e,t,n,a){var i=v(c,"Message"),r=i.data;r.baseType="MessageData";var o=r.baseData;return o.message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+n+")").replace(/\"/g,"")+'"',o.properties={endpoint:a},i}(0,0,t,p)),function(e,t){if(JSON){var n=T.fetch;if(n&&!y.useXhr)n(t,{method:N,body:JSON.stringify(e),mode:"cors"});else if(XMLHttpRequest){var a=new XMLHttpRequest;a.open(N,t),a.setRequestHeader("Content-type","application/json"),a.send(JSON.stringify(e))}}}(l,p))}function i(e,t){f||setTimeout(function(){!t&&m.core||a()},500)}var e=function(){var n=l.createElement(k);n.src=h;var e=y[w];return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=i,n.onerror=a,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||i(0,t)},n}();y.ld<0?l.getElementsByTagName("head")[0].appendChild(e):setTimeout(function(){l.getElementsByTagName(k)[0].parentNode.appendChild(e)},y.ld||0)}try{m.cookie=l.cookie}catch(p){}function t(e){for(;e.length;)!function(t){m[t]=function(){var e=arguments;g||m.queue.push(function(){m[t].apply(m,e)})}}(e.pop())}var n="track",r="TrackPage",o="TrackEvent";t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+r,"stop"+r,"start"+o,"stop"+o,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),m.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4};var s=(d.extensionConfig||{}).ApplicationInsightsAnalytics||{};if(!0!==d[I]&&!0!==s[I]){var c="onerror";t(["_"+c]);var u=T[c];T[c]=function(e,t,n,a,i){var 
r=u&&u(e,t,n,a,i);return!0!==r&&m["_"+c]({message:e,url:t,lineNumber:n,columnNumber:a,error:i}),r},d.autoExceptionInstrumented=!0}return m}(y.cfg);function a(){y.onInit&&y.onInit(n)}(T[t]=n).queue&&0===n.queue.length?(n.queue.push(a),n.trackPageView({})):a()}(window,document,{
src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // The SDK URL Source // name: "appInsights", // Global SDK Instance name defaults to "appInsights" when not supplied // ld: 0, // Defines the load delay (in ms) before attempting to load the sdk. -1 = block page load and add to head. (default) = 0ms load after timeout,
src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // The SDK URL Source
crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag // onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DO NOT ADD anything to the sdk.queue -- As they won't get called) cfg: { // Application Insights Configuration
- instrumentationKey: "YOUR_INSTRUMENTATION_KEY_GOES_HERE"
+ connectionString: "Copy connection string from Application Insights Resource Overview"
    /* ...Other Configuration Options... */
}});
</script>
#### Reporting Script load failures
-This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser), this exception provides visibility into failures of this type so that you're aware that your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK didn't load or initialize which can lead to:
+This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser). This exception provides visibility into failures of this type so that you're aware your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you've lost telemetry because the SDK didn't load or initialize, which can lead to:
- Under-reporting of how users are using (or trying to use) your site;
- Missing telemetry on how your end users are using your site;
- Missing JavaScript errors that could potentially be blocking your end users from successfully using your site.
For details on this exception see the [SDK load failure](javascript-sdk-load-failure.md) troubleshooting page.
Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the application insights configuration. Therefore, if this failure occurs, it will always be reported by the snippet, even when the window.onerror support is disabled.
-Reporting of SDK load failures is not supported on Internet Explorer 8 or earlier. This reduces the minified size of the snippet by assuming that most environments aren't exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you'll need to either include a fetch poly fill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```, it's recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
+Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier. This reduces the minified size of the snippet by assuming that most environments aren't exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you'll need to either include a fetch polyfill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```. It's recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
> [!NOTE] > If you are using a previous version of the snippet, it is highly recommended that you update to the latest version so that you will receive these previously unreported issues.
The available configuration options are
### Connection String Setup
-For either the NPM or Snippet setup, you can also configure your instance of Application Insights using a Connection String. Replace the `instrumentationKey` field with the `connectionString` field.
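For illustration, here's a minimal sketch of that swap for the NPM setup (the connection string value is a placeholder; copy yours from the Application Insights resource's Overview pane):

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web'

// A sketch: the same initialization as the instrumentation key setup, except the
// config carries a connectionString field instead of instrumentationKey.
const appInsights = new ApplicationInsights({ config: {
  connectionString: '<Copy connection string from Application Insights Resource Overview>'
  /* ...Other Configuration Options... */
} });
appInsights.loadAppInsights();
appInsights.trackPageView(); // optionally record an initial page view
```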
```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web'
appInsights.trackTrace({message: 'this message will not be sent'}); // Not sent
```

## Configuration
-Most configuration fields are named such that they can be defaulted to false. All fields are optional except for `instrumentationKey`.
+Most configuration fields are named such that they can be defaulted to false. All fields are optional except for `connectionString`.
| Name | Description | Default |
|-|-|-|
-| instrumentationKey | **Required**<br>Instrumentation key that you obtained from the Azure portal. | string<br/>null |
+| connectionString | **Required**<br>Connection string that you obtained from the Azure portal. | string<br/>null |
| accountId | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars | string<br/>null |
| sessionRenewalMs | A session is logged if the user is inactive for this amount of time in milliseconds. | numeric<br/>1800000<br/>(30 mins) |
| sessionExpirationMs | A session is logged if it has continued for this amount of time in milliseconds. | numeric<br/>86400000<br/>(24 hours) |
| enableResponse&#8203;HeaderTracking | If true, AJAX & Fetch request's response headers are tracked. | boolean<br/> false |
| distributedTracingMode | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) will be generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services. See example [here](./correlation.md#enable-w3c-distributed-tracing-support-for-web-apps). | `DistributedTracingModes` or<br/>numeric<br/>(Since v2.6.0) `DistributedTracingModes.AI_AND_W3C`<br />(v2.5.11 or earlier) `DistributedTracingModes.AI` |
| enable&#8203;AjaxErrorStatusText | If true, include response error data text in dependency event on failed AJAX requests. | boolean<br/> false |
-| enable&#8203;AjaxPerfTracking |Flag to enable looking up and including additional browser window.performance timings in the reported `ajax` (XHR and fetch) reported metrics. | boolean<br/> false |
+| enable&#8203;AjaxPerfTracking |Flag to enable looking up and including more browser window.performance timings in the reported `ajax` (XHR and fetch) metrics. | boolean<br/> false |
| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings (if available), this is required as not all browsers populate the window.performance before reporting the end of the XHR request and for fetch requests this is added after its complete.| numeric<br/> 3 |
-| ajaxPerfLookupDelay | The amount of time to wait before re-attempting to find the window.performance timings for an `ajax` request, time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms |
+| ajaxPerfLookupDelay | The amount of time to wait before reattempting to find the window.performance timings for an `ajax` request, time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms |
| enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections won't be reported. | boolean<br/> false |
-| disable&#8203;InstrumentationKey&#8203;Validation | If true, instrumentation key validation check is bypassed. | boolean<br/>false |
| enablePerfMgr | When enabled (true) this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More details are available by the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean<br />false |
-| idLength | The default length used to generate new random session and user id values. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set this value to 5. | numeric<br />22 |
+| idLength | The default length used to generate new random session and user ID values. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set this value to 5. | numeric<br />22 |
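To make the options above concrete, here's a hedged sketch that passes a few of the fields from this table to the NPM-based web SDK (values are illustrative placeholders, not recommendations):

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web'

const appInsights = new ApplicationInsights({ config: {
  connectionString: '<your connection string>',   // required
  sessionRenewalMs: 1800000,                      // end a session after 30 minutes of inactivity
  enableAjaxPerfTracking: true,                   // include window.performance timings in ajax/fetch metrics
  ajaxPerfLookupDelay: 25,                        // ms to wait between window.performance lookup attempts
  enableUnhandledPromiseRejectionTracking: true   // report unhandled promise rejections as JavaScript errors
} });
appInsights.loadAppInsights();
```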
## Cookie Handling
The following example shows all possible configurations required to enable correlation:
// excerpt of the config section of the JavaScript SDK snippet with correlation
// between client-side AJAX and server requests enabled.
cfg: { // Application Insights Configuration
- instrumentationKey: "YOUR_INSTRUMENTATION_KEY_GOES_HERE"
+ connectionString: "Copy connection string from Application Insights Resource Overview"
disableFetchTracking: false,
enableCorsCorrelation: true,
enableRequestHeaderTracking: true,
If you're using the current application insights PRODUCTION SDK (1.0.20) and want to see if the new SDK works in runtime, update the URL depending on your current SDK loading scenario:
"https://js.monitor.azure.com/scripts/b/ai.2.min.js" ``` -- npm scenario: Call `downloadAndSetup` to download the full ApplicationInsights script from CDN and initialize it with instrumentation key:
+- npm scenario: Call `downloadAndSetup` to download the full ApplicationInsights script from CDN and initialize it with a connection string:
```ts
appInsights.downloadAndSetup({
- instrumentationKey: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx",
+ connectionString: "Copy connection string from Application Insights Resource Overview",
url: "https://js.monitor.azure.com/scripts/b/ai.2.min.jss" }); ```
This does NOT mean that we'll only support the lowest common set of features, ju
The Application Insights JavaScript SDK is open source. To view the source code or to contribute to the project, visit the [official GitHub repository](https://github.com/Microsoft/ApplicationInsights-JS).
-For the latest updates and bug fixes [consult the release notes](./release-notes.md).
+For the latest updates and bug fixes, [consult the release notes](./release-notes.md).
## <a name="next"></a> Next steps * [Track usage](usage-overview.md)
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
With Live Metrics Stream, you can:
Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions, Java, and Node.js apps.
+> [!NOTE]
+> The number of monitored server instances displayed by Live Metrics Stream may be lower than the actual number of instances allocated for the application. This is because many modern web servers will unload applications that do not receive requests over a period of time in order to conserve resources. Since Live Metrics Stream only counts servers that are currently running the application, servers that have already unloaded the process will not be included in that total.
+
## Get started

1. Follow language-specific guidelines to enable Live Metrics.
Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
3. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data such as customer names in your filters.
+> [!IMPORTANT]
+> Monitoring ASP.NET Core 3.X applications requires Application Insights version 2.8.0 or above. To enable Application Insights, ensure it's both activated in the Azure portal and that the Application Insights NuGet package is included. Without the NuGet package, some telemetry is sent to Application Insights, but that telemetry won't show in the Live Metrics Stream.
+
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

### Enable LiveMetrics using code for any .NET application
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
# Monitor your Node.js services and apps with Application Insights
-[Application Insights](./app-insights-overview.md) monitors your backend services and components after deployment, to help you discover and rapidly diagnose performance and other issues. You can use Application Insights for Node.js services that are hosted in your datacenter, Azure VMs and web apps, and even in other public clouds.
+[Application Insights](./app-insights-overview.md) monitors your components after deployment to discover performance and other issues. You can use Application Insights for Node.js services that are hosted in your datacenter, Azure VMs and web apps, and even in other public clouds.
To receive, store, and explore your monitoring data, include the SDK in your code, and then set up a corresponding Application Insights resource in Azure. The SDK sends data to that resource for further analysis and exploration. The Node.js client library can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics. Beginning in version 0.20, the client library also can monitor some common [third-party packages](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers#currently-supported-modules), like MongoDB, MySQL, and Redis. All events related to an incoming HTTP request are correlated for faster troubleshooting.
-You can use the TelemetryClient API to manually instrument and monitor additional aspects of your app and system. We describe the TelemetryClient API in more detail later in this article.
+You can use the TelemetryClient API to manually instrument and monitor more aspects of your app and system. We describe the TelemetryClient API in more detail later in this article.
> [!NOTE] > A preview [OpenTelemetry-based Node.js offering](opentelemetry-enable.md?tabs=nodejs) is available. [Learn more](opentelemetry-overview.md).
Before you begin, make sure that you have an Azure subscription, or [get a new o
Include the SDK in your app, so it can gather data.
-> [!IMPORTANT]
-> [Connection Strings](./sdk-connection-string.md?tabs=nodejs) are recommended over instrumentation keys. New Azure regions **require** the use of connection strings instead of instrumentation keys. Connection string identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource will use as a destination for your telemetry. You will need to copy the connection string and add it to your application's code or to an environment variable.
+1. Copy your resource's connection string from your new resource. Application Insights uses the connection string to map data to your Azure resource. Before the SDK can use your connection string, you must specify the connection string in an environment variable or in your code.
-1. Copy your resource's instrumentation Key (also called an *ikey*) from your newly created resource. Application Insights uses the ikey to map data to your Azure resource. Before the SDK can use your ikey, you must specify the ikey in an environment variable or in your code.
-
- ![Copy instrumentation key](./media/nodejs/instrumentation-key-001.png)
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot displaying Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
2. Add the Node.js client library to your app's dependencies via package.json. From the root folder of your app, run:
Include the SDK in your app, so it can gather data.
```javascript
let appInsights = require('applicationinsights');
```
-4. You also can provide an ikey via the environment variable `APPINSIGHTS_INSTRUMENTATIONKEY`, instead of passing it manually to `setup()` or `new appInsights.TelemetryClient()`. This practice lets you keep ikeys out of committed source code, and you can specify different ikeys for different environments. To configure manually call `appInsights.setup('[your ikey]');`.
+4. You also can provide a connection string via the environment variable, `APPLICATIONINSIGHTS_CONNECTION_STRING`, instead of passing it manually to `setup()` or `new appInsights.TelemetryClient()`. This practice lets you keep connection strings out of committed source code, and you can specify different connection strings for different environments. To manually configure, call `appInsights.setup('[your connection string]');`.
- For additional configuration options, see the following sections.
+ For more configuration options, see the following sections.
You can try the SDK without sending telemetry by setting `appInsights.defaultClient.config.disableAppInsights = true`.
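As a sketch of that trial mode (the connection string is a placeholder):

```javascript
// Initialize the SDK but suppress all outbound telemetry while experimenting.
let appInsights = require("applicationinsights");
appInsights.setup("<your connection string>");
appInsights.defaultClient.config.disableAppInsights = true; // nothing is sent
appInsights.start();
```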
To view the topology that is discovered for your app, you can use [Application m
Because the SDK batches data for submission, there might be a delay before items are displayed in the portal. If you don't see data in your resource, try some of the following fixes:

* Continue to use the application. Take more actions to generate more telemetry.
-* Click **Refresh** in the portal resource view. Charts periodically refresh on their own, but manually refreshing forces them to refresh immediately.
+* Select **Refresh** in the portal resource view. Charts periodically refresh on their own, but manually refreshing forces them to refresh immediately.
* Verify that [required outgoing ports](./ip-addresses.md) are open.
* Use [Search](./diagnostic-search.md) to look for specific events.
* Check the [FAQ][FAQ].
For out-of-the-box collection of HTTP requests, popular third-party library even
```javascript
let appInsights = require("applicationinsights");
-appInsights.setup("[your ikey]").start();
+appInsights.setup("[your connection string]").start();
```

> [!NOTE]
-> If the instrumentation key is set in the environment variable `APPINSIGHTS_INSTRUMENTATIONKEY`, `.setup()` can be called with no arguments. This makes it easy to use different ikeys for different environments.
+> If the connection string is set in the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, `.setup()` can be called with no arguments. This makes it easy to use different connection strings for different environments.
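For example, a minimal sketch of that pattern (the exported value is a placeholder):

```javascript
// Assumes the process environment already contains something like:
//   export APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=...;IngestionEndpoint=..."
let appInsights = require("applicationinsights");
appInsights.setup().start(); // no argument; the SDK reads APPLICATIONINSIGHTS_CONNECTION_STRING
```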
-Load the Application Insights library ,`require("applicationinsights")`, as early as possible in your scripts before loading other packages. This is needed so that the Application Insights library can prepare later packages for tracking. If you encounter conflicts with other libraries doing similar preparation, try loading the Application Insights library after those.
+Load the Application Insights library, `require("applicationinsights")`, as early as possible in your scripts before loading other packages. This step is needed so that the Application Insights library can prepare later packages for tracking. If you encounter conflicts with other libraries doing similar preparation, try loading the Application Insights library afterwards.
-Because of the way JavaScript handles callbacks, additional work is necessary to track a request across external dependencies and later callbacks. By default this additional tracking is enabled; disable it by calling `setAutoDependencyCorrelation(false)` as described in the [configuration](#sdk-configuration) section below.
+Because of the way JavaScript handles callbacks, more work is necessary to track a request across external dependencies and later callbacks. By default this extra tracking is enabled; disable it by calling `setAutoDependencyCorrelation(false)` as described in the [configuration](#sdk-configuration) section below.
## Migrating from versions prior to 0.22
If you access SDK configuration functions without chaining them to `appInsights.
## SDK configuration
-The `appInsights` object provides a number of configuration methods. They are listed in the following snippet with their default values.
+The `appInsights` object provides many configuration methods. They're listed in the following snippet with their default values.
```javascript
let appInsights = require("applicationinsights");
-appInsights.setup("<instrumentation_key>")
+appInsights.setup("<connection_string>")
    .setAutoDependencyCorrelation(true)
    .setAutoCollectRequests(true)
    .setAutoCollectPerformance(true, true)
appInsights.setup("<instrumentation_key>")
To fully correlate events in a service, be sure to set `.setAutoDependencyCorrelation(true)`. With this option set, the SDK can track context across asynchronous callbacks in Node.js.
-Review their descriptions in your IDE's built-in type hinting, or [applicationinsights.ts](https://github.com/microsoft/ApplicationInsights-node.js/blob/develop/applicationinsights.ts) for detailed information on what these control, and optional secondary arguments.
+Review their descriptions in your IDE's built-in type hinting, or [applicationinsights.ts](https://github.com/microsoft/ApplicationInsights-node.js/blob/develop/applicationinsights.ts) for detailed information and optional secondary arguments.
> [!NOTE]
> By default `setAutoCollectConsole` is configured to *exclude* calls to `console.log` (and other console methods). Only calls to supported third-party loggers (for example, winston and bunyan) will be collected. You can change this behavior to include calls to `console` methods by using `setAutoCollectConsole(true, true)`.

### Sampling
-By default, the SDK will send all collected data to the Application Insights service. If you collect a lot of data, you might want to enable sampling to reduce the amount of data sent. Set the `samplingPercentage` field on the `config` object of a client to accomplish this. Setting `samplingPercentage` to 100(the default) means all data will be sent and 0 means nothing will be sent.
+By default, the SDK will send all collected data to the Application Insights service. If you want to enable sampling to reduce the amount of data, set the `samplingPercentage` field on the `config` object of a client. Setting `samplingPercentage` to 100 (the default) means all data will be sent, and 0 means nothing will be sent.
-If you are using automatic correlation, all data associated with a single request will be included or excluded as a unit.
+If you're using automatic correlation, all data associated with a single request will be included or excluded as a unit.
Add code such as the following to enable sampling:

```javascript
const appInsights = require("applicationinsights");
-appInsights.setup("<instrumentation_key>");
+appInsights.setup("<connection_string>");
appInsights.defaultClient.config.samplingPercentage = 33; // 33% of all telemetry will be sent to Application Insights
appInsights.start();
```

### Multiple roles for multi-components applications
-If your application consists of multiple components that you wish to instrument all with the same instrumentation key and still see these components as separate units in the portal, as if they were using separate instrumentation keys (for example, as separate nodes on the Application Map), you may need to manually configure the RoleName field to distinguish one component's telemetry from other components sending data to your Application Insights resource.
+If your application consists of multiple components that you wish to instrument with the same connection string, but you still want to see these components as separate units in the portal, as if they were using separate connection strings (for example, as separate nodes on the Application Map), you may need to manually configure the RoleName field to distinguish one component's telemetry from other components sending data to your Application Insights resource.
Use the following to set the RoleName field:

```javascript
const appInsights = require("applicationinsights");
-appInsights.setup("<instrumentation_key>");
+appInsights.setup("<connection_string>");
appInsights.defaultClient.context.tags[appInsights.defaultClient.context.keys.cloudRole] = "MyRoleName";
appInsights.start();
```

### Automatic third-party instrumentation
-In order to track context across asynchronous calls, some changes are required in third party libraries such as MongoDB and Redis. By default, Application Insights will use [`diagnostic-channel-publishers`](https://github.com/Microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers) to monkey-patch some of these libraries. This can be disabled by setting the `APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL` environment variable.
+In order to track context across asynchronous calls, some changes are required in third party libraries such as MongoDB and Redis. By default, Application Insights will use [`diagnostic-channel-publishers`](https://github.com/Microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers) to monkey-patch some of these libraries. This feature can be disabled by setting the `APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL` environment variable.
> [!NOTE] > By setting that environment variable, events may no longer be correctly associated with the right operation.
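A hedged sketch of opting out (assuming any non-empty value disables the patching; the variable must be set before the SDK is loaded):

```javascript
// Disable diagnostic-channel monkey-patching of third-party libraries.
// This must run before require("applicationinsights") is first evaluated;
// exporting the variable in the shell before startup works as well.
process.env.APPLICATION_INSIGHTS_NO_DIAGNOSTIC_CHANNEL = "true";

let appInsights = require("applicationinsights");
appInsights.setup("<your connection string>").start();
```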
To enable sending Live Metrics from your app to Azure, use `setSendLiveMetrics(t
> [!NOTE] > The ability to send extended native metrics was added in version 1.4.0
-To enable sending extended native metrics from your app to Azure, install the separate native metrics package. The SDK will automatically load when it is installed and start collecting Node.js native metrics.
+To enable sending extended native metrics from your app to Azure, install the separate native metrics package. The SDK will automatically load when it's installed and start collecting Node.js native metrics.
```bash
npm install applicationinsights-native-metrics
Currently, the native metrics package performs autocollection of garbage collect
### Distributed Tracing modes
-By default, the SDK will send headers understood by other applications/services instrumented with an Application Insights SDK. You can optionally enable sending/receiving of [W3C Trace Context](https://github.com/w3c/trace-context) headers in addition to the existing AI headers, so you will not break correlation with any of your existing legacy services. Enabling W3C headers will allow your app to correlate with other services not instrumented with Application Insights, but do adopt this W3C standard.
+By default, the SDK will send headers understood by other applications/services instrumented with an Application Insights SDK. You can enable sending/receiving of [W3C Trace Context](https://github.com/w3c/trace-context) headers in addition to the existing AI headers, so you won't break correlation with any of your existing legacy services. Enabling W3C headers will allow your app to correlate with other services that aren't instrumented with Application Insights but that do adopt this W3C standard.
```Javascript
const appInsights = require("applicationinsights");
appInsights
- .setup("<your ikey>")
+ .setup("<your connection string>")
    .setDistributedTracingMode(appInsights.DistributedTracingModes.AI_AND_W3C)
    .start()
```
You can track any request, event, metric, or exception by using the Application
```javascript
let appInsights = require("applicationinsights");
-appInsights.setup().start(); // assuming ikey in env var. start() can be omitted to disable any non-custom data
+appInsights.setup().start(); // assuming connection string in env var. start() can be omitted to disable any non-custom data
let client = appInsights.defaultClient;
client.trackEvent({name: "my custom event", properties: {customProperty: "custom property value"}});
client.trackException({exception: new Error("handled exceptions can be logged with this method")});
server.on("listening", () => {
### Flush
-By default, telemetry is buffered for 15 seconds before it is sent to the ingestion server. If your application has a short lifespan (e.g. a CLI tool), it might be necessary to manually flush your buffered telemetry when application terminates, `appInsights.defaultClient.flush()`.
+By default, telemetry is buffered for 15 seconds before it's sent to the ingestion server. If your application has a short lifespan, such as a CLI tool, it might be necessary to manually flush your buffered telemetry when the application terminates by calling `appInsights.defaultClient.flush()`.
-If the SDK detects that your application is crashing, it will call flush for you, `appInsights.defaultClient.flush({ isAppCrashing: true })`. With the flush option `isAppCrashing`, your application is assumed to be in an abnormal state, not suitable for sending telemetry. Instead, the SDK will save all buffered telemetry to [persistent storage](./data-retention-privacy.md#nodejs) and let your application terminate. When you application starts again, it will try to send any telemetry that was saved to persistent storage.
+If the SDK detects that your application is crashing, it will call flush for you, `appInsights.defaultClient.flush({ isAppCrashing: true })`. With the flush option `isAppCrashing`, your application is assumed to be in an abnormal state, not suitable for sending telemetry. Instead, the SDK will save all buffered telemetry to [persistent storage](./data-retention-privacy.md#nodejs) and let your application terminate. When your application starts again, it will try to send any telemetry that was saved to persistent storage.
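For illustration, a sketch of a short-lived process that flushes before exiting (the event name is hypothetical):

```javascript
// A CLI-style process that flushes buffered telemetry before it exits.
let appInsights = require("applicationinsights");
appInsights.setup("<your connection string>").start();

appInsights.defaultClient.trackEvent({ name: "cli-run-completed" }); // hypothetical event
appInsights.defaultClient.flush(); // send now instead of waiting out the ~15 second buffer
```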
### Preprocess data with telemetry processors
-You can process and filter collected data before it is sent for retention using *Telemetry Processors*. Telemetry processors are called one by one in the order they were added before the telemetry item is sent to the cloud.
+You can process and filter collected data before it's sent for retention using *Telemetry Processors*. Telemetry processors are called one by one in the order they were added before the telemetry item is sent to the cloud.
```javascript
public addTelemetryProcessor(telemetryProcessor: (envelope: Contracts.Envelope, context: { http.RequestOptions, http.ClientRequest, http.ClientResponse, correlationContext }) => boolean)
```
-If a telemetry processor returns false, that telemetry item will not be sent.
+If a telemetry processor returns false, that telemetry item won't be sent.
All telemetry processors receive the telemetry data and its envelope to inspect and modify. They also receive a context object. The contents of this object are defined by the `contextObjects` parameter when calling a track method for manually tracked telemetry. For automatically collected telemetry, this object is filled with available request information and the persistent request content as provided by `appInsights.getCorrelationContext()` (if automatic dependency correlation is enabled).
function removeStackTraces ( envelope, context ) {
appInsights.defaultClient.addTelemetryProcessor(removeStackTraces);
```
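For context, here's a fuller sketch of what such a processor might look like, assuming (as the fragment above suggests) it strips stack traces from exception telemetry before it's sent:

```javascript
// A telemetry processor that empties parsed stack traces on exception telemetry.
function removeStackTraces(envelope, context) {
  if (envelope.data.baseType === "ExceptionData") {
    let data = envelope.data.baseData;
    if (data.exceptions && data.exceptions.length > 0) {
      for (let i = 0; i < data.exceptions.length; i++) {
        data.exceptions[i].parsedStack = [];
        data.exceptions[i].hasFullStack = false;
      }
    }
  }
  return true; // returning false would drop the telemetry item entirely
}
appInsights.defaultClient.addTelemetryProcessor(removeStackTraces);
```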
-## Use multiple instrumentation keys
+## Use multiple connection strings
-You can create multiple Application Insights resources and send different data to each by using their respective instrumentation keys ("ikey").
+You can create multiple Application Insights resources and send different data to each by using their respective connection strings.
For example:

```javascript
let appInsights = require("applicationinsights");
-// configure auto-collection under one ikey
-appInsights.setup("_ikey-A_").start();
+// configure auto-collection under one connection string
+appInsights.setup("Connection String A").start();
-// track some events manually under another ikey
-let otherClient = new appInsights.TelemetryClient("_ikey-B_");
+// track some events manually under another connection string
+let otherClient = new appInsights.TelemetryClient("Connection String B");
otherClient.trackEvent({name: "my custom event"});
```
These properties are client specific, so you can configure `appInsights.defaultC
| Property | Description |
| - | - |
-| instrumentationKey | An identifier for your Application Insights resource. |
+| connectionString | An identifier for your Application Insights resource. |
| endpointUrl | The ingestion endpoint to send telemetry payloads to. |
| quickPulseHost | The Live Metrics Stream host to send live metrics telemetry to. |
| proxyHttpUrl | A proxy server for SDK HTTP traffic (Optional, Default pulled from `http_proxy` environment variable). |
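As a sketch of overriding one of these client-specific properties (the proxy URL is a placeholder):

```javascript
// Route SDK HTTP traffic through a proxy for the default client.
let appInsights = require("applicationinsights");
appInsights.setup("<your connection string>");
appInsights.defaultClient.config.proxyHttpUrl = "http://proxy.contoso.com:3128"; // placeholder proxy
appInsights.start();
```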
azure-monitor Container Insights Azure Redhat4 Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-azure-redhat4-setup.md
If you don't have a workspace to specify, you can skip to the [Integrate with th
export logAnalyticsWorkspaceResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>"
```
- Here is the command you must run once you have populated the 3 variables with Export commands:
+ Here is the command you must run once you have populated the variables with Export commands:
`bash enable-monitoring.sh --resource-id $azureAroV4ClusterResourceId --workspace-id $logAnalyticsWorkspaceResourceId`
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Previously updated : 03/31/2022 Last updated : 04/01/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files standard network features are supported for the following reg
* East US 2 * France Central * North Central US
+* North Europe
* South Central US
+* UK South
* West Europe * West US 2 * West US 3
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Previously updated : 03/30/2022 Last updated : 03/31/2022 # SMB FAQs for Azure NetApp Files
However, you can map multiple NetApp accounts that are under the same subscripti
## Does Azure NetApp Files support Azure Active Directory?
-Both [Azure Active Directory (AD) Domain Services](../active-directory-domain-services/overview.md) and [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) are supported. You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can reside in Azure as virtual machines, or on premises via ExpressRoute or S2S VPN. Azure NetApp Files does not support AD join for [Azure Active Directory](https://azure.microsoft.com/resources/videos/azure-active-directory-overview/) at this time.
+Both [Azure Active Directory (AD) Domain Services](../active-directory-domain-services/overview.md) and [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) are supported. You can use existing Active Directory domain controllers with Azure NetApp Files. Domain controllers can reside in Azure as virtual machines, or on premises via ExpressRoute or S2S VPN. Azure NetApp Files doesn't support AD join for [Azure Active Directory](https://azure.microsoft.com/resources/videos/azure-active-directory-overview/) at this time.
If you are using Azure NetApp Files with Azure Active Directory Domain Services, the organizational unit path is `OU=AADDC Computers` when you configure Active Directory for your NetApp account.
As a best practice, set the maximum tolerance for computer clock synchronization to five minutes.
Azure NetApp Files supports modifying `SMB Shares` by using MMC. However, modifying share properties has significant risk. If the users or groups assigned to the share properties are removed from the Active Directory, or if the permissions for the share become unusable, then the entire share will become inaccessible.
-Azure NetApp Files does not support using MMC to manage `Sessions` and `Open Files`.
+Azure NetApp Files doesn't support using MMC to manage `Sessions` and `Open Files`.
## How can I obtain the IP address of an SMB volume via the portal? Use the **JSON View** link on the volume overview pane, and look for the **startIp** identifier under **properties** -> **mountTargets**.
-## Can an Azure NetApp Files SMB share act as an DFS Namespace (DFS-N) root?
+## Can an Azure NetApp Files SMB share act as a DFS Namespace (DFS-N) root?
No. However, Azure NetApp Files SMB shares can serve as a DFS Namespace (DFS-N) folder target. To use an Azure NetApp Files SMB share as a DFS-N folder target, provide the Universal Naming Convention (UNC) mount path of the Azure NetApp Files SMB share by using the [DFS Add Folder Target](/windows-server/storage/dfs-namespaces/add-folder-targets#to-add-a-folder-target) procedure.
You can change the NTFS permissions of the root volume by using [NTFS file and f
## Can I change the SMB share name after the SMB volume has been created?
-No. However, you can create a new SMB volume with the new share name from a snapshot of the SMB volume with the old share name.
+No. However, you can create a new SMB volume with the new share name from a snapshot of the SMB volume with the old share name.
+
+Alternatively, you can use [Windows Server DFS Namespace](/windows-server/storage/dfs-namespaces/dfs-overview) where a DFS Namespace with the new share name can point to the Azure NetApp Files SMB volume with the old share name.
## Next steps
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/key-vault-parameter.md
The following procedure shows how to create a role with the minimum permission,
```azurecli-interactive
az role definition create --role-definition "<path-to-role-file>"
az role assignment create \
- --role "Key Vault resource manager template deployment operator" \
+ --role "Key Vault Bicep deployment operator" \
--scope /subscriptions/<Subscription-id>/resourceGroups/<resource-group-name> \
- --assignee <user-principal-name> \
- --resource-group ExampleGroup
+ --assignee <user-principal-name>
```

# [PowerShell](#tab/azure-powershell)
azure-sql Authentication Azure Ad Only Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-only-authentication.md
Previously updated : 03/08/2022 Last updated : 04/01/2022
SELECT SERVERPROPERTY('IsExternalAuthenticationOnly')
When Azure AD-only authentication is enabled for SQL Database, the following features aren't supported:

-- [Azure SQL Database server roles](security-server-roles.md)
+- [Azure SQL Database server roles](security-server-roles.md) are supported for [Azure AD server principals](authentication-azure-ad-logins.md), but not if the Azure AD login is a group.
- [Elastic jobs](job-automation-overview.md)
- [SQL Data Sync](sql-data-sync-data-sql-server-sql-database.md)
- [Change data capture (CDC)](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) - If you create a database in Azure SQL Database as an Azure AD user and enable change data capture on it, a SQL user will not be able to disable or make changes to CDC artifacts. However, another Azure AD user will be able to enable or disable CDC on the same database. Similarly, if you create an Azure SQL Database as a SQL user, enabling or disabling CDC as an Azure AD user won't work.
azure-video-analyzer Customize Language Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/customize-language-model-overview.md
Title: Customize a Language model in Azure Video Analyzer for Media (formerly Video Indexer) - Azure description: This article gives an overview of what is a Language model in Azure Video Analyzer for Media (formerly Video Indexer) and how to customize it. Previously updated : 05/15/2019 Last updated : 02/02/2022 # Customize a Language model with Video Analyzer for Media
-Azure Video Analyzer for Media (formerly Video Indexer) supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. Custom Language models are supported for English, Spanish, French, German, Italian, Chinese (Simplified), Japanese, Russian, Portuguese, Hindi, and Korean.
+Azure Video Analyzer for Media (formerly Video Indexer) supports automatic speech recognition through integration with the Microsoft [Custom Speech Service](https://azure.microsoft.com/services/cognitive-services/custom-speech-service/). You can customize the Language model by uploading adaptation text, namely text from the domain whose vocabulary you'd like the engine to adapt to. Once you train your model, new words appearing in the adaptation text will be recognized, assuming default pronunciation, and the Language model will learn new probable sequences of words. See the list of languages supported by Video Analyzer for Media in [supported languages](language-support.md).
Let's take a word that is highly specific, like "Kubernetes" (in the context of Azure Kubernetes service), as an example. Since the word is new to Video Analyzer for Media, it is recognized as "communities". You need to train the model to recognize it as "Kubernetes". In other cases, the words exist, but the Language model is not expecting them to appear in a certain context. For example, "container service" is not a 2-word sequence that a non-specialized Language model would recognize as a specific set of words.
azure-video-analyzer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/language-support.md
This section describes language support in Video Analyzer for Media.
- Frame patterns (Hebrew only as of now)
- Language customization
-| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (Language model) |
-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| Afrikaans | `af-ZA` | | | | ✔ | ✔ |
-| Arabic (Iraq) | `ar-IQ` | ✔ | | | ✔ | ✔ |
-| Arabic (Israel) | `ar-IL` | ✔ | | | ✔ | ✔ |
-| Arabic (Jordan) | `ar-JO` | ✔ | | | ✔ | ✔ |
-| Arabic (Kuwait) | `ar-KW` | ✔ | | | ✔ | ✔ |
-| Arabic (Lebanon) | `ar-LB` | ✔ | | | ✔ | ✔ |
-| Arabic (Oman) | `ar-OM` | ✔ | | | ✔ | ✔ |
-| Arabic (Palestinian Authority) | `ar-PS` | ✔ | | | ✔ | ✔ |
-| Arabic (Qatar) | `ar-QA` | ✔ | | | ✔ | ✔ |
-| Arabic (Saudi Arabia) | `ar-SA` | ✔ | | | ✔ | ✔ |
-| Arabic (United Arab Emirates) | `ar-AE` | ✔ | | | ✔ | ✔ |
-| Arabic Egypt | `ar-EG` | ✔ | | | ✔ | ✔ |
-| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔ | | | ✔ | ✔ |
-| Arabic Syrian Arab Republic | `ar-SY` | ✔ | | | ✔ | ✔ |
-| Bangla | `bn-BD` | | | | ✔ | ✔ |
-| Bosnian | `bs-Latn` | | | | ✔ | ✔ |
-| Bulgarian | `bg-BG` | | | | ✔ | ✔ |
-| Catalan | `ca-ES` | | | | ✔ | ✔ |
-| Chinese (Cantonese Traditional) | `zh-HK` | ✔ | | ✔ | ✔ | ✔ |
-| Chinese (Simplified) | `zh-Hans` | ✔ | | | ✔ | ✔ |
-| Chinese (Traditional) | `zh-Hant` | | | | ✔ | ✔ |
-| Croatian | `hr-HR` | | | | ✔ | ✔ |
-| Czech | `cs-CZ` | ✔ | | | ✔ | ✔ |
-| Danish | `da-DK` | ✔ | | | ✔ | ✔ |
-| Dutch | `nl-NL` | ✔ | | | ✔ | ✔ |
-| English Australia | `en-AU` | ✔ | | | ✔ | ✔ |
-| English United Kingdom | `en-GB` | ✔ | | | ✔ | ✔ |
-| English United States | `en-US` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Estonian | `et-EE` | | | | ✔ | ✔ |
-| Fijian | `en-FJ` | | | | ✔ | ✔ |
-| Filipino | `fil-PH` | | | | ✔ | ✔ |
-| Finnish | `fi-FI` | ✔ | | | ✔ | ✔ |
-| French | `fr-FR` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| French (Canada) | `fr-CA` | ✔ | | | ✔ | ✔ |
-| German | `de-DE` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Greek | `el-GR` | | | | ✔ | ✔ |
-| Haitian | `fr-HT` | | | | ✔ | ✔ |
-| Hebrew | `he-IL` | ✔ | | | ✔ | ✔ |
-| Hindi | `hi-IN` | ✔ | | | ✔ | ✔ |
-| Hungarian | `hu-HU` | | | | ✔ | ✔ |
-| Indonesian | `id-ID` | | | | ✔ | ✔ |
-| Italian | `it-IT` | ✔ | ✔ | | ✔ | ✔ |
-| Japanese | `ja-JP` | ✔ | ✔ | | ✔ | ✔ |
-| Kiswahili | `sw-KE` | | | | ✔ | ✔ |
-| Korean | `ko-KR` | ✔ | | | ✔ | ✔ |
-| Latvian | `lv-LV` | | | | ✔ | ✔ |
-| Lithuanian | `lt-LT` | | | | ✔ | ✔ |
-| Malagasy | `mg-MG` | | | | ✔ | ✔ |
-| Malay | `ms-MY` | | | | ✔ | ✔ |
-| Maltese | `mt-MT` | | | | ✔ | ✔ |
-| Norwegian | `nb-NO` | ✔ | | | ✔ | ✔ |
-| Persian | `fa-IR` | ✔ | | | ✔ | ✔ |
-| Polish | `pl-PL` | ✔ | | | ✔ | ✔ |
-| Portuguese | `pt-BR` | ✔ | ✔ | | ✔ | ✔ |
-| Portuguese (Portugal) | `pt-PT` | ✔ | | | ✔ | ✔ |
-| Romanian | `ro-RO` | | | | ✔ | ✔ |
-| Russian | `ru-RU` | ✔ | ✔ | | ✔ | ✔ |
-| Samoan | `en-WS` | | | | ✔ | ✔ |
-| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | ✔ | ✔ |
-| Serbian (Latin) | `sr-Latn-RS` | | | | ✔ | ✔ |
-| Slovak | `sk-SK` | | | | ✔ | ✔ |
-| Slovenian | `sl-SI` | | | | ✔ | ✔ |
-| Spanish | `es-ES` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Spanish (Mexico) | `es-MX` | ✔ | | | ✔ | ✔ |
-| Swedish | `sv-SE` | ✔ | | | ✔ | ✔ |
-| Tamil | `ta-IN` | | | | ✔ | ✔ |
-| Thai | `th-TH` | ✔ | | | ✔ | ✔ |
-| Tongan | `to-TO` | | | | ✔ | ✔ |
-| Turkish | `tr-TR` | ✔ | | | ✔ | ✔ |
-| Ukrainian | `uk-UA` | | | | ✔ | ✔ |
-| Urdu | `ur-PK` | | | | ✔ | ✔ |
-| Vietnamese | `vi-VN` | | | | ✔ | ✔ |
+| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (Language model) |
+|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Afrikaans | `af-ZA` | | | | ✔ | |
+| Arabic (Israel) | `ar-IL` | ✔ | | | ✔ | ✔ |
+| Arabic (Jordan) | `ar-JO` | ✔ | | | ✔ | ✔ |
+| Arabic (Kuwait) | `ar-KW` | ✔ | | | ✔ | ✔ |
+| Arabic (Lebanon) | `ar-LB` | ✔ | | | ✔ | ✔ |
+| Arabic (Oman) | `ar-OM` | ✔ | | | ✔ | ✔ |
+| Arabic (Palestinian Authority) | `ar-PS` | ✔ | | | ✔ | ✔ |
+| Arabic (Qatar) | `ar-QA` | ✔ | | | ✔ | ✔ |
+| Arabic (Saudi Arabia) | `ar-SA` | ✔ | | | ✔ | ✔ |
+| Arabic (United Arab Emirates) | `ar-AE` | ✔ | | | ✔ | ✔ |
+| Arabic Egypt | `ar-EG` | ✔ | | | ✔ | ✔ |
+| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔ | | | ✔ | ✔ |
+| Arabic Syrian Arab Republic | `ar-SY` | ✔ | | | ✔ | ✔ |
+| Bangla | `bn-BD` | | | | ✔ | |
+| Bosnian | `bs-Latn` | | | | ✔ | |
+| Bulgarian | `bg-BG` | | | | ✔ | |
+| Catalan | `ca-ES` | | | | ✔ | |
+| Chinese (Cantonese Traditional) | `zh-HK` | ✔ | | ✔ | ✔ | ✔ |
+| Chinese (Simplified) | `zh-Hans` | ✔ | | | ✔ | ✔ |
+| Chinese (Traditional) | `zh-Hant` | | | | ✔ | |
+| Croatian | `hr-HR` | | | | ✔ | |
+| Czech | `cs-CZ` | ✔ | | | ✔ | ✔ |
+| Danish | `da-DK` | ✔ | | | ✔ | ✔ |
+| Dutch | `nl-NL` | ✔ | | | ✔ | ✔ |
+| English Australia | `en-AU` | ✔ | | | ✔ | ✔ |
+| English United Kingdom | `en-GB` | ✔ | | | ✔ | ✔ |
+| English United States | `en-US` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Estonian | `et-EE` | | | | ✔ | |
+| Fijian | `en-FJ` | | | | ✔ | |
+| Filipino | `fil-PH` | | | | ✔ | |
+| Finnish | `fi-FI` | ✔ | | | ✔ | ✔ |
+| French | `fr-FR` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| French (Canada) | `fr-CA` | ✔ | | | ✔ | ✔ |
+| German | `de-DE` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Greek | `el-GR` | | | | ✔ | |
+| Haitian | `fr-HT` | | | | ✔ | |
+| Hebrew | `he-IL` | ✔ | | | ✔ | ✔ |
+| Hindi | `hi-IN` | ✔ | | | ✔ | ✔ |
+| Hungarian | `hu-HU` | | | | ✔ | |
+| Indonesian | `id-ID` | | | | ✔ | |
+| Italian | `it-IT` | ✔ | ✔ | | ✔ | ✔ |
+| Japanese | `ja-JP` | ✔ | ✔ | | ✔ | ✔ |
+| Kiswahili | `sw-KE` | | | | ✔ | |
+| Korean | `ko-KR` | ✔ | | | ✔ | ✔ |
+| Latvian | `lv-LV` | | | | ✔ | |
+| Lithuanian | `lt-LT` | | | | ✔ | |
+| Malagasy | `mg-MG` | | | | ✔ | |
+| Malay | `ms-MY` | | | | ✔ | |
+| Maltese | `mt-MT` | | | | ✔ | |
+| Norwegian | `nb-NO` | ✔ | | | ✔ | ✔ |
+| Persian | `fa-IR` | ✔ | | | ✔ | ✔ |
+| Polish | `pl-PL` | ✔ | | | ✔ | ✔ |
+| Portuguese | `pt-BR` | ✔ | ✔ | | ✔ | ✔ |
+| Portuguese (Portugal) | `pt-PT` | ✔ | | | ✔ | ✔ |
+| Romanian | `ro-RO` | | | | ✔ | |
+| Russian | `ru-RU` | ✔ | ✔ | | ✔ | ✔ |
+| Samoan | `en-WS` | | | | ✔ | |
+| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | ✔ | |
+| Serbian (Latin) | `sr-Latn-RS` | | | | ✔ | |
+| Slovak | `sk-SK` | | | | ✔ | |
+| Slovenian | `sl-SI` | | | | ✔ | |
+| Spanish | `es-ES` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Spanish (Mexico) | `es-MX` | ✔ | | | ✔ | ✔ |
+| Swedish | `sv-SE` | ✔ | | | ✔ | ✔ |
+| Tamil | `ta-IN` | | | | ✔ | |
+| Thai | `th-TH` | ✔ | | | ✔ | ✔ |
+| Tongan | `to-TO` | | | | ✔ | |
+| Turkish | `tr-TR` | ✔ | | | ✔ | ✔ |
+| Ukrainian | `uk-UA` | | | | ✔ | |
+| Urdu | `ur-PK` | | | | ✔ | |
+| Vietnamese | `vi-VN` | | | | ✔ | |
## Language support in frontend experiences
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
To stay up-to-date with the most recent Azure Video Analyzer for Media (former V
* Bug fixes
* Deprecated functionality
+## March 2022
+
+### Closed Captioning files now support including speakers' attributes
+
+Video Analyzer for Media enables you to include speakers' characteristics based on a closed captioning file that you choose to download. To include the speakers' attributes, select Downloads -> Closed Captions -> choose the closed captioning downloadable file format (SRT, VTT, TTML, TXT, or CSV), and select the **Include speakers** checkbox.
+
## February 2022

### Public preview of Video Analyzer for Media account management based on ARM in Government cloud
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md
After you're finished, follow the recommended next steps at the end to continue
1. Select the **I agree with terms and conditions** checkbox and then select **Install**.
- It takes around 35 minutes to install HCX Advanced and configure the Cloud Manager. Once installed, the HCX Manager URL and the HCX keys needed for the HCX on-premises connector site pairing display on the **Migration using HCX** tab.
+ It takes around 35 minutes to install HCX Advanced and configure the Cloud Manager. Once installed, the HCX Manager URL and the HCX keys needed for the HCX on-premises connector site pairing will display on the **Migration using HCX** tab.
+
+ > [!NOTE]
+ > If you aren't able to see the HCX key once installed, select the **ADD** button to generate the key, which you can then use for site pairing.
:::image type="content" source="media/tutorial-vmware-hcx/deployed-hcx-migration-using-hcx-tab.png" alt-text="Screenshot showing the Migration using HCX tab under Connectivity.":::
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Azure Bastion doesn't move or store customer data out of the region it's deploye
Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select doesn't overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, please make sure that the host virtual network is not linked to a private DNS zone with the following exact names:
+* management.azure.com
* blob.core.windows.net
* core.windows.net
* vaultcore.windows.net
cognitive-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/best-practices.md
The [knowledge base development lifecycle](../Concepts/development-lifecycle-knowledge-base.md) guides you on how to manage your KB from beginning to end. Use these best practices to improve your knowledge base and provide better results to your client application or chat bot's end users.
+
## Extraction
The QnA Maker service is continually improving the algorithms that extract QnAs from content and expanding the list of supported file and HTML formats. Follow the [guidelines](../Concepts/data-sources-and-content.md) for data extraction based on your document type.
cognitive-services Development Lifecycle Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/development-lifecycle-knowledge-base.md
QnA Maker learns best in an iterative cycle of model changes, utterance examples
![Authoring cycle](../media/qnamaker-concepts-lifecycle/kb-lifecycle.png)
+
## Creating a QnA Maker knowledge base
QnA Maker knowledge base (KB) endpoint provides a best-match answer to a user query based on the content of the KB. Creating a knowledge base is a one-time action for setting up a content repository of questions, answers, and associated metadata. A KB can be created by crawling pre-existing content such as the following sources:
cognitive-services Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/plan.md
To plan your QnA Maker app, you need to understand how QnA Maker works and interacts with other Azure services. You should also have a solid grasp of knowledge base concepts.
+
## Azure resources
Each [Azure resource](azure-resources.md#resource-purposes) created with QnA Maker has a specific purpose. Each resource has its own limits and [pricing tier](azure-resources.md#pricing-tier-considerations). It's important to understand the function of these resources so that you can factor that knowledge into your planning process.
cognitive-services Query Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/query-knowledge-base.md
A knowledge base must be published. Once published, the knowledge base is queried at the runtime prediction endpoint using the generateAnswer API. The query includes the question text, and other settings, to help QnA Maker select the best possible match to an answer.
+
## How QnA Maker processes a user query to select the best answer
The trained and [published](../quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base) QnA Maker knowledge base receives a user query, from a bot or other client application, at the [GenerateAnswer API](../how-to/metadata-generateanswer-usage.md). The following diagram illustrates the process when the user query is received.
cognitive-services Question Answer Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/question-answer-set.md
Last updated 01/27/2020
A knowledge base consists of question and answer (QnA) pairs. Each pair has one answer, and the pair contains all the information associated with that _answer_. An answer can loosely resemble a database row or a data structure instance.
+
## Question and answer pairs
The **required** settings in a question-and-answer (QnA) pair are:
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Concepts/role-based-access-control.md
Last updated 05/15/2020
Collaborate with other authors and editors using Azure role-based access control (Azure RBAC) placed on your QnA Maker resource.
+
## Access is provided on the QnA Maker resource
All permissions are controlled by the permissions placed on the QnA Maker resource. These permissions align to read, write, publish, and full access. You can allow collaboration among multiple users by [updating RBAC access](../how-to/manage-qna-maker-app.md) for the QnA Maker resource.
cognitive-services Multi Turn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/multi-turn.md
To see how multi-turn works, view the following demonstration video:
[![Multi-turn conversation in QnA Maker](../media/conversational-context/youtube-video.png)](https://aka.ms/multiturnexample)
+
## What is a multi-turn conversation?
Some questions can't be answered in a single turn. When you design your client application (chat bot) conversations, a user might ask a question that needs to be filtered or refined to determine the correct answer. You make this flow through the questions possible by presenting the user with *follow-up prompts*.
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/network-isolation.md
Follow the steps below to restrict public access to QnA Maker resources. Protect a Cognitive Services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal).
+
## Restrict access to App Service (QnA runtime)
You can use the service tag `CognitiveServicesManagement` to restrict inbound access in the network security group rules of your App Service or ASE (App Service Environment). Check out more information about service tags in the [virtual network service tags article](../../../virtual-network/service-tags-overview.md).
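To illustrate, here is a hedged sketch that adds such an inbound allow rule with the Python `azure-mgmt-network` package. The resource names, port, and priority are assumptions; adapt them to your App Service or ASE network layout.

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow inbound traffic from the service tag; other NSG rules can then
# deny general public access.
poller = client.security_rules.begin_create_or_update(
    "<resource-group>",  # placeholder
    "<nsg-name>",        # placeholder
    "AllowCognitiveServicesManagement",
    {
        "protocol": "Tcp",
        "source_address_prefix": "CognitiveServicesManagement",  # service tag
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "443",  # assumed HTTPS-only traffic
        "access": "Allow",
        "direction": "Inbound",
        "priority": 100,
    },
)
print(poller.result().provisioning_state)
```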
cognitive-services Test Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/test-knowledge-base.md
Testing your QnA Maker knowledge base is an important part of an iterative process to improve the accuracy of the responses being returned. You can test the knowledge base through an enhanced chat interface that also allows you to make edits.
+
## Interactively test in QnA Maker portal
1. Access your knowledge base by selecting its name on the **My knowledge bases** page.
cognitive-services Using Prebuilt Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/using-prebuilt-api.md
Last updated 05/05/2021
Prebuilt question answering gives users the ability to answer questions over a passage of text without having to create knowledge bases, maintain question and answer pairs, or incur the cost of underutilized infrastructure. This functionality is provided as an API and can be used to meet question-and-answer needs without having to learn the details of QnA Maker or additional storage.
+
> [!NOTE]
> This documentation does not apply to the latest release. To learn about using the Prebuilt API with the latest release, consult the [question answering prebuilt API article](../../language-service/question-answering/how-to/prebuilt.md).
cognitive-services Add Question Metadata Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Quickstarts/add-question-metadata-portal.md
Once metadata is added to a QnA pair, the client application can:
* Request answers that only match certain metadata.
* Receive all answers but post-process the answers depending on the metadata for each answer.
-
## Prerequisites
* Complete the [previous quickstart](./create-publish-knowledge-base.md)
cognitive-services Create Faq Bot With Azure Bot Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Tutorials/create-faq-bot-with-azure-bot-service.md
In this tutorial, you learn how to:
> * Chat with the Bot in web chat
> * Light up the Bot in the supported channels
+
## Create and publish a knowledge base
Follow the [quickstart](../Quickstarts/create-publish-knowledge-base.md) to create a knowledge base. Once the knowledge base has been successfully published, you will see the following page.
cognitive-services Export Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Tutorials/export-knowledge-base.md
You may want to create a copy of your knowledge base for several reasons:
* To integrate with your CI/CD pipeline
* To move your data to different regions
+
## Prerequisites
> * If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
cognitive-services Integrate With Power Virtual Assistant Fallback Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Tutorials/integrate-with-power-virtual-assistant-fallback-topic.md
In this tutorial, you learn how to:
> * Publish Power Virtual Agents
> * Test Power Virtual Agents, and receive an answer from your QnA Maker knowledge base
+
## Integrate an agent with a knowledge base
[Power Virtual Agents](https://powervirtualagents.microsoft.com/) allows teams to create powerful bots by using a guided, no-code graphical interface. You don't need data scientists or developers.
cognitive-services Choose Natural Language Processing Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/choose-natural-language-processing-service.md
Last updated 04/16/2020
# Use Cognitive Services with natural language processing (NLP) to enrich bot conversations
+
[!INCLUDE [Use Cognitive Services with natural language processing (NLP) to enrich bot conversations](../includes/luis-qnamaker-shared-concept.md)]

## Next steps
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/encrypt-data-at-rest.md
QnA Maker automatically encrypts your data when it is persisted to the cloud, helping to meet your organizational security and compliance goals.
+
## About encryption key management
By default, your subscription uses Microsoft-managed encryption keys. You can also manage your subscription with your own keys, called customer-managed keys (CMK). CMK offers greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. If CMK is configured for your subscription, double encryption is provided, which offers a second layer of protection, while allowing you to control the encryption key through your Azure Key Vault.
cognitive-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams.md
Title: Support compressed input audio with the Speech SDK - Speech service
+ Title: How to use compressed audio files with the Speech SDK - Speech service
description: Learn how to stream compressed audio to the Speech service with the Speech SDK.
zone_pivot_groups: programming-languages-set-twenty-eight
-# Support compressed input audio
+# How to use compressed audio files
The Speech SDK and Speech CLI use GStreamer to support different kinds of input audio formats. GStreamer decompresses the audio before it's sent over the wire to the Speech service as raw PCM.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
You can use the locales in this table with [phrase list](improve-accuracy-phrase
## Text-to-speech
-Both the Microsoft Speech SDK and REST APIs support these neural voices, each of which supports a specific language and dialect, identified by locale. You can also get a full list of languages and voices supported for each specific region or endpoint through the [voices list API](rest-text-to-speech.md#get-a-list-of-voices).
+Both the Microsoft Speech SDK and REST APIs support these neural voices, each of which supports a specific language and dialect, identified by locale. You can try the demo and hear the voices on [this website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
+
+You can also get a full list of languages and voices supported for each specific region or endpoint through the [voices list API](rest-text-to-speech.md#get-a-list-of-voices). To learn how you can configure and adjust neural voices, such as Speaking Styles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
> [!IMPORTANT]
> Pricing varies for Prebuilt Neural Voice (referred to as *Neural* on the pricing page) and Custom Neural Voice (referred to as *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
+
### Prebuilt neural voices
-The following table lists the prebuilt neural voices supported in each language. You can try the demo and hear the voices on [this website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
+Prebuilt neural voices are created from samples that use a 24-kHz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
-> [!NOTE]
-> Prebuilt neural voices are created from samples that use a 24-khz sample rate.
-> All voices can upsample or downsample to other sample rates when synthesizing.
+> [!IMPORTANT]
+> The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021.
+>
+> If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will not succeed.
+>
+> The `en-US-JessaNeural` voice has changed to `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria." You can continue to use the full service name mapping like "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)" in your speech synthesis requests.
+
+The following table lists the prebuilt neural voices supported in each language.
| Language | Locale | Gender | Voice name | Style support |
|----------|--------|--------|------------|---------------|
The following table lists the prebuilt neural voices supported in each language.
| English (United States) | `en-US` | Female | `en-US-CoraNeural` | General |
| English (United States) | `en-US` | Female | `en-US-ElizabethNeural` | General |
| English (United States) | `en-US` | Female | `en-US-JennyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| English (United States) | `en-US` | Female | `en-US-JennyMultilingualNeural` | General, multi-lingual capabilities available [using SSML](speech-synthesis-markup.md#adjust-speaking-languages) |
+| English (United States) | `en-US` as the primary default. Additional locales are supported [using SSML](speech-synthesis-markup.md#adjust-speaking-languages) | Female | `en-US-JennyMultilingualNeural` | General |
| English (United States) | `en-US` | Female | `en-US-MichelleNeural` | General |
| English (United States) | `en-US` | Female | `en-US-MonicaNeural` | General |
| English (United States) | `en-US` | Female | `en-US-SaraNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
The following table lists the prebuilt neural voices supported in each language.
| Zulu (South Africa) | `zu-ZA` | Female | `zu-ZA-ThandoNeural` | General |
| Zulu (South Africa) | `zu-ZA` | Male | `zu-ZA-ThembaNeural` | General |
-> [!IMPORTANT]
-> The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021.
-
-> The `en-US-JennyMultilingualNeural` voice supports multiple languages. Check the [voices list API](rest-text-to-speech.md#get-a-list-of-voices) for a supported languages list.
-
-> If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30,2021, all requests with previous versions will be rejected.
-
-> Two styles for `fr-FR-DeniseNeural` now are available for public preview: `cheerful` and `sad` in 3 regions: East US, West Europe, and Southeast Asia.
-
### Prebuilt neural voices in preview
The following neural voices are in public preview.
+> [!NOTE]
+> Voices and styles in public preview are only available in three service [regions](regions.md#prebuilt-neural-voices): East US, West Europe, and Southeast Asia.
+
| Language | Locale | Gender | Voice name | Style support |
|----------|--------|--------|------------|---------------|
| English (United Kingdom) | `en-GB` | Female | `en-GB-AbbiNeural` <sup>New</sup> | General |
The following neural voices are in public preview.
| German (Germany) | `de-DE` | Male | `de-DE-KlausNeural` <sup>New</sup> | General |
| German (Germany) | `de-DE` | Male | `de-DE-RalfNeural` <sup>New</sup> | General |
-> [!IMPORTANT]
-> Voices/Styles in public preview are only available in three service regions: East US, West Europe, and Southeast Asia.
-
-> For more information about regional availability, see [regions](regions.md#prebuilt-neural-voices).
-
-> To learn how you can configure and adjust neural voices, such as Speaking Styles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
-
-> The `en-US-JessaNeural` voice has changed to `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria." You can continue to use the full service name mapping like "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)" in your speech synthesis requests.
### Voice styles and roles

In some cases, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant. With roles, the same voice can act as a different age and gender.
+> [!NOTE]
+> Voices and styles in public preview are only available in three service [regions](regions.md#prebuilt-neural-voices): East US, West Europe, and Southeast Asia.
+
To learn how you can configure and adjust neural voice styles and roles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).

Use the following table to determine supported styles and roles for each neural voice.
Use the following table to determine supported styles and roles for each neural
|zh-CN-YunyangNeural|`customerservice`, `narration-professional`, `newscast-casual`|Supported||
|zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `embarrassed`, `fearful`, `sad`, `serious`|Supported|Supported|
-> [!IMPORTANT]
-> Voices/Styles in public preview are only available in three service regions: East US, West Europe, and Southeast Asia.
### Custom Neural Voice
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
You can use the `voices/list` endpoint to get a full list of voices for a specif
| Brazil South | `https://brazilsouth.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| Canada Central | `https://canadacentral.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| Central US | `https://centralus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| China East 2 | `https://chinaeast2.tts.speech.azure.cn/cognitiveservices/voices/list` |
+| China North 2 | `https://chinanorth2.tts.speech.azure.cn/cognitiveservices/voices/list` |
| East Asia | `https://eastasia.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| East US | `https://eastus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| East US 2 | `https://eastus2.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| France Central | `https://francecentral.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| Germany West Central | `https://germanywestcentral.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| India Central | `https://centralindia.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| Japan East | `https://japaneast.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| Japan West | `https://japanwest.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| Jio India West | `https://jioindiawest.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| Korea Central | `https://koreacentral.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| North Central US | `https://northcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| North Europe | `https://northeurope.tts.speech.microsoft.com/cognitiveservices/voices/list` |
-| South Africa North | `https://southafricanorth.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| Norway East | `https://norwayeast.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| South Central US | `https://southcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| Southeast Asia | `https://southeastasia.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| Switzerland North | `https://switzerlandnorth.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| Switzerland West | `https://switzerlandwest.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| US Gov Arizona | `https://usgovarizona.tts.speech.azure.us/cognitiveservices/voices/list` |
+| US Gov Virginia | `https://usgovvirginia.tts.speech.azure.us/cognitiveservices/voices/list` |
| UK South | `https://uksouth.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| West Central US | `https://westcentralus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| West Europe | `https://westeurope.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| West US | `https://westus.tts.speech.microsoft.com/cognitiveservices/voices/list` |
| West US 2 | `https://westus2.tts.speech.microsoft.com/cognitiveservices/voices/list` |
+| West US 3 | `https://westus3.tts.speech.microsoft.com/cognitiveservices/voices/list` |
> [!TIP]
> [Voices in preview](language-support.md#prebuilt-neural-voices-in-preview) are available in only these three regions: East US, West Europe, and Southeast Asia.
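As a quick way to explore these endpoints, a minimal Python sketch that queries the voices list follows; the region and resource key are placeholders.

```python
# pip install requests
import requests

REGION = "westus"                          # placeholder region
SPEECH_KEY = "<your-speech-resource-key>"  # placeholder key

url = f"https://{REGION}.tts.speech.microsoft.com/cognitiveservices/voices/list"
response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY})
response.raise_for_status()

# Each entry includes the locale, short voice name, and styles (if any).
for voice in response.json():
    print(voice["Locale"], voice["ShortName"], voice.get("StyleList", []))
```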
This HTTP request uses SSML to specify the voice and language. If the body lengt
```http
POST /cognitiveservices/v1 HTTP/1.1
-X-Microsoft-OutputFormat: raw-24khz-16bit-mono-pcm
+X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm
Content-Type: application/ssml+xml
Host: westus.tts.speech.microsoft.com
-Content-Length: 225
+Content-Length: <Length>
Authorization: Bearer [Base64 access_token]
+User-Agent: <Your application name>
<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Male' name='en-US-ChristopherNeural'> Microsoft Speech Service Text-to-Speech API </voice></speak>
```
+<sup>*</sup> For the Content-Length, you should use your own content length. In most cases, this value is calculated automatically.
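The same request can be made from code. Below is a minimal Python sketch; the region, key, output file, and application name are placeholders, and it authenticates with the resource key header rather than a bearer token.

```python
# pip install requests
import requests

REGION = "westus"                          # placeholder region
SPEECH_KEY = "<your-speech-resource-key>"  # placeholder key

url = f"https://{REGION}.tts.speech.microsoft.com/cognitiveservices/v1"
headers = {
    "Ocp-Apim-Subscription-Key": SPEECH_KEY,  # or: Authorization: Bearer <token>
    "Content-Type": "application/ssml+xml",
    "X-Microsoft-OutputFormat": "riff-24khz-16bit-mono-pcm",
    "User-Agent": "my-tts-sample",            # placeholder application name
}
ssml = (
    "<speak version='1.0' xml:lang='en-US'>"
    "<voice xml:lang='en-US' xml:gender='Male' name='en-US-ChristopherNeural'>"
    "Microsoft Speech Service Text-to-Speech API"
    "</voice></speak>"
)

response = requests.post(url, headers=headers, data=ssml.encode("utf-8"))
response.raise_for_status()

# The response body is the synthesized audio in the requested format.
with open("output.wav", "wb") as audio_file:
    audio_file.write(response.content)
```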
### HTTP status codes
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
You can adjust the speaking language for the `en-US-JennyMultilingualNeural` neu
Use this table to determine which speaking languages are supported for each neural voice. If the voice does not speak the language of the input text, the Speech service won't output synthesized audio.
-| Voice | Locale language | Description |
+| Voice | Primary and default locale | Additional locales |
|-------|----------------------------|--------------------|
-| `en-US-JennyMultilingualNeural` | `lang="en-US"` | Speak en-US locale, which is the primary locale of this voice |
-| | `lang="en-CA"` | Speak en-CA locale language |
-| | `lang="en-AU"` | Speak en-AU locale language |
-| | `lang="en-GB"` | Speak en-GB locale language |
-| | `lang="de-DE"` | Speak de-DE locale language |
-| | `lang="fr-FR"` | Speak fr-FR locale language |
-| | `lang="fr-CA"` | Speak fr-CA locale language |
-| | `lang="es-ES"` | Speak es-ES locale language |
-| | `lang="es-MX"` | Speak es-MX locale language |
-| | `lang="zh-CN"` | Speak zh-CN locale language |
-| | `lang="ko-KR"` | Speak ko-KR locale language |
-| | `lang="ja-JP"` | Speak ja-JP locale language |
-| | `lang="it-IT"` | Speak it-IT locale language |
-| | `lang="pt-BR"` | Speak pt-BR locale language |
+| `en-US-JennyMultilingualNeural` | `en-US` | `de-DE`, `en-AU`, `en-CA`, `en-GB`, `es-ES`, `es-MX`, `fr-CA`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, `zh-CN` |
**Example**
cognitive-services Migrate Qnamaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/migrate-qnamaker.md
To successfully migrate knowledge bases, **the account performing the migration
Resource level settings such as Role-based access control (RBAC) are not migrated to the new resource. These resource level settings would have to be reconfigured for the language resource post migration. You will also need to [re-enable analytics](analytics.md) for the language resource.
-## Steps to migrate
+## Steps to migrate SDKs
+
+This [SDK Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Language.QuestionAnswering_1.1.0-beta.1/sdk/cognitivelanguage/Azure.AI.Language.QuestionAnswering/MigrationGuide.md) is intended to assist in the migration to the new Question Answering client library, Azure.AI.Language.QuestionAnswering, from the old one, Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker. It will focus on side-by-side comparisons for similar operations between the two packages.
+
+## Steps to migrate knowledge bases
You can follow the steps below to migrate knowledge bases:
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/logging-and-diagnostics.md
For more information about configuring diagnostics, see the overview of [Azure r
## Resource log categories
-Communication Services offers three types of logs that you can enable:
+Communication Services offers the following types of logs that you can enable:
* **Usage logs** - provides usage data associated with each billed service offering
* **Chat operational logs** - provides basic information related to the chat service
* **SMS operational logs** - provides basic information related to the SMS service
* **Authentication operational logs** - provides basic information related to the Authentication service
+* **Network Traversal operational logs** - provides basic information related to the Network Traversal service
### Usage logs schema
Communication Services offers three types of logs that you can enable:
| PlatformType | The platform type used in the request. |
| Identity | The Communication Services identity related to the operation. |
| Scopes | The Communication Services scopes present in the access token. |
+
+### Network Traversal operational logs
+
+| Dimension | Description |
+||-|
+| TimeGenerated | The timestamp (UTC) of when the log was generated. |
+| OperationName | The operation associated with log record. |
+| CorrelationId | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| OperationVersion | The API-version associated with the operation or version of the operation (if there is no API version). |
+| Category | The log category of the event. Logs with the same log category and resource type will have the same properties fields. |
+| ResultType | The status of the operation (e.g. Succeeded or Failed). |
+| ResultSignature | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| DurationMs | The duration of the operation in milliseconds. |
+| Level | The severity level of the operation. |
+| URI | The URI of the request. |
+| Identity | The request sender's identity, if provided. |
+| SdkType | The SDK type being used in the request. |
+| PlatformType | The platform type being used in the request. |
+| RouteType | The routing methodology used to select the ICE server location relative to the client (for example, Any or Nearest). |
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/metrics.md
# Metrics overview
-Azure Communication Services currently provides metrics for Chat and SMS. [Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) can be used to plot your own charts, investigate abnormalities in your metric values, and understand your API traffic by using the metrics data that Chat and SMS requests emit.
+Azure Communication Services currently provides metrics for all ACS primitives. [Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) can be used to plot your own charts, investigate abnormalities in your metric values, and understand your API traffic by using the metrics data that your requests emit.
-## Where to find Metrics
+## Where to find metrics
-Chat and SMS services in Azure Communication Services emit metrics for API requests. These metrics can be found in the Metrics blade under your Communication Services resource. You can also create permanent dashboards using the workbooks blade under your Communication Services resource.
+Primitives in Azure Communication Services emit metrics for API requests. These metrics can be found in the Metrics blade under your Communication Services resource. You can also create permanent dashboards using the workbooks blade under your Communication Services resource.
## Metric definitions
-There are two types of requests that are represented within Communication Services metrics: **Chat API requests** and **SMS API requests**.
+Today, several types of requests are represented within Communication Services metrics: **Chat API requests**, **SMS API requests**, **Authentication API requests**, and **Network Traversal API requests**.
-Both Chat and SMS API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together using the `Count` aggregation type and support all standard Azure Aggregation time series including `Sum`, `Average`, `Min`, and `Max`.
+All API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together using the `Count` aggregation type and support all standard Azure Aggregation time series including `Sum`, `Average`, `Min`, and `Max`.
More information on supported aggregation types and time series aggregations can be found in [Advanced features of Azure Metrics Explorer](../../azure-monitor/essentials/metrics-charts.md#aggregation)
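You can also pull these metrics programmatically. A hedged sketch with the `azure-monitor-query` package follows; the resource ID is a placeholder, and the metric name `APIRequestSMS` is an assumption for illustration, so check the Metrics blade for the exact names your resource exposes.

```python
# pip install azure-identity azure-monitor-query
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers"
    "/Microsoft.Communication/communicationServices/<name>"  # placeholder
)

response = client.query_resource(
    resource_id,
    metric_names=["APIRequestSMS"],  # assumed metric name
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.COUNT],
)

# Print the request count for each five-minute bucket.
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.count)
```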
The following operations are available on Authentication API request metrics:
:::image type="content" source="./media/acs-auth-metrics.png" alt-text="Authentication Request Metric.":::
-## Next Steps
+### Network Traversal API requests
+
+The following operations are available on Network Traversal API request metrics:
+
+| Operation / Route | Description |
+| -- | - |
+| IssueRelayConfiguration | Issue configuration for a STUN/TURN server. |
++
+## Next steps
- Learn more about [Data Platform Metrics](../../azure-monitor/essentials/data-platform-metrics.md)
communication-services Program Brief Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/program-brief-guidelines.md
Your terms of service must include terms specific to the short code program brie
:::image type="content" source= "../media/short-code-terms.png" alt-text="Screenshot showing the terms of service mock up.":::
+Your terms of service must contain ALL of the following:
+- Program Name and Description
+- Message Frequency, which can be listed either as "Message Frequency Varies" or as the exact frequency; it must also match what is listed in the CTA (Call-To-Action)
+- The disclaimer: "Message and data rates may apply" written verbatim
+- Customer care information, for example: "For help call [phone number] or send an email to [email]"
+- Opt-Out message: "Text STOP to cancel"
+- A link to the Privacy Policy, or the whole Privacy Policy
+
> [!Note]
> If you don't have a URL of the website, mockups, or design, please send an email with the screenshots to phone@microsoft.com with "[CompanyName - ProgramName] Short Code Request".
-
### Program Sign up type and URL
This field captures the call-to-action: an instruction for customers that ensures they consent to receive text messages and understand the nature of the program. The call-to-action can be over SMS, Interactive Voice Response (IVR), website, or point of sale. Carriers require that all short code program brief applications are submitted with mockups for the call-to-action.
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
# SMS FAQ

This article answers commonly asked questions about the SMS service.
-## Can a customer use Azure Communication Services for emergency purposes?
+## Sending and receiving messages
+### How can I receive messages using Azure Communication Services?
-Azure Communication Services does not support text-to-911 functionality in the United States, but it's possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCC's text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you'll be responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the user's mobile device to deliver 911 texts through the underlying mobile carrier.
+Azure Communication Services customers can use Azure Event Grid to receive incoming messages. Follow this [quickstart](../../quickstarts/sms/handle-sms-events.md) to set up Event Grid to receive messages.
-## Are there any limits on sending messages?
+### How are messages sent to landline numbers treated?
-To ensure that we continue offering the high quality of service consistent with our SLAs, Azure Communication Services applies rate limits (different for each primitive). Developers who call our APIs beyond the limit will receive a 429 HTTP Status Code Response. If your company has requirements that exceed the rate-limits, please email us at phone@microsoft.com.
+In the United States, Azure Communication Services does not check for landline numbers and will attempt to send the message to carriers for delivery. Customers will be charged for messages sent to landline numbers.
-Rate Limits for SMS:
+### Can I send messages to multiple recipients?
-|Operation|Number Type |Scope|Timeframe (s)| Limit (request #) | Message units per minute|
-|||--|-|-|-|
-|Send Message|Toll-Free|Per Number|60|200|200|
-|Send Message|Short Code |Per Number|60|6000|6000|
+Yes, you can make one request with multiple recipients. Follow this [quickstart](../../quickstarts/sms/send.md?pivots=programming-language-csharp) to send messages to multiple recipients.
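As a rough sketch with the Python SDK (the connection string and phone numbers are placeholders):

```python
# pip install azure-communication-sms
from azure.communication.sms import SmsClient

sms_client = SmsClient.from_connection_string("<connection-string>")  # placeholder

# One request fans out to every number in "to"; the response contains
# one result per recipient.
results = sms_client.send(
    from_="+18005551234",                 # placeholder sender number
    to=["+14255550123", "+14255550124"],  # placeholder recipients
    message="Hello from Azure Communication Services!",
    enable_delivery_report=True,  # required if you subscribe to delivery events
)

for result in results:
    print(result.to, result.successful, result.message_id)
```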
+
+### I received a HTTP Status 202 from the Send SMS API but the SMS didn't reach my phone, what do I do now?
+The 202 returned by the service means that your message has been queued to be sent and not delivered. Use this [quickstart](../../quickstarts/sms/handle-sms-events.md) to subscribe to delivery report events and troubleshoot. Once the events are configured, inspect the "deliveryStatus" field of your delivery report to verify delivery success/failure.
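For troubleshooting, here is a hedged sketch of a handler that inspects one Event Grid event of type `Microsoft.Communication.SMSDeliveryReportReceived`; how the event reaches your code (webhook, Azure Function, queue) depends on your setup.

```python
def handle_delivery_report(event: dict) -> None:
    """Inspect a single Event Grid event dict for an SMS delivery report."""
    if event.get("eventType") != "Microsoft.Communication.SMSDeliveryReportReceived":
        return
    data = event["data"]
    # "Delivered" means the carrier confirmed delivery; anything else
    # warrants a look at the accompanying details field.
    print("message:", data.get("messageId"))
    print("status:", data.get("deliveryStatus"))
    print("details:", data.get("deliveryStatusDetails"))
```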
-## How does Azure Communication Services handle opt-outs for toll-free numbers?
+## Opt-out handling
+### How does Azure Communication Services handle opt-outs for toll-free numbers?
Opt-outs for US toll-free numbers are mandated and enforced by US carriers and cannot be overridden.
- **STOP** - If a text message recipient wishes to opt-out, they can send 'STOP' to the toll-free number. The carrier sends the following default response for STOP: *"NETWORK MSG: You replied with the word "stop" which blocks all texts sent from this number. Text back "unstop" to receive messages again."*
Opt-outs for US toll-free numbers are mandated and enforced by US carriers and c
- Azure Communication Services will detect the STOP message and block all further messages to the recipient. The delivery report will indicate a failed delivery with status message as "Sender blocked for given recipient."
- The STOP, UNSTOP and START messages will be relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications.
-## How does Azure Communication Services handle opt-outs for short codes?
+### How does Azure Communication Services handle opt-outs for short codes?
- **STOP** - If a text message recipient wishes to opt-out, they can send 'STOP' to the short code. Azure Communication Services sends the following default response for STOP: *"You have successfully been unsubscribed to messages from this number. Reply START to resubscribe"*
- **START/UNSTOP** - If the recipient wishes to resubscribe to text messages from a toll-free number, they can send 'START' or 'UNSTOP' to the toll-free number. Azure Communication Service sends the following default response for START/UNSTOP: *"You have successfully been re-subscribed to messages from this number. Reply STOP to unsubscribe."*
- Azure Communication Services will detect the STOP message and block all further messages to the recipient. The delivery report will indicate a failed delivery with status message as "Sender blocked for given recipient."
- The STOP, UNSTOP and START messages will be relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications.
-## How can I receive messages using Azure Communication Services?
+## Short codes
+### What is the eligibility to apply for a short code?
+Short Code availability is currently restricted to paid Azure enterprise subscriptions that have a billing address in the United States. Short Codes cannot be acquired on trial accounts or using Azure free credits. For more details, check out our [subscription eligibility page](../numbers/sub-eligibility-number-capability.md).
-Azure Communication Services customers can use Azure Event Grid to receive incoming messages. Follow this [quickstart](../../quickstarts/sms/handle-sms-events.md) to setup your event-grid to receive messages.
+### Can you text to a toll-free number from a short code?
+No. Texting to a toll-free number from a short code is not supported. You also won't be able to receive a message from a toll-free number to a short code.
+
+### How should a short code be formatted?
+Short codes do not fall under E.164 formatting guidelines and do not have a country code, or a "+" sign prefix. In the SMS API request, your short code should be passed as the 5-6 digit number you see in your short codes blade without any prefix.
+
+### How long does it take to get a short code? What happens after a short code program brief application is submitted?
+Once you have submitted the short code program brief application in the Azure portal, Azure Communication Services works with the aggregators to get your application approved by each mobile carrier. This process generally takes 8-12 weeks.
-## What is the SMS character limit?
+## Character and rate limits
+### What is the SMS character limit?
The size of a single SMS message is 140 bytes. The character limit per single message being sent depends on the message content and encoding used. Azure Communication Services supports both GSM-7 and UCS-2 encoding.
- **GSM-7** - A message containing text characters only will be encoded using GSM-7
This table shows the maximum number of characters that can be sent per SMS segme
|Hello world|Text|GSM Standard|GSM-7|160|
|你好|Unicode|Unicode|UCS-2|70|
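To estimate how many segments a message will consume, here is a rough sketch. It assumes the standard segmentation limits (160/70 characters for a single segment, 153/67 for concatenated messages) and a simplified GSM-7 check; a real encoder also counts extended GSM characters as two.

```python
import math

def is_gsm7(text: str) -> bool:
    # Simplified: treat plain ASCII as GSM-7; the full GSM-7 alphabet
    # and its extension table are larger than this.
    return all(ord(ch) < 128 for ch in text)

def estimate_segments(text: str) -> int:
    if is_gsm7(text):
        single, multi = 160, 153  # GSM-7 limits
    else:
        single, multi = 70, 67    # UCS-2 limits
    return 1 if len(text) <= single else math.ceil(len(text) / multi)

print(estimate_segments("Hello world"))  # 1 segment
print(estimate_segments("你好" * 40))     # 80 UCS-2 characters -> 2 segments
```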
-## Can I send/receive long messages (>2048 chars)?
+### Can I send/receive long messages (>2048 chars)?
Azure Communication Services supports sending and receiving of long messages over SMS. However, some wireless carriers or devices may act differently when receiving long messages.
-## How are messages sent to landline numbers treated?
-
-In the United States, Azure Communication Services does not check for landline numbers and will attempt to send it to carriers for delivery. Customers will be charged for messages sent to landline numbers.
+### Are there any limits on sending messages?
-## Can I send messages to multiple recipients?
-
-Yes, you can make one request with multiple recipients. Follow this [quickstart](../../quickstarts/sms/send.md?pivots=programming-language-csharp) to send messages to multiple recipients.
-
-## I received a HTTP Status 202 from the Send SMS API but the SMS didn't reach my phone, what do I do now?
+To ensure that we continue offering the high quality of service consistent with our SLAs, Azure Communication Services applies rate limits (different for each primitive). Developers who call our APIs beyond the limit will receive a 429 HTTP Status Code Response. If your company has requirements that exceed the rate-limits, please email us at phone@microsoft.com.
-The 202 returned by the service means that your message has been queued to be sent and not delivered. Use this [quickstart](../../quickstarts/sms/handle-sms-events.md) to subscribe to delivery report events and troubleshoot. Once the events are configured, inspect the "deliveryStatus" field of your delivery report to verify delivery success/failure.
+Rate Limits for SMS:
-## What is the eligibility to apply for a short code?
-Short Code availability is currently restricted to paid Azure enterprise subscriptions that have a billing address in the United States. Short Codes cannot be acquired on trial accounts or using Azure free credits. For more details, check out our [subscription eligibility page](../numbers/sub-eligibility-number-capability.md).
+|Operation|Number Type |Scope|Timeframe (s)| Limit (request #) | Message units per minute|
+|---|---|---|---|---|---|
+|Send Message|Toll-Free|Per Number|60|200|200|
+|Send Message|Short Code |Per Number|60|6000|6000|
-## Can you text to a toll-free number from a short code?
-No. Texting to a toll-free number from a short code is not supported. You also wont be able to receive a message from a toll-free number to a short code.
+## Carrier fees
+### What are the carrier fees for SMS?
+In July 2021, US carriers started charging an added fee for SMS messages sent and/or received from toll-free numbers and short codes. Carrier fees for SMS are charged per message segment based on the destination. Azure Communication Services charges a standard carrier fee per message segment. Carrier fees are subject to change by mobile carriers. Please refer to [SMS pricing](../sms-pricing.md) for more details.
-## How should a short code be formatted?
-Short codes do not fall under E.164 formatting guidelines and do not have a country code, or a "+" sign prefix. In the SMS API request, your short code should be passed as the 5-6 digit number you see in your short codes blade without any prefix.
+### When will we come to know of changes to these surcharges?
+As with similar Azure services, customers will be notified at least 30 days prior to the implementation of any price changes. These charges will be reflected on our SMS pricing page along with the effective dates.
+
+## Emergency support
+### Can a customer use Azure Communication Services for emergency purposes?
-## How long does it take to get a short code? What happens after a short code program brief application is submitted?
-Once you have submitted the short code program brief application in the Azure portal, Azure Communication Services works with the aggregators to get your application approved by each mobile carrier. This process generally takes 8-12 weeks.
+Azure Communication Services does not support text-to-911 functionality in the United States, but it's possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCC's text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you'll be responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the user's mobile device to deliver 911 texts through the underlying mobile carrier.
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Changes made to the `template` section are [revision-scope changes](revisions.md
### <a name="container-app-examples"></a>Examples
+For details on health probes, refer to [Health probes in Azure Container Apps](./health-probes.md).
+
# [ARM template](#tab/arm-template)

The following example ARM template deploys a container app.
The following example ARM template deploys a container app.
"resources": { "cpu": 0.5, "memory": "1Gi"
- }
+ },
+ "probes":[
+ {
+ "type":"liveness",
+ "httpGet":{
+ "path":"/health",
+ "port":8080,
+ "httpHeaders":[
+ {
+ "name":"Custom-Header",
+ "value":"liveness probe"
+ }]
+ },
+ "initialDelaySeconds":7,
+ "periodSeconds":3
+ },
+ {
+ "type":"readiness",
+ "tcpSocket":
+ {
+ "port": 8081
+ },
+ "initialDelaySeconds": 10,
+ "periodSeconds": 3
+ },
+ {
+ "type": "startup",
+ "httpGet": {
+ "path": "/startup",
+ "port": 8080,
+ "httpHeaders": [
+ {
+ "name": "Custom-Header",
+ "value": "startup probe"
+ }]
+ },
+ "initialDelaySeconds": 3,
+ "periodSeconds": 3
+ }]
}
],
"scale": {
The following example YAML configuration deploys a container app when used with
```yaml
kind: containerapp
-location: northeurope
+location: canadacentral
name: mycontainerapp
resourceGroup: myresourcegroup
type: Microsoft.App/containerApps
properties:
resources:
  cpu: 0.5
  memory: 1Gi
+ probes:
+ - type: liveness
+ httpGet:
+ path: "/health"
+ port: 8080
+ httpHeaders:
+ - name: "Custom-Header"
+ value: "liveness probe"
+ initialDelaySeconds: 7
+ periodSeconds: 3
+ - type: readiness
+ tcpSocket:
+ port: 8081
+ initialDelaySeconds: 10
+ periodSeconds: 3
+ - type: startup
+ httpGet:
+ path: "/startup"
+ port: 8080
+ httpHeaders:
+ - name: "Custom-Header"
+ value: "startup probe"
+ initialDelaySeconds: 3
+ periodSeconds: 3
scale:
  minReplicas: 1
  maxReplicas: 3
container-apps Deploy Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio.md
+
+ Title: 'Deploy to Azure Container Apps using Visual Studio'
+description: Deploy your containerized .NET applications to Azure Container Apps using Visual Studio
+++++ Last updated : 3/04/2022+++
+# Tutorial: Deploy to Azure Container Apps using Visual Studio
+
+Azure Container Apps Preview enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
+
+In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to Azure Container Apps using Visual Studio. The steps below also apply to earlier versions of ASP.NET Core.
+
+## Prerequisites
+
+- An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Visual Studio 2022 Preview 2 or higher, available as a [free download](https://visualstudio.microsoft.com/vs/preview/).
+- [Docker Desktop](https://hub.docker.com/editions/community/docker-ce-desktop-windows) for Windows. Visual Studio uses Docker Desktop for various containerization features.
+
+## Create the project
+
+Begin by creating the containerized ASP.NET Core application to deploy to Azure.
+
+1) Inside Visual Studio, select **File** and then choose **New => Project**.
+
+2) In the dialog window, search for *ASP.NET*, and then choose **ASP.NET Core Web App** and select **Next**.
+
+3) In the **Project Name** field, name the application *MyContainerApp* and then select **Next**.
+
+4) On the **Additional Information** screen, make sure to select **Enable Docker**, and then make sure **Linux** is selected for the **Docker OS** setting. Azure Container Apps currently does not support Windows containers. This selection ensures the project template supports containerization by default. While this setting is enabled, the project is built and run inside a container.
+
+5) Click **Create** and Visual Studio creates and loads the project.
+++
+### Docker installation
+
+If this is your first time creating a project using Docker, you may get a prompt instructing you to install Docker Desktop. This installation is required for working with containerized apps, as mentioned in the prerequisites, so click **Yes**. You can also download and [install Docker Desktop for Windows from the official Docker site](https://hub.docker.com/editions/community/docker-ce-desktop-windows).
+
+Visual Studio launches the Docker Desktop for Windows installer. You can follow the installation instructions on this page to set up Docker, which requires a system reboot.
+
+## Deploy to Azure Container Apps
+
+The application includes a Dockerfile because the Enable Docker setting was selected in the project template. Visual Studio uses the Dockerfile to build the container image that is run by Azure Container Apps.
+
+Refer to [How Visual Studio builds containerized apps](/visualstudio/containers/container-build) if you'd like to learn more about the specifics of this process.
+
+You are now ready to deploy the application to Azure Container Apps.
+
+### Create the resources
+
+The Visual Studio publish dialogs help you choose existing Azure resources or create new ones to deploy your application to. They also build the container image using the Dockerfile in the project, push the image to ACR, and deploy the new image to the selected container app.
+
+1) Right-click the **MyContainerApp** project node and select **Publish**.
+
+2) In the dialog, choose **Azure** from the list of publishing options, and then select **Next**.
+
+ :::image type="content" source="media/visual-studio/container-apps-deploy-azure.png" alt-text="A screenshot showing to publish to Azure.":::
+
+3) On the **Specific target** screen, choose **Azure Container Apps Preview (Linux)**, and then select **Next** again.
+
+ :::image type="content" source="media/visual-studio/container-apps-publish-azure.png" alt-text="A screenshot showing Container Apps selected.":::
+
+4) Next, create an Azure Container App to host the project. Select the **green plus icon** on the right to open the create dialog. In the *Create new* dialog, enter the following values:
+
+ - **Container App name**: Enter a name of `msdocscontainerapp`.
+ - **Subscription name**: Choose the subscription where you would like to host your app.
+ - **Resource group**: A resource group acts as a logical container to organize related resources in Azure. You can either select an existing resource group, or select **New** to create one with a name of your choosing, such as `msdocscontainerapps`.
+ - **Container Apps Environment**: Every container app must be part of a container app environment. An environment provides an isolated network for one or more container apps, making it possible for them to easily invoke each other. Click **New** to open the *Create new* dialog for your container app environment. Leave the default values and select **OK** to close the environment dialog.
+ - **Container Name**: This is the friendly name of the container that will run for this container app. Use the name `msdocscontainer1` for this quickstart. A container app typically runs a single container, but there are times when having more than one container is needed. One such example is when a sidecar container is required to perform an activity such as specialized logging or communications.
+
+ :::image type="content" source="media/visual-studio/container-apps-create-new.png" alt-text="A screenshot showing how to create new Container Apps.":::
+
+5) Select **Create** to finalize the creation of your container app. Visual Studio and Azure create the needed resources on your behalf. This process may take a couple minutes, so allow it to run to completion before moving on.
+
+6) Once the resources are created, choose **Next**.
+
+7) On the **Registry** screen, you can either select an existing registry if you have one, or create a new one. To create a new one, click the green **+** icon on the right. On the **Create new** registry screen, fill in the following values:
+
+ - **DNS prefix**: Enter a value of `msdocscontainerregistry` or a name of your choosing.
+ - **Subscription Name**: Select the subscription you want to use - you may only have one to choose from.
+ - **Resource Group**: Choose the msdocs resource group you created previously.
+ - **Sku**: Select **Standard**.
+ - **Registry Location**: Select a region that is geographically close to you.
+
+ :::image type="content" source="media/visual-studio/container-apps-registry.png" alt-text="A screenshot showing how to create the container registry.":::
+
+8) After you have populated these values, select **Create**. Visual Studio and Azure will take a moment to create the registry.
+
+9) Once the container registry is created, make sure it is selected, and then choose **Finish**. Visual Studio will take a moment to create the publish profile. This publish profile is where Visual Studio stores the publish options and resources you chose so you can quickly publish again whenever you want. You can close the dialog once it finishes.
+
+ :::image type="content" source="media/visual-studio/container-apps-choose-registry.png" alt-text="A screenshot showing how select the created registry.":::
+
+### Publish the app
+
+While the resources and publishing profile are created, you still need to publish and deploy the app to Azure.
+
+Choose **Publish** in the upper right of the publishing profile screen to deploy to the container app you created in Azure. This process may take a moment, so wait for it to complete.
++
+When the app finishes deploying, Visual Studio opens a browser to the URL of your deployed site. This page may initially display an error if all of the proper resources have not finished provisioning. You can continue to refresh the browser periodically to check if the deployment has fully completed.
+++
+## Clean up resources
+
+If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
+
+Follow these steps in the Azure portal to remove the resources you created:
+
+1. Select the **msdocscontainerapps** resource group from the *Overview* section.
+1. Select the **Delete resource group** button at the top of the resource group *Overview*.
1. Enter the resource group name **msdocscontainerapps** in the *Are you sure you want to delete "msdocscontainerapps"* confirmation dialog.
+1. Select **Delete**.
+ The process to delete the resource group may take a few minutes to complete.
+
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Environments in Azure Container Apps](environment.md)
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
+
+ Title: Health probes in Azure Container Apps
+description: Check startup, liveness, and readiness with Azure Container Apps health probes
++++ Last updated : 03/30/2022+++
+# Health probes in Azure Container Apps
+
+Health probes in Azure Container Apps are based on [Kubernetes health probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). You can set up probes using either TCP or HTTP(S); no other probe types are supported.
+
+Container Apps support the following probes:
+
+- [Liveness](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command): Reports the overall health of your replica.
+- [Startup](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes): Delay reporting on a liveness or readiness state for slower apps with a startup probe.
+- [Readiness](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes): Signals that a replica is ready to accept traffic.
+
+For a full listing of the specification supported in Azure Container Apps, refer to [Azure Rest API specs](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/app/resource-manager/Microsoft.App/stable/2022-03-01/CommonDefinitions.json#L119-L236).
+
+## HTTP probes
+
+HTTP probes allow you to implement custom logic to check the status of application dependencies before reporting a healthy status. Configure your health probe endpoints to respond with an HTTP status code greater than or equal to `200` and less than `400` to indicate success. Any response code outside this range indicates a failure.
+
+The following example demonstrates how to implement a liveness endpoint in JavaScript.
+
+```javascript
+const express = require('express');
+const app = express();
+
+app.get('/liveness', (req, res) => {
+ let isSystemStable = false;
+
+ // check for database availability
+ // check filesystem structure
+ // etc.
+
+ // set isSystemStable to true if all checks pass
+
+ if (isSystemStable) {
+ res.status(200).send('OK'); // Success
+ } else {
+ res.status(503).send('Unavailable'); // Service unavailable
+ }
+})
+
+// Listen on the port that the probe's httpGet configuration targets.
+app.listen(8080);
+```
+
+## TCP probes
+
+TCP probes wait for a connection to be established with the server to indicate success. A probe failure is registered if no connection is made.
+
+## Restrictions
+
+- You can only add one of each probe type per container.
+- `exec` probes aren't supported.
+- Port values must be integers; named ports aren't supported.
+
+## Examples
+
+The following code listing shows how you can define health probes for your containers.
+
+The `...` placeholders denote omitted code. Refer to [Container Apps Preview ARM template API specification](./azure-resource-manager-api-spec.md) for full ARM template details.
+
+# [ARM template](#tab/arm-template)
+
+```json
+{
+ ...
+ "containers":[
+ {
+ "image":"nginx",
+ "name":"web",
+ "probes": [
+ {
+ "type": "liveness",
+ "httpGet": {
+ "path": "/health",
+ "port": 8080,
+ "httpHeaders": [
+ {
+ "name": "Custom-Header",
+ "value": "liveness probe"
+ }]
+ },
+ "initialDelaySeconds": 7,
+ "periodSeconds": 3
+ },
+ {
+ "type": "readiness",
+ "tcpSocket": {
+ "port": 8081
+ },
+ "initialDelaySeconds": 10,
+ "periodSeconds": 3
+ },
+ {
+ "type": "startup",
+ "httpGet": {
+ "path": "/startup",
+ "port": 8080,
+ "httpHeaders": [
+ {
+ "name": "Custom-Header",
+ "value": "startup probe"
+            }]
+        },
+        "initialDelaySeconds": 3,
+        "periodSeconds": 3
+ }]
+ }]
+ ...
+}
+```
+
+# [YAML](#tab/yaml)
+
+```yml
+...
+containers:
+ - image: nginx
+ name: web
+ probes:
+ - type: liveness
+ httpGet:
+ path: "/health"
+ port: 8080
+ httpHeaders:
+ - name: Custom-Header
+ value: "liveness probe"
+ initialDelaySeconds: 7
+ periodSeconds: 3
+ - type: readiness
+ tcpSocket:
+ port: 8081
+ initialDelaySeconds: 10
+ periodSeconds: 3
+ - type: startup
+ httpGet:
+ path: "/startup"
+ port: 8080
+ httpHeaders:
+ - name: Custom-Header
+ value: "startup probe"
+ initialDelaySeconds: 3
+ periodSeconds: 3
+...
+```
+++
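+One way to apply a YAML configuration like the one above is through the Container Apps CLI extension's `--yaml` parameter. The following is a minimal sketch, assuming the YAML is saved locally as *app.yaml*, the `containerapp` CLI extension is installed, and the resource names are placeholders:
+
+```azurecli
+az containerapp create \
+  --name my-container-app \
+  --resource-group my-resource-group \
+  --environment my-environment \
+  --yaml app.yaml
+```
+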
+The optional [failureThreshold](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) setting defines the number of times Kubernetes retries a probe after it fails. Attempts that exceed the `failureThreshold` amount cause different results for each probe type. Refer to [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) for details.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Monitor an app](monitor.md)
container-registry Container Registry Auto Purge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auto-purge.md
For more information about image storage, see [Container image storage in Azure
<!-- LINKS - Internal -->
[azure-cli-install]: /cli/azure/install-azure-cli
[az-acr-run]: /cli/azure/acr#az_acr_run
-[az-acr-task-create]: /cli/azure/acr/task#az_acr_task_create
-[az-acr-task-show]: /cli/azure/acr/task#az_acr_task_show
+[az-acr-task-create]: /cli/azure/acr/task#az-acr-task-create
+[az-acr-task-show]: /cli/azure/acr/task#az-acr-task-show
cost-management-billing Ea Portal Agreements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-agreements.md
An enrollment has one of the following status values. Each value determines how
**Active** - The enrollment is accessible and usable. You can create accounts and subscriptions in the Azure EA portal. Direct customers can create departments, accounts and subscriptions in the [Azure portal](https://portal.azure.com). The enrollment remains active until the enterprise agreement end date.
-**Indefinite Extended Term** - Indefinite extended term status occurs after the enterprise agreement end date is reached. Before the EA enrollment reaches the enterprise agreement end date, the Enrollment Administrator should decide to:
+**Indefinite Extended Term** - Indefinite extended term status occurs after the enterprise agreement end date is reached and the agreement has expired. When an agreement enters an extended term, it doesn't receive discounted pricing. Instead, pricing is at retail rates. Before the EA enrollment reaches the enterprise agreement end date, the Enrollment Administrator should decide to:
- Renew the enrollment by adding additional Azure Prepayment
- Transfer the existing enrollment to a new enrollment
- Migrate to the Microsoft Online Subscription Program (MOSP)
- Confirm disablement of all services associated with the enrollment
-**Expired** - The EA enrollment expires when it reaches the enterprise agreement end date. The EA customer is opted out of the extended term and all their services are disabled.
+**Expired** - The EA enrollment expires when it reaches the enterprise agreement end date and is opted out of the extended term. Sign a new enrollment contract as soon as possible. Although your service won't be disabled immediately, there's a risk of it being disabled.
As of August 1, 2019, new opt-out forms aren't accepted for Azure commercial customers. Instead, all enrollments go into indefinite extended term. If you want to stop using Azure services, close your subscription in the [Azure portal](https://portal.azure.com). Or, your partner can submit a termination request. There's no change for customers with government agreement types.
cost-management-billing Exchange And Refund Azure Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md
# Self-service exchanges and refunds for Azure Reservations
-Azure Reservations provide flexibility to help meet your evolving needs. Reservation products are interchangeable with each other if they're the same type of reservation. For example, you can exchange multiple compute reservations including Azure Dedicated Host, Azure VMware Solution, and Azure Virtual Machines with each other all at once. In other words, In an other example, you can exchange multiple SQL database reservation types including Managed Instances and Elastic Pool with each other.
+Azure Reservations provide flexibility to help meet your evolving needs. Reservation products are interchangeable with each other if they're the same type of reservation. For example, you can exchange multiple compute reservations including Azure Dedicated Host, Azure VMware Solution, and Azure Virtual Machines with each other all at once. In another example, you can exchange multiple SQL database reservation types including Managed Instances and Elastic Pool with each other.
However, you can't exchange dissimilar reservations. For example, you can't exchange a Cosmos DB reservation for SQL Database.
-You can also exchange a reservation to purchase another reservation of a similar type in a different region. For example, you can exchange a reservation that's in West US 2 for one that's in West Europe.
+You can also exchange a reservation to purchase another reservation of a similar type in a different region. For example, you can exchange a reservation that's in the West US 2 region for one that's in the West Europe region.
When you exchange a reservation, you can change your term from one-year to three-year.
If you are exchanging for a different size, series, region or payment frequency,
## How transactions are processed
-First, Microsoft cancels the existing reservation and refunds the pro-rated amount for that reservation. If there's an exchange, the new purchase is processed. Microsoft processes refunds using one of the following methods, depending on your account type and payment method:
+First, Microsoft cancels the existing reservation and refunds the pro-rated amount for that reservation. If there's an exchange, the new purchase is processed. Microsoft processes refunds using one of the following methods, depending on your account type and payment method.
### Enterprise agreement customers
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
Previously updated : 02/28/2022 Last updated : 04/01/2022
You can copy data from a REST source to any supported sink data store. You also
Specifically, this generic REST connector supports:

- Copying data from a REST endpoint by using the **GET** or **POST** methods and copying data to a REST endpoint by using the **POST**, **PUT** or **PATCH** methods.
-- Copying data by using one of the following authentications: **Anonymous**, **Basic**, **AAD service principal**, and **managed identities for Azure resources**.
+- Copying data by using one of the following authentications: **Anonymous**, **Basic**, **AAD service principal**, and **user-assigned managed identity**.
- **[Pagination](#pagination-support)** in the REST APIs.
- For REST as source, copying the REST JSON response [as-is](#export-json-response-as-is) or parsing it by using [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping). Only response payload in **JSON** is supported.
Set the **authenticationType** property to **AadServicePrincipal**. In addition
} ```
-### <a name="managed-identity"></a> Use system-assigned managed identity authentication
-
-Set the **authenticationType** property to **ManagedServiceIdentity**. In addition to the generic properties that are described in the preceding section, specify the following properties:
-
-| Property | Description | Required |
-|: |: |: |
-| aadResourceId | Specify the AAD resource you are requesting for authorization, for example, `https://management.core.windows.net`.| Yes |
-
-**Example**
-
-```json
-{
- "name": "RESTLinkedService",
- "properties": {
- "type": "RestService",
- "typeProperties": {
- "url": "<REST endpoint e.g. https://www.example.com/>",
- "authenticationType": "ManagedServiceIdentity",
- "aadResourceId": "<AAD resource URL e.g. https://management.core.windows.net>"
- },
- "connectVia": {
- "referenceName": "<name of Integration Runtime>",
- "type": "IntegrationRuntimeReference"
- }
- }
-}
-```
-
### Use user-assigned managed identity authentication

Set the **authenticationType** property to **ManagedServiceIdentity**. In addition to the generic properties that are described in the preceding section, specify the following properties:
databox-online Azure Stack Edge Gpu Virtual Machine Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md
Previously updated : 02/25/2022 Last updated : 03/29/2022 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device by using APIs, so that I can efficiently manage my VMs.
databox Data Box Deploy Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-picked-up.md
Previously updated : 03/11/2022+ Last updated : 03/31/2022 # Customer intent: As an IT admin, I need to be able to return a Data Box to upload on-premises data from my server onto Azure.
Follow the guidelines for the region you're shipping from if you're using Micros
[!INCLUDE [data-box-shipping-in-uae](../../includes/data-box-shipping-in-uae.md)]
+## [Norway](#tab/in-norway)
+
+### Self-managed shipping
+
+[!INCLUDE [data-box-shipping-self-managed](../../includes/data-box-shipping-self-managed.md)]

::: zone target="chromeless"
databox Data Box Portal Customer Managed Shipping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-portal-customer-managed-shipping.md
Previously updated : 07/22/2021+ Last updated : 03/31/2022
This article describes self-managed shipping tasks to order, pick up, and drop-o
## Prerequisites
-Self-managed shipping is available as an option when you [Order Azure Data Box](data-box-deploy-ordered.md). Self-managed shipping is only available in the following regions:
-
-* US Government
-* United Kingdom
-* Western Europe
-* Japan
-* Singapore
-* South Korea
-* India
-* South Africa
-* Australia
-* Brazil
+Self-managed shipping is available as an option when you [Order Azure Data Box](data-box-deploy-ordered.md).
+ ## Use self-managed shipping
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
This article gives an overview of the Azure Digital Twins APIs available, and th
Azure Digital Twins comes equipped with control plane APIs, data plane APIs, and SDKs for managing your instance and its elements. * The control plane APIs are [Azure Resource Manager (ARM)](../azure-resource-manager/management/overview.md) APIs, and cover resource management operations like creating and deleting your instance. * The data plane APIs are Azure Digital Twins APIs, and are used for data management operations like managing models, twins, and the graph.
-* The SDKs take advantage of the existing APIs to allow for ease of development of custom applications making use of Azure Digital Twins. The control plane SDKs are available in [.NET (C#)](/dotnet/api/overview/azure/digitaltwins/management?view=azure-dotnet&preserve-view=true) and [Java](/java/api/overview/azure/digitaltwins/resourcemanagement?view=azure-java-stable&preserve-view=true), and the data plane SDKs are available in [.NET (C#)](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true), [Java](/java/api/overview/azure/digitaltwins/client?view=azure-java-stable&preserve-view=true), [JavaScript](/javascript/api/@azure/digital-twins-core/?view=azure-node-latest&preserve-view=true), and [Python](/python/api/azure-digitaltwins-core/azure.digitaltwins.core?view=azure-python&preserve-view=true).
+* The SDKs take advantage of the existing APIs to allow for ease of development of custom applications making use of Azure Digital Twins.
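To make the two planes concrete, here's a minimal sketch using the [az dt](/cli/azure/dt) CLI command set, which wraps these APIs; one operation from each plane, with placeholder names:

```azurecli
# Control plane (ARM): create an Azure Digital Twins instance
az dt create --dt-name <instance-name> --resource-group <resource-group>

# Data plane: query the twins in the instance
az dt twin query --dt-name <instance-name> --query-command "SELECT * FROM DigitalTwins"
```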
## Overview: control plane APIs
digital-twins Concepts Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-cli.md
description: Learn about the Azure Digital Twins CLI command set. Previously updated : 02/28/2022 Last updated : 03/31/2022
# Azure Digital Twins CLI command set
-Apart from managing your Azure Digital Twins instance in the Azure portal, Azure Digital Twins also has a command set for the [Azure CLI](/cli/azure/what-is-azure-cli) that you can use to do most major actions with the service. This article covers the [Azure CLI](/cli/azure/what-is-azure-cli) in terms of its uses, how to get it, and the requirements for using it.
+Apart from managing your Azure Digital Twins instance in the Azure portal, Azure Digital Twins also has a command set for the [Azure CLI](/cli/azure/what-is-azure-cli) that you can use to do most major actions with the service. This article covers the Azure CLI command set for Azure Digital Twins, including its uses, how to get it, and the requirements for using it.
Some of the actions you can do using the command set include:

* Managing an Azure Digital Twins instance
Otherwise, you can use the following command to install the extension yourself a
```azurecli
az extension add --upgrade --name azure-iot
```
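With the extension installed, the `az dt` commands become available. As a quick check that everything is wired up (a minimal example, assuming you're signed in to the Azure CLI), list the Azure Digital Twins instances in your subscription:

```azurecli
az dt list
```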
+## Use special characters in different shells
+
+Some `az dt` commands use special characters that may have to be escaped for proper parsing in certain shell environments. Use the tips in this section to help you know when to do this in your shell of choice.
+
+### Bash
+
+Use these special character tips for Bash environments.
+
+#### Queries
+
+In many twin queries, the `$` character is used to reference the `$dtId` property of a twin. When using the [az dt twin query](/cli/azure/dt/twin#az-dt-twin-query) command to query in the Cloud Shell Bash environment, escape the `$` character with a backslash (`\`).
+
+Here is an example of querying for a twin with a CLI command in the Cloud Shell Bash environment:
+
+```azurecli
+az dt twin query -n <instance-name> -q "SELECT * FROM DigitalTwins T Where T.\$dtId = 'room0'"
+```
+
+### PowerShell
+
+Use these special character tips for PowerShell environments.
+
+#### Inline JSON
+
+Some commands, like [az dt twin create](/cli/azure/dt/twin#az-dt-twin-create), allow you to enter twin information in the form of inline JSON. When entering inline JSON in the PowerShell environment, escape double quote characters (`"`) inside the JSON with a backslash (`\`).
+
+Here is an example of creating a twin with a CLI command in PowerShell:
+
+```azurecli
+az dt twin create --dt-name <instance-name> --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties '{\"Temperature\": 0.0}'
+```
+
+>[!TIP]
+>Many of the commands that support inline JSON also support input as a file path, which can help you avoid shell-specific text requirements.
+
+#### Queries
+
+In many twin queries, the `$` character is used to reference the `$dtId` property of a twin. When using the [az dt twin query](/cli/azure/dt/twin#az-dt-twin-query) command to query in a PowerShell environment, escape the `$` character with a backtick character.
+
+Here is an example of querying for a twin with a CLI command in PowerShell:
+```azurecli
+az dt twin query -n <instance-name> -q "SELECT * FROM DigitalTwins T Where T.`$dtId = 'room0'"
+```
+
+### Windows CMD
+
+Use these special character tips for the local Windows CMD.
+
+#### Inline JSON
+
+Some commands, like [az dt twin create](/cli/azure/dt/twin#az-dt-twin-create), allow you to enter twin information in the form of inline JSON. When entering inline JSON in a local Windows CMD window, enclose the parameter value with double quotes (`"`) instead of single quotes (`'`), and escape double quote characters inside the JSON with a backslash (`\`).
+
+Here is an example of creating a twin with a CLI command in the local Windows CMD:
+
+```azurecli
+az dt twin create --dt-name <instance-name> --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties "{\"Temperature\": 0.0}"
+```
+
+>[!TIP]
+>Many of the commands that support inline JSON also support input as a file path, which can help you avoid shell-specific text requirements.
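+
+As an illustration of the file-based alternative mentioned in these tips, the twin creation above could read its properties from a local file instead of inline JSON, sidestepping shell escaping entirely. This sketch assumes a hypothetical *thermostat-props.json* file containing the same JSON payload:
+
+```azurecli
+az dt twin create --dt-name <instance-name> --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties ./thermostat-props.json
+```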
+## Next steps

Explore the CLI and its full set of commands through the reference docs:
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
A key characteristic of Azure Digital Twins is the ability to define your own vocabulary and build your twin graph in the self-defined terms of your business. This capability is provided through user-provided *models*. You can think of models as the nouns in a description of your world. Azure Digital Twins models are represented in the JSON-LD-based *Digital Twin Definition Language (DTDL)*.
-A model is similar to a *class* in an object-oriented programming language, defining a data shape for one particular concept in your real work environment. Models have names (such as Room or TemperatureSensor), and contain elements such as properties, telemetry/events, and commands that describe what this type of entity in your environment can do. Later, you'll use these models to create [digital twins](concepts-twins-graph.md) that represent specific entities that meet this type description.
+A model is similar to a *class* in an object-oriented programming language, defining a data shape for one particular concept in your real-world environment. Models have names (such as Room or TemperatureSensor), and contain elements such as properties, telemetry, and relationships that describe what this type of entity in your environment does. Later, you'll use these models to create [digital twins](concepts-twins-graph.md) that represent specific entities that meet this type description.
## Digital Twin Definition Language (DTDL) for models
The fields of the model are:
| `@id` | An identifier for the model. Must be in the format `dtmi:<domain>:<unique-model-identifier>;<model-version-number>`. |
| `@type` | Identifies the kind of information being described. For an interface, the type is `Interface`. |
| `@context` | Sets the [context](https://niem.github.io/json/reference/json-ld/context/) for the JSON document. Models should use `dtmi:dtdl:context;2`. |
-| `displayName` | [optional] Gives you the option to define a friendly name for the model. |
-| `contents` | All remaining interface data is placed here, as an array of attribute definitions. Each attribute must provide a `@type` (`Property`, `Telemetry`, `Command`, `Relationship`, or `Component`) to identify the sort of interface information it describes, and then a set of properties that define the actual attribute (for example, `name` and `schema` to define a `Property`). |
+| `displayName` | [optional] Gives you the option to define a friendly name for the model. If you don't use this field, the model will use its full DTMI value.|
+| `contents` | All remaining interface data is placed here, as an array of attribute definitions. Each attribute must provide a `@type` (`Property`, `Telemetry`, `Relationship`, or `Component`) to identify the sort of interface information it describes, and then a set of properties that define the actual attribute (for example, `name` and `schema` to define a `Property`). |
#### Example model
Telemetry is often used with IoT devices, because many devices either can't, or
As a result, when designing a model in Azure Digital Twins, you'll probably use properties in most cases to model your twins. Doing so allows you to have the backing storage and the ability to read and query the data fields.
-Telemetry and properties often work together to handle data ingress from devices. As all ingress to Azure Digital Twins is via [APIs](concepts-apis-sdks.md), you'll typically use your ingress function to read telemetry or property events from devices, and set a property in Azure Digital Twins in response.
+Telemetry and properties often work together to handle data ingress from devices. You'll often use an ingress function to read telemetry or property events from devices, and set a property in Azure Digital Twins in response.
You can also publish a telemetry event from the Azure Digital Twins API. As with other telemetry, it's a short-lived event that requires a listener to handle it.
This section describes additional considerations and recommendations for modelin
### Use DTDL industry-standard ontologies
-If your solution is for a certain established industry (like smart buildings, smart cities, or energy grids), consider starting with a pre-existing set of models for you industry instead of designing your models from scratch. Microsoft has partnered with domain experts to create DTDL model sets based on industry standards, to help minimize reinvention and encourage consistency and simplicity across industry solutions. You can read more about these ontologies, including how to use them and what ontologies are available now, in [What is an ontology?](concepts-ontologies.md).
+If your solution is for a certain established industry (like smart buildings, smart cities, or energy grids), consider starting with a pre-existing set of models for your industry instead of designing your models from scratch. Microsoft has partnered with domain experts to create DTDL model sets based on industry standards, to help minimize reinvention and encourage consistency and simplicity across industry solutions. You can read more about these ontologies, including how to use them and what ontologies are available now, in [What is an ontology?](concepts-ontologies.md).
### Consider query implications
This section describes the current set of samples in more detail.
### Model uploader
-Once you're finished creating, extending, or selecting your models, you can upload them to your Azure Digital Twins instance to make them available for use in your solution. You can do so by using the [Azure Digital Twins APIs](concepts-apis-sdks.md), as described in [Manage DTDL models](how-to-manage-model.md#upload-models).
+Once you're finished creating, extending, or selecting your models, you can upload them to your Azure Digital Twins instance to make them available for use in your solution. You can do so by using the [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md), [az dt CLI commands](concepts-cli.md), or [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md).
However, if you have many models to upload, or if they have many interdependencies that would make ordering individual uploads complicated, you can use the [Azure Digital Twins Model Uploader sample](https://github.com/Azure/opendigitaltwins-tools/tree/master/ADTTools#uploadmodels) to upload many models at once. Follow the instructions provided with the sample to configure and use this project to upload models into your own instance.
digital-twins How To Enable Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-enable-private-link.md
ms.devlang: azurecli
# Enable private access with Private Link
-This article describes the different ways to [enable Private Link with a private endpoint for an Azure Digital Twins instance](concepts-security.md#private-network-access-with-azure-private-link). Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
+This article describes the different ways to enable [Private Link with a private endpoint for an Azure Digital Twins instance](concepts-security.md#private-network-access-with-azure-private-link). Configuring a private endpoint for your Azure Digital Twins instance enables you to secure your Azure Digital Twins instance and eliminate public exposure. Additionally, it helps avoid data exfiltration from your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
Here are the steps that are covered in this article:

1. Turn on Private Link and configure a private endpoint for an Azure Digital Twins instance.
digital-twins How To Ingest Iot Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-ingest-iot-hub-data.md
To create a thermostat-type twin, you'll first need to upload the thermostat [mo
You'll then need to create one twin using this model. Use the following command to create a thermostat twin named thermostat67, and set 0.0 as an initial temperature value.

```azurecli-interactive
-az dt twin create --dt-name <instance-name> --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties '{"Temperature": 0.0,}'
+az dt twin create --dt-name <instance-name> --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties '{"Temperature": 0.0}'
```
+>[!NOTE]
+>If you're using anything other than Cloud Shell in the Bash environment, you may need to escape certain characters in the inline JSON so that it's parsed correctly.
+>
+>For more information, see [Use special characters in different shells](concepts-cli.md#use-special-characters-in-different-shells).
+ When the twin is created successfully, the CLI output from the command should look something like this: ```json {
digital-twins Troubleshoot Error Cli Parse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-error-cli-parse.md
+
+ Title: "Troubleshoot CLI parsing failures"
+
+description: Learn how to diagnose and resolve parsing failures with the Azure Digital Twins CLI command set.
++++ Last updated : 03/31/2022++
+# Troubleshoot parsing failures with Azure Digital Twins CLI commands
+
+This article describes causes and resolution steps for various "parse failed" errors while running [az dt](/cli/azure/dt) commands in the Azure CLI.
+
+## Symptoms
+
+While attempting to run select `az dt` commands in an Azure CLI environment, you receive an error indicating that the command wasn't parsed correctly. The error message might include the words *parse failed* or *failed to parse*, or partial text from your command may be marked as *unrecognized arguments*.
+
+## Causes
+
+### Cause #1
+
+Some `az dt` commands use special characters that have to be escaped for proper parsing in certain shell environments. A special character in your CLI command may need to be escaped for the command to be parsed correctly in the shell you're using.
+
+## Solutions
+
+### Solution #1
+
+Use the full error message text to help you determine which character is causing an issue. Then, try escaping instances of this character with a backslash or a backtick. For a list of some specific characters that need to be escaped in certain shells, see [Use special characters in different shells](concepts-cli.md#use-special-characters-in-different-shells).
+
+### Solution #2
+
+If you're encountering the parsing issue while passing inline JSON into a command (like [az dt model create](/cli/azure/dt/model#az-dt-model-create) or [az dt twin create](/cli/azure/dt/twin#az-dt-twin-create)), check whether the command allows you to pass in a file instead. Many of the commands that support inline JSON also support input as a file path, which can help you avoid shell-specific text requirements.
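+
+For example, a model upload that fails to parse as inline JSON could read from a file instead. This is a minimal sketch that assumes a hypothetical local model file named *Thermostat.json*:
+
+```azurecli
+az dt model create --dt-name <instance-name> --models ./Thermostat.json
+```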
+
+### Solution #3
+
+Not all shells have the same special character requirements, so you can try running the command in a different shell type (some options are the [Cloud Shell](https://shell.azure.com) Bash environment, [Cloud Shell](https://shell.azure.com) PowerShell environment, local Windows CMD, local Bash window, or local PowerShell window).
+
+## Next steps
+
+Read more about the CLI for Azure Digital Twins:
+* [Azure Digital Twins CLI command set](concepts-cli.md)
+* [az dt command reference](/cli/azure/dt)
digital-twins Tutorial Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-code.md
#
-# Tutorial: Coding with the Azure Digital Twins APIs
+# Tutorial: Coding with the Azure Digital Twins SDK
Developers working with Azure Digital Twins commonly write client applications for interacting with their instance of the Azure Digital Twins service. This developer-focused tutorial provides an introduction to programming against the Azure Digital Twins service, using the [Azure Digital Twins SDK for .NET (C#)](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true). It walks you through writing a C# console client app step by step, starting from scratch.
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-command-line-cli.md
To create a digital twin, you use the [az dt twin create](/cli/azure/dt/twin#az-
``` >[!NOTE]
- > It's recommended to use the CLI in the Bash environment for this tutorial. If you're using the PowerShell environment, you may need to escape the quotation mark characters in order for the `--properties` JSON value to be parsed correctly.
+ >If you're using anything other than Cloud Shell in the Bash environment, you may need to escape certain characters in the inline JSON so that it's parsed correctly.
+ >
+ >For more information, see [Use special characters in different shells](concepts-cli.md#use-special-characters-in-different-shells).
The output from each command will show information about the successfully created twin (including properties for the room twins that were initialized with them).
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
To publish the function app to Azure, you'll first need to create a storage acco
This command publishes the project to the *digital-twins-samples-master\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory.
- 1. Create a zip of the published files that are located in the *digital-twins-samples-master\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory.
+ 1. Create a zip of the published files that are located in the *digital-twins-samples-master\AdtSampleApp\SampleFunctionsApp\bin\Release\netcoreapp3.1\publish* directory. Name the zipped folder *publish.zip*.
- If you're using PowerShell, you can create the zip by copying the full path to that *\publish* directory and pasting it into the following command:
-
- ```powershell
- Compress-Archive -Path <full-path-to-publish-directory>\* -DestinationPath .\publish.zip
- ```
+ >[!TIP]
+ >If you're using PowerShell, you can create the zip by copying the full path to that *\publish* directory and pasting it into the following command:
+ >
+ >```powershell
+ >Compress-Archive -Path <full-path-to-publish-directory>\* -DestinationPath .\publish.zip
+ >```
+ > The cmdlet will create the *publish.zip* file in the directory location of your terminal.
- The cmdlet will create a *publish.zip* file in the directory location of your terminal that includes a *host.json* file, as well as *bin*, *ProcessDTRoutedData*, and *ProcessHubToDTEvents* directories.
+ Your *publish.zip* file should contain folders for *bin*, *ProcessDTRoutedData*, and *ProcessHubToDTEvents*, and there should also be a *host.json* file.
- If you're not using PowerShell and don't have access to the `Compress-Archive` cmdlet, you'll need to zip up the files using the File Explorer or another method.
+ :::image type="content" source="media/tutorial-end-to-end/publish-zip.png" alt-text="Screenshot of File Explorer in Windows showing the contents of the publish zip folder.":::
1. In the Azure CLI, run the following command to deploy the published and zipped functions to your Azure function app:
To publish the function app to Azure, you'll first need to create a storage acco
az functionapp deployment source config-zip --resource-group <resource-group> --name <name-of-your-function-app> --src "<full-path-to-publish.zip>" ```
- > [!NOTE]
+ > [!TIP]
> If you're using the Azure CLI locally, you can access the ZIP file on your computer directly using its path on your machine.
>
> If you're using the Azure Cloud Shell, upload the ZIP file to Cloud Shell with this button before running the command:
dms Create Dms Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-bicep.md
+
+ Title: Create instance of DMS (Bicep)
+description: Learn how to create Database Migration Service by using Bicep.
++++ Last updated : 03/21/2022+++
+# Quickstart: Create instance of Azure Database Migration Service using Bicep
+
+Use Bicep to deploy an instance of the Azure Database Migration Service.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-database-migration-simple-deploy/).
++
+Three Azure resources are defined in the Bicep file:
+
+- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks): Creates the virtual network.
+- [Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Creates the subnet.
+- [Microsoft.DataMigration/services](/azure/templates/microsoft.datamigration/services): Deploys an instance of the Azure Database Migration Service.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters serviceName=<service-name> vnetName=<vnet-name> subnetName=<subnet-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -serviceName "<service-name>" -vnetName "<vnet-name>" -subnetName "<subnet-name>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<service-name\>** with the name of the new migration service. Replace **\<vnet-name\>** with the name of the new virtual network. Replace **\<subnet-name\>** with the name of the new subnet associated with the virtual network.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+For other ways to deploy Azure Database Migration Service, see [Azure portal](quickstart-create-data-migration-service-portal.md).
+
+To learn more, see [an overview of Azure Database Migration Service](dms-overview.md).
dms Create Dms Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-resource-manager-template.md
# Quickstart: Create instance of Azure Database Migration Service using ARM template
-Use this Azure Resource Manager template (ARM template) to deploy an instance of the Azure Database Migration Service.
+Use this Azure Resource Manager template (ARM template) to deploy an instance of the Azure Database Migration Service.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites
-The Azure Database Migration Service ARM template requires the following:
+The Azure Database Migration Service ARM template requires the following:
-- The latest version of the [Azure CLI](/cli/azure/install-azure-cli) and/or [PowerShell](/powershell/scripting/install/installing-powershell).
+- The latest version of the [Azure CLI](/cli/azure/install-azure-cli) and/or [PowerShell](/powershell/scripting/install/installing-powershell).
- An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Review the template
The template used in this quickstart is from [Azure Quickstart Templates](https:
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.datamigration/azure-database-migration-simple-deploy/azuredeploy.json":::
-Three Azure resources are defined in the template:
+Three Azure resources are defined in the template:
-- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks): Creates the virtual network.
-- [Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Creates the subnet.
-- [Microsoft.DataMigration/services](/azure/templates/microsoft.datamigration/services): Deploys an instance of the Azure Database Migration Service.
+- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks): Creates the virtual network.
+- [Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Creates the subnet.
+- [Microsoft.DataMigration/services](/azure/templates/microsoft.datamigration/services): Deploys an instance of the Azure Database Migration Service.
More Azure Database Migration Services templates can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Datamigration&pageNumber=1&sort=Popular).

## Deploy the template
-1. Select the following image to sign in to Azure and open a template. The template creates an instance of the Azure Database Migration Service.
+1. Select the following image to sign in to Azure and open a template. The template creates an instance of the Azure Database Migration Service.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.datamigration%2fazure-database-migration-simple-deploy%2fazuredeploy.json)

2. Select or enter the following values.
   * **Subscription**: Select an Azure subscription.
- * **Resource group**: Select an existing resource group from the drop down, or select **Create new** to create a new resource group.
+ * **Resource group**: Select an existing resource group from the drop down, or select **Create new** to create a new resource group.
   * **Region**: Location where the resources will be deployed.
   * **Service Name**: Name of the new migration service.
   * **Location**: The location of the resource group, leave as the default of `[resourceGroup().location]`.
More Azure Database Migration Services templates can be found in the [quickstart
-3. Select **Review + create**. After the instance of Azure Database Migration Service has been deployed successfully, you get a notification.
+3. Select **Review + create**. After the instance of Azure Database Migration Service has been deployed successfully, you get a notification.
The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use Azure PowerShell, the Azure CLI, and the REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md).

## Review deployed resources
-You can use the Azure CLI to check deployed resources.
+You can use the Azure CLI to check deployed resources.
```azurecli-interactive
For a step-by-step tutorial that guides you through the process of creating a te
> [!div class="nextstepaction"]
> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
-For other ways to deploy Azure Database Migration Service, see:
+For other ways to deploy Azure Database Migration Service, see:
- [Azure portal](quickstart-create-data-migration-service-portal.md)

To learn more, see [an overview of Azure Database Migration Service](dms-overview.md)
event-grid Cloud Event Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloud-event-schema.md
In addition to its [default event schema](event-schema.md), Azure Event Grid nat
CloudEvents simplifies interoperability by providing a common event schema for publishing and consuming cloud-based events. This schema allows for uniform tooling, standard ways of routing and handling events, and universal ways of deserializing the outer event schema. With a common schema, you can more easily integrate work across platforms.
-CloudEvents is being built by several [collaborators](https://github.com/cloudevents/spec/blob/master/community/contributors.md), including Microsoft, through the [Cloud Native Computing Foundation](https://www.cncf.io/). It's currently available as version 1.0.
+CloudEvents is being built by several [collaborators](https://github.com/cloudevents/spec/blob/main/docs/contributors.md), including Microsoft, through the [Cloud Native Computing Foundation](https://www.cncf.io/). It's currently available as version 1.0.
This article describes CloudEvents schema with Event Grid.
event-grid Cloudevents Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloudevents-schema.md
In addition to its [default event schema](event-schema.md), Azure Event Grid nat
CloudEvents simplifies interoperability by providing a common event schema for publishing and consuming cloud-based events. This schema allows for uniform tooling, standard ways of routing and handling events, and universal ways of deserializing the outer event schema. With a common schema, you can more easily integrate work across platforms.
-CloudEvents is being built by several [collaborators](https://github.com/cloudevents/spec/blob/master/community/contributors.md), including Microsoft, through the [Cloud Native Computing Foundation](https://www.cncf.io/). It's currently available as version 1.0.
+CloudEvents is being built by several [collaborators](https://github.com/cloudevents/spec/blob/main/docs/contributors.md), including Microsoft, through the [Cloud Native Computing Foundation](https://www.cncf.io/). It's currently available as version 1.0.
This article describes how to use the CloudEvents schema with Event Grid.
module.exports = function (context, req) {
## Next steps

* For information about monitoring event deliveries, see [Monitor Event Grid message delivery](monitor-event-delivery.md).
-* We encourage you to test, comment on, and [contribute to CloudEvents](https://github.com/cloudevents/spec/blob/master/community/CONTRIBUTING.md).
+* We encourage you to test, comment on, and [contribute to CloudEvents](https://github.com/cloudevents/spec/blob/main/docs/CONTRIBUTING.md).
* For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md).
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
| --- | --- | --- | --- | --- | --- | --- |
-| [BitsInPerSecond](#directin) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Link | Yes |
-| [BitsOutPerSecond](#directout) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Link | Yes |
-| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Link | Yes |
-| DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | Link | Yes |
-| [AdminState](#admin) | Physical Connectivity | Count | Average | Admin state of the port | Link | Yes |
-| [LineProtocol](#line) | Physical Connectivity | Count | Average | Line protocol status of the port | Link | Yes |
-| [RxLightLevel](#rxlight) | Physical Connectivity | Count | Average | Rx Light level in dBm | Link, Lane | Yes |
-| [TxLightLevel](#txlight) | Physical Connectivity | Count | Average | Tx light level in dBm | Link, Lane | Yes |
+| [BitsInPerSecond](#directin) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Link | No |
+| [BitsOutPerSecond](#directout) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Link | No |
+| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Link | No |
+| DroppedOutBitsPerSecond | Traffic | BitsPerSecond | Average | Egress bits of data dropped per second | Link | No |
+| [AdminState](#admin) | Physical Connectivity | Count | Average | Admin state of the port | Link | No |
+| [LineProtocol](#line) | Physical Connectivity | Count | Average | Line protocol status of the port | Link | No |
+| [RxLightLevel](#rxlight) | Physical Connectivity | Count | Average | Rx Light level in dBm | Link, Lane | No |
+| [TxLightLevel](#txlight) | Physical Connectivity | Count | Average | Tx light level in dBm | Link, Lane | No |
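If you want to retrieve one of these metrics programmatically rather than through metrics explorer, a minimal Azure CLI sketch might look like the following; the resource ID, metric name, and interval are placeholders you'd adjust for your ExpressRoute Direct port:

```azurecli
az monitor metrics list \
  --resource <expressroute-direct-port-resource-id> \
  --metric "BitsInPerSecond" \
  --aggregation Average \
  --interval PT5M
```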
## Circuits metrics
frontdoor Quickstart Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-bicep.md
+
+ Title: 'Quickstart: Create an Azure Front Door Service using Bicep'
+description: This quickstart describes how to create an Azure Front Door Service using Bicep.
+
+documentationcenter:
++ Last updated : 03/30/2022+++
+ na
+
+#Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
++
+# Quickstart: Create a Front Door using Bicep
+
+This quickstart describes how to use Bicep to create a Front Door to set up high availability for a web endpoint.
++
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* IP or FQDN of a website or web application.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/front-door-create-basic).
+
+In this quickstart, you'll create a Front Door configuration with a single backend and a single default path matching `/*`.
++
+One Azure resource is defined in the Bicep file:
+
+* [**Microsoft.Network/frontDoors**](/azure/templates/microsoft.network/frontDoors)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters frontDoorName=<door-name> backendAddress=<backend-address>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -frontDoorName "<door-name>" -backendAddress "<backend-address>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<door-name\>** with the name of the Front Door resource. Replace **\<backend-address\>** with the hostname of the backend. It must be an IP address or FQDN.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Validate the deployment
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the Front Door service and the resource group. This removes the Front Door and all the related resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created a Front Door.
+
+To learn how to add a custom domain to your Front Door, continue to the Front Door tutorials.
+
+> [!div class="nextstepaction"]
+> [Front Door tutorials](front-door-custom-domain.md)
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
functions](../../../azure-resource-manager/templates/template-functions.md) are
within a policy rule, except the following functions and user-defined functions:

- copyIndex()
+- dateTimeAdd()
- deployment()
+- environment()
+- extensionResourceId()
+- listAccountSas()
+- listKeys()
+- listSecrets()
- list*
+- managementGroup()
- newGuid()
- pickZones()
- providers()
- reference()
- resourceId()
+- subscriptionResourceId()
+- tenantResourceId()
+- tenant()
+- utcNow(format)
- variables()

> [!NOTE]
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 03/14/2022 Last updated : 04/01/2022 # Details of the CMMC Level 3 Regulatory Compliance built-in initiative The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in CMMC Level 3.
+definition maps to **compliance domains** and **controls** in Cybersecurity Maturity Model Certification (CMMC) Level 3.
For more information about this compliance standard, see [CMMC Level 3](https://www.acq.osd.mil/cmmc/documentation.html). To understand _Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and
Then, find and select the **CMMC Level 3** Regulatory Compliance built-in
initiative definition. > [!IMPORTANT]
+> This policy initiative was built to CMMC version 1.0 and will be updated in the future.
+> CMMC Level 2 under CMMC 2.0 is similar to CMMC Level 3 under CMMC 1.0, but has different control mappings.
+>
> Each control below is associated with one or more [Azure Policy](../overview.md) definitions. > These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the > control; however, there often is not a one-to-one or complete match between a control and one or
initiative definition.
> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your > overall compliance status. The associations between compliance domains, controls, and Azure Policy > definitions for this compliance standard may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/CMMC_L3.json).
+> [GitHub commit history](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/CMMC_L3.json).
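If you'd rather locate the initiative definition from the command line, here's a minimal Azure CLI sketch (it assumes the display name matches what's shown in the portal):

```azurecli
az policy set-definition list --query "[?displayName=='CMMC Level 3']"
```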
## Access Control
Additional articles about Azure Policy:
- See the [initiative definition structure](../concepts/initiative-definition-structure.md). - Review other examples at [Azure Policy samples](./index.md). - Review [Understanding policy effects](../concepts/effects.md).-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
hdinsight Apache Domain Joined Configure Using Azure Adds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.md
description: Learn how to set up and configure an HDInsight cluster integrated w
Previously updated : 10/30/2020 Last updated : 04/01/2022 # Configure HDInsight clusters for Azure Active Directory integration with Enterprise Security Package
hdinsight Apache Hadoop Etl At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-etl-at-scale.md
description: Learn how extract, transform, and load is used in HDInsight with Ap
Previously updated : 04/28/2020 Last updated : 04/01/2022 # Extract, transform, and load (ETL) at scale
hdinsight Using Json In Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/using-json-in-hive.md
description: Learn how to use JSON documents and analyze them by using Apache Hi
Previously updated : 04/20/2020 Last updated : 04/01/2022 # Process and analyze JSON documents by using Apache Hive in Azure HDInsight
hdinsight Hdinsight Administer Use Portal Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-portal-linux.md
description: Learn how to create and manage Azure HDInsight clusters using the A
Previously updated : 04/24/2020 Last updated : 04/01/2022 # Manage Apache Hadoop clusters in HDInsight by using the Azure portal
In this article, you learned some basic administrative functions. To learn more,
- [Use Apache Hive in HDInsight](hadoop/hdinsight-use-hive.md)
- [Use Apache Sqoop in HDInsight](hadoop/hdinsight-use-sqoop.md)
- [Use Python User Defined Functions (UDF) with Apache Hive and Apache Pig in HDInsight](hadoop/python-udf-hdinsight.md)
-- [What version of Apache Hadoop is in Azure HDInsight?](hdinsight-component-versioning.md)
+- [What version of Apache Hadoop is in Azure HDInsight?](hdinsight-component-versioning.md)
hdinsight Hdinsight Hadoop Manage Ambari https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-manage-ambari.md
description: Learn how to use Apache Ambari UI to monitor and manage HDInsight c
Previously updated : 01/12/2021 Last updated : 04/01/2022 # Manage HDInsight clusters by using the Apache Ambari Web UI
The following Ambari operations aren't supported on HDInsight:
* [Apache Ambari REST API](hdinsight-hadoop-manage-ambari-rest-api.md) with HDInsight.
* [Use Apache Ambari to optimize HDInsight cluster configurations](./hdinsight-changing-configs-via-ambari.md)
-* [Scale Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
+* [Scale Azure HDInsight clusters](./hdinsight-scaling-best-practices.md)
hdinsight Hdinsight Plan Virtual Network Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-plan-virtual-network-deployment.md
description: Learn how to plan an Azure Virtual Network deployment to connect HD
Previously updated : 01/12/2021 Last updated : 04/01/2022 # Plan a virtual network for Azure HDInsight
hdinsight Hdinsight Use External Metadata Stores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-external-metadata-stores.md
description: Use external metadata stores with Azure HDInsight clusters.
Previously updated : 08/06/2020 Last updated : 04/01/2022 # Use external metadata stores in Azure HDInsight
hdinsight Hdinsight Using Spark Query Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-using-spark-query-hbase.md
description: Use the Spark HBase Connector to read and write data from a Spark c
Previously updated : 08/12/2020 Last updated : 04/01/2022 # Use Apache Spark to read and write Apache HBase data
hdinsight Hdinsight Virtual Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-virtual-network-architecture.md
Title: Azure HDInsight virtual network architecture
description: Learn the resources available when you create an HDInsight cluster in an Azure Virtual Network. Previously updated : 04/14/2020 Last updated : 04/01/2022 # Azure HDInsight virtual network architecture
hdinsight Apache Hive Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector.md
Previously updated : 05/28/2020 Last updated : 04/01/2022 # Integrate Apache Spark and Apache Hive with Hive Warehouse Connector in Azure HDInsight
hdinsight Apache Kafka Streams Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-streams-api.md
description: Tutorial - Learn how to use the Apache Kafka Streams API with Kafka
Previously updated : 03/20/2020 Last updated : 04/01/2022 #Customer intent: As a developer, I need to create an application that uses the Kafka streams API with Kafka on HDInsight
To remove the resource group using the Azure portal:
In this document, you learned how to use the Apache Kafka Streams API with Kafka on HDInsight. Use the following to learn more about working with Kafka. > [!div class="nextstepaction"]
-> [Analyze Apache Kafka logs](apache-kafka-log-analytics-operations-management.md)
+> [Analyze Apache Kafka logs](apache-kafka-log-analytics-operations-management.md)
hdinsight Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/rest-proxy.md
description: Learn how to do Apache Kafka operations using a Kafka REST proxy on
Previously updated : 04/03/2020 Last updated : 04/01/2022 # Interact with Apache Kafka clusters in Azure HDInsight using a REST proxy
hdinsight Quota Increase Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/quota-increase-request.md
description: Learn the process to request an increase for the CPU cores allocate
Previously updated : 05/07/2020 Last updated : 04/01/2022 # Requesting quota increases for Azure HDInsight
There are some fixed quota limits. For example, a single Azure subscription can
## Next steps * [Set up clusters in HDInsight with Apache Hadoop, Spark, Kafka, and more](hdinsight-hadoop-provision-linux-clusters.md): Learn how to set up and configure clusters in HDInsight.
-* [Monitor cluster performance](hdinsight-key-scenarios-to-monitor.md): Learn about key scenarios to monitor for your HDInsight cluster that might affect your cluster's capacity.
+* [Monitor cluster performance](hdinsight-key-scenarios-to-monitor.md): Learn about key scenarios to monitor for your HDInsight cluster that might affect your cluster's capacity.
hdinsight Apache Spark Jupyter Spark Sql Use Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql-use-portal.md
description: This quickstart shows how to use the Azure portal to create an Apac
Previously updated : 02/25/2020 Last updated : 04/01/2022 #Customer intent: As a developer new to Apache Spark on Azure, I need to see how to create a Spark cluster and query some data.
hdinsight Apache Spark Livy Rest Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-livy-rest-interface.md
description: Learn how to use Apache Spark REST API to submit Spark jobs remotel
Previously updated : 02/28/2020 Last updated : 04/01/2022 # Use Apache Spark REST API to submit remote jobs to an HDInsight Spark cluster
If you connect to an HDInsight Spark cluster from within an Azure Virtual Networ
* [Apache Livy REST API documentation](https://livy.incubator.apache.org/docs/latest/rest-api.html) * [Manage resources for the Apache Spark cluster in Azure HDInsight](apache-spark-resource-manager.md)
-* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
+* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
hdinsight Apache Spark Troubleshoot Outofmemory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-outofmemory.md
Title: OutOfMemoryError exceptions for Apache Spark in Azure HDInsight
description: Various OutOfMemoryError exceptions for Apache Spark cluster in Azure HDInsight Previously updated : 08/15/2019 Last updated : 03/31/2022 # OutOfMemoryError exceptions for Apache Spark in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
industrial-iot Tutorial Configure Industrial Iot Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-configure-industrial-iot-components.md
output of deployment script or reset the password
* IoT Hub → IoT Edge → \<DEVICE> → Set Modules → OpcPublisher (for standalone OPC Publisher operation only)
-## Configuration options
+## OPC Publisher 2.8.2 configuration options for orchestrated mode
-|Configuration Option (shorthand/full name) | Description |
-|-||
-pf/publishfile |The filename to configure the nodes to publish. If this option is specified, it puts OPC Publisher into standalone mode.
-lf/logfile |The filename of the logfile to use.
-ll/loglevel |The log level to use (allowed: fatal, error, warn, info, debug, verbose).
-me/messageencoding |The messaging encoding for outgoing messages allowed values: Json, Uadp
-mm/messagingmode |The messaging mode for outgoing messages allowed values: PubSub, Samples
-fm/fullfeaturedmessage |The full featured mode for messages (all fields filled in). Default is 'true', for legacy compatibility use 'false'
-aa/autoaccept |The publisher trusted all servers it's a connection to
-bs/batchsize |The number of OPC UA data-change messages to be cached for batching.
-si/iothubsendinterval |The trigger batching interval in seconds.
-ms/iothubmessagesize |The maximum size of the (IoT D2C) message.
-om/maxoutgressmessages |The maximum size of the (IoT D2C) message egress buffer.
-di/diagnosticsinterval |Shows publisher diagnostic info at the specified interval in seconds (need log level info). -1 disables remote diagnostic log and diagnostic output
-lt/logflugtimespan |The timespan in seconds when the logfile should be flushed.
-ih/iothubprotocol |Protocol to use for communication with the hub. Allowed values: AmqpOverTcp, AmqpOverWebsocket, MqttOverTcp, MqttOverWebsocket, Amqp, Mqtt, Tcp, Websocket, Any
-hb/heartbeatinterval |The publisher is using this as default value in seconds for the heartbeat interval setting of nodes without a heartbeat interval setting.
-ot/operationtimeout |The operation timeout of the publisher OPC UA client in ms.
-ol/opcmaxstringlen |The max length of a string opc can transmit/receive.
-oi/opcsamplinginterval |Default value in milliseconds to request the servers to sample values
-op/opcpublishinginterval |Default value in milliseconds for the publishing interval setting of the subscriptions against the OPC UA server.
-ct/createsessiontimeout |The interval the publisher is sending keep alive messages in seconds to the OPC servers on the endpoints it's connected to.
-kt/keepalivethresholt |Specify the number of keep alive packets a server can miss, before the session is disconnected.
-tm/trustmyself |The publisher certificate is put into the trusted store automatically.
-at/appcertstoretype |The own application cert store type (allowed: Directory, X509Store).
+The following OPC Publisher configuration can be applied through Command Line Interface (CLI) options or as environment variable settings. When both an environment variable and the corresponding CLI argument are provided, the CLI argument takes precedence over the environment variable. A sketch of applying these settings follows the table.
+
+|Configuration Option | Description | Default |
+|-||--|
+site=VALUE |The site OPC Publisher is assigned to. |Not set
+AutoAcceptUntrustedCertificates=VALUE |OPC UA Client Security Config - auto accept untrusted peer certificates. |false
+BatchSize=VALUE |The number of OPC UA data-change messages to be cached for batching. When BatchSize is 1 or TriggerInterval is set to 0 batching is disabled. |50
+BatchTriggerInterval=VALUE |The trigger batching interval in seconds. When BatchSize is 1 or TriggerInterval is set to 0 batching is disabled. |{00:00:10}
+IoTHubMaxMessageSize=VALUE |The maximum size of the (IoT D2C) telemetry message. |0
+Transport=VALUE |Protocol to use for communication with the hub. Allowed values: AmqpOverTcp, AmqpOverWebsocket, MqttOverTcp, MqttOverWebsocket, Amqp, Mqtt, Tcp, Websocket, Any. |MqttOverTcp
+BypassCertVerification=VALUE |Enables/disables bypass of certificate verification for upstream communication to edgeHub. |false
+EnableMetrics=VALUE |Enables/disables upstream metrics propagation. |true
+OperationTimeout=VALUE |OPC UA Stack Transport Secure Channel - OPC UA Service call operation timeout |120,000 (2 min)
+MaxStringLength=VALUE |OPC UA Stack Transport Secure Channel - Maximum length of a string that can be sent/received over the OPC UA Secure channel. |130,816 (128KB - 256)
+DefaultSessionTimeout=VALUE |The interval the OPC Publisher is sending keep alive messages in seconds to the OPC servers on the endpoints it's connected to. |0, meaning not set
+MinSubscriptionLifetime=VALUE | OPC UA Client Application Config - Minimum subscription lifetime as per OPC UA definition. |0, meaning not set
+AddAppCertToTrustedStore=VALUE |OPC UA Client Security Config - automatically copy own certificate's public key to the trusted certificate store |true
+ApplicationName=VALUE |OPC UA Client Application Config - Application name as per OPC UA definition. This is used for authentication during communication init handshake and as part of own certificate validation. |"Microsoft.Azure.IIoT"
+ApplicationUri=VALUE | OPC UA Client Application Config - Application URI as per OPC UA definition. |$"urn:localhost:{ApplicationName}:microsoft:"
+KeepAliveInterval=VALUE |OPC UA Client Application Config - Keep alive interval as per OPC UA definition. |10,000 (10s)
+MaxKeepAliveCount=VALUE |OPC UA Client Application Config - Maximum count of keep alive events as per OPC UA definition. | 50
+PkiRootPath=VALUE | OPC UA Client Security Config - PKI certificate store root path. |"pki"
+ApplicationCertificateStorePath=VALUE |OPC UA Client Security Config - application's own certificate store path. |$"{PkiRootPath}/own"
+ApplicationCertificateStoreType=VALUE |The own application cert store type (allowed: Directory, X509Store). |Directory
+ApplicationCertificateSubjectName=VALUE |OPC UA Client Security Config - the subject name in the application's own certificate. |"CN=Microsoft.Azure.IIoT, C=DE, S=Bav, O=Microsoft, DC=localhost"
+TrustedIssuerCertificatesPath=VALUE |OPC UA Client Security Config - trusted certificate issuer store path. |$"{PkiRootPath}/issuers"
+TrustedIssuerCertificatesType=VALUE | OPC UA Client Security Config - trusted issuer certificates store type. |Directory
+TrustedPeerCertificatesPath=VALUE | OPC UA Client Security Config - trusted peer certificates store path. |$"{PkiRootPath}/trusted"
+TrustedPeerCertificatesType=VALUE | OPC UA Client Security Config - trusted peer certificates store type. |Directory
+RejectedCertificateStorePath=VALUE | OPC UA Client Security Config - rejected certificates store path. |$"{PkiRootPath}/rejected"
+RejectedCertificateStoreType=VALUE | OPC UA Client Security Config - rejected certificates store type. |Directory
+RejectSha1SignedCertificates=VALUE | OPC UA Client Security Config - reject deprecated Sha1 signed certificates. |false
+MinimumCertificateKeySize=VALUE | OPC UA Client Security Config - minimum accepted certificates key size. |1024
+SecurityTokenLifetime=VALUE | OPC UA Stack Transport Secure Channel - Security token lifetime in milliseconds. |3,600,000 (1h)
+ChannelLifetime=VALUE | OPC UA Stack Transport Secure Channel - Channel lifetime in milliseconds. |300,000 (5 min)
+MaxBufferSize=VALUE | OPC UA Stack Transport Secure Channel - Max buffer size. |65,535 (64KB -1)
+MaxMessageSize=VALUE | OPC UA Stack Transport Secure Channel - Max message size. |4,194,304 (4 MB)
+MaxArrayLength=VALUE | OPC UA Stack Transport Secure Channel - Max array length. |65,535 (64KB - 1)
+MaxByteStringLength=VALUE | OPC UA Stack Transport Secure Channel - Max byte string length. |1,048,576 (1 MB)
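As mentioned above, here is a minimal sketch of applying a few of these settings as environment variables, for example when running the OPC Publisher container directly for testing; the image tag and the values are illustrative:

```bash
# Pass OPC Publisher settings as environment variables (values illustrative).
docker run \
  -e BatchSize=50 \
  -e BatchTriggerInterval=00:00:10 \
  -e Transport=MqttOverTcp \
  mcr.microsoft.com/iotedge/opc-publisher:2.8.2
```

If the same setting is also supplied as a CLI argument, the CLI argument takes precedence, as noted above the table.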
## Next steps
iot-dps Quick Setup Auto Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision.md
You'll need an Azure subscription to begin with this article. You can create a [
1. In the Azure portal, select **+ Create a resource** .
-2. *Search the Marketplace* for the **Device Provisioning Service**. Select **IoT Hub Device Provisioning Service**.
+1. From the **Categories** menu, select **Internet of Things**, then **IoT Hub Device Provisioning Service**.
3. Select **Create**.
iot-hub Iot Hub Protocol Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-protocol-gateway.md
- Title: Azure IoT protocol gateway | Microsoft Docs
-description: How to use an Azure IoT protocol gateway to extend IoT Hub capabilities and protocol support to enable devices to connect to your hub using protocols not supported by IoT Hub natively.
------ Previously updated : 03/17/2022----
-# Support additional protocols for IoT Hub
-
-Azure IoT Hub natively supports communication over the MQTT, AMQP, and HTTPS protocols. In some cases, devices or field gateways might not be able to use one of these standard protocols and require protocol adaptation. In such cases, you can use a custom gateway. A custom gateway enables protocol adaptation for IoT Hub endpoints by bridging the traffic to and from IoT Hub. You can use the [Azure IoT protocol gateway](https://github.com/Azure/azure-iot-protocol-gateway/blob/master/README.md) as a custom gateway to enable protocol adaptation for IoT Hub.
-
->[!NOTE]
->The Azure IoT protocol gateway is no longer the recommended method for protocol adaptation. Instead, consider using Azure IoT Edge as a gateway.
->
->For more information, see [How an IoT Edge device can be used as a gateway](../iot-edge/iot-edge-as-gateway.md).
-
-## Azure IoT protocol gateway
-
-The Azure IoT protocol gateway is a framework for protocol adaptation that is designed for high-scale, bidirectional device communication with IoT Hub. The protocol gateway is a pass-through component that accepts device connections over a specific protocol. It bridges the traffic to IoT Hub over AMQP 1.0.
-
-You can deploy the protocol gateway in Azure in a highly scalable way by using Azure Service Fabric, Azure Cloud Services worker roles, or Windows Virtual Machines. In addition, the protocol gateway can be deployed in on-premises environments, such as field gateways.
-
-The Azure IoT protocol gateway includes an MQTT protocol adapter that enables you to customize the MQTT protocol behavior if necessary. Since IoT Hub provides built-in support for the MQTT v3.1.1 protocol, you should only consider using the MQTT protocol adapter if protocol customizations or specific requirements for additional functionality are required.
-
-The MQTT adapter also demonstrates the programming model for building protocol adapters for other protocols. In addition, the Azure IoT protocol gateway programming model allows you to plug in custom components for specialized processing such as custom authentication, message transformations, compression/decompression, or encryption/decryption of traffic between the devices and IoT Hub.
-
-For flexibility, the Azure IoT protocol gateway and MQTT implementation are provided in an open-source software project. You can use the open-source project to add support for various protocols and protocol versions, or customize the implementation for your scenario.
-
-## Next steps
-
-To learn more about the Azure IoT protocol gateway and how to use and deploy it as part of your IoT solution, see:
-
-* [Azure IoT protocol gateway repository on GitHub](https://github.com/Azure/azure-iot-protocol-gateway/blob/master/README.md)
load-balancer Load Balancer Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-overview.md
Details
Limitations * You cannot add IPv6 load balancing rules in the Azure portal. The rules can only be created through a template, the CLI, or PowerShell (a CLI sketch follows this list).
-* You may not upgrade existing VMs to use IPv6 addresses. You must deploy new VMs.
* A single IPv6 address can be assigned to a single network interface in each VM.
-* The public IPv6 addresses cannot be assigned to a VM. They can only be assigned to a load balancer.
* You cannot configure the reverse DNS lookup for your public IPv6 addresses. * The VMs with the IPv6 addresses cannot be members of an Azure Cloud Service. They can be connected to an Azure Virtual Network (VNet) and communicate with each other over their IPv4 addresses. * Private IPv6 addresses can be deployed on individual VMs in a resource group but cannot be deployed into a resource group via Scale Sets.
Limitations
* Changing the loadDistributionMethod parameter for IPv6 is **currently not supported**. * IPv6 for Basic Load Balancer is locked to a **Dynamic** SKU. IPv6 for a Standard Load Balancer is locked to a **Static** SKU. * NAT64 (translation of IPv6 to IPv4) is not supported.
-* Attaching a secondary NIC that refers to an IPv6 subnet to a back-end pool is **currently not supported**.
+* Attaching a secondary NIC that refers to an IPv6 subnet to a back-end pool is **not supported** for Basic Load Balancer.
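As an illustration, a load-balancing rule that references an IPv6 frontend can be created with the Azure CLI. This is a sketch only, assuming the load balancer, its IPv6 frontend IP configuration, and the backend pool already exist; all resource names are placeholders:

```azurecli
az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myIPv6Rule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name myIPv6FrontEnd \
  --backend-pool-name myIPv6BackEndPool
```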
## Next steps
Learn how to deploy a load balancer with IPv6.
* [Availability of IPv6 by region](https://go.microsoft.com/fwlink/?linkid=828357) * [Deploy a load balancer with IPv6 using a template](load-balancer-ipv6-internet-template.md) * [Deploy a load balancer with IPv6 using Azure PowerShell](load-balancer-ipv6-internet-ps.md)
-* [Deploy a load balancer with IPv6 using Azure CLI](load-balancer-ipv6-internet-cli.md)
+* [Deploy a load balancer with IPv6 using Azure CLI](load-balancer-ipv6-internet-cli.md)
load-testing Concept Load Testing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-testing-concepts.md
+
+ Title: Key concepts for Azure Load Testing
+
+description: Learn how Azure Load Testing works, and the key concepts behind it.
+++++ Last updated : 03/30/2022+++
+<!--
+ Customer intent:
+ As a developer I want to understand the Azure Load Testing concepts so that I can set up a load test to identify performance issues in my application.
+ -->
+
+# Key concepts for new Azure Load Testing Preview users
+
+Learn about the key concepts and components of Azure Load Testing Preview. This knowledge helps you set up a load test that identifies performance issues in your application more effectively.
+
+> [!IMPORTANT]
+> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
++
+## Load testing resource
+
+The Load testing resource is the top-level resource for your load-testing activities. This resource provides a centralized place to view and manage load tests, test results, and related artifacts. A load testing resource contains zero or more [load tests](#test).
+
+When you create a load test resource, you specify its location, which determines the location of the [test engines](#test-engine).
+
+You can use [Azure role-based access control](./how-to-assign-roles.md) for granting access to your load testing resource.
+
+Azure Load Testing can use Azure Key Vault for [storing secret parameters](./how-to-parameterize-load-tests.md). You can [use either a user-assigned or system-assigned managed identity](./how-to-use-a-managed-identity.md) for your load testing resource.
+
+## Test
+
+A test specifies the test script and the configuration settings for running a load test. You can create one or more tests in an Azure Load Testing resource.
+
+The configuration of a load test consists of the following items (a minimal YAML sketch follows the list):
+
+- The test name and description.
+- The Apache JMeter test script and related data and configuration files. For example, a [CSV data file](./how-to-read-csv-data.md).
+- [Environment variables](./how-to-parameterize-load-tests.md).
+- [Secret parameters](./how-to-parameterize-load-tests.md).
+- The number of [test engines](#test-engine) to run the test script on.
+- The [pass/fail criteria](./how-to-define-test-criteria.md) for the test.
+- The list of [app components and resource metrics to monitor](./how-to-monitor-server-side-metrics.md) during the test execution.
+
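As referenced above, here is a minimal sketch of a test configuration YAML file; the file names and values are illustrative, and the full schema is described in the [Test configuration YAML reference](./reference-test-config-yaml.md):

```yaml
version: v0.1
testName: SampleTest
testPlan: SampleTest.jmx        # Apache JMeter test script
description: Load test the application home page
engineInstances: 1              # number of test engines that run the script
configurationFiles:
  - search-params.csv           # data file referenced by the test script
env:
  - name: threads               # environment variable passed to the script
    value: "5"
```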
+When you run a test, a [test run](#test-run) instance is created.
+
+## Test engine
+
+A test engine is computing infrastructure, managed by Microsoft, that runs the Apache JMeter test script. You can [scale out your load test](./how-to-high-scale-load.md) by configuring the number of test engines. The test script runs in parallel across the specified number of test engines.
+
+The test engines are hosted in the same location as your Azure Load Testing resource.
+
+While the test script runs, Azure Load Testing collects and aggregates the Apache JMeter worker logs. You can [download the logs for analyzing errors during the load test](./how-to-find-download-logs.md).
+
+## Test run
+
+A test run represents one execution of a load test. It collects the logs associated with running the Apache JMeter script, the [load test YAML configuration](./reference-test-config-yaml.md), the list of [app components to monitor](./how-to-monitor-server-side-metrics.md), and the [results of the test](./how-to-export-test-results.md).
+
+During a test run, Azure Load Testing sends the Apache JMeter script to the specified number of [test engines](#test-engine). After the test run, the logs and test results are aggregated and collected from the test engines.
+
+You can [view and analyze the load test results in the Azure Load Testing dashboard](./tutorial-identify-bottlenecks-azure-portal.md) in the Azure portal.
+
+## App component
+
+When you run a load test for an Azure-hosted application, you can monitor resource metrics for the different Azure application components (server-side metrics). While the load test runs, and after completion of the test, you can [monitor and analyze the resource metrics in the Azure Load Testing dashboard](./how-to-monitor-server-side-metrics.md).
+
+When you create or update a load test, you can configure the list of app components that Azure Load Testing will monitor. You can modify the list of default resource metrics for each app component. Learn more about which [Azure resource types are supported by Azure Load Testing](./resource-supported-azure-resource-types.md).
+
+## Metrics
+
+During a load test, Azure Load Testing collects metrics about the test execution. There are two types of metrics:
+
+- *Client-side metrics* give you details reported by the test engine. These metrics include the number of virtual users, the request response time, the number of failed requests, or the number of requests per second. You can [define pass/fail criteria](./how-to-define-test-criteria.md) based on client-side metrics to specify when a test passes or fails; a sketch of the syntax follows this list.
+
+- *Server-side metrics* are available for Azure-hosted applications and provide information about your Azure [application components](#app-component). Azure Load Testing integrates with Azure Monitor, including Application Insights and Container insights, to capture details from the Azure services. Depending on the type of service, different metrics are available. For example, metrics can be for the number of database reads, the type of HTTP responses, or container resource consumption.
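As referenced in the first item of this list, pass/fail criteria are expressed on client-side metrics in the test configuration YAML; a sketch, with illustrative thresholds:

```yaml
failureCriteria:
  - avg(response_time_ms) > 500   # fail the test if average response time exceeds 500 ms
  - percentage(error) > 10        # fail the test if more than 10% of requests fail
```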
+
+## Next steps
+
+You now know the key concepts of Azure Load Testing and can start creating a load test.
+
+- Learn how [Azure Load Testing works](./overview-what-is-azure-load-testing.md#how-does-azure-load-testing-work).
+- Learn how to [Create and run a load test for a website](./quickstart-create-and-run-load-test.md).
+- Learn how to [Identify a performance bottleneck in an Azure application](./tutorial-identify-bottlenecks-azure-portal.md).
+- Learn how to [Set up continuous regression testing with Azure Pipelines](./tutorial-cicd-azure-pipelines.md).
load-testing How To Parameterize Load Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-parameterize-load-tests.md
To specify an environment variable to the load test by using the Azure portal, d
If you run your load test in a CI/CD workflow, you can define environment variables in the YAML test configuration file. For more information about the syntax, see the [Test configuration YAML reference](./reference-test-config-yaml.md).
-Alternatively, you can directly specify environment variables in the CI/CD workflow definition. You use input parameters for the GitHub Action or Azure Pipelines task to pass environment variables to the Apache JMeter script.
+Alternatively, you can directly specify environment variables in the CI/CD workflow definition. You use input parameters for the Azure Load Testing action or Azure Pipelines task to pass environment variables to the Apache JMeter script.
The following YAML snippet shows a GitHub Actions example:
No. The Azure Load Testing service doesn't store the values of secrets. When you
### What happens if I have parameters in both my YAML configuration file and the CI/CD workflow?
-If a parameter exists in both the YAML configuration file and the Azure Pipelines task or GitHub Action, the Azure Pipelines task or GitHub Action value will be used for the test run.
+If a parameter exists in both the YAML configuration file and the Azure Load Testing action or Azure Pipelines task, the value from the CI/CD workflow will be used for the test run.
### I created and ran a test from my CI/CD workflow by passing parameters using the Azure Load Testing task or action. Can I run this test from the Azure portal with the same parameters?
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
adobe-target: true
Azure Load Testing Preview is a fully managed load-testing service that enables you to generate high-scale load. The service simulates traffic for your applications, regardless of where they're hosted. Developers, testers, and quality assurance (QA) engineers can use it to optimize application performance, scalability, or capacity.
-You can create a load test by using existing test scripts based on Apache JMeter, a popular open-source load and performance tool. For Azure-based applications, detailed resource metrics help you identify performance bottlenecks. Continuous integration and continuous deployment (CI/CD) workflows allow you to automate regression testing.
+You can create a load test by using existing test scripts based on Apache JMeter, a popular open-source load and performance tool. For Azure-based applications, detailed resource metrics help you identify performance bottlenecks. Continuous integration and continuous deployment (CI/CD) workflows allow you to automate regression testing. Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
You can trigger Azure Load Testing from Azure Pipelines or GitHub Actions workfl
Start using Azure Load Testing: - [Tutorial: Use a load test to identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md) - [Tutorial: Set up automated load testing](./tutorial-cicd-azure-pipelines.md)
+- Learn about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-create-and-run-load-test.md
This quickstart describes how to create an Azure Load Testing Preview resource b
After you complete this quickstart, you'll have a resource and load test that you can use for other tutorials.
+Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
+ > [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
load-testing Tutorial Cicd Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-azure-pipelines.md
You'll deploy a sample Node.js web app on Azure App Service. The web app uses Az
If you're using GitHub Actions for your CI/CD workflows, see the corresponding [GitHub Actions tutorial](./tutorial-cicd-github-actions.md).
+Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
+ You'll learn how to: > [!div class="checklist"]
load-testing Tutorial Cicd Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-cicd-github-actions.md
You'll deploy a sample Node.js web app on Azure App Service. The web app uses Az
If you're using Azure Pipelines for your CI/CD workflows, see the corresponding [Azure Pipelines tutorial](./tutorial-cicd-azure-pipelines.md).
+Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
+ You'll learn how to: > [!div class="checklist"]
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
In this tutorial, you'll learn how to identify performance bottlenecks in a web
The sample application consists of a Node.js web API, which interacts with a NoSQL database. You'll deploy the web API to Azure App Service web apps and use Azure Cosmos DB as the database.
+Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
+ In this tutorial, you'll learn how to: > [!div class="checklist"]
managed-instance-apache-cassandra Spark Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/spark-migration.md
DFfromSourceCassandra
``` > [!NOTE]
-> If you have a need to preserve or backdate the `writetime` of each row, refer to the [live migration](dual-write-proxy-migration.md) article.
+> If you need to preserve the original `writetime` of each row, refer to the [cassandra migrator](https://github.com/Azure-Samples/cassandra-migrator) sample.
## Next steps
marketplace Plans Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plans-pricing.md
Previously updated : 02/03/2022 Last updated : 04/01/2022 # Plans and pricing for commercial marketplace offers
If you have already set prices for your plan in United States Dollars (USD) and
> [!IMPORTANT] > After your offer is published, the pricing model choice cannot be changed.
+#### Metered billing
+ Flat-rate SaaS offers and managed application offers support metered billing using the marketplace metering service. This is a usage-based billing model that lets you define non-standard units, such as bandwidth or emails, that your customers will pay on a consumption basis. See related documentation to learn more about metered billing for [managed applications](marketplace-metering-service-apis.md) and [SaaS apps](./partner-center-portal/saas-metered-billing.md).
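As a sketch of how consumption is reported against such a custom meter, the marketplace metering service exposes a usage event API; the resource ID, plan ID, and dimension below are illustrative placeholders:

```bash
# Report 150 units of a custom "emails" dimension for a billing period.
curl -X POST "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <access-token>" \
  -d '{
    "resourceId": "<resource-guid>",
    "quantity": 150,
    "dimension": "emails",
    "effectiveStartTime": "2022-04-01T00:00:00Z",
    "planId": "<plan-id>"
  }'
```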
+#### Pricing information specific to offer types
+
+This table provides pricing information that's specific to various offer types.
+
+| Offer type | &#8195; Articles |
+| | - |
+| Azure Application<br> (Managed application plan) | <ul><li>[Plan an Azure managed application for an Azure application offer](plan-azure-app-managed-app.md#define-pricing)</li><li>[Configure a managed application plan](azure-app-managed.md#define-pricing)</li></ul> |
+| Azure Container | <ul><li>[Plan an Azure container offer](marketplace-containers.md#plans-and-pricing)</li></ul> |
+| Azure virtual machine | <ul><li>[Plan a virtual machine offer](marketplace-virtual-machines.md#plans-pricing-and-trials)</li><li>[Configure pricing and availability for a virtual machine offer](azure-vm-plan-pricing-and-availability.md#pricing)</li></ul> |
+| Consulting service | <ul><li>[Plan a consulting service offer](plan-consulting-service-offer.md#pricing-and-availability)</li><li>[Configure consulting service offer pricing and availability](create-consulting-service-pricing-availability.md#pricing-informational-only) |
+| IoT Edge module | <ul><li>[Plan an IoT Edge module offer](marketplace-iot-edge.md#licensing-options)</li></ul> |
+| Managed service | <ul><li>[Plan a Managed Service offer](plan-managed-service-offer.md#plans-and-pricing)</li><li>[Create plans for a Managed Service offer](create-managed-service-offer-plans.md#define-pricing-and-availability) |
+| Power BI app | <ul><li>[Plan a Power BI App offer](marketplace-power-bi.md#licensing-options)</li></ul> |
+| Software as a Service (SaaS) | <ul><li>[SaaS pricing models](plan-saas-offer.md#saas-pricing-models)</li><li>[SaaS billing](plan-saas-offer.md#saas-billing)</li><li>[Create plans for a SaaS offer](create-new-saas-offer-plans.md#define-a-pricing-model)</li></ul> |
+ ## Custom prices To set custom prices in an individual market, export, modify, and then import the pricing spreadsheet. You're responsible for validating this pricing and owning these settings.
mysql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-server-logs.md
The following table describes what's in each log. Depending on the output method
| `query_time_s` | Total time in seconds the query took to execute | | `lock_time_s` | Total time in seconds the query was locked | | `user_host_s` | Username |
-| `rows_sent_s` | Number of rows sent |
+| `rows_sent_d` | Number of rows sent |
| `rows_examined_s` | Number of rows examined | | `last_insert_id_s` | [last_insert_id](https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_last-insert-id) | | `insert_id_s` | Insert ID |
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| China North 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | East Asia (Hong Kong) | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| East US 2 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| East US 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| France Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Japan West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
One advantage of running your workload in Azure is its global reach. The flexibl
| Norway East | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | South Africa North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | South Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | UAE North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
-| UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| UK South | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| UK West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | West Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| West US 2 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| West US 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| West US 3 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | ## Contacts
remote-rendering View Remote Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/view-remote-models/view-remote-models.md
Follow the instructions on how to [add the Azure Remote Rendering and OpenXR pac
![Screenshot of the Unity Color wheel dialog. The color is set to 0 for all R G B A components.](./media/color-wheel-black.png)
-1. Set **Clipping Planes** to *Near = 0.3* and *Far = 20*. This means rendering will clip geometry that is closer than 30 cm or farther than 20 meters.
+1. Set **Clipping Planes** to *Near = 0.1* and *Far = 20*. This means rendering will clip geometry that is closer than 10 cm or farther than 20 meters.
![Screenshot of the Unity inspector for a Camera component.](./media/camera-properties.png)
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-indexing-azure-blob-storage.md
Previously updated : 02/28/2022 Last updated : 03/30/2022 # Index data from Azure Blob Storage
Blob indexers are frequently used for both [AI enrichment](cognitive-search-conc
+ [Access tiers](../storage/blobs/access-tiers-overview.md) for Blob Storage include hot, cool, and archive. Only hot and cool can be accessed by search indexers.
-+ Blobs containing text. If you have binary data, you can include [AI enrichment](cognitive-search-concept-intro.md) for image analysis. Blob content can't exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.
++ Blobs containing text. If blobs contain binary data or unstructured text, consider adding [AI enrichment](cognitive-search-concept-intro.md) for image and natural language processing. Blob content can't exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier. + Read permissions on Azure Storage. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles instead, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Storage Blob Data Reader** permissions.
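As an illustration, a blob data source that authenticates with a full-access connection string can be created through the REST API; a sketch only, with placeholder names and keys:

```bash
# Create a blob data source for an indexer (names and keys are placeholders).
curl -X POST "https://<search-service>.search.windows.net/datasources?api-version=2020-06-30" \
  -H "Content-Type: application/json" \
  -H "api-key: <admin-api-key>" \
  -d '{
    "name": "blob-datasource",
    "type": "azureblob",
    "credentials": { "connectionString": "<storage-connection-string>" },
    "container": { "name": "<container-name>" }
  }'
```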
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Maximum limits on storage, workloads, and quantities of indexes and other object
<sup>1</sup> Basic services created before December 2017 have lower limits (5 instead of 15) on indexes. Basic tier is the only SKU with a lower limit of 100 fields per index. You might find some variation in maximum limits for Basic if your service is provisioned on a more powerful cluster. The limits here represent the common denominator. Indexes built to the above specifications will be portable across service tiers in any region.
-<sup>2</sup> The upper limit on fields includes both first-level fields and nested subfields in a complex collection. For example, if an index contains 15 fields and has two complex collections with 5 subfields each, the field count of your index is 25.
+<sup>2</sup> The upper limit on fields includes both first-level fields and nested subfields in a complex collection. For example, if an index contains 15 fields and has two complex collections with 5 subfields each, the field count of your index is 25. Indexes with a large fields collection can be slow. Limit fields to just those you need, and run indexing and query tests to ensure performance is acceptable.
<sup>3</sup> An upper limit exists for elements because having a large number of them significantly increases the storage required for your index. An element of a complex collection is defined as a member of that collection. For example, assume a [Hotel document with a Rooms complex collection](search-howto-complex-data-types.md#indexing-complex-types), each room in the Rooms collection is considered an element. During indexing, the indexing engine can safely process a maximum of 3000 elements across the document as a whole. [This limit](search-api-migration.md#upgrade-to-2019-05-06) was introduced in `api-version=2019-05-06` and applies to complex collections only, and not to string collections or to complex fields.
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
Independent of network security, all inbound requests must be authenticated. Key
Outbound requests from a search service to other applications are typically made by indexers for text-based indexing and some aspects of AI enrichment. Outbound requests include both read and write operations.
-Outbound requests are made by the search service on its own behalf, and on the behalf of an indexer or skillset:
+Outbound requests are made by the search service on its own behalf, and on behalf of an indexer or custom skill:
+ Search connects to Azure Key Vault for a customer-managed key used to encrypt and decrypt sensitive data. + Indexers [connect to external data sources](search-indexer-securing-resources.md) to read in data for indexing.
service-bus-messaging Advanced Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/advanced-features-overview.md
A queue or subscription client can defer retrieval of a received message until a
A transaction groups two or more operations together into an execution scope. Service Bus allows you to group operations against multiple messaging entities within the scope of a single transaction. A message entity can be a queue, topic, or subscription. For more information, see [Overview of Service Bus transaction processing](service-bus-transactions.md). ## Autodelete on idle
-Autodelete on idle enables you to specify an idle interval after which a queue or topic subscription is automatically deleted. The minimum duration is 5 minutes.
+Autodelete on idle enables you to specify an idle interval after which a queue or topic subscription is automatically deleted. The interval is reset when there is traffic on the entity. The minimum duration is 5 minutes.
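As a sketch, the idle interval can be set when creating a queue with the Azure CLI; the resource names are illustrative, and the interval is an ISO 8601 duration:

```azurecli
# Create a queue that is deleted automatically after 10 idle minutes.
az servicebus queue create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --name myQueue \
  --auto-delete-on-idle PT10M
```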
## Duplicate detection The duplicate detection feature enables the sender to resend the same message again and for the broker to drop a potential duplicate. For more information, see [Duplicate detection](duplicate-detection.md).
service-bus-messaging Service Bus Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-dr.md
Title: Azure Service Bus Geo-disaster recovery | Microsoft Docs description: How to use geographical regions to fail over and disaster recovery in Azure Service Bus Previously updated : 07/28/2021 Last updated : 04/01/2022 # Azure Service Bus Geo-disaster recovery
The Service Bus Geo-disaster recovery feature is designed to make it easier to r
The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Queues, Topics, Subscriptions, Filters) is continuously replicated from a primary namespace to a secondary namespace when paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move will repoint the chosen alias name for the namespace to the secondary namespace and then break the pairing. The failover is nearly instantaneous once initiated.
-> [!IMPORTANT]
-> - The feature enables instant continuity of operations with the same configuration, but **doesn't replicate the messages held in queues or topic subscriptions or dead-letter queues**. To preserve queue semantics, such a replication will require not only the replication of message data, but of every state change in the broker. For most Service Bus namespaces, the required replication traffic would far exceed the application traffic and with high-throughput queues, most messages would still replicate to the secondary while they are already being deleted from the primary, causing excessively wasteful traffic. For high-latency replication routes, which applies to many pairings you would choose for Geo-disaster recovery, it might also be impossible for the replication traffic to sustainably keep up with the application traffic due to latency-induced throttling effects.
-> - Azure Active Directory (Azure AD) role-based access control (RBAC) assignments to Service Bus entities in the primary namespace aren't replicated to the secondary namespace. Create role assignments manually in the secondary namespace to secure access to them.
+## Important points to consider
+
+- The feature enables instant continuity of operations with the same configuration, but **doesn't replicate the messages held in queues or topic subscriptions or dead-letter queues**. To preserve queue semantics, such a replication will require not only the replication of message data, but of every state change in the broker. For most Service Bus namespaces, the required replication traffic would far exceed the application traffic and with high-throughput queues, most messages would still replicate to the secondary while they are already being deleted from the primary, causing excessively wasteful traffic. For high-latency replication routes, which applies to many pairings you would choose for Geo-disaster recovery, it might also be impossible for the replication traffic to sustainably keep up with the application traffic due to latency-induced throttling effects.
+- Azure Active Directory (Azure AD) role-based access control (RBAC) assignments to Service Bus entities in the primary namespace aren't replicated to the secondary namespace. Create role assignments manually in the secondary namespace to secure access to them.
+- The following configurations are not replicated.
+ - Virtual network configurations
+ - Private endpoint connections
+ - All networks access enabled
+ - Trusted service access enabled
+ - Public network access
+ - Default network action
+ - Identities and encryption settings (customer-managed key encryption or bring your own key (BYOK) encryption)
+ - Enable auto scale
+ - Disable local authentication
+ > [!TIP] > For replicating the contents of queues and topic subscriptions and operating corresponding namespaces in active/active configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the [replication guidance](service-bus-federation-overview.md).
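As a sketch, a pairing can be created with the Azure CLI by setting an alias on the primary namespace; the names and the partner namespace resource ID are illustrative:

```azurecli
# Pair a primary namespace with a secondary namespace under a Geo-DR alias.
az servicebus georecovery-alias set \
  --resource-group myResourceGroup \
  --namespace-name myPrimaryNamespace \
  --alias myGeoDRAlias \
  --partner-namespace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ServiceBus/namespaces/mySecondaryNamespace"
```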
service-fabric Service Fabric Application Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-lifecycle.md
Last updated 1/19/2018
# Service Fabric application lifecycle As with other platforms, an application on Azure Service Fabric usually goes through the following phases: design, development, testing, deployment, upgrading, maintenance, and removal. Service Fabric provides first-class support for the full application lifecycle of cloud applications, from development through deployment, daily management, and maintenance to eventual decommissioning. The service model enables several different roles to participate independently in the application lifecycle. This article provides an overview of the APIs and how they are used by the different roles throughout the phases of the Service Fabric application lifecycle.
+For a training video that describes how to manage your application lifecycle, check out [Application lifetime management in action](/shows/building-microservices-applications-on-azure-service-fabric/application-lifetime-management-in-action).
++ [!INCLUDE [links to azure cli and service fabric cli](../../includes/service-fabric-sfctl.md)] ## Service model roles
service-fabric Service Fabric Visualizing Your Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-visualizing-your-cluster.md
To connect to a Service Fabric cluster, you need the clusters management endpoin
You can control client access to your Service Fabric cluster either with certificates or using Azure Active Directory (AAD). If you attempt to connect to a secure cluster, then depending on the cluster's configuration you will be required to present a client certificate or sign in using AAD.
+## Video tutorial
+Check out this [training video to learn how to use Service Fabric Explorer](/shows/building-microservices-applications-on-azure-service-fabric/service-fabric-explorer).
+
+> [!NOTE]
+> This video shows Service Fabric Explorer hosted in a Service Fabric cluster, not the desktop version.
+>
## Understand the Service Fabric Explorer layout You can navigate through Service Fabric Explorer by using the tree on the left. At the root of the tree, the cluster dashboard provides an overview of your cluster, including a summary of application and node health.
spring-cloud How To Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-manage-user-assigned-managed-identities.md
+
+ Title: Manage user-assigned managed identities for an application in Azure Spring Cloud (preview)
+description: How to manage user-assigned managed identities for applications.
++++ Last updated : 03/31/2022+
+zone_pivot_groups: spring-cloud-tier-selection
++
+# Manage user-assigned managed identities for an application in Azure Spring Cloud (preview)
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article shows you how to assign or remove user-assigned managed identities for an application in Azure Spring Cloud, using the Azure portal and Azure CLI.
+
+Managed identities for Azure resources provide an automatically managed identity in Azure Active Directory (Azure AD) to an Azure resource such as your application in Azure Spring Cloud. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
+
+## Prerequisites
+
+- If you're unfamiliar with managed identities for Azure resources, see the [Managed identities for Azure resources overview section](../active-directory/managed-identities-azure-resources/overview.md).
++
+- An already provisioned Azure Spring Cloud Enterprise tier instance. For more information, see [Quickstart: Provision an Azure Spring Cloud service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md).
+- [Azure CLI version 3.1.0 or later](/cli/azure/install-azure-cli).
+- [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
+- At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+++
+- An already provisioned Azure Spring Cloud instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Cloud](./quickstart.md).
+- [Azure CLI version 3.1.0 or later](/cli/azure/install-azure-cli).
+- At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
++
+## Assign user-assigned managed identities when creating an application
+
+Create an application and assign user-assigned managed identity at the same time by using the following command:
+
+```azurecli
+az spring-cloud app create \
+ --resource-group <resource-group-name> \
+ --name <app-name> \
+ --service <service-instance-name> \
+ --user-assigned <space-separated user identity resource IDs to assign>
+```
+
+## Assign user-assigned managed identities to an existing application
+
+Assigning a user-assigned managed identity requires setting an additional property on the application.
+
+### [Azure portal](#tab/azure-portal)
+
+To assign user-assigned managed identity to an existing application in the Azure portal, follow these steps:
+
+1. Navigate to an application in the Azure portal as you normally would.
+2. Scroll down to the **Settings** group in the left navigation pane.
+3. Select **Identity**.
+4. Within the **User assigned** tab, select **Add**.
+5. Choose one or more user-assigned managed identities from right panel and then select **Add** from this panel.
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the following command to assign one or more user-assigned managed identities on an existing app:
+
+```azurecli
+az spring-cloud app identity assign \
+ --resource-group <resource-group-name> \
+ --name <app-name> \
+ --service <service-instance-name> \
+ --user-assigned <space-separated user identity resource IDs to assign>
+```
+++
+## Obtain tokens for Azure resources
+
+An application can use its managed identity to get tokens to access other resources protected by Azure AD, such as Azure Key Vault. These tokens represent the application accessing the resource, not any specific user of the application.
+
+You may need to configure the target resource to allow access from your application. For more information, see [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). For example, if you request a token to access Key Vault, be sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault will be rejected, even if they include the token. To learn more about which resources support Azure Active Directory tokens, see [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md)
+
+Azure Spring Cloud shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using Java SDK or Spring Boot starters to acquire a token. For various code and script examples and guidance on important topics such as handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
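As a sketch of what those SDKs do under the hood, a token can be requested from the instance metadata endpoint from inside the application; the target resource and the client ID of the user-assigned managed identity are illustrative:

```bash
# Request a Key Vault token for a specific user-assigned identity from inside the app.
curl -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net&client_id=<client-id-of-user-assigned-identity>"
```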
+
+## Remove user-assigned managed identities from an existing app
+
+Removing user-assigned managed identities removes the assignment between the identities and the application; it doesn't delete the identities themselves.
+
+### [Azure portal](#tab/azure-portal)
+
+To remove user-assigned managed identities from an application that no longer needs them, follow these steps:
+
+1. Sign in to the Azure portal using an account associated with the Azure subscription that contains the Azure Spring Cloud instance.
+1. Navigate to the desired application and select **Identity**.
+1. Under **User assigned**, select target identities and then select **Remove**.
+
+### [Azure CLI](#tab/azure-cli)
+
+To remove user-assigned managed identities from an application that no longer needs them, use the following command:
+
+```azurecli
+az spring-cloud app identity remove \
+ --resource-group <resource-group-name> \
+ --name <app-name> \
+ --service <service-instance-name> \
+ --user-assigned <space-separated user identity resource IDs to remove>
+```
+
+## Limitations
+
+For user-assigned managed identity limitations, see [Quotas and service plans for Azure Spring Cloud](./quotas.md).
+++
+## Next steps
+
+* [Access Azure Key Vault with managed identities in Spring Boot starter](https://github.com/Azure/azure-sdk-for-java#use-msi--managed-identities)
+* [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
+* [How to use managed identities with Java SDK](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples)
static-web-apps Gitlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/gitlab.md
+
+ Title: "Tutorial: Deploy GitLab repositories on Azure Static Web Apps"
+description: Use GitLab with Azure Static Web Apps
++++ Last updated : 03/30/2021+++
+# Tutorial: Deploy GitLab repositories on Azure Static Web Apps
+
+Azure Static Web Apps has flexible deployment options that allow you to work with various providers. In this article, you deploy a web application hosted in GitLab to Azure Static Web Apps.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Import a repository to GitLab
+> * Create a static web app
+> * Configure the GitLab repo to deploy to Azure Static Web Apps
+
+## Prerequisites
+
+- [GitLab](https://gitlab.com) account
+- [Azure](https://portal.azure.com) account
+ - If you don't have an Azure subscription, [create a free trial account](https://azure.microsoft.com/free).
+
+## Create a repository
+
+This article uses a GitHub repository as the source to import code into a GitLab repository.
+
+1. Sign in to your GitLab account and navigate to [https://gitlab.com/projects/new#import_project](https://gitlab.com/projects/new#import_project).
+1. Select the **Repo by URL** button.
+1. In the *Git repository URL* box, enter the repository URL for your choice of framework.
+
+ # [No Framework](#tab/vanilla-javascript)
+
+ [https://github.com/staticwebdev/vanilla-basic.git](https://github.com/staticwebdev/vanilla-basic.git)
+
+ # [Angular](#tab/angular)
+
+ [https://github.com/staticwebdev/angular-basic.git](https://github.com/staticwebdev/angular-basic.git)
+
+ # [Blazor](#tab/blazor)
+
+ [https://github.com/staticwebdev/blazor-basic.git](https://github.com/staticwebdev/blazor-basic.git)
+
+ # [React](#tab/react)
+
+ [https://github.com/staticwebdev/react-basic.git](https://github.com/staticwebdev/react-basic.git)
+
+ # [Vue](#tab/vue)
+
+ [https://github.com/staticwebdev/vue-basic.git](https://github.com/staticwebdev/vue-basic.git)
+
+
+
+1. In the *Project slug* box, enter **my-first-static-web-app**.
+1. Select the **Create project** button and wait a moment while your repository is set up.
+
+## Create a static web app
+
+Now that the repository is created, you can create a static web app from the Azure portal.
+
+1. Navigate to the [Azure portal](https://portal.azure.com).
+1. Select **Create a Resource**.
+1. Search for **Static Web Apps**.
+1. Select **Static Web Apps**.
+1. Select **Create**.
+1. In the _Basics_ section, begin by configuring your new app.
+
+ | Setting | Value |
+ |--|--|
+ | Azure subscription | Select your Azure subscription. |
+ | Resource Group | Select the **Create new** link and enter **static-web-apps-gitlab**. |
+ | Name | Enter **my-first-static-web-app**. |
+ | Plan type | Select **Free**. |
+ | Region for Azure Functions API and staging environments | Select the region closest to you. |
+ | Source | Select **Other**. |
+
+1. Select **Review + create**.
+1. Select **Create**.
+1. Select the **Go to resource** button.
+1. Select the **Manage deployment token** button.
+1. Copy the deployment token value and set it aside in an editor for later use. (A command-line alternative for retrieving the token appears after these steps.)
+1. Select the **Close** button on the *Manage deployment token* window.
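+
+If you misplace the token later, you can retrieve it again from the command line; a minimal Azure CLI sketch, assuming the app name used in this tutorial:
+
+```azurecli
+# Print the deployment token for the static web app created above.
+az staticwebapp secrets list \
+    --name my-first-static-web-app \
+    --query "properties.apiKey" \
+    --output tsv
+```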
+
+## Create the pipeline task in GitLab
+
+Next, you add a workflow task responsible for building and deploying your site as you make changes.
+
+### Add deployment token
+
+1. Navigate to the repository in GitLab.
+1. Select **Settings**.
+1. Select **CI/CD**.
+1. Next to the *Variables* section, select the **Expand** button.
+1. Select the **Add variable** button.
+1. In the *Key* box, enter **DEPLOYMENT_TOKEN**.
+1. In the *Value* box, paste in the deployment token value you set aside in a previous step.
+1. Select the **Add variable** button.
+
+### Add file
+
+1. Select the **Repository** menu option.
+1. Select **Files**.
+1. Ensure the *main* branch is selected in the branch drop-down at the top.
+1. Select the **plus sign** drop-down and select **New file**.
+1. Create a new file named `.gitlab-ci.yml` at the root of the repository. (Make sure the file extension is `.yml`.)
+1. Enter the following YAML into the file.
+
+ # [No Framework](#tab/vanilla-javascript)
+
+ ```yml
+ variables:
+ API_TOKEN: $DEPLOYMENT_TOKEN
+ APP_PATH: '$CI_PROJECT_DIR/src'
+
+ deploy:
+ stage: deploy
+ image: registry.gitlab.com/static-web-apps/azure-static-web-apps-deploy
+ script:
+ - echo "App deployed successfully."
+ ```
+
+ # [Angular](#tab/angular)
+
+ ```yml
+ variables:
+ API_TOKEN: $DEPLOYMENT_TOKEN
+ APP_PATH: '$CI_PROJECT_DIR/src'
+ OUTPUT_PATH: '$CI_PROJECT_DIR/dist/angular-basic'
+
+ deploy:
+ stage: deploy
+ image: registry.gitlab.com/static-web-apps/azure-static-web-apps-deploy
+ script:
+ - echo "App deployed successfully."
+ ```
+
+ # [Blazor](#tab/blazor)
+
+ ```yml
+ variables:
+ API_TOKEN: $DEPLOYMENT_TOKEN
+ APP_PATH: '$CI_PROJECT_DIR/Client'
+ OUTPUT_PATH: '$CI_PROJECT_DIR/wwwroot'
+
+ deploy:
+ stage: deploy
+ image: registry.gitlab.com/static-web-apps/azure-static-web-apps-deploy
+ script:
+ - echo "App deployed successfully."
+ ```
+
+ # [React](#tab/react)
+
+ ```yml
+ variables:
+ API_TOKEN: $DEPLOYMENT_TOKEN
+ APP_PATH: '$CI_PROJECT_DIR'
+ OUTPUT_PATH: '$CI_PROJECT_DIR/build'
+
+ deploy:
+ stage: deploy
+ image: registry.gitlab.com/static-web-apps/azure-static-web-apps-deploy
+ script:
+ - echo "App deployed successfully."
+ ```
+
+ # [Vue](#tab/vue)
+
+ ```yml
+ variables:
+ API_TOKEN: $DEPLOYMENT_TOKEN
+ APP_PATH: '$CI_PROJECT_DIR'
+ OUTPUT_PATH: '$CI_PROJECT_DIR/dist'
+
+ deploy:
+ stage: deploy
+ image: registry.gitlab.com/static-web-apps/azure-static-web-apps-deploy
+ script:
+ - echo "App deployed successfully."
+ ```
+
+
+
+ The following configuration properties are used in the *.gitlab-ci.yml* file to configure your static web app.
+
+ The `$CI_PROJECT_DIR` variable maps to the repository's root folder location during the build process.
+
+ | Property | Description | Example | Required |
+ |--|--|--|--|
+ | `APP_PATH` | Location of your application code. | Enter `$CI_PROJECT_DIR/` if your application source code is at the root of the repository, or `$CI_PROJECT_DIR/app` if your application code is in a folder named `app`. | Yes |
+ | `API_PATH` | Location of your Azure Functions code. | Enter `$CI_PROJECT_DIR/api` if your app code is in a folder named `api`. | No |
+ | `OUTPUT_PATH` | Location of the build output folder relative to the `APP_PATH`. | If your application source code is located at `$CI_PROJECT_DIR/app`, and the build script outputs files to the `$CI_PROJECT_DIR/app/build` folder, then set `$CI_PROJECT_DIR/app/build` as the `OUTPUT_PATH` value. | No |
+ | `API_TOKEN` | API token for deployment. | `API_TOKEN: $DEPLOYMENT_TOKEN` | Yes |
+
+1. Select the **Commit changes** button.
+1. Select **CI/CD**, then **Pipelines**, to view the progress of your deployment.
+
+Once the deployment is complete, you can view your website.
+
+## View the website
+
+There are two aspects to deploying a static app. The first creates the underlying Azure resources that make up your app. The second is a GitLab workflow that builds and publishes your application.
+
+Before you can navigate to your new static site, the deployment build must first finish running.
+
+The Static Web Apps overview window displays a series of links that help you interact with your web app.
+
+1. Return to your static web app in the Azure portal.
+1. Navigate to the **Overview** window.
+1. Select the link under the *URL* label. Your website will load in a new tab.
+
+## Clean up resources
+
+If you're not going to continue to use this application, you can delete the Azure Static Web Apps instance and all the associated services by removing the resource group.
+
+1. Select the **static-web-apps-gitlab** resource group from the *Overview* section.
+1. Select the **Delete resource group** button at the top of the resource group *Overview*.
+1. Enter the resource group name **static-web-apps-gitlab** in the *Are you sure you want to delete "static-web-apps-gitlab"?* confirmation dialog.
+1. Select **Delete**.
+
+The process to delete the resource group may take a few minutes to complete.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Add an API](add-api.md)
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
This table shows how this feature is supported in your account and the impact on
| Storage account type | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> | |--|--|--|--|--|
-| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
-| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
+| Standard general-purpose v2 | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png)| ![No](../media/icons/no-icon.png) |
+| Premium block blobs | ![Yes](../media/icons/yes-icon.png) |![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
### Metrics in Azure Monitor
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The items that appear in these tables will change over time as support continues
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
The items that appear in these tables will change over time as support continues
| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | | [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | | [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | | [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
storage Storage Quickstart Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-powershell.md
Previously updated : 03/31/2020 Last updated : 03/31/2022
azcopy copy 'D:\Images\Image001.jpg' "https://$StorageAccountName.blob.core.wind
Remove all of the assets you've created. The easiest way to remove the assets is to delete the resource group. Removing the resource group also deletes all resources included within the group. In the following example, removing the resource group removes the storage account and the resource group itself. ```azurepowershell-interactive
-Remove-AzResourceGroup -Name $ResourceGroupName
+Remove-AzResourceGroup -Name $ResourceGroup
``` ## Next steps
-In this quickstart, you transferred files between a local file system and Azure Blob storage. To learn more about working with Blob storage by using PowerShell, explore Azure PowerShell samples for Blob storage.
+In this quickstart, you transferred files between a local file system and Azure Blob storage. To learn more about working with Blob storage by using PowerShell, select an option below.
+
+> [!div class="nextstepaction"]
+> [Manage block blobs with PowerShell](blob-powershell.md)
> [!div class="nextstepaction"] > [Azure PowerShell samples for Azure Blob storage](storage-samples-blobs-powershell.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
To create an Azure file share:
![A screenshot of the data storage section of the storage account; select file shares.](media/storage-how-to-use-files-portal/create-file-share-1.png)
-1. On the menu at the top of the **File service** page, click **File share**. The **New file share** page drops down.
-1. In **Name** type *myshare*, enter a quota, and leave **Transaction optimized** selected for **Tiers**.
+1. On the menu at the top of the **File service** page, click **+ File share**. The **New file share** page drops down.
+1. In **Name**, type *myshare*. Leave **Transaction optimized** selected for **Tier**.
1. Select **Create** to create the Azure file share.
-Share names need to be all lower case letters, numbers, and single hyphens but cannot start with a hyphen. For complete details about naming file shares and files, see [Naming and Referencing Shares, Directories, Files, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Shares--Directories--Files--and-Metadata).
+Share names must contain only lowercase letters, numbers, and single hyphens, and cannot start with a hyphen. For complete details about naming file shares and files, see [Naming and Referencing Shares, Directories, Files, and Metadata](/rest/api/storageservices/Naming-and-Referencing-Shares--Directories--Files--and-Metadata).
# [PowerShell](#tab/azure-powershell)
az storage share-rm create \
To create a new directory named *myDirectory* at the root of your Azure file share:
-1. On the **File Service** page, select the **myshare** file share. The page for your file share opens.
+1. On the **File share settings** page, select the **myshare** file share. The page for your file share opens, indicating *no files found*.
1. On the menu at the top of the page, select **+ Add directory**. The **New directory** page drops down. 1. Type *myDirectory* and then click **OK**.
az storage directory create \
# [Portal](#tab/azure-portal)
-To demonstrate uploading a file, you first need to create or select a file to be uploaded. You may do this by whatever means you see fit. Once you've selected the file you would like to upload:
+To demonstrate uploading a file, you first need to create or select a file to be uploaded. You may do this by whatever means you see fit. Once you've decided on the file you would like to upload:
1. Select the **myDirectory** directory. The **myDirectory** panel opens. 1. In the menu at the top, select **Upload**. The **Upload files** panel opens.
az storage file list \
#### Download a file # [Portal](#tab/azure-portal)
-You can download a copy of the file you uploaded by right-clicking on the file. After selecting the download button, the exact experience will depend on the operating system and browser you're using.
+You can download a copy of the file you uploaded by right-clicking on the file and selecting **Download**. The exact experience will depend on the operating system and browser you're using.
# [PowerShell](#tab/azure-powershell)
stream-analytics Machine Learning Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/machine-learning-udf.md
Previously updated : 03/24/2022 Last updated : 03/31/2022 # Integrate Azure Stream Analytics with Azure Machine Learning
If your input data sent to the ML UDF is inconsistent with the schema that is ex
- Validate input to your ML UDF is not null - Validate the type of every field that is an input to your ML UDF to ensure it matches what the endpoint expects
+> [!NOTE]
+> ML UDFs are evaluated for each row of a given query step, even when called via a conditional expression (for example, `CASE WHEN [A] IS NOT NULL THEN udf.score(A) ELSE '' END`). If need be, use the [WITH](/stream-analytics-query/with-azure-stream-analytics) clause to create diverging paths, calling the ML UDF only where required, before using [UNION](/stream-analytics-query/union-azure-stream-analytics) to merge the paths together again.
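+
+For illustration, here's a minimal sketch of that diverge-and-merge pattern (the `input` and `output` names, column `A`, field `deviceId`, and `udf.score` are placeholders, not names from this article):
+
+```sql
+-- Score only rows where A is present; pass the rest through with an empty score.
+WITH Scored AS (
+    SELECT deviceId, udf.score(A) AS score
+    FROM input
+    WHERE A IS NOT NULL
+),
+Skipped AS (
+    SELECT deviceId, '' AS score
+    FROM input
+    WHERE A IS NULL
+)
+SELECT deviceId, score INTO output FROM Scored
+UNION
+SELECT deviceId, score FROM Skipped
+```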
+ ## Pass multiple input parameters to the UDF Most common examples of inputs to machine learning models are numpy arrays and DataFrames. You can create an array using a JavaScript UDF, and create a JSON-serialized DataFrame using the `WITH` clause.
synapse-analytics Tutorial Logical Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/tutorial-logical-data-warehouse.md
A caller may access data source without credential if an owner of data source al
You can explicitly define a custom credential that will be used while accessing data on external data source. - [Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity) of the Synapse workspace - [Shared Access Signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) of the Azure storage
+- Custom [Service Principal Name or Azure Application identity](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types).
- Read-only Cosmos DB account key that enables you to read Cosmos DB analytical storage. As a prerequisite, you will need to create a master key in the database:
virtual-desktop App Attach Msixmgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-msixmgr.md
Here's how you'd use that command to make a VHDX:
msixmgr.exe -Unpack -packagePath "C:\Users\ssa\Desktop\packageName_3.51.1.0_x64__81q6ced8g4aa0.msix" -destination "c:\temp\packageName.vhdx" -applyacls -create -vhdSize 200 -filetype "vhdx" -rootDirectory apps ```
+>[!NOTE]
+>This command doesn't support package names longer than 128 characters or MSIX image names that contain spaces.
+ ## Next steps Learn more about MSIX app attach at [What is MSIX app attach?](what-is-app-attach.md)
virtual-desktop Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/automatic-migration.md
Before you use the migration module, make sure you have the following things rea
- At least Remote Desktop Services (RDS) Contributor permissions on an RDS tenant or the specific host pools you're migrating. -- The latest version of the Microsoft.RdInfra.RDPowershell PowerShell module
+- The latest version of the Microsoft.RdInfra.RDPowershell PowerShell module.
-- The latest version of the Az.DesktopVirtualization PowerShell module
+- The latest version of the Az.DesktopVirtualization PowerShell module.
-- The latest version of the Az.Resources PowerShell module
+- The latest version of the Az.Resources PowerShell module.
-- Install the migration module in your computer
+- Install the migration module on your computer.
- PowerShell or PowerShell ISE to run the scripts you'll see in this article. The Microsoft.RdInfra.RDPowershell module doesn't work in PowerShell Core.
To prepare your PowerShell environment:
https://www.powershellgallery.com/packages/Az.Resources/ ```
- If you don't, then install and import the modules by running these cmdlets:
+ If you don't, then you'll have to install and import the modules by running these cmdlets:
```powershell Install-module Az.Resources
To migrate your Azure virtual Desktop (classic) resources to Azure Resource Mana
After the **Start-RdsHostPoolMigration** cmdlet is done, you should see the following things:
- - Azure service objects for the tenant or host pool you specified
+ - Azure service objects for the tenant or host pool you specified.
- Two new resource groups:
virtual-desktop Create Profile Container Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-profile-container-azure-ad.md
Previously updated : 01/24/2022 Last updated : 04/05/2022 # Create a profile container with Azure Files and Azure Active Directory (preview)
To enable Azure AD authentication on a storage account, you need to create an Az
} ```
+ > [!IMPORTANT]
+ > This password expires every six months, so you must update it by following the steps in [Update the service principal's password](#update-the-service-principals-password).
+ ### Set the API permissions on the newly created application You can configure the API permissions from the [Azure portal](https://portal.azure.com) by following these steps:
All users that need to have FSLogix profiles stored on the storage account you'r
### Assign directory level access permissions
-To prevent users from accessing the user profile of other users, you must also assign directory-level permissions. This section will give you a step-by-step guide for how to configure the permissions.
+To prevent users from accessing the user profile of other users, you must also assign directory-level permissions. This section will give you a step-by-step guide for how to configure the permissions.
> [!IMPORTANT] > Without proper directory level permissions in place, a user can delete the user profile or access the personal information of a different user. It's important to make sure users have proper permissions to prevent accidental deletion from happening.
Finally, test the profile to make sure that it works:
6. If everything's set up correctly, you should see a directory with a name that's formatted like this: `<user SID>_<username>`.
+## Update the service principal's password
+
+The service principal's password will expire every six months. To update the password:
+
+1. Install the Azure Storage and Azure AD PowerShell modules. To install the modules, open PowerShell and run the following commands:
+
+ ```powershell
+ Install-Module -Name Az.Storage
+ Install-Module -Name AzureAD
+ ```
+
+2. Set the required variables for your tenant, subscription, storage account name, and resource group name by running the following PowerShell cmdlets, replacing the values with the ones relevant to your environment.
+
+ ```powershell
+ $tenantId = "<MyTenantId>"
+ $subscriptionId = "<MySubscriptionId>"
+ $resourceGroupName = "<MyResourceGroup>"
+ $storageAccountName = "<MyStorageAccount>"
+ ```
+
+3. Generate a new kerb1 key and password for the service principal by running the following commands:
+
+ ```powershell
+ Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId
+ $kerbKeys = New-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -KeyName "kerb1" -ErrorAction Stop | Select-Object -ExpandProperty Keys
+ $kerbKey = $kerbKeys | Where-Object { $_.KeyName -eq "kerb1" } | Select-Object -ExpandProperty Value
+ $azureAdPasswordBuffer = [System.Linq.Enumerable]::Take([System.Convert]::FromBase64String($kerbKey), 32);
+ $password = "kk:" + [System.Convert]::ToBase64String($azureAdPasswordBuffer);
+ ```
+
+4. Connect to Azure AD and retrieve the tenant information, application, and service principal by running the following cmdlets:
+
+ ```powershell
+ Connect-AzureAD
+ $azureAdTenantDetail = Get-AzureADTenantDetail;
+ $azureAdTenantId = $azureAdTenantDetail.ObjectId
+ $azureAdPrimaryDomain = ($azureAdTenantDetail.VerifiedDomains | Where-Object {$_._Default -eq $true}).Name
+ $application = Get-AzureADApplication -Filter "DisplayName eq '$($storageAccountName)'" -ErrorAction Stop;
+ $servicePrincipal = Get-AzureADServicePrincipal | Where-Object {$_.AppId -eq $($application.AppId)}
+ ```
+
+5. Set the password for the storage account's service principal by running the following cmdlets.
+
+ ```powershell
+ $Token = ([Microsoft.Open.Azure.AD.CommonLibrary.AzureSession]::AccessTokens['AccessToken']).AccessToken;
+ $Uri = ('https://graph.windows.net/{0}/{1}/{2}?api-version=1.6' -f $azureAdPrimaryDomain, 'servicePrincipals', $servicePrincipal.ObjectId)
+ $json = @'
+ {
+ "passwordCredentials": [
+ {
+ "customKeyIdentifier": null,
+ "endDate": "<STORAGEACCOUNTENDDATE>",
+ "value": "<STORAGEACCOUNTPASSWORD>",
+ "startDate": "<STORAGEACCOUNTSTARTDATE>"
+ }]
+ }
+ '@
+
+ $now = [DateTime]::UtcNow
+ $json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddDays(-1).ToString("s")
+ $json = $json -replace "<STORAGEACCOUNTENDDATE>", $now.AddMonths(6).ToString("s")
+ $json = $json -replace "<STORAGEACCOUNTPASSWORD>", $password
+
+ $Headers = @{'authorization' = "Bearer $($Token)"}
+
+ try {
+ Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method Patch -Headers $Headers -Body $json
+ Write-Host "Success: Password is set for $storageAccountName"
+ } catch {
+ Write-Host $_.Exception.ToString()
+ Write-Host "StatusCode: " $_.Exception.Response.StatusCode.value
+ Write-Host "StatusDescription: " $_.Exception.Response.StatusDescription
+ }
+ ```
+ ## Disable Azure AD authentication on your Azure Storage account If you need to disable Azure AD authentication on your storage account:
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/language-packs.md
Title: Install language packs on Windows 10 VMs in Azure Virtual Desktop - Azure
description: How to install language packs for Windows 10 multi-session VMs in Azure Virtual Desktop. Previously updated : 03/30/2022 Last updated : 04/01/2022
You need the following things to customize your Windows 10 Enterprise multi-sess
- [Windows 10, version 2004 or later 10C 2021 LXP ISO](https://software-download.microsoft.com/download/sg/LanguageExperiencePack.2110C.iso) - [Windows 10, version 2004 or later 11C 2021 LXP ISO](https://software-download.microsoft.com/download/sg/LanguageExperiencePack.2111C.iso) - [Windows 10, version 2004 or later 01C 2022 LXP ISO](https://software-download.microsoft.com/download/sg/LanguageExperiencePack.2201C.iso)
+ - [Windows 10, version 2004 or later 02C 2022 LXP ISO](https://software-static.download.prss.microsoft.com/sg/download/888969d5-f34g-4e03-ac9d-1f9786c66749/LanguageExperiencePack.2202C.iso)
- An Azure Files Share or a file share on a Windows File Server Virtual Machine
virtual-desktop Connect Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-microsoft-store.md
To subscribe to a workspace:
- If you're using a Workspace URL, use the URL your admin gave you. - If you're connecting from Azure Virtual Desktop, use one of the following URLs depending on which version of the service you're using: - Azure Virtual Desktop (classic): `https://rdweb.wvd.microsoft.com/api/feeddiscovery/webfeeddiscovery.aspx`.
- - Azure Virtual Desktop: `https://rdweb.wvd.microsoft.com/api/arm/feeddiscovery`.
+ - Azure Virtual Desktop: `https://rdweb.wvd.microsoft.com/arm/webclient/index.html`.
3. Tap **Subscribe**. 4. Provide your credentials when prompted.
Workspaces may be added, changed, or removed based on changes made by your admin
## Next steps
-To learn more about how to use the Microsoft Store client, check out [Get started with the Microsoft Store client](/windows-server/remote/remote-desktop-services/clients/windows/).
+To learn more about how to use the Microsoft Store client, check out [Get started with the Microsoft Store client](/windows-server/remote/remote-desktop-services/clients/windows/).
virtual-machine-scale-sets Tutorial Create And Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-powershell.md
To view additional information about a specific VM instance, add the `-InstanceI
```azurepowershell-interactive Get-AzVmssVM -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -InstanceId "1" ```-
+## Allow remote desktop traffic
+
+>[!IMPORTANT]
+>Exposing the RDP port 3389 is only recommended for testing. For production environments, we recommend using a VPN or private connection.
+
+To allow access using remote desktop, create a network security group with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) and [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). For more information, see [Networking for Azure virtual machine scale sets](virtual-machine-scale-sets-networking.md).
+
+ ```azurepowershell-interactive
+ # Get information about the scale set
+ $vmss = Get-AzVmss `
+ -ResourceGroupName "myResourceGroup" `
+ -VMScaleSetName "myScaleSet"
+
+ # Create a rule to allow traffic over port 3389
+ $nsgFrontendRule = New-AzNetworkSecurityRuleConfig `
+ -Name myFrontendNSGRule `
+ -Protocol Tcp `
+ -Direction Inbound `
+ -Priority 200 `
+ -SourceAddressPrefix * `
+ -SourcePortRange * `
+ -DestinationAddressPrefix * `
+ -DestinationPortRange 3389 `
+ -Access Allow
+
+ # Create a network security group and associate it with the rule
+ $nsgFrontend = New-AzNetworkSecurityGroup `
+ -ResourceGroupName "myResourceGroup" `
+ -Location EastUS `
+ -Name myFrontendNSG `
+ -SecurityRules $nsgFrontendRule
+
+ $vnet = Get-AzVirtualNetwork `
+ -ResourceGroupName "myResourceGroup" `
+ -Name myVnet
+
+ $frontendSubnet = $vnet.Subnets[0]
+
+ $frontendSubnetConfig = Set-AzVirtualNetworkSubnetConfig `
+ -VirtualNetwork $vnet `
+ -Name mySubnet `
+ -AddressPrefix $frontendSubnet.AddressPrefix `
+ -NetworkSecurityGroup $nsgFrontend
+
+ Set-AzVirtualNetwork -VirtualNetwork $vnet
+
+ # Update the scale set and apply the changes
+ Update-AzVmss `
+ -ResourceGroupName "myResourceGroup" `
+ -Name "myScaleSet" `
+ -VirtualMachineScaleSet $vmss
+ ```
## List connection information A public IP address is assigned to the load balancer that routes traffic to the individual VM instances. By default, Network Address Translation (NAT) rules are added to the Azure load balancer that forwards remote connection traffic to each VM on a given port. To connect to the VM instances in a scale set, you create a remote connection to an assigned public IP address and port number.
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
Title: Automatic OS image upgrades with Azure virtual machine scale sets description: Learn how to automatically upgrade the OS image on VM instances in a scale set--++
Automatic OS image upgrade is supported for custom images deployed through [Azur
## Configure automatic OS image upgrade To configure automatic OS image upgrade, ensure that the *automaticOSUpgradePolicy.enableAutomaticOSUpgrade* property is set to *true* in the scale set model definition.
+> [!NOTE]
+> **Upgrade Policy mode** and **Automatic OS Upgrade Policy** are separate settings that control different aspects of the scale set. When there are changes in the scale set template, the Upgrade Policy `mode` determines what happens to existing instances in the scale set. The Automatic OS Upgrade Policy `enableAutomaticOSUpgrade`, by contrast, is specific to the OS image: it tracks changes the image publisher has made and determines what happens when there's an update to the image.
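+
+As a minimal, hedged illustration of how the two separate settings sit side by side in a scale set model (property names follow the REST API described below; the `mode` value is just an example):
+
+```json
+"properties": {
+  "upgradePolicy": {
+    "mode": "Manual",
+    "automaticOSUpgradePolicy": {
+      "enableAutomaticOSUpgrade": true
+    }
+  }
+}
+```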
+ ### REST API The following example describes how to set automatic OS upgrades on a scale set model:
virtual-machines Disk Encryption Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-troubleshooting.md
Before taking any of the steps below, first ensure that the VMs you are attempti
- [Group policy requirements](disk-encryption-overview.md#group-policy-requirements) - [Encryption key storage requirements](disk-encryption-overview.md#encryption-key-storage-requirements)
+## Troubleshooting 'Failed to send DiskEncryptionData'
+
+When VM encryption fails with the error message "Failed to send DiskEncryptionData...", it's usually caused by one of the following situations:
+
+- The Key Vault exists in a different region or subscription than the virtual machine
+- Advanced access policies in the Key Vault aren't set to allow Azure Disk Encryption
+- The Key Encryption Key (KEK), when in use, has been disabled or deleted in the Key Vault
+- A typo in the resource ID or URL for the Key Vault or KEK
+- Special characters used when naming the VM, data disks, or keys, for example `_VMName`, `élite`, and so on
+- Unsupported encryption scenarios
+- Network issues that prevent the VM or host from accessing the required resources
+
+### Suggestions
+
+- Make sure the Key Vault exists in the same region and subscription as the virtual machine
+- Ensure that you have [set key vault advanced access policies](disk-encryption-key-vault.md#set-key-vault-advanced-access-policies) properly
+- If you're using a KEK, ensure the key exists and is enabled in the Key Vault
+- Check that the VM name, data disks, and keys follow [key vault resource naming restrictions](../../azure-resource-manager/management/resource-name-rules.md#microsoftkeyvault)
+- Check for any typos in the Key Vault name or KEK name in your PowerShell or CLI command
+> [!NOTE]
+> The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string: `/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]`.
+> The syntax for the value of the key-encryption-key parameter is the full URI to the KEK, as in: `https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]`. An illustrative command appears after this list.
+- Ensure you aren't using any [unsupported scenario](disk-encryption-windows.md#unsupported-scenarios)
+- Ensure you are meeting [network requirements](disk-encryption-overview.md#networking-requirements) and try again
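+
+For illustration, a hedged Azure CLI sketch that passes both values in full (all bracketed names are placeholders):
+
+```azurecli
+# Enable encryption, passing the full Key Vault resource ID and full KEK URI.
+az vm encryption enable \
+    --resource-group [resource-group-name] \
+    --name [vm-name] \
+    --volume-type All \
+    --disk-encryption-keyvault "/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]" \
+    --key-encryption-key "https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]"
+```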
+ ## Troubleshooting Azure Disk Encryption behind a firewall When connectivity is restricted by a firewall, proxy requirement, or network security group (NSG) settings, the ability of the extension to perform needed tasks might be disrupted. This disruption can result in status messages such as "Extension status not available on the VM." In expected scenarios, the encryption fails to finish. The sections that follow have some common firewall problems that you might investigate.
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder.md
We will be using some pieces of information repeatedly, so we will create some v
```azurecli-interactive # Resource group name - we are using myImageBuilderRG in this example
-$imageResourceGroup='myWinImgBuilderRG'
+imageResourceGroup='myWinImgBuilderRG'
# Region location
-$location='WestUS2'
+location='WestUS2'
# Run output name
-$runOutputName='aibWindows'
+runOutputName='aibWindows'
# name of the image to be created
-$imageName='aibWinImage'
+imageName='aibWinImage'
``` Create a variable for your subscription ID.
virtual-network Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md
When ready, you can issue the command to have your range advertised from Azure a
## Pricing
-* There's no charge to provision or use custom IP prefixes. There's no charge for all public IP prefixes and public IP addresses that are derived from custom IP prefixes
+* There is no charge to provision or use custom IP prefixes. Similarly, there is no charge for any public IP prefixes and public IP addresses that are derived from custom IP prefixes.
* All traffic destined to a custom IP prefix range is charged the [internet egress rate](https://azure.microsoft.com/pricing/details/bandwidth/). Customer traffic to a custom IP prefix address from within Azure is charged internet egress for the source region of the traffic. Egress traffic from a custom IP address prefix range is charged at the same rate as an Azure public IP from the same region.
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
# Design virtual networks with NAT gateway
-NAT gateway provides outbound internet connectivity for one or more subnets of a virtual network. Once NAT gateway is associated to a subnet, NAT provides source network address translation (SNAT) for that subnet. NAT gateway specifies which static IP addresses virtual machines use when creating outbound flows. Static IP addresses come from public IP addresses, public IP prefixes, or both. If a public IP prefix is used, all IP addresses of the entire public IP prefix are consumed by a NAT gateway. A NAT gateway can use a total of up to 16 static IP addresses from either.
+NAT gateway provides outbound internet connectivity for one or more subnets of a virtual network. Once NAT gateway is associated to a subnet, NAT provides source network address translation (SNAT) for that subnet. NAT gateway specifies which static IP addresses virtual machines use when creating outbound flows. Static IP addresses come from public IP addresses, public IP prefixes, or both. If a public IP prefix is used, all IP addresses of the entire public IP prefix are consumed by a NAT gateway. A NAT gateway can use up to 16 static IP addresses from either.
:::image type="content" source="./media/nat-overview/flow-direction1.png" alt-text="Diagram depicts a NAT gateway resource that consumes all IP addresses for a public IP prefix and directs traffic to and from two subnets of VMs and a virtual machine scale set.":::
User-defined routes aren't necessary.
Review this section to familiarize yourself with considerations for designing virtual networks with NAT.
-### Connect to Azure services
+### Connect to Azure services with Private Link
-When you connect to Azure services from your private network, the recommended approach is to use [Private Link](../../private-link/private-link-overview.md).
+When you connect your private network to Azure services such as Storage, SQL, Cosmos DB, or any other [Azure service listed here](/azure/private-link/availability), the recommended approach is to use [Private Link](../../private-link/private-link-overview.md).
-Private Link lets you access services in Azure from your private network without the use of a public IP address. Connecting to these services over the internet aren't necessary and are handled over the Azure backbone network. For example, when you access Azure Storage, you can use a private endpoint to ensure your connection is fully private.
+Private Link uses the private IP addresses of your virtual machines or other compute resources from your Azure network to connect privately and securely to Azure PaaS services over the Azure backbone network instead of over the internet. Private Link should be used when possible to connect to Azure services since it frees up SNAT ports for making outbound connections to the internet. To learn more about how NAT gateway uses SNAT ports, see [Source Network Address Translation](#source-network-address-translation).
-### Connect to the internet
+### Connect to the internet with NAT gateway
-NAT is recommended for outbound scenarios for all production workloads where you need to connect to a public endpoint. The following scenarios are examples of how to ensure coexistence of inbound with NAT gateway for outbound.
+NAT gateway is recommended for outbound scenarios for all production workloads where you need to connect to a public endpoint. When NAT gateway is configured on subnets, all previous outbound configurations, such as Load balancer or instance-level public IPs (IL PIPs), are superseded, and NAT gateway directs all outbound traffic to the internet. Return traffic in response to an outbound initiated flow will also go through NAT gateway. Inbound initiated traffic isn't affected by the addition of NAT gateway. Inbound traffic through Load balancer or IL PIPs is translated separately from outbound traffic through NAT gateway. This separation allows inbound and outbound services to coexist seamlessly.
+
+The following scenarios are examples of how to ensure coexistence of Load balancer or instance-level public IPs for inbound with NAT gateway for outbound.
#### NAT and VM with an instance-level public IP
Any outbound configuration from a load-balancing rule or outbound rules is super
Any outbound configuration from a load-balancing rule or outbound rules is superseded by NAT gateway. The VM will also use NAT gateway for outbound. Inbound originated isn't affected.
+### Scale NAT gateway
+
+Scaling NAT gateway is primarily a function of managing the shared, available SNAT port inventory. NAT needs sufficient SNAT port inventory for expected peak outbound flows for all subnets that are attached to a NAT gateway. You can use public IP addresses, public IP prefixes, or both to create SNAT port inventory.
+
+> [!NOTE]
+> If you assign a public IP prefix, the entire public IP prefix is used. You can't assign a public IP prefix and then break out individual IP addresses to assign to other resources. If you want to assign individual IP addresses from a public IP prefix to multiple resources, you need to create individual public IP addresses and assign them as needed instead of using the public IP prefix itself.
+
+SNAT maps private addresses to one or more public IP addresses, rewriting the source address and source port in the process. A single NAT gateway can scale up to 16 IP addresses. If a public IP prefix is provided, each IP address within the prefix provides SNAT port inventory. Adding more public IP addresses increases the available inventory of SNAT ports. TCP and UDP are separate SNAT port inventories and are unrelated to NAT gateway.
+
+When you scale your workload, assume that each flow requires a new SNAT port, and then scale the total number of available IP addresses for outbound traffic. Carefully consider the scale you're designing for, and then allocate IP address quantities accordingly.
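+
+For example, a hedged Azure CLI sketch of attaching more public IP addresses to an existing NAT gateway to grow SNAT port inventory (resource names are placeholders):
+
+```azurecli
+# Each attached public IP adds 64,512 SNAT ports to the shared inventory.
+az network nat gateway update \
+    --resource-group MyResourceGroup \
+    --name MyNatGateway \
+    --public-ip-addresses MyPublicIP1 MyPublicIP2
+```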
+
+SNAT ports will most likely be reused for flows to different destinations when possible. As SNAT port exhaustion approaches, flows may not succeed.
+
+For a SNAT example, see [SNAT fundamentals](#source-network-address-translation).
+ ### Monitor outbound network traffic A network security group allows you to filter inbound and outbound traffic to and from a virtual machine. To monitor outbound traffic flowing from NAT, you can enable NSG flow logs.
Each NAT gateway can provide up to 50 Gbps of throughput. You can split your dep
Each NAT gateway public IP address provides 64,512 SNAT ports to make outbound connections. NAT gateway can support up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet for TCP and UDP. Review the following section for details and the [troubleshooting article](./troubleshoot-nat.md) for specific problem resolution guidance.
-## Source Network Address Translation
+## Protocols
-Source Network Address Translation (SNAT) rewrites the source of a flow to originate from a different IP address and/or port. Typically, SNAT is used when a private network needs to connect to a public host over the internet. SNAT allows multiple compute resources within the private VNet to use the same single Public IP address or set of IP addresses (prefix) to connect to the internet.
+NAT gateway interacts with IP and IP transport headers of UDP and TCP flows. NAT gateway is agnostic to application layer payloads. Other IP protocols aren't supported.
-NAT gateway SNATs the private IP address and source port of a virtual machine (or other compute resource) to a static public IP address before going outbound to the internet from a virtual network.
+## Source Network Address Translation
### Fundamentals
+Source Network Address Translation (SNAT) rewrites the source of a flow to originate from a different IP address and/or port. Typically, SNAT is used when a private network needs to connect to a public host over the internet. SNAT allows multiple VM instances within the private VNet to use the same single Public IP address or set of IP addresses (prefix) to connect to the internet.
+
+NAT gateway SNATs the private IP address and source port of a virtual machine (or other compute resource) to a static public IP address before going outbound to the internet from a virtual network. When making connections to the same destination endpoint, a different source port is used for the connection so that connections can be distinguished from one another. SNAT port exhaustion occurs when a source endpoint has run out of available SNAT ports to differentiate between new connections.
+
+### Example SNAT flows for NAT gateway
+ The following example flows explain the basic concept of SNAT and how it works with NAT gateway. In the table below the VM is making connections to destination IP 65.52.0.1 from the following source tuples (IPs and ports):
When NAT gateway is configured with public IP address 65.52.1.1, the source IPs
The source IP address and port of each flow is SNAT'd to the public IP address 65.52.1.1 (source tuple after SNAT) and to a different port for each new connection going to the same destination endpoint. The act of NAT gateway replacing all of the source ports and IPs with the public IP and port before connecting to the internet is known as *IP masquerading* or *port masquerading*. Multiple private sources are masqueraded behind a public IP.
-#### Source (SNAT) port reuse
-
-For NAT gateway, 64,512 SNAT ports are available per public IP address. For each public IP address attached to NAT gateway, the entire inventory of ports provided by those IPs is made available to any virtual machine instance within a subnet that is also attached to NAT gateway. NAT gateway selects a port at random out of the available inventory of ports to make new outbound connections. If NAT gateway doesn't find any available SNAT ports, then it will reuse a SNAT port. A port can be reused so long as it's going to a different destination endpoint. As mentioned in the [Performance](#performance) section, NAT gateway supports up to 50,000 concurrent connections per public IP address to the same destination endpoint over the internet.
-
-The following illustrates this concept as an additional flow to the preceding set, with a VM flowing to a new destination IP 65.52.0.2.
-
-| Flow | Source tuple | Destination tuple |
-|::|::|::|
-| 4 | 192.168.0.16:4285 | 65.52.0.2:80 |
-
-A NAT gateway will likely translate flow 4 to a source port that may be used for other destinations as well. See [Scale NAT](#scale-nat) for more discussion on correctly sizing your IP address provisioning.
-
-| Flow | Source tuple | Source tuple after SNAT | Destination tuple |
-|::|::|::|::|
-| 4 | 192.168.0.16:4285 | 65.52.1.1:**1234** | 65.52.0.2:80 |
-
-Don't take a dependency on the specific way source ports are assigned in the above example. The preceding is an illustration of the fundamental concept only.
-
-SNAT provided by NAT is different from SNAT provided by a [load balancer](../../load-balancer/load-balancer-outbound-connections.md) in several aspects, including:
--- NAT gateway dynamically allocates SNAT ports across all VMs within a NAT gateway configured subnet whereas Load Balancer pre-allocates a fixed number of SNAT ports to each VM.--- NAT gateway selects source ports at random for outbound traffic flow whereas Load Balancer selects ports sequentially.
+### NAT gateway dynamically allocates SNAT ports
-- NAT gateway reuses SNAT ports for connections to different destination endpoints if no other source ports are available, whereas Load Balancer looks to select the lowest available SNAT port in sequential order.-
-### On-demand
-
-NAT provides on-demand SNAT ports for new outbound traffic flows. All available SNAT ports in inventory can be used by any virtual machine on subnets configured with NAT:
+NAT gateway dynamically allocates SNAT ports across a subnet's resources (that is, virtual machines). SNAT port inventory is made available by attaching public IP addresses to NAT gateway. All available SNAT ports in inventory can be used by any virtual machine on subnets configured with NAT gateway:
:::image type="content" source="./media/nat-overview/lb-vnnat-chart.png" alt-text="Diagram that depicts the inventory of all available SNAT ports used by any VM on subnets configured with NAT."::: *Figure: Virtual Network NAT on-demand outbound SNAT*
-Any IP configuration of a virtual machine can create outbound flows on-demand as needed. Pre-allocation of SNAT ports to each virtual machine isn't required.
+Pre-allocation of SNAT ports to each virtual machine isn't required, which means SNAT ports aren't left unused by VMs not actively needing them.
:::image type="content" source="./media/nat-overview/exhaustion-threshold.png" alt-text="Diagram that depicts the inventory of all available SNAT ports used by any VM on subnets configured with NAT with an exhaustion threshold."::: *Figure: Differences in exhaustion scenarios*
-After a SNAT port is released, it's available for use by any VM on subnets configured with NAT. On-demand allocation allows dynamic and divergent workloads on subnets to use SNAT ports as needed. As long as SNAT ports are available, SNAT flows will succeed. SNAT port hot spots benefit from a larger inventory. SNAT ports aren't left unused for VMs not actively needing them.
+After a SNAT port is released, it's available for use by any VM on subnets configured with NAT. On-demand allocation allows dynamic and divergent workloads on subnets to use SNAT ports as needed. As long as SNAT ports are available, SNAT flows will succeed.
-### Scale NAT
+### Source (SNAT) port reuse
-Scaling NAT is primarily a function of managing the shared, available SNAT port inventory. NAT needs sufficient SNAT port inventory for expected peak outbound flows for all subnets that are attached to a NAT gateway. You can use public IP addresses, public IP prefixes, or both to create SNAT port inventory.
+NAT gateway selects a port at random out of the available inventory of ports to make new outbound connections. If NAT gateway doesn't find any available SNAT ports, then it will reuse a SNAT port. A port can be reused so long as there is no existing connection going to the same destination IP and port.
-> [!NOTE]
-> If you assign a public IP prefix, the entire public IP prefix is used. You can't assign a public IP prefix and then break out individual IP addresses to assign to other resources. If you want to assign individual IP addresses from a public IP prefix to multiple resources, you need to create individual public IP addresses and assign them as needed instead of using the public IP prefix itself.
+The following illustrates this concept as an additional flow to the preceding set, with a VM flowing to a new destination IP 65.52.0.2.
-SNAT maps private addresses to one or more public IP addresses, rewriting the source address and source port in the process. A single NAT gateway can scale up to 16 IP addresses. If a public IP prefix is provided, each IP address within the prefix provides SNAT port inventory. Adding more public IP addresses increases the available inventory of SNAT ports. TCP and UDP are separate SNAT port inventories and are unrelated to NAT gateway.
+| Flow | Source tuple | Destination tuple |
+|::|::|::|
+| 4 | 192.168.0.16:4285 | 65.52.0.2:80 |
-NAT gateway opportunistically reuses source (SNAT) ports. When you scale your workload, assume that each flow requires a new SNAT port, and then scale the total number of available IP addresses for outbound traffic. Carefully consider the scale you're designing for, and then allocate IP addresses quantities accordingly.
+A NAT gateway will translate flow 4 to a source port that may already be in use for other destinations as well. See [Scale NAT gateway](#scale-nat-gateway) for more discussion on correctly sizing your IP address provisioning.
-SNAT ports to different destinations are most likely to be reused when possible. As SNAT port exhaustion approaches, flows may not succeed.
+| Flow | Source tuple | Source tuple after SNAT | Destination tuple |
+|::|::|::|::|
+| 4 | 192.168.0.16:4285 | 65.52.1.1:**1234** | 65.52.0.2:80 |
-For a SNAT example, see [SNAT fundamentals](#source-network-address-translation).
+Don't take a dependency on the specific way source ports are assigned in the above example. The preceding is an illustration of the fundamental concept only.
-### Protocols
+## Timers
-NAT gateway interacts with IP and IP transport headers of UDP and TCP flows. NAT gateway is agnostic to application layer payloads. Other IP protocols aren't supported.
+### Port reuse timers
+
+Port reuse timers determine how long after a connection closes a source port is held down before NAT gateway can reuse it to connect to the same destination endpoint.
-### Timers
+The following table provides information about when a TCP port becomes available for reuse to the same destination endpoint by NAT gateway.
-TCP timers determine the amount of time a connection is held between two endpoints before it's terminated and the port is available for reuse. Depending on the type of packet sent by either endpoint, a specific type of timer will be triggered.
+| Timer | Description | Value |
+|---|---|---|
+| TCP FIN | After a connection is closed by a TCP FIN packet, a 65-second timer is activated that holds down the SNAT port. The SNAT port will be available for reuse after the timer ends. | 65 seconds |
+| TCP RST | After a connection is closed by a TCP RST (reset) packet, a 20-second timer is activated that holds down the SNAT port. When the timer ends, the port is available for reuse. | 20 seconds |
+| TCP half open | During connection establishment, where one connection endpoint is waiting for acknowledgment from the other endpoint, a 25-second timer is activated. If no traffic is detected, the connection closes. Once the connection has closed, the source port is available for reuse to the same destination endpoint. | 25 seconds |
-The following timers indicate how long a connection is maintained before closing and releasing the destination SNAT port for reuse:
+For UDP traffic, after a connection closes, the port is held down for 65 seconds before it's available for reuse.
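To make the hold-down behavior concrete, here's a minimal bookkeeping sketch using the values from the table above (65 seconds for TCP FIN and UDP, 20 seconds for TCP RST). The function names and data structure are illustrative assumptions, not how NAT gateway is implemented:

```powershell
# Sketch (assumed bookkeeping, not NAT gateway internals): track when a
# released port becomes reusable for the same destination endpoint.
$holdDownSeconds = @{ TcpFin = 65; TcpRst = 20; Udp = 65 }
$heldPorts       = @{}   # key: "port->destination", value: hold-down end time

function Add-PortHoldDown {
    param([string]$Key, [string]$Reason)
    $heldPorts[$Key] = (Get-Date).AddSeconds($holdDownSeconds[$Reason])
}

function Test-PortReusable {
    param([string]$Key)
    (-not $heldPorts.ContainsKey($Key)) -or ((Get-Date) -ge $heldPorts[$Key])
}

Add-PortHoldDown -Key '1234->65.52.0.1:80' -Reason 'TcpFin'
Test-PortReusable -Key '1234->65.52.0.1:80'   # False until 65 seconds pass
```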
+
+### Idle timeout timers
| Timer | Description | Value |
|---|---|---|
-| TCP FIN | Occurs when the private side of NAT initiates termination of a TCP connection. A timer is set after the FIN packet is sent by the public endpoint. This timer allows the private endpoint time to resend an ACK (acknowledgment) packet should it be lost. Once the timer ends, the connection is closed. | 60 seconds |
-| TCP RST | Occurs when the private side of NAT sends an RST (reset) packet in an attempt to communicate on the TCP connection. If the RST packet isn't received by the public side of NAT, or the RST packet is returned to the private endpoint, the connection will time out and close. The public side of NAT doesn't generate TCP RST packets or any other traffic. | 10 seconds |
-| TCP half open | Occurs when the public endpoint is waiting for acknowledgment from the private endpoint that the connection between the two is fully bidirectional. | 30 seconds |
| TCP idle timeout | TCP connections can go idle when no data is transmitted between either endpoint for a prolonged period of time. A timer can be configured from 4 minutes (default) to 120 minutes (2 hours) to time out a connection that has gone idle. Traffic on the flow will reset the idle timeout timer. | Configurable; 4 minutes (default) - 120 minutes |

> [!NOTE]
> These timer settings are subject to change. The values are provided to help with troubleshooting and you should not take a dependency on specific timers at this time.
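If the default four-minute TCP idle timeout is too short for your workload, it can be set when deploying a NAT gateway with Azure PowerShell. A minimal sketch; the resource names, resource group, and region below are placeholders:

```powershell
# Sketch: create a NAT gateway with a 30-minute TCP idle timeout.
# Names, group, and region are placeholder values.
$pip = New-AzPublicIpAddress -Name "natPip" -ResourceGroupName "myRG" `
    -Location "eastus2" -Sku Standard -AllocationMethod Static
New-AzNatGateway -Name "myNatGateway" -ResourceGroupName "myRG" `
    -Location "eastus2" -Sku Standard -PublicIpAddress $pip `
    -IdleTimeoutInMinutes 30
```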
-After a SNAT port is no longer in use, it's available for reuse to the same destination IP address and port after 5 seconds.
-
-#### Timer considerations
+### Timer considerations
Design recommendations for configuring timers:
- To upgrade a basic public IP address to standard, see [Upgrade a public IP address](../ip-services/public-ip-upgrade-portal.md)
+- NAT gateway doesn't support ICMP.
+ - IP fragmentation isn't available for NAT gateway.

## Next steps
vpn-gateway Vpn Gateway Create Site To Site Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md
Next, create the Site-to-Site VPN connection between your virtual network gateway
```azurepowershell-interactive
New-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName TestRG1 `
-Location 'East US' -VirtualNetworkGateway1 $gateway1 -LocalNetworkGateway2 $local `
- -ConnectionType IPsec -RoutingWeight 10 -SharedKey 'abc123'
+ -ConnectionType IPsec -SharedKey 'abc123'
```

After a short while, the connection will be established.
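To confirm the tunnel came up, you can query the connection's state. A quick check along these lines (reusing the names from the example above) should show a `ConnectionStatus` of *Connected* once the connection is established:

```azurepowershell-interactive
# Check the connection state; ConnectionStatus shows "Connected" once the
# tunnel is up (it can take a few minutes after creation).
Get-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName TestRG1 |
    Select-Object Name, ConnectionStatus
```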
vpn-gateway Vpn Gateway Forced Tunneling Rm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-forced-tunneling-rm.md
Forced tunneling in Azure is configured using virtual network custom user-defined routes.
* **Local VNet routes:** Directly to the destination VMs in the same virtual network.
* **On-premises routes:** To the Azure VPN gateway.
* **Default route:** Directly to the Internet. Packets destined to the private IP addresses not covered by the previous two routes are dropped.
-* This procedure uses user-defined routes (UDR) to create a routing table to add a default route, and then associate the routing table to your VNet subnet(s) to enable forced tunneling on those subnets.
* Forced tunneling must be associated with a VNet that has a route-based VPN gateway. Your forced tunneling configuration will override the default route for any subnet in its VNet. You need to set a "default site" among the cross-premises local sites connected to the virtual network. Also, the on-premises VPN device must be configured using 0.0.0.0/0 as traffic selectors.
* ExpressRoute forced tunneling is not configured via this mechanism, but instead, is enabled by advertising a default route via the ExpressRoute BGP peering sessions. For more information, see the [ExpressRoute Documentation](https://azure.microsoft.com/documentation/services/expressroute/).
-* When having both VPN Gateway and ExpressRoute Gateway deployed in the same VNet, user-defined routes (UDR) is no longer needed as ExpressRoute Gateway will advertise configured "default site" into VNet.
## Configuration overview
Install the latest version of the Azure Resource Manager PowerShell cmdlets. See
$lng2 = New-AzLocalNetworkGateway -Name "Branch1" -ResourceGroupName "ForcedTunneling" -Location "North Europe" -GatewayIpAddress "111.111.111.112" -AddressPrefix "192.168.2.0/24"
$lng3 = New-AzLocalNetworkGateway -Name "Branch2" -ResourceGroupName "ForcedTunneling" -Location "North Europe" -GatewayIpAddress "111.111.111.113" -AddressPrefix "192.168.3.0/24"
$lng4 = New-AzLocalNetworkGateway -Name "Branch3" -ResourceGroupName "ForcedTunneling" -Location "North Europe" -GatewayIpAddress "111.111.111.114" -AddressPrefix "192.168.4.0/24"
- ```
-4. Create the route table and route rule.
-
- ```powershell
-    New-AzRouteTable -Name "MyRouteTable" -ResourceGroupName "ForcedTunneling" -Location "North Europe"
-    $rt = Get-AzRouteTable -Name "MyRouteTable" -ResourceGroupName "ForcedTunneling"
- Add-AzRouteConfig -Name "DefaultRoute" -AddressPrefix "0.0.0.0/0" -NextHopType VirtualNetworkGateway -RouteTable $rt
- Set-AzRouteTable -RouteTable $rt
- ```
-5. Associate the route table to the Midtier and Backend subnets.
-
- ```powershell
- $vnet = Get-AzVirtualNetwork -Name "MultiTier-Vnet" -ResourceGroupName "ForcedTunneling"
- Set-AzVirtualNetworkSubnetConfig -Name "MidTier" -VirtualNetwork $vnet -AddressPrefix "10.1.1.0/24" -RouteTable $rt
- Set-AzVirtualNetworkSubnetConfig -Name "Backend" -VirtualNetwork $vnet -AddressPrefix "10.1.2.0/24" -RouteTable $rt
- Set-AzVirtualNetwork -VirtualNetwork $vnet
- ```
-6. Create the virtual network gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. If you see ValidateSet errors regarding the GatewaySKU value, verify that you have installed the [latest version of the PowerShell cmdlets](#before). The latest version of the PowerShell cmdlets contains the new validated values for the latest Gateway SKUs.
+ `````
+4. Create the virtual network gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. If you see ValidateSet errors regarding the GatewaySKU value, verify that you have installed the [latest version of the PowerShell cmdlets](#before). The latest version of the PowerShell cmdlets contains the new validated values for the latest Gateway SKUs.
```powershell
$pip = New-AzPublicIpAddress -Name "GatewayIP" -ResourceGroupName "ForcedTunneling" -Location "North Europe" -AllocationMethod Dynamic
$ipconfig = New-AzVirtualNetworkGatewayIpConfig -Name "gwIpConfig" -SubnetId $gwsubnet.Id -PublicIpAddressId $pip.Id
New-AzVirtualNetworkGateway -Name "Gateway1" -ResourceGroupName "ForcedTunneling" -Location "North Europe" -IpConfigurations $ipconfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -EnableBgp $false
```
-7. Assign a default site to the virtual network gateway. The **-GatewayDefaultSite** is the cmdlet parameter that allows the forced routing configuration to work, so take care to configure this setting properly.
+5. Assign a default site to the virtual network gateway. The **-GatewayDefaultSite** is the cmdlet parameter that allows the forced routing configuration to work, so take care to configure this setting properly.
```powershell
$LocalGateway = Get-AzLocalNetworkGateway -Name "DefaultSiteHQ" -ResourceGroupName "ForcedTunneling"
$VirtualGateway = Get-AzVirtualNetworkGateway -Name "Gateway1" -ResourceGroupName "ForcedTunneling"
Set-AzVirtualNetworkGatewayDefaultSite -GatewayDefaultSite $LocalGateway -VirtualNetworkGateway $VirtualGateway
```
-8. Establish the Site-to-Site VPN connections.
+6. Establish the Site-to-Site VPN connections.
```powershell
$gateway = Get-AzVirtualNetworkGateway -Name "Gateway1" -ResourceGroupName "ForcedTunneling"
Get-AzVirtualNetworkGatewayConnection -Name "Connection1" -ResourceGroupName "ForcedTunneling"
```
+
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
Yes, but you must configure BGP on both tunnels to the same location.
Yes, Azure VPN gateway will honor AS Path prepending to help make routing decisions when BGP is enabled. A shorter AS Path will be preferred in BGP path selection.
+### Can I use the RoutingWeight property when creating a new VPN VirtualNetworkGateway connection?
+
+No. This setting is reserved for ExpressRoute gateway connections. If you want to influence routing decisions between multiple connections, use AS Path prepending instead.
+ ### Can I use Point-to-Site VPNs with my virtual network with multiple VPN tunnels? Yes, Point-to-Site (P2S) VPNs can be used with the VPN gateways connecting to multiple on-premises sites and other virtual networks.