Updates from: 01/06/2022 02:07:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Claimsschema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/claimsschema.md
Title: ClaimsSchema - Azure Active Directory B2C
+ Title: "ClaimsSchema: Azure Active Directory B2C"
description: Specify the ClaimsSchema element of a custom policy in Azure Active Directory B2C.
Last updated 03/05/2020

+ # ClaimsSchema
The **DataType** element supports the following values:
| Type | Description |
| - | -- |
|boolean|Represents a Boolean (`true` or `false`) value.|
|date|Represents an instant in time, typically expressed as a date of a day. The value of the date follows ISO 8601 convention.|
-|dateTime|Represents an instant in time, typically expressed as a date and time of day. The value of the date follows ISO 8601 convention.|
+|dateTime|Represents an instant in time, typically expressed as a date and time of day. The value of the date follows ISO 8601 convention during runtime and is converted to UNIX epoch time when issued as a claim into the token.|
|duration|Represents a time interval in years, months, days, hours, minutes, and seconds. The format is `PnYnMonDTnHnMnS`, where `P` indicates a positive value, or `N` a negative value. `nY` is the number of years followed by a literal `Y`. `nMo` is the number of months followed by a literal `Mo`. `nD` is the number of days followed by a literal `D`. Examples: `P21Y` represents 21 years. `P1Y2Mo` represents one year, and two months. `P1Y2Mo5D` represents one year, two months, and five days. `P1Y2Mo5DT8H5M20S` represents one year, two months, five days, eight hours, five minutes, and twenty seconds.|
|phoneNumber|Represents a phone number.|
|int|Represents a number between -2,147,483,648 and 2,147,483,647.|
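The runtime-versus-token behavior the updated `dateTime` row describes can be sketched in Python (an illustrative sketch, not B2C code): the policy works with an ISO 8601 timestamp at runtime, and the claim issued into the token carries the UNIX epoch value.

```python
from datetime import datetime

def iso8601_to_epoch(iso_value: str) -> int:
    """Convert an ISO 8601 dateTime (runtime form) to UNIX epoch seconds (token form)."""
    # fromisoformat doesn't accept a trailing "Z" before Python 3.11, so normalize it.
    dt = datetime.fromisoformat(iso_value.replace("Z", "+00:00"))
    return int(dt.timestamp())

print(iso8601_to_epoch("1970-01-02T00:00:00Z"))  # 86400
```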
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-user-input.md
The application claims are values that are returned to the application. Update y
1. Select **Page layouts**.
1. Select **Local account sign-up page**.
1. Under **User attributes**, select **City**.
- 1. In the **User input type** drop-down, select **DropdownSingleSelect**. Optional: Use the "Move up/down" buttons to arrange the text order on the sign-up page.
1. In the **Optional** drop-down, select **No**.
+ 1. In the **User input type**, select the current user input type, such as **TextBox**, to open a **User input type editor** window pane.
+ 1. In the **User input type** drop-down, select **DropdownSingleSelect**.
+ 1. In **Text** and **Values**, enter the text and value pairs that make up the set of responses for the attribute. The **Text** is displayed in the web interface for your flow, and the **Value** is stored in Azure AD B2C for the selected **Text**. Optional: Use the "Move up/down" buttons to reorder the drop-down items.
+1. Select **Ok**. Optional: Use the "Move up/down" buttons to reorder user attributes in the sign-up page.
1. Select **Save**.
+ :::image type="content" source="./media/configure-user-input/configure-user-attributes-input-type.png" alt-text="Screenshot of the user input type editor pane.":::
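The **Text**/**Values** pairing in the steps above behaves like a display-to-stored mapping. A minimal Python sketch (the city names and codes here are made up for illustration):

```python
# Hypothetical Text -> Value pairs for a "City" drop-down.
# Text is what the sign-up page displays; Value is what
# Azure AD B2C stores for the claim when that Text is selected.
city_options = {
    "New York": "NY",
    "London": "LON",
    "Paris": "PAR",
}

selected_text = "London"                    # the user's choice in the UI
stored_value = city_options[selected_text]  # the value written to the claim
print(stored_value)  # LON
```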
+ ### Provide a list of values by using localized collections
+
+ To provide a set list of values for the city attribute:
After you add the localization element, [edit the content definition with the lo
- Learn how to [use custom attributes in Azure AD B2C](user-flow-custom-attributes.md). ::: zone-end+
+## Next steps
+- [Customize user interface in Azure Active Directory B2C](customize-ui.md).
+- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md).
+- [Enable JavaScript](javascript-and-page-layout.md).
+
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
Follow these steps to add a custom domain to your Azure AD B2C tenant:
> You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use [Azure DNS zone](../dns/dns-getstarted-portal.md), or [App Service domains](../app-service/manage-custom-dns-buy-domain.md).

1. [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name). Verify each subdomain or hostname you plan to use. For example, to be able to sign in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not the top-level domain *contoso.com*.
- After the domain is verified, **delete** the DNS TXT record you created.
+
+ > [!IMPORTANT]
+ > After the domain is verified, **delete** the DNS TXT record you created.
## Step 2. Create a new Azure Front Door instance
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
To enable users to sign in using an Azure AD account, you need to define Azure A
You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsProvider** element in the extension file of your policy.
-1. Open the *SocialAndLocalAccounts/**TrustFrameworkExtensions.xml*** file.
+1. Open the *SocialAndLocalAccounts/**TrustFrameworkExtensions.xml*** file (see the files you've used in the prerequisites).
1. Find the **ClaimsProviders** element. If it does not exist, add it under the root element.
1. Add a new **ClaimsProvider** as follows:
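The **ClaimsProvider** XML itself is elided in this change summary. The "find the element, add it under the root if missing" step can be sketched with Python's standard `xml.etree` module (a simplified illustration: real *TrustFrameworkPolicy* files use XML namespaces, which this sketch ignores):

```python
import xml.etree.ElementTree as ET

def ensure_claims_providers(policy_xml: str) -> ET.Element:
    """Return the root's ClaimsProviders element, creating it under the root if absent."""
    root = ET.fromstring(policy_xml)
    claims_providers = root.find("ClaimsProviders")
    if claims_providers is None:
        claims_providers = ET.SubElement(root, "ClaimsProviders")
    return claims_providers

# A stripped-down policy without the element: it gets created.
element = ensure_claims_providers("<TrustFrameworkPolicy></TrustFrameworkPolicy>")
print(element.tag)  # ClaimsProviders
```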
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/how-to-mfa-additional-context.md
Your organization will need to enable Microsoft Authenticator push notifications
When a user receives a Passwordless phone sign-in or MFA push notification in the Microsoft Authenticator app, they'll see the name of the application that requests the approval and the app location based on its IP address.
-![Screenshot of additional context in the MFA push notification.](media/howto-authentication-passwordless-phone/location.png)
The additional context can be combined with [number matching](how-to-mfa-number-match.md) to further improve sign-in security.
-![Screenshot of additional context with number matching in the MFA push notification.](media/howto-authentication-passwordless-phone/location-with-number-match.png)
### Policy schema changes
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 11/12/2021 Last updated : 01/05/2022
The user is then presented with a number. The app prompts the user to authentica
After the user has utilized passwordless phone sign-in, the app continues to guide the user through this method. However, the user will see the option to choose another method.

## Known Issues
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
This section gives an overview of the code required to sign in users and call th
The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process initializes:

```csharp
- // Get the scopes from the configuration (appsettings.json)
- var initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');
public void ConfigureServices(IServiceCollection services)
- {
+ {
+ // Get the scopes from the configuration (appsettings.json)
+ var initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');
+
        // Add sign-in with Microsoft
        services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
            .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-breaking-changes.md
# What's new for authentication?
-> Get notified of updates to this page by pasting this URL into your RSS feed reader:<br/>`https://docs.microsoft.com/api/search/rss?search=%22whats%20new%20for%20authentication%22&locale=en-us`
+> Get notified of updates to this page by pasting this URL into your RSS feed reader:<br/>`https://docs.microsoft.com/api/search/rss?search=%22Azure+Active+Directory+breaking+changes+reference%22&locale=en-us`
The authentication system alters and adds features on an ongoing basis to improve security and standards compliance. To stay up to date with the most recent developments, this article provides you with information about the following details:
active-directory Directory Overview User Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/directory-overview-user-model.md
In Azure AD, when users join a licensed group, they're automatically assigned th
If there are not enough available licenses, or an issue occurs like service plans that can't be assigned at the same time, you can see status of any licensing issue for the group in the Azure portal.
->[!NOTE]
->The group-based licensing feature currently is in public preview. During the preview, the feature is available with any paid Azure Active Directory (Azure AD) license plan or trial.
-
## Delegate administrator roles

Many large organizations want options for their users to obtain sufficient permissions for their work tasks without assigning the powerful Global Administrator role to, for example, users who must register applications. Here's an example of new Azure AD administrator roles to help you distribute the work of application management with more granularity:
Azure AD also gives you granular control of the data that flows between the app
If you're a beginning Azure AD administrator, get the basics down in [Azure Active Directory Fundamentals](../fundamentals/index.yml).
-Or you can start [creating groups](../fundamentals/active-directory-groups-create-azure-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context), [assigning licenses](../fundamentals/license-users-groups.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context), [assigning app access](../manage-apps/assign-user-or-group-access-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) or [assigning administrator roles](../roles/permissions-reference.md).
+Or you can start [creating groups](../fundamentals/active-directory-groups-create-azure-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context), [assigning licenses](../fundamentals/license-users-groups.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context), [assigning app access](../manage-apps/assign-user-or-group-access-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) or [assigning administrator roles](../roles/permissions-reference.md).
active-directory Choose Ad Authn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/choose-ad-authn.md
Title: Authentication for Azure AD hybrid identity solutions
description: This guide helps CEOs, CIOs, CISOs, Chief Identity Architects, Enterprise Architects, and Security and IT decision makers responsible for choosing an authentication method for their Azure AD hybrid identity solution in medium to large organizations. keywords:-- Previously updated : 10/30/2019++ Last updated : 01/05/2022
In todayΓÇÖs world, threats are present 24 hours a day and come from everywhere.
[Get started](../fundamentals/active-directory-whatis.md) with Azure AD and deploy the right authentication solution for your organization.
-If you're thinking about migrating from federated to cloud authentication, learn more about [changing the sign-in method](../../active-directory/hybrid/plan-connect-user-signin.md). To help you plan and implement the migration, use [these project deployment plans](../fundamentals/active-directory-deployment-plans.md) or consider using the new [Staged Rollout](../../active-directory/hybrid/how-to-connect-staged-rollout.md) feature to migrate federated users to using cloud authentication in a staged approach.
+If you're thinking about migrating from federated to cloud authentication, learn more about [changing the sign-in method](../../active-directory/hybrid/plan-connect-user-signin.md). To help you plan and implement the migration, use [these project deployment plans](../fundamentals/active-directory-deployment-plans.md) or consider using the new [Staged Rollout](../../active-directory/hybrid/how-to-connect-staged-rollout.md) feature to migrate federated users to using cloud authentication in a staged approach.
active-directory Cloud Governed Management For On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/cloud-governed-management-for-on-premises.md
na Previously updated : 05/29/2020 Last updated : 01/05/2022
In hybrid environments, Microsoft's strategy is to enable deployments where the
## Next steps
-For more information on how to get started on this journey, see the Azure AD deployment plans, located at <https://aka.ms/deploymentplans>. They provide end-to-end guidance about how to deploy Azure Active Directory (Azure AD) capabilities. Each plan explains the business value, planning considerations, design, and operational procedures needed to successfully roll out common Azure AD capabilities. Microsoft continually updates the deployment plans with best practices learned from customer deployments and other feedback when we add new capabilities to managing from the cloud with Azure AD.
+For more information on how to get started on this journey, see the Azure AD deployment plans, located at <https://aka.ms/deploymentplans>. They provide end-to-end guidance about how to deploy Azure Active Directory (Azure AD) capabilities. Each plan explains the business value, planning considerations, design, and operational procedures needed to successfully roll out common Azure AD capabilities. Microsoft continually updates the deployment plans with best practices learned from customer deployments and other feedback when we add new capabilities to managing from the cloud with Azure AD.
active-directory Concept Adsync Service Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-adsync-service-account.md
na Previously updated : 03/17/2021 Last updated : 01/05/2022
The sync service can run under different accounts. It can run under a Virtual Se
|Type of account|Installation option|Description|
|--|--|--|
|Virtual Service Account|Express and custom, 2017 April and later|A Virtual Service Account is used for all express installations, except for installations on a Domain Controller. When using custom installation, it is the default option unless another option is used.|
-|Managed Service Account|Custom, 2017 April and later|If you use a remote SQL Server, then we recommend using a group Managed Service Account. |
+|Managed Service Account|Custom, 2017 April and later|If you use a remote SQL Server, then we recommend using a group managed service account. |
|Managed Service Account|Express and custom, 2021 March and later|A standalone Managed Service Account prefixed with ADSyncMSA_ is created during installation for express installations when installed on a Domain Controller. When using custom installation, it is the default option unless another option is used.|
|User Account|Express and custom, 2017 April to 2021 March|A User Account prefixed with AAD_ is created during installation for express installations when installed on a Domain Controller. When using custom installation, it is the default option unless another option is used.|
|User Account|Express and custom, 2017 March and earlier|A User Account prefixed with AAD_ is created during installation for express installations. When using custom installation, another account can be specified.|
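The table above can be distilled into a small decision sketch (illustrative only; the function name and flags are made up, and this reflects the 2021 March and later installer behavior):

```python
def recommended_service_account(remote_sql: bool, on_domain_controller: bool) -> str:
    """Rough rule of thumb distilled from the account-type table above."""
    if remote_sql:
        return "group Managed Service Account"       # recommended with a remote SQL Server
    if on_domain_controller:
        return "standalone Managed Service Account"  # ADSyncMSA_-prefixed; VSAs fail on a DC
    return "Virtual Service Account"                 # default for express installations

print(recommended_service_account(remote_sql=False, on_domain_controller=True))
```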
A Virtual Service Account is a special type of managed local account that does n
![Virtual service account](media/concept-adsync-service-account/account-1.png)
-The Virtual Service Account is intended to be used with scenarios where the sync engine and SQL are on the same server. If you use remote SQL, then we recommend using a group Managed Service Account instead.
+The Virtual Service Account is intended to be used with scenarios where the sync engine and SQL are on the same server. If you use remote SQL, then we recommend using a group managed service account instead.
The Virtual Service Account cannot be used on a Domain Controller due to [Windows Data Protection API (DPAPI)](/previous-versions/ms995355(v=msdn.10)) issues.

## Managed Service Account
-If you use a remote SQL Server, then we recommend to using a group Managed Service Account. For more information on how to prepare your Active Directory for group Managed Service account, see [Group Managed Service Accounts Overview](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)).
+If you use a remote SQL Server, then we recommend using a group managed service account. For more information on how to prepare your Active Directory for a group managed service account, see [Group Managed Service Accounts Overview](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)).
To use this option, on the [Install required components](how-to-connect-install-custom.md#install-required-components) page, select **Use an existing service account**, and select **Managed Service Account**.
This account is intended to be used with scenarios where the sync engine and SQL
## User Account

A local service account is created by the installation wizard (unless you specify the account to use in custom settings). The account is prefixed AAD_ and is used for the actual sync service to run as. If you install Azure AD Connect on a Domain Controller, the account is created in the domain. The AAD_ service account must be located in the domain if:
-- you use a remote server running SQL Server
-- you use a proxy that requires authentication
+- You use a remote server running SQL Server
+- You use a proxy that requires authentication
![user account](media/concept-adsync-service-account/account-3.png)
The account is also granted permission to files, registry keys, and other object
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Concept Azure Ad Connect Sync Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-azure-ad-connect-sync-architecture.md
na Previously updated : 07/13/2017 Last updated : 01/05/2022
active-directory Concept Azure Ad Connect Sync Declarative Provisioning Expressions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-azure-ad-connect-sync-declarative-provisioning-expressions.md
na Previously updated : 07/18/2017 Last updated : 01/05/2022
For example:
**Reference topics**
-* [Azure AD Connect sync: Functions Reference](reference-connect-sync-functions-reference.md)
+* [Azure AD Connect sync: Functions Reference](reference-connect-sync-functions-reference.md)
active-directory Concept Azure Ad Connect Sync Declarative Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-azure-ad-connect-sync-declarative-provisioning.md
na Previously updated : 07/13/2017 Last updated : 01/05/2022
active-directory Concept Azure Ad Connect Sync Default Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-azure-ad-connect-sync-default-configuration.md
na Previously updated : 07/13/2017 Last updated : 01/05/2022
active-directory Concept Azure Ad Connect Sync User And Contacts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-azure-ad-connect-sync-user-and-contacts.md
na Previously updated : 01/15/2018 Last updated : 01/05/2022
active-directory How To Connect Adconnectivitytools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-adconnectivitytools.md
Previously updated : 4/25/2019 Last updated : 01/05/2022
active-directory How To Connect Azure Ad Trust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-azure-ad-trust.md
na Previously updated : 07/28/2018 Last updated : 01/05/2022
active-directory How To Connect Azureadaccount https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-azureadaccount.md
na Previously updated : 04/25/2019 Last updated : 01/05/2022
active-directory How To Connect Configure Ad Ds Connector Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md
Previously updated : 08/20/2021 Last updated : 01/05/2022
active-directory How To Connect Create Custom Sync Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-create-custom-sync-rule.md
na Previously updated : 01/31/2019 Last updated : 01/05/2022
You can use the synchronization rule editor to edit or create a new synchronizat
## Next Steps

- [Azure AD Connect sync](how-to-connect-sync-whatis.md).
-- [What is hybrid identity?](whatis-hybrid-identity.md).
+- [What is hybrid identity?](whatis-hybrid-identity.md).
active-directory How To Connect Device Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-device-options.md
na Previously updated : 09/13/2018 Last updated : 01/05/2022
The following documentation provides information about the various device option
## Next steps

* [Configure Hybrid Azure AD join](../devices/hybrid-azuread-join-plan.md)
-* [Configure / Disable device writeback](how-to-connect-device-writeback.md)
+* [Configure / Disable device writeback](how-to-connect-device-writeback.md)
active-directory How To Connect Device Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-device-writeback.md
na Previously updated : 05/08/2018 Last updated : 01/05/2022
Verify configuration in Active Directory:
* [Setting up On-premises Conditional Access using Azure Active Directory Device Registration](../devices/overview.md)

## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-emergency-ad-fs-certificate-rotation.md
Previously updated : 03/22/2021 Last updated : 01/05/2022
active-directory How To Connect Fed Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-compatibility.md
na Previously updated : 08/23/2018 Last updated : 01/05/2022
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Previously updated : 12/13/2021 Last updated : 01/05/2022
active-directory How To Connect Fed Hybrid Azure Ad Join Post Config Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-hybrid-azure-ad-join-post-config-tasks.md
na Previously updated : 08/10/2018 Last updated : 01/05/2022
active-directory How To Connect Fed Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-management.md
na Previously updated : 07/18/2017 Last updated : 01/05/2022
In this rule, you're simply checking the temporary flag **idflag**. You decide w
You can add more than one domain to be federated by using Azure AD Connect, as described in [Add a new federated domain](how-to-connect-fed-management.md#addfeddomain). Azure AD Connect version 1.1.553.0 and later creates the correct claim rule for issuerID automatically. If you cannot use Azure AD Connect version 1.1.553.0 or later, it is recommended that the [Azure AD RPT Claim Rules](https://aka.ms/aadrptclaimrules) tool is used to generate and set correct claim rules for the Azure AD relying party trust.

## Next steps
-Learn more about [user sign-in options](plan-connect-user-signin.md).
+Learn more about [user sign-in options](plan-connect-user-signin.md).
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-o365-certs.md
na Previously updated : 10/20/2017 Last updated : 01/05/2022
By default, AD FS is configured to generate token signing and token decryption c
Azure AD tries to retrieve a new certificate from your federation service metadata 30 days before the expiry of the current certificate. In case a new certificate is not available at that time, Azure AD will continue to monitor the metadata on regular daily intervals. As soon as the new certificate is available in the metadata, the federation settings for the domain are updated with the new certificate information. You can use `Get-MsolDomainFederationSettings` to verify if you see the new certificate in the NextSigningCertificate / SigningCertificate.
-For more information on Token Signing certificates in AD FS see [Obtain and Configure Token Signing and Token Decryption Certificates for AD FS](/windows-server/identity/ad-fs/operations/configure-ts-td-certs-ad-fs)
+For more information on Token Signing certificates in AD FS see [Obtain and Configure Token Signing and Token Decryption Certificates for AD FS](/windows-server/identity/ad-fs/operations/configure-ts-td-certs-ad-fs)
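The 30-day retrieval window described above can be sketched as a simple date check (illustrative only; Azure AD's actual metadata monitoring is an internal service behavior, and the function name here is made up):

```python
from datetime import date, timedelta

def should_poll_metadata(today: date, cert_expiry: date) -> bool:
    """Azure AD begins retrieving federation metadata 30 days before the current certificate expires."""
    return today >= cert_expiry - timedelta(days=30)

print(should_poll_metadata(date(2022, 1, 5), date(2022, 1, 20)))  # True: within 30 days of expiry
```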
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
# Azure AD Connect: Version release history
-The Azure Active Directory (Azure AD) team regularly updates Azure AD Connect with new features and functionality. Not all additions are applicable to all audiences.
+The Azure Active Directory (Azure AD) team regularly updates Azure AD Connect with new features and functionality. Not all additions apply to all audiences.
-This article is designed to help you keep track of the versions that have been released, and to understand what the changes are in the latest version.
+This article helps you keep track of the versions that have been released and understand what the changes are in the latest version.
## Looking for the latest versions?

You can upgrade your Azure AD Connect server from all supported versions to the latest versions:
+ - If you're using *Windows Server 2016 or newer*, use *Azure AD Connect V2.0*. You can download the latest version of Azure AD Connect 2.0 from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=47594). See the [release notes for the latest V2.0 release](reference-connect-version-history.md#20280).
+ - If you're still using an *older version of Windows Server*, use *Azure AD Connect V1.6*. You can download the latest version of Azure AD Connect V1 from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=103336). See the [release notes for the latest V1.6 release](reference-connect-version-history.md#16160).
+ - We're only applying critical changes to the V1.x versions going forward. You might not find some of the features and fixes for V2.0 in the V1.x releases. For this reason, upgrade to the V2.0 version as soon as possible. Most notably, there's an issue with the 1.16.4.2 build. When you upgrade to this V1.6 build or any newer builds, the group limit resets to 50,000. When you upgrade a server to this build, or any newer 1.6 builds, reapply the rule changes you applied when you initially increased the group membership limit to 250,000 before you enable sync for the server.
-This table is a list of related topics:
+The following table lists related topics:
Topic | Details
--- | ---
Steps to upgrade from Azure AD Connect | Different methods to [upgrade from a previous version to the latest](how-to-upgrade-previous-version.md) Azure AD Connect release.
-Required permissions | For permissions required to apply an update, see [accounts and permissions](reference-connect-accounts-permissions.md#upgrade).
+Required permissions | For permissions required to apply an update, see [Azure AD Connect: Accounts and permissions](reference-connect-accounts-permissions.md#upgrade).
> [!IMPORTANT]
-> **On 31 August 2022, all 1.x versions of Azure Active Directory (Azure AD) Connect will be retired because they include SQL Server 2012 components that will no longer be supported.** Either upgrade to the most recent version of Azure AD Connect (2.x version) by that date, or [evaluate and switch to Azure AD cloud sync](../cloud-sync/what-is-cloud-sync.md).
->
-> You need to make sure you are running a recent version of Azure AD Connect to receive an optimal support experience.
->
-> If you run a retired version of Azure AD Connect it may unexpectedly stop working and you may not have the latest security fixes, performance improvements, troubleshooting and diagnostic tools and service enhancements. Moreover, if you require support we may not be able to provide you with the level of service your organization needs.
->
-> Go to this article to learn more about [Azure Active Directory Connect V2.0](whatis-azure-ad-connect-v2.md), what has changed in V2.0 and how this change impacts you.
->
-> Please refer to [this article](./how-to-upgrade-previous-version.md) to learn more about how to upgrade Azure AD Connect to the latest version.
->
-> For version history information on retired versions, see [Azure AD Connect version release history archive](reference-connect-version-history-archive.md)
+> *On August 31, 2022, all 1.x versions of Azure AD Connect will be retired because they include SQL Server 2012 components that will no longer be supported.* Upgrade to the most recent version of Azure AD Connect (2.x version) by that date or [evaluate and switch to Azure AD cloud sync](../cloud-sync/what-is-cloud-sync.md).
+
+Make sure you're running a recent version of Azure AD Connect to receive an optimal support experience.
+
+If you run a retired version of Azure AD Connect, it might unexpectedly stop working. You also might not have the latest security fixes, performance improvements, troubleshooting and diagnostic tools, and service enhancements. If you require support, we might not be able to provide you with the level of service your organization needs.
+
+To learn more about what has changed in V2.0 and how this change affects you, see [Azure AD Connect V2.0](whatis-azure-ad-connect-v2.md).
+
+To learn more about how to upgrade Azure AD Connect to the latest version, see [Azure AD Connect: Upgrade from a previous version to the latest](./how-to-upgrade-previous-version.md).
+
+For version history information on retired versions, see [Azure AD Connect: Version release history archive](reference-connect-version-history-archive.md).
> [!NOTE]
-> Releasing a new version of Azure AD Connect is a process that requires several quality control step to ensure the operation functionality of the service, and while we go through this process the version number of a new release as well as the release status will be updated to reflect the most recent state.
->
-> Not all releases of Azure AD Connect will be made available for auto upgrade. The release status will indicate whether a release is made available for auto upgrade or for download only. If auto upgrade was enabled on your Azure AD Connect server then that server will automatically upgrade to the latest version of Azure AD Connect that is released for auto upgrade. Note that not all Azure AD Connect configurations are eligible for auto upgrade.
->
-> To clarify the use of Auto Upgrade, it is meant to push all important updates and critical fixes to you. This is not necessarily the latest version because not all versions will require/include a fix to a critical security issue (just one example of many). Critical issues would usually be addressed with a new version provided via Auto Upgrade. If there are no such issues, there are no updates pushed out using Auto Upgrade, and in general if you are using the latest auto upgrade version you should be good.
->
-> However, if you'd like all the latest features and updates, the best way to see if there are any is to check this page and install them as you see fit.
->
-> Please follow this link to read more about [auto upgrade](how-to-connect-install-automatic-upgrade.md).
+> Releasing a new version of Azure AD Connect requires several quality-control steps to ensure the operation functionality of the service. While we go through this process, the version number of a new release and the release status are updated to reflect the most recent state.
+
+Not all releases of Azure AD Connect are made available for auto-upgrade. The release status indicates whether a release is made available for auto-upgrade or for download only. If auto-upgrade was enabled on your Azure AD Connect server, that server automatically upgrades to the latest version of Azure AD Connect that's released for auto-upgrade. Not all Azure AD Connect configurations are eligible for auto-upgrade.
+
+Auto-upgrade is meant to push all important updates and critical fixes to you. It isn't necessarily the latest version because not all versions will require or include a fix to a critical security issue. (This example is just one of many.) Critical issues are usually addressed with a new version provided via auto-upgrade. If there are no such issues, there are no updates pushed out by using auto-upgrade. In general, if you're using the latest auto-upgrade version, you should be good.
+
+If you want all the latest features and updates, check this page and install what you need.
+
+To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
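For example, you can check whether a server is opted in to auto-upgrade with the ADSync PowerShell module that's installed with Azure AD Connect. The following snippet is a sketch based on the cmdlets described in the auto-upgrade article; run it on the Azure AD Connect server itself.

```powershell
# Run on the Azure AD Connect server; the ADSync module is installed with the product.
Import-Module ADSync

# Returns the auto-upgrade state: Enabled, Disabled, or Suspended.
Get-ADSyncAutoUpgrade

# Opt the server in to auto-upgrade if it's currently disabled.
Set-ADSyncAutoUpgrade -AutoUpgradeState Enabled
```

A state of Suspended means the system set it, typically because the configuration isn't eligible for auto-upgrade.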
## 2.0.89.0

### Release status
-12/22/2021: Released for download only, not available for auto upgrade.
+
+12/22/2021: Released for download only, not available for auto-upgrade
### Bug fixes
-- We fixed a bug in version 2.0.88.0 where, under certain conditions, linked mailboxes of disabled users and mailboxes of certain resource objects, were getting deleted.
+
+We fixed a bug in version 2.0.88.0 where, under certain conditions, linked mailboxes of disabled users and mailboxes of certain resource objects were getting deleted.
## 2.0.88.0
-> [!NOTE]
-> This release requires Windows Server 2016 or newer. It fixes a vulnerability that is present in version 2.0 of Azure AD Connect, as well as some other bug fixes and minor feature updates.
+
+> [!NOTE]
+> This release requires Windows Server 2016 or newer. It fixes a vulnerability that's present in version 2.0 of Azure AD Connect and includes other bug fixes and minor feature updates.
### Release status
-12/15/2021: Released for download only, not available for auto upgrade.
+
+12/15/2021: Released for download only, not available for auto-upgrade
### Bug fixes
+- We upgraded the version of Microsoft.Data.OData from 5.8.1 to 5.8.4 to fix a vulnerability.
+- Accessibility fixes:
+ - We made the Azure AD Connect wizard resizable to account for different zoom levels and screen resolutions.
+ - We named elements to satisfy accessibility requirements.
+- We fixed a bug where miisserver failed because of a null reference.
+- We fixed a bug to ensure the desktop SSO value persists after upgrading Azure AD Connect to a newer version.
+- We modified the inetorgperson sync rules to fix an issue with account/resource forests.
+- We fixed radio button text to display a **Learn More** link.
### Functional changes
+- We made a change so that group writeback DN is now configurable with the display name of the synced group.
+- We removed the hard requirement for the Exchange schema when you enable group writeback.
+- Azure AD Kerberos changes:
+ - We extended the PowerShell command to support custom top-level names for trusted object creation.
+ - We made a change to set an official brand name for the Azure AD Kerberos feature.
## 1.6.16.0

> [!NOTE]
-> This is an update release of Azure AD Connect. This version is intended to be used by customers who are running an older version of Windows Server and cannot upgrade their server to Windows Server 2016 or newer at this time. You cannot use this version to update an Azure AD Connect V2.0 server.
+> This release is an update release of Azure AD Connect. This version is intended to be used by customers who are running an older version of Windows Server and can't upgrade their server to Windows Server 2016 or newer at this time. You can't use this version to update an Azure AD Connect V2.0 server.
>
-> This release should not be installed on Windows Server 2016 or newer. This release includes SQL Server 2012 components and will be retired on August 31st 2022. You will need to upgrade your Server OS and Azure AD Connect version before that date.
+> Don't install this release on Windows Server 2016 or newer. This release includes SQL Server 2012 components and will be retired on August 31, 2022. Upgrade your Server OS and Azure AD Connect version before that date.
>
-> There is an issue where upgrading to this v1.6 build or any newer builds resets the group membership limit to 50k. When a server is upgraded to this build, or any newer 1.6 builds, then the customer should reapply the rules changes they applied when initially increasing the group membership limit to 250k before they enable sync for the server.
+> When you upgrade to this V1.6 build or any newer builds, the group membership limit resets to 50,000. When a server is upgraded to this build, or any newer 1.6 builds, reapply the rule changes you applied when you initially increased the group membership limit to 250,000 before you enable sync for the server.
### Release status
-10/13/2021: Released for download and auto upgrade.
+10/13/2021: Released for download and auto-upgrade
### Bug fixes
-- We fixed a bug where the Autoupgrade process attempted to upgrade Azure AD Connect servers that are running older Windows OS version 2008 or 2008 R2 and failed. These versions of Windows Server are no longer supported. In this release we only attempt autoupgrade on machines that run Windows Server 2012 or newer.
-- We fixed an issue where, under certain conditions, miisserver would be crashing due to access violation exception.
+- We fixed a bug where the auto-upgrade process attempted to upgrade Azure AD Connect servers that are running older Windows OS version 2008 or 2008 R2 and failed. These versions of Windows Server are no longer supported. In this release, we only attempt auto-upgrade on machines that run Windows Server 2012 or newer.
+- We fixed an issue where, under certain conditions, miisserver failed because of an access violation exception.
+
-### Known Issues
+### Known issues
+When you upgrade to this V1.6 build or any newer builds, the group membership limit resets to 50,000. When a server is upgraded to this build, or any newer 1.6 builds, reapply the rule changes you applied when you initially increased the group membership limit to 250,000 before you enable sync for the server.
## 2.0.28.0

> [!NOTE]
-> This is a maintenance update release of Azure AD Connect. This release requires Windows Server 2016 or newer.
+> This release is a maintenance update release of Azure AD Connect. It requires Windows Server 2016 or newer.
### Release status
-9/30/2021: Released for download only, not available for auto upgrade.
+9/30/2021: Released for download only, not available for auto-upgrade
### Bug fixes
+- We removed a download button for a PowerShell script on the **Group Writeback Permissions** page in the wizard. We also changed the text on the wizard page to include a **Learn More** link that links to an online article where the PowerShell script can be found.
+- We fixed a bug where the wizard was incorrectly blocking the installation when the .NET version on the server was greater than 4.6 because of missing registry keys. Those registry keys aren't required and should only block installation if they're intentionally set to false.
+- We fixed a bug where an error was thrown if phantom objects were found during the initialization of a sync step. This bug blocked the sync step or removed transient objects. The phantom objects are now ignored.
-
-Note: A phantom object is a placeholder for an object which is not there or has not been seen yet, for example if a source object has a reference for a target object which is not there then we create the target object as a phantom.
+ A phantom object is a placeholder for an object that isn't there or hasn't been seen yet. For example, if a source object has a reference for a target object that isn't there, we create the target object as a phantom.
### Functional changes
+A change was made that allows a user to deselect objects and attributes from the inclusion list, even if they're in use. Instead of blocking this action, we now provide a warning.
## 1.6.14.2

> [!NOTE]
-> This is an update release of Azure AD Connect. This version is intended to be used by customers who are running an older version of Windows Server and cannot upgrade their server to Windows Server 2016 or newer at this time. You cannot use this version to update an Azure AD Connect V2.0 server.
-> We will begin auto upgrading eligible tenants when this version is available for download, autoupgrade will take a few weeks to complete.
-> There is an issue where upgrading to this v1.6 build or any newer builds resets the group membership limit to 50k. When a server is upgraded to this build, or any newer 1.6 builds, then the customer should reapply the rules changes they applied when initially increasing the group membership limit to 250k before they enable sync for the server.
+> This release is an update release of Azure AD Connect. This version is intended to be used by customers who are running an older version of Windows Server and can't upgrade their server to Windows Server 2016 or newer at this time. You can't use this version to update an Azure AD Connect V2.0 server.
+
+We'll begin auto-upgrading eligible tenants when this version is available for download. Auto-upgrade will take a few weeks to complete.
+
+When you upgrade to this V1.6 build or any newer builds, the group membership limit resets to 50,000. When a server is upgraded to this build, or any newer 1.6 builds, reapply the rule changes you applied when you initially increased the group membership limit to 250,000 before you enable sync for the server.
### Release status
-9/21/2021: Released for download and auto upgrade.
+9/21/2021: Released for download and auto-upgrade
### Functional changes
+- We added the latest versions of Microsoft Identity Manager (MIM) Connectors (1.1.1610.0). For more information, see the [release history page of the MIM Connectors](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-version-history#1116100-september-2021).
+- We added a configuration option to disable the Soft Matching feature in Azure AD Connect. We recommend that you disable Soft Matching unless you need it to take over cloud-only accounts. To disable Soft Matching, see [this reference article](/powershell/module/msonline/set-msoldirsyncfeature#example-2--block-soft-matching-for-the-tenant).
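As a sketch of the second change, the linked reference article blocks Soft Matching tenant-wide with the MSOnline module (this requires a Global Administrator sign-in):

```powershell
# Requires the MSOnline PowerShell module.
Import-Module MSOnline
Connect-MsolService

# Block Soft Matching for the tenant, per the linked reference article.
Set-MsolDirSyncFeature -Feature BlockSoftMatch -Enable $true

# Verify the current state of the feature.
Get-MsolDirSyncFeatures -Feature BlockSoftMatch
```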
### Bug fixes
+- We fixed a bug where the desktop single sign-on settings weren't persisted after upgrade from a previous version.
+- We fixed a bug that caused the Set-ADSync\*Permission cmdlets to fail.
## 2.0.25.1

> [!NOTE]
-> This is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer and fixes a security issue that is present in version 2.0 of Azure AD Connect, as well as some other bug fixes.
+> This release is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer. It fixes a security issue that's present in version 2.0 of Azure AD Connect and includes other bug fixes.
### Release status
-9/14/2021: Released for download only, not available for auto upgrade.
+9/14/2021: Released for download only, not available for auto-upgrade
### Bug fixes
+- We fixed a security issue where an unquoted path was used to point to the Azure AD Connect service. This path is now a quoted path.
+- We fixed an import configuration issue with writeback enabled when you use the existing Azure AD Connector account.
+- We fixed an issue in Set-ADSyncExchangeHybridPermissions and other related cmdlets, which were broken from V1.6 because of an invalid inheritance type.
+- We fixed an issue with the cmdlet we published in a previous release to set the TLS version. The cmdlet overwrote the keys, which destroyed any values that were in them. Now a new key is created only if one doesn't already exist. We added a warning to let users know the TLS registry changes aren't exclusive to Azure AD Connect and might affect other applications on the same server.
+- We added a check to enforce auto-upgrade for V2.0 to require Windows Server 2016 or newer.
+- We added the Replicating Directory Changes permission in the Set-ADSyncBasicReadPermissions cmdlet.
+- We made a change to prevent UseExistingDatabase and import configuration from being used together because they could contain conflicting configuration settings.
+- We made a change to allow a user with the Application Admin role to change the App Proxy service configuration.
+- We removed the (Preview) label from the labels of **Import/Export** settings. This functionality is generally available.
+- We changed some labels that still referred to Company Administrator. We now use the role name Global Administrator.
+- We created new Azure AD Kerberos PowerShell cmdlets (\*-AADKerberosServer) to add a Claims Transform rule to the Azure AD Service Principal.
### Functional changes
-- We added the latest versions of MIM Connectors (1.1.1610.0). More information can be found at [the release history page of the MiM connectors](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-version-history#1116100-september-2021)
+- We added the latest versions of MIM Connectors (1.1.1610.0). For more information, see the [release history page of the MIM Connectors](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-version-history#1116100-september-2021).
+- We added a configuration option to disable the Soft Matching feature in Azure AD Connect. We recommend that you disable Soft Matching unless you need it to take over cloud-only accounts. To disable Soft Matching, see [this reference article](/powershell/module/msonline/set-msoldirsyncfeature#example-2--block-soft-matching-for-the-tenant).
## 2.0.10.0

### Release status
-8/19/2021: Released for download only, not available for auto upgrade.
+
+8/19/2021: Released for download only, not available for auto-upgrade
> [!NOTE]
-> This is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer. This hotfix addresses an issue that is present in version 2.0 as well as in Azure AD Connect version 1.6. If you are running Azure AD Connect on an older Windows Server you should install the [1.6.13.0](#16130) build instead.
+> This release is a hotfix update release of Azure AD Connect. It requires Windows Server 2016 or newer. This hotfix addresses an issue that's present in version 2.0 and in Azure AD Connect version 1.6. If you're running Azure AD Connect on an older version of Windows Server, install the [1.6.13.0](#16130) build instead.
### Known issues
+Under certain circumstances, the installer for this version displays an error that states TLS 1.2 isn't enabled and stops the installation. This issue occurs because of an error in the code that verifies the registry setting for TLS 1.2. We'll correct this issue in a future release. If you see this issue, follow the instructions to enable TLS 1.2 in [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
### Bug fixes
+We fixed a bug that occurred when a domain was renamed and Password Hash Sync failed with an error that indicated "a specified cast is not valid" in the Event log. This regression is from earlier builds.
## 1.6.13.0

> [!NOTE]
-> This is a hotfix update release of Azure AD Connect. This release is intended for customers who are running Azure AD Connect on a server with Windows Server 2012 or 2012 R2.
+> This release is a hotfix update release of Azure AD Connect. It's intended to be used by customers who are running Azure AD Connect on a server with Windows Server 2012 or 2012 R2.
### Release status

-8/19/2021: Released for download only, not available for auto upgrade.
+8/19/2021: Released for download only, not available for auto-upgrade
### Bug fixes
+We fixed a bug that occurred when a domain was renamed and Password Hash Sync failed with an error that indicated "a specified cast is not valid" in the Event log. This regression is from earlier builds.
### Functional changes
-There are no functional changes in this release
+There are no functional changes in this release.
## 2.0.9.0

### Release status
-8/17/2021: Released for download only, not available for auto upgrade.
+8/17/2021: Released for download only, not available for auto-upgrade
### Bug fixes

> [!NOTE]
-> This is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer. This release addresses an issue that is present in version 2.0.8.0, this issue is not present in Azure AD Connect version 1.6.
+> This release is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer. It addresses an issue that's present in version 2.0.8.0. This issue isn't present in Azure AD Connect version 1.6.
+We fixed a bug that occurred when you synced a large number of Password Hash Sync transactions and the Event log entry length exceeded the maximum-allowed length for a Password Hash Sync event entry. We now split the lengthy log entry into multiple entries.
## 2.0.8.0

> [!NOTE]
-> This is a security update release of Azure AD Connect. This release requires Windows Server 2016 or newer. If you are using an older version of Windows Server, please use [version 1.6.11.3](#16113).
-> This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability please refer to the CVE.
-> You can download the latest version of Azure AD Connect 2.0 using [this link](https://www.microsoft.com/download/details.aspx?id=47594).
+> This release is a security update release of Azure AD Connect. This release requires Windows Server 2016 or newer. If you're using an older version of Windows Server, use [version 1.6.11.3](#16113).
+
+This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability, see the CVE.
+
+To download the latest version of Azure AD Connect 2.0, see the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=47594).
### Release status
-8/10/2021: Released for download only, not available for auto upgrade.
+8/10/2021: Released for download only, not available for auto-upgrade
### Functional changes
-There are no functional changes in this release
+There are no functional changes in this release.
## 1.6.11.3

> [!NOTE]
-> This is security update release of Azure AD Connect. This version is intended to be used by customers are running an older version of Windows Server and cannot upgrade their server to Windows Server 2016 or newer as this time. You cannot use this version to update an Azure AD Connect V2.0 server.
-> This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability please refer to the CVE.
-> You can download the latest version of Azure AD Connect 1.6 using [this link](https://www.microsoft.com/download/details.aspx?id=103336).
+> This release is a security update release of Azure AD Connect. It's intended to be used by customers who are running an older version of Windows Server and can't upgrade their server to Windows Server 2016 or newer at this time. You can't use this version to update an Azure AD Connect V2.0 server.
+
+This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability, see the CVE.
+
+To download the latest version of Azure AD Connect 1.6, see the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=103336).
### Release status
-8/10/2021: Released for download only, not available for auto upgrade.
+8/10/2021: Released for download only, not available for auto-upgrade
### Functional changes
-There are no functional changes in this release
+There are no functional changes in this release.
## 2.0.3.0

> [!NOTE]
-> This is a major release of Azure AD Connect. Please refer to the [Azure Active Directory V2.0 article](whatis-azure-ad-connect-v2.md) for more details.
+> This release is a major release of Azure AD Connect. For more information, see [Introduction to Azure AD Connect V2.0](whatis-azure-ad-connect-v2.md).
### Release status
-7/20/2021: Released for download only, not available for auto upgrade
+7/20/2021: Released for download only, not available for auto-upgrade
### Functional changes
-To sync an expired password from Active Directory to Azure Active Directory please use the [Synchronizing temporary passwords](how-to-connect-password-hash-synchronization.md#synchronizing-temporary-passwords-and-force-password-change-on-next-logon) feature in Azure AD Connect. Note that you will need to enable password writeback to use this feature, so the password the user updates is written back to Active Directory too.
- - Get-ADSyncToolsTls12
- - Set-ADSyncToolsTls12
-
-You can use these cmdlets to retrieve the TLS 1.2 enablement status, or set it as needed. Note that TLS 1.2 must be enabled on the server for the installation or Azure AD Connect to succeed.
-
- The following cmdlets have been added or updated
- - Clear-ADSyncToolsMsDsConsistencyGuid
- - ConvertFrom-ADSyncToolsAadDistinguishedName
- - ConvertFrom-ADSyncToolsImmutableID
- - ConvertTo-ADSyncToolsAadDistinguishedName
- - ConvertTo-ADSyncToolsCloudAnchor
- - ConvertTo-ADSyncToolsImmutableID
- - Export-ADSyncToolsAadDisconnectors
- - Export-ADSyncToolsObjects
- - Export-ADSyncToolsRunHistory
- - Get-ADSyncToolsAadObject
- - Get-ADSyncToolsMsDsConsistencyGuid
- - Import-ADSyncToolsObjects
- - Import-ADSyncToolsRunHistory
- - Remove-ADSyncToolsAadObject
- - Search-ADSyncToolsADobject
- - Set-ADSyncToolsMsDsConsistencyGuid
- - Trace-ADSyncToolsADImport
- - Trace-ADSyncToolsLdapQuery
-- We now use the V2 endpoint for import and export and we fixed issue in the Get-ADSyncAADConnectorExportApiVersion cmdlet. You can read more about the V2 endpoint in the [Azure AD Connect sync V2 endpoint article](how-to-connect-sync-endpoint-api-v2.md).
-- We have added the following new user properties to sync from on-prem AD to Azure AD
- - employeeType
- - employeeHireDate
-- This release requires PowerShell version 5.0 or newer to be installed on the Windows Server. Note that this version is part of Windows Server 2016 and newer.
-- We increased the Group sync membership limits to 250k with the new V2 endpoint.
-- We have updated the Generic LDAP connector and the Generic SQL Connector to the latest versions. Read more about these connectors here:
- - [Generic LDAP Connector reference documentation](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap)
- - [Generic SQL Connector reference documentation](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericsql)
-- In the Microsoft 365 Admin Center, we now report the Azure AD Connect client version whenever there is export activity to Azure AD. This ensures that the Microsoft 365 Admin Center always has the most up to date Azure AD Connect client version, and that it can detect when you're using an outdated version
+- We upgraded the LocalDB components of SQL Server to SQL 2019.
+- This release requires Windows Server 2016 or newer because of the requirements of SQL Server 2019. An in-place upgrade of Windows Server on an Azure AD Connect server isn't supported. For this reason, you might need to use a [swing migration](how-to-upgrade-previous-version.md#swing-migration).
+- We enforce the use of TLS 1.2 in this release. If you enabled your Windows Server for TLS 1.2, Azure AD Connect uses this protocol. If TLS 1.2 isn't enabled on the server, you'll see an error message when you attempt to install Azure AD Connect. The installation won't continue until you've enabled TLS 1.2. You can use the new Set-ADSyncToolsTls12 cmdlets to enable TLS 1.2 on your server.
+- We made a change so that with this release, you can use the Hybrid Identity Administrator role to authenticate when you install Azure AD Connect. You no longer need to use the Global Administrator role.
+- We upgraded the Visual C++ runtime library to version 14 as a prerequisite for SQL Server 2019.
+- We updated this release to use the Microsoft Authentication Library for authentication. We removed the older Azure AD Authentication Library, which will be retired in 2022.
+- We no longer apply permissions on AdminSDHolders following Windows security guidance. We changed the parameter SkipAdminSdHolders to IncludeAdminSdHolders in the ADSyncConfig.psm1 module.
+- We made a change so that passwords are now reevaluated when an expired password is "unexpired," no matter if the password itself is changed. If the password is set to "Must change password at next logon" for a user, and this flag is cleared (which "unexpires" the password), the unexpired status and the password hash are synced to Azure AD. In Azure AD, when the user attempts to sign in, they can use the unexpired password.
+To sync an expired password from Active Directory to Azure AD, use the feature in Azure AD Connect to [synchronize temporary passwords](how-to-connect-password-hash-synchronization.md#synchronizing-temporary-passwords-and-force-password-change-on-next-logon). Enable password writeback to use this feature so that the password the user updates is written back to Active Directory.
+- We added two new cmdlets to the ADSyncTools module to enable or retrieve TLS 1.2 settings from the Windows Server:
+ - Get-ADSyncToolsTls12
+ - Set-ADSyncToolsTls12
+
+You can use these cmdlets to retrieve the TLS 1.2 enablement status or set it as needed. TLS 1.2 must be enabled on the server for the installation or Azure AD Connect to succeed.
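As a sketch, checking and enabling TLS 1.2 with these cmdlets looks like the following. Run it in an elevated PowerShell session on the Azure AD Connect server, and verify the parameter names with `Get-Help` for your installed ADSyncTools version; a server restart is typically needed before the change takes effect.

```powershell
# Check whether TLS 1.2 is enabled for .NET and SChannel on this server.
Get-ADSyncToolsTls12

# Enable TLS 1.2, then restart the server before installing or upgrading Azure AD Connect.
Set-ADSyncToolsTls12 -Enabled $true
```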
+
+- We revamped ADSyncTools with several new and improved cmdlets. The [ADSyncTools article](reference-connect-adsynctools.md) has more details about these cmdlets.
+ The following cmdlets have been added or updated:
+ - Clear-ADSyncToolsMsDsConsistencyGuid
+ - ConvertFrom-ADSyncToolsAadDistinguishedName
+ - ConvertFrom-ADSyncToolsImmutableID
+ - ConvertTo-ADSyncToolsAadDistinguishedName
+ - ConvertTo-ADSyncToolsCloudAnchor
+ - ConvertTo-ADSyncToolsImmutableID
+ - Export-ADSyncToolsAadDisconnectors
+ - Export-ADSyncToolsObjects
+ - Export-ADSyncToolsRunHistory
+ - Get-ADSyncToolsAadObject
+ - Get-ADSyncToolsMsDsConsistencyGuid
+ - Import-ADSyncToolsObjects
+ - Import-ADSyncToolsRunHistory
+ - Remove-ADSyncToolsAadObject
+ - Search-ADSyncToolsADobject
+ - Set-ADSyncToolsMsDsConsistencyGuid
+ - Trace-ADSyncToolsADImport
+ - Trace-ADSyncToolsLdapQuery
+- We now use the V2 endpoint for import and export. We fixed an issue in the Get-ADSyncAADConnectorExportApiVersion cmdlet. To learn more about the V2 endpoint, see [Azure AD Connect sync V2 endpoint](how-to-connect-sync-endpoint-api-v2.md).
+- We added the following new user properties to sync from on-premises Active Directory to Azure AD:
+ - employeeType
+ - employeeHireDate
+- This release requires PowerShell version 5.0 or newer to be installed on the Windows server. This version is part of Windows Server 2016 and newer.
+- We increased the group sync membership limits to 250,000 with the new V2 endpoint.
+- We updated the Generic LDAP Connector and the Generic SQL Connector to the latest versions. To learn more about these connectors, see the reference documentation for:
+ - [Generic LDAP Connector](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap)
+ - [Generic SQL Connector](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericsql)
+- In the Microsoft 365 admin center, we now report the Azure AD Connect client version whenever there's export activity to Azure AD. This reporting ensures that the Microsoft 365 admin center always has the most up-to-date Azure AD Connect client version, and that it can detect when you're using an outdated version.
### Bug fixes
-- We fixed an accessibility bug where the screen reader is announcing an incorrect role of the 'Learn More' link.
-- We fixed a bug where sync rules with large precedence values (i.e. 387163089) cause an upgrade to fail. We updated the sproc 'mms_UpdateSyncRulePrecedence' to cast the precedence number as an integer prior to incrementing the value.
-- We fixed a bug where group writeback permissions are not set on the sync account if a group writeback configuration is imported. We now set the group writeback permissions if group writeback is enabled on the imported configuration.
-- We updated the Azure AD Connect Health agent version to 3.1.110.0 to fix an installation failure.
-- We are seeing an issue with non-default attributes from exported configurations where directory extension attributes are configured. When importing these configurations to a new server/installation, the attribute inclusion list is overridden by the directory extension configuration step, so after import only default and directory extension attributes are selected in the sync service manager (non-default attributes are not included in the installation, so the user must manually reenable them from the sync service manager if they want their imported sync rules to work). We now refresh the AAD Connector before configuring directory extension to keep existing attributes from the attribute inclusion list.
-- We fixed an accessibility issue where the page header's font weight is set as "Light". Font weight is now set to "Bold" for the page title, which applies to the header of all pages.
-- The function Get-AdObject in ADSyncSingleObjectSync.ps1 has been renamed to Get-AdDirectoryObject to prevent ambiguity with the AD cmdlet.
-- The SQL function 'mms_CheckSynchronizationRuleHasUniquePrecedence' allowed duplicate precedence on outbound sync rules on different connectors. We removed the condition that allows duplicate rule precedence.
-- We fixed a bug where the Single Object Sync cmdlet fails if the attribute flow data is null, i.e. on exporting a delete operation.
-- We fixed a bug where the installation fails because the ADSync bootstrap service cannot be started. We now add the Sync Service Account to the Local Builtin User Group before starting the bootstrap service.
-- We fixed an accessibility issue where the active tab on the Azure AD Connect wizard is not showing the correct color on the High Contrast theme. The selected color code was being overwritten due to a missing condition in the normal color code configuration.
-- We addressed an issue where users were allowed to deselect objects and attributes used in sync rules using the UI and PowerShell. We now show a friendly error message if you try to deselect any attribute or object that is used in any sync rules.
-- We made some updates to the "migrate settings code" to check and fix a backward compatibility issue when the script is run on an older version of Azure AD Connect.
-- We fixed a bug where, when PHS tries to look up an incomplete object, it does not use the same algorithm to resolve the DC as it used originally to fetch the passwords. In particular, it ignores affinitized DC information. The incomplete object lookup should use the same logic to locate the DC in both instances.
-- We fixed a bug where Azure AD Connect cannot read Application Proxy items using Microsoft Graph due to a permissions issue with calling Microsoft Graph directly based on the Azure AD Connect client identifier. To fix this, we removed the dependency on Microsoft Graph and instead use Azure AD PowerShell to work with the App Proxy Application objects.
-- We removed the writeback member limit from the 'Out to AD - Group SOAInAAD Exchange' sync rule.
-- We fixed a bug where, when changing connector account permissions, if an object comes in scope that has not changed since the last delta import, a delta import will not import it. We now display a warning alerting the user of the issue.
-- We fixed an accessibility issue where the screen reader is not reading the radio button position. We added positional text to the radio button accessibility text field.
-- We updated the Pass-Thru Authentication Agent bundle. The older bundle did not have the correct reply URL for HIP's first-party application in US Gov.
-- We fixed a bug where there is a 'stopped-extension-dll-exception' on AAD connector export after clean installing Azure AD Connect version 1.6.X.X, which defaults to using DirSyncWebServices API V2, using an existing database. Previously, setting the export version to V2 was only done for upgrade; we changed it so that it is set on clean install as well.
-- The "ADSyncPrep.psm1" module is no longer used and is removed from the installation.
+- We fixed an accessibility bug where the screen reader announced an incorrect role of the **Learn More** link.
+- We fixed a bug where sync rules with large precedence values (for example, 387163089) caused an upgrade to fail. We updated the sproc mms_UpdateSyncRulePrecedence to cast the precedence number as an integer prior to incrementing the value.
+- We fixed a bug where group writeback permissions weren't set on the sync account if a group writeback configuration was imported. We now set the group writeback permissions if group writeback is enabled on the imported configuration.
+- We updated the Azure AD Connect Health agent version to 3.1.110.0 to fix an installation failure.
+- We're seeing an issue with nondefault attributes from exported configurations where directory extension attributes are configured. In the process of importing these configurations to a new server or installation, the attribute inclusion list is overridden by the directory extension configuration step. As a result, after import, only default and directory extension attributes are selected in the sync service manager. Nondefault attributes aren't included in the installation, so the user must manually reenable them from the sync service manager if they want their imported sync rules to work. We now refresh the Azure AD Connector before configuring the directory extension to keep existing attributes from the attribute inclusion list.
+- We fixed an accessibility issue where the page header's font weight was set as Light. Font weight is now set to Bold for the page title, which applies to the header of all pages.
+- We renamed the function Get-AdObject in ADSyncSingleObjectSync.ps1 to Get-AdDirectoryObject to prevent ambiguity with the Active Directory cmdlet.
+- We removed the condition that allowed duplicate rule precedence. The SQL function mms_CheckSynchronizationRuleHasUniquePrecedence had allowed duplicate precedence on outbound sync rules on different connectors.
+- We fixed a bug where the Single Object Sync cmdlet fails if the attribute flow data is null. An example is on exporting a delete operation.
+- We fixed a bug where the installation fails because the ADSync bootstrap service can't be started. We now add Sync Service Account to the Local Builtin User Group before starting the bootstrap service.
+- We fixed an accessibility issue where the active tab on Azure AD Connect wizard wasn't showing the correct color on High Contrast theme. The selected color code was being overwritten because of a missing condition in the normal color code configuration.
+- We addressed an issue where you were allowed to deselect objects and attributes used in sync rules by using the UI and PowerShell. We now show friendly error messages if you try to deselect any attribute or object that's used in any sync rules.
+- We made some updates to the "migrate settings code" to check and fix backward compatibility issues when the script runs on an older version of Azure AD Connect.
+- We fixed a bug that occurred when PHS tried to look up an incomplete object. It didn't use the same algorithm to resolve the DC as it used originally to fetch the passwords. In particular, it ignored affinitized DC information. The Incomplete object lookup should use the same logic to locate the DC in both instances.
+- We fixed a bug where Azure AD Connect can't read Application Proxy items by using Microsoft Graph because of a permissions issue with calling Microsoft Graph directly based on the Azure AD Connect client identifier. To fix this issue, we removed the dependency on Microsoft Graph and instead use Azure AD PowerShell to work with the App Proxy Application objects.
+- We removed the writeback member limit from the Out to AD - Group SOAInAAD Exchange sync rule.
+- We fixed a bug that occurred when you changed connector account permissions. If an object came in scope that hadn't changed since the last delta import, a delta import wouldn't import it. We now display a warning to alert you of the issue.
+- We fixed an accessibility issue where the screen reader wasn't reading the radio button position. We added positional text to the radio button accessibility text field.
+- We updated the Pass-Thru Authentication Agent bundle. The older bundle didn't have the correct reply URL for the HIP's first-party application in US Government.
+- We fixed a bug where a stopped-extension-dll-exception error occurred on Azure AD Connector export after a clean install of Azure AD Connect version 1.6.X.X, which defaults to using DirSyncWebServices API V2, with an existing database. Previously, setting the export version to V2 was only done for upgrades. We changed it so that it's also set on clean install.
+- We removed the ADSyncPrep.psm1 module from the installation because it's no longer used.
### Known issues
+- The Azure AD Connect wizard shows the **Import Synchronization Settings** option as **Preview**, although this feature is generally available.
+- Some Active Directory connectors might be installed in a different order when you use the output of the migrate settings script to install the product.
+- The **User Sign In** options page in the Azure AD Connect wizard mentions Company Administrator. This term is no longer used and needs to be replaced by Global Administrator.
+- The **Export settings** option is broken when the **Sign In** option has been configured to use PingFederate.
+- While Azure AD Connect can now be deployed by using the Hybrid Identity Administrator role, configuring Self-Service Password Reset, Passthru Authentication, or single sign-on still requires a user with the Global Administrator role.
+- When you import the Azure AD Connect configuration while you deploy to connect with a different tenant than the original Azure AD Connect configuration, directory extension attributes aren't configured correctly.
## 1.6.4.0

> [!NOTE]
> The Azure AD Connect sync V2 endpoint API is now available in these Azure environments:
+>
> - Azure Commercial
> - Azure China cloud
> - Azure US Government cloud
-> - This release will not be made available in the Azure German cloud
+>
+> This release won't be made available in the Azure German cloud.
### Release status
-3/31/2021: Released for download only, not available for auto upgrade
+3/31/2021: Released for download only, not available for auto-upgrade
### Bug fixes
+This release fixes a bug that occurred in version 1.6.2.4. After upgrade to that release, the Azure AD Connect Health feature wasn't registered correctly and didn't work. If you deployed build 1.6.2.4, update your Azure AD Connect server with this build to register the Health feature correctly.
## 1.6.2.4

> [!IMPORTANT]
-> Update per March 30, 2021: we have discovered an issue in this build. After installation of this build, the Health services are not registered. We recommend not installing this build. We will release a hotfix shortly.
-> If you already installed this build, you can manually register the Health services by using the cmdlet as shown in [this article](./how-to-connect-health-agent-install.md#manually-register-azure-ad-connect-health-for-sync).
+> Update per March 30, 2021: We've discovered an issue in this build. After installation of this build, the Health services aren't registered. We recommend that you not install this build. We'll release a hotfix shortly.
+> If you already installed this build, you can manually register the Health services by using the cmdlet, as shown in [Azure AD Connect Health agent installation](./how-to-connect-health-agent-install.md#manually-register-azure-ad-connect-health-for-sync).
-> [!NOTE]
-> - This release will be made available for download only.
-> - The upgrade to this release will require a full synchronization due to sync rule changes.
-> - This release defaults the Azure AD Connect server to the new V2 end point.
+- This release will be made available for download only.
+- The upgrade to this release will require a full synchronization because of sync rule changes.
+- This release defaults the Azure AD Connect server to the new V2 endpoint.
### Release status
-3/19/2021: Released for download, not available for auto upgrade
+3/19/2021: Released for download, not available for auto-upgrade
### Functional changes
- - Added new default sync rules for limiting membership count in group writeback (Out to AD - Group Writeback Member Limit) and group sync to Azure Active Directory (Out to AAD - Group Writeup Member Limit) groups.
- - Added member attribute to the 'Out to AD - Group SOAInAAD - Exchange' rule to limit members in written back groups to 50k.
- -If the "In from AAD - Group SOAInAAD" rule is cloned and Azure AD Connect is upgraded.
- - The updated rule will be disabled by default and so the targetWritebackType will be null.
- - Azure AD Connect will writeback all Cloud Groups (including Azure Active Directory Security Groups enabled for writeback) as Distribution Groups.
- -If the "Out to AD - Group SOAInAAD" rule is cloned and Azure AD Connect is upgraded.
- - The updated rule will be disabled by default. However, a new sync rule "Out to AD - Group SOAInAAD - Exchange" which is added will be enabled.
- - Depending on the Cloned Custom Sync Rule's precedence, Azure AD Connect will flow the Mail and Exchange attributes.
- - If the Cloned Custom Sync Rule does not flow some Mail and Exchange attributes, then new Exchange Sync Rule will add those attributes.
- - Clear-ADSyncToolsMsDsConsistencyGuid
- - ConvertFrom-ADSyncToolsAadDistinguishedName
- - ConvertFrom-ADSyncToolsImmutableID
- - ConvertTo-ADSyncToolsAadDistinguishedName
- - ConvertTo-ADSyncToolsCloudAnchor
- - ConvertTo-ADSyncToolsImmutableID
- - Export-ADSyncToolsAadDisconnectors
- - Export-ADSyncToolsObjects
- - Export-ADSyncToolsRunHistory
- - Get-ADSyncToolsAadObject
- - Get-ADSyncToolsMsDsConsistencyGuid
- - Import-ADSyncToolsObjects
- - Import-ADSyncToolsRunHistory
- - Remove-ADSyncToolsAadObject
- - Search-ADSyncToolsADobject
- - Set-ADSyncToolsMsDsConsistencyGuid
- - Trace-ADSyncToolsADImport
- - Trace-ADSyncToolsLdapQuery
-
- - Set-ADSyncAADCompanyFeature
- - Get-ADSyncAADCompanyFeature
- - Get-ADSyncAADConnectorImportApiVersion - to get import AWS API version
- - Get-ADSyncAADConnectorExportApiVersion - to get export AWS API version
-
+- We updated default sync rules to limit membership in writeback groups to 50,000 members.
+ - We added new default sync rules for limiting the membership count in group writeback (Out to AD - Group Writeback Member Limit) and group sync to Azure AD (Out to AAD - Group Writeup Member Limit) groups.
+ - We added a member attribute to the Out to AD - Group SOAInAAD - Exchange rule to limit members in writeback groups to 50,000.
+- We updated sync rules to support group writeback V2:
+ - If the In from AAD - Group SOAInAAD rule is cloned and Azure AD Connect is upgraded:
+ - The updated rule will be disabled by default, so targetWritebackType will be null.
+ - Azure AD Connect will write back all Cloud Groups (including Azure AD Security Groups enabled for writeback) as Distribution Groups.
+ - If the Out to AD - Group SOAInAAD rule is cloned and Azure AD Connect is upgraded:
+ - The updated rule will be disabled by default, but a new sync rule, Out to AD - Group SOAInAAD - Exchange, will be added and enabled.
+ - Depending on the Cloned Custom Sync Rule's precedence, Azure AD Connect will flow the Mail and Exchange attributes.
+ - If the Cloned Custom Sync Rule doesn't flow some Mail and Exchange attributes, the new Exchange Sync Rule will add those attributes.
+- We added support for [Selective Password Hash Synchronization](./how-to-connect-selective-password-hash-synchronization.md).
+- We added the new [Single Object Sync cmdlet](./how-to-connect-single-object-sync.md). Use this cmdlet to troubleshoot your Azure AD Connect sync configuration.
+- Azure AD Connect now supports the Hybrid Identity Administrator role for configuring the service.
+- We updated the Azure AD ConnectHealth agent to 3.1.83.0.
+- We introduced a new version of the [ADSyncTools PowerShell module](./reference-connect-adsynctools.md), which has several new or improved cmdlets:
+ - Clear-ADSyncToolsMsDsConsistencyGuid
+ - ConvertFrom-ADSyncToolsAadDistinguishedName
+ - ConvertFrom-ADSyncToolsImmutableID
+ - ConvertTo-ADSyncToolsAadDistinguishedName
+ - ConvertTo-ADSyncToolsCloudAnchor
+ - ConvertTo-ADSyncToolsImmutableID
+ - Export-ADSyncToolsAadDisconnectors
+ - Export-ADSyncToolsObjects
+ - Export-ADSyncToolsRunHistory
+ - Get-ADSyncToolsAadObject
+ - Get-ADSyncToolsMsDsConsistencyGuid
+ - Import-ADSyncToolsObjects
+ - Import-ADSyncToolsRunHistory
+ - Remove-ADSyncToolsAadObject
+ - Search-ADSyncToolsADobject
+ - Set-ADSyncToolsMsDsConsistencyGuid
+ - Trace-ADSyncToolsADImport
+ - Trace-ADSyncToolsLdapQuery
+
+- We updated error logging for token acquisition failures.
+- We updated **Learn More** links on the configuration page to give more detail on the linked information.
+- We removed the **Explicit** column from the **CS Search** page in the old sync UI.
+- We updated the UI for the group writeback flow to prompt users for credentials, or to configure their own permissions by using the ADSyncConfig module, if credentials weren't already provided in an earlier step.
+- We added the ability to autocreate a managed service account for an ADSync service account on a DC.
+- We added the ability to set and get the Azure AD DirSync feature group writeback V2 in the existing cmdlets:
+
+ - Set-ADSyncAADCompanyFeature
+ - Get-ADSyncAADCompanyFeature
+- We added two cmdlets to read the AWS API version:
+
+ - Get-ADSyncAADConnectorImportApiVersion: To get the import AWS API version
+ - Get-ADSyncAADConnectorExportApiVersion: To get the export AWS API version
+
+- We updated change tracking so that changes made to synchronization rules are now tracked to assist troubleshooting changes in the service. The cmdlet Get-ADSyncRuleAudit retrieves tracked changes.
+- We updated the Add-ADSyncADDSConnectorAccount cmdlet in the [ADSyncConfig PowerShell module](./how-to-connect-configure-ad-ds-connector-account.md#using-the-adsyncconfig-powershell-module) to allow a user in the ADSyncAdmin group to change the Active Directory Domain Services Connector account.
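Several of the cmdlets named in the functional changes above can be run directly on the Azure AD Connect server. The following sketch shows one plausible way to call them; it assumes the ADSync module is installed locally, and the calls are shown without parameters, so consult `Get-Help` for each cmdlet before relying on any specific parameter names.

```powershell
# Sketch only: run on the Azure AD Connect server, where the ADSync
# module ships with the product.
Import-Module ADSync

# Read the import and export AWS API versions mentioned above.
Get-ADSyncAADConnectorImportApiVersion
Get-ADSyncAADConnectorExportApiVersion

# Inspect Azure AD DirSync feature flags, including group writeback V2.
Get-ADSyncAADCompanyFeature

# Retrieve tracked synchronization rule changes for troubleshooting.
Get-ADSyncRuleAudit
```

Because these cmdlets query the local sync service, their output depends on your deployment and can't be reproduced outside an Azure AD Connect server.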
### Bug fixes
+- We updated disabled foreground color to satisfy luminosity requirements on a white background. We added more conditions for the navigation tree to set the foreground text color to white when a disabled page is selected to satisfy luminosity requirements.
+- We increased granularity for the Set-ADSyncPasswordHashSyncPermissions cmdlet.
+- We updated the PHS permissions script (Set-ADSyncPasswordHashSyncPermissions) to include an optional ADobjectDN parameter.
+- We made an accessibility bug fix. The screen reader now describes the UX element that holds the list of forests as **Forests list** instead of **Forest List list**.
+- We updated screen reader output for some items in the Azure AD Connect wizard. We updated the button hover color to satisfy contrast requirements. We updated Synchronization Service Manager title color to satisfy contrast requirements.
+- We fixed an issue with installing Azure AD Connect from an exported configuration that has custom extension attributes.
+- We added a condition to skip checking for extension attributes in the target schema while applying the sync rule.
+- We added appropriate permissions on installation if the group writeback feature is enabled.
+- We fixed duplicate default sync rule precedence on import.
+- We fixed an issue that caused a staging error during V2 API delta import for a conflicting object that was repaired via the Health portal.
+- We fixed an issue in the sync engine that caused Connector Spaces objects to have an inconsistent link state.
+- We added import counters to Get-ADSyncConnectorStatistics output.
+- We fixed an issue in some corner cases during the pass2 wizard where a previously selected unreachable domain couldn't be deselected.
+- We modified policy import and export to fail if a custom rule has duplicate precedence.
+- We fixed a bug in the domain selection logic.
+- We fixed an issue with build 1.5.18.0 if you use mS-DS-ConsistencyGuid as the source anchor and have cloned the In from AD - Group Join rule.
+- Fresh Azure AD Connect installations will use the Export Deletion Threshold stored in the cloud if there's one available and if there isn't a different one passed in.
+- We fixed an issue where Azure AD Connect wouldn't read Active Directory displayName changes of hybrid-joined devices.
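The bug fixes above mention the more granular Set-ADSyncPasswordHashSyncPermissions cmdlet and its new optional ADobjectDN parameter. A hedged sketch of how that scoping might look follows; the parameter name ADConnectorAccountDN and both distinguished-name values are illustrative placeholders for your environment, so verify the exact parameter set with `Get-Help Set-ADSyncPasswordHashSyncPermissions`.

```powershell
# Sketch only: assumes the ADSyncConfig module from the Azure AD Connect
# installation is available on this server.
Import-Module ADSyncConfig

# Grant password hash sync permissions for the connector account
# (parameter name shown is an assumption; check Get-Help first).
Set-ADSyncPasswordHashSyncPermissions `
    -ADConnectorAccountDN 'CN=aadc-svc,OU=Service,DC=contoso,DC=com'

# Scope the grant to a single container via the optional ADobjectDN
# parameter described in the release notes above.
Set-ADSyncPasswordHashSyncPermissions `
    -ADConnectorAccountDN 'CN=aadc-svc,OU=Service,DC=contoso,DC=com' `
    -ADobjectDN 'OU=Sales,DC=contoso,DC=com'
```

Scoping with ADobjectDN limits the permission change to the given object's subtree instead of the whole domain, which is useful for staged rollouts.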
## 1.5.45.0
This is a bug fix release. There are no functional changes in this release.
### Fixed issues
+- We fixed an issue where an admin couldn't enable seamless single sign-on if the AZUREADSSOACC computer account was already present in Active Directory.
+- We fixed an issue that caused a staging error during V2 API delta import for a conflicting object that was repaired via the Health portal.
+- We fixed an issue in the import/export configuration where a disabled custom rule was imported as enabled.
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
Title: Secure hybrid access with F5
description: F5 BIG-IP Access Policy Manager and Azure Active Directory integration for Secure Hybrid Access
Last updated 11/12/2020

# Integrate F5 BIG-IP with Azure Active Directory
Integrating F5 BIG-IP with Azure AD for SHA has the following prerequisites:
No previous experience or F5 BIG-IP knowledge is necessary to implement SHA, but we do recommend familiarizing yourself with F5 BIG-IP terminology. F5's rich [knowledge base](https://www.f5.com/services/resources/glossary) is also a good place to start building BIG-IP knowledge.
-## Deployment scenarios
+Configuring a BIG-IP for SHA can be achieved by using any of several available methods, including template-based options and a manual configuration.
+The following tutorials provide detailed guidance on implementing some of the more common patterns for BIG-IP and Azure AD SHA by using these methods.
-The following tutorials provide detailed guidance on implementing some of the more common patterns for BIG-IP and Azure AD SHA:
+**Advanced configuration**
+
+The advanced approach provides a more elaborate, yet flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would use this approach for scenarios not covered by the guided configuration templates.
- [F5 BIG-IP in Azure deployment walk-through](f5-bigip-deployment-guide.md)
+- [Securing F5 BIG-IP SSL-VPN with Azure AD SHA](f5-aad-password-less-vpn.md)
+
+- [Extend Azure AD B2C to protect applications using F5 BIG-IP](../../active-directory-b2c/partner-f5.md)
+
+- [F5 BIG-IP APM and Azure AD SSO to Kerberos applications](f5-big-ip-kerberos-advanced.md)
+- [F5 BIG-IP APM and Azure AD SSO to Header-based applications](f5-big-ip-header-advanced.md)
-- [Securing F5 BIG-IP SSL-VPN with Azure AD SHA](f5-aad-password-less-vpn.md)
+- [F5 BIG-IP APM and Azure AD SSO to forms-based applications](f5-big-ip-forms-advanced.md)
-- [Configure Azure AD B2C with F5 BIG-IP](../../active-directory-b2c/partner-f5.md)
+**Guided Configuration and Easy Button templates**
-- [F5 BIG-IP APM and Azure AD SSO to forms-based applications](f5-big-ip-forms-advanced.md)
+The Guided Configuration wizard, available from BIG-IP version 13.1, aims to minimize the time and effort of implementing common BIG-IP publishing scenarios. Its workflow-based framework provides an intuitive deployment experience tailored to specific access topologies.
+
+The latest version of the Guided Configuration, 16.1, now offers an Easy Button feature. With **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, without the management overhead of having to do so on a per-app basis.
+
+- [F5 BIG-IP Easy Button for SSO to Kerberos applications](f5-big-ip-kerberos-easy-button.md)
- [F5 BIG-IP Easy Button for SSO to header-based and LDAP applications](f5-big-ip-ldap-header-easybutton.md)
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
Title: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication
-description: Learn how to implement Secure Hybrid Access (SHA) with Single Sign-on (SSO) to Kerberos applications using F5's BIG-IP advanced configuration.
+description: Learn how to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications by using F5's BIG-IP advanced configuration.
# Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication
-In this tutorial, you'll learn how to implement Secure Hybrid Access (SHA) with Single Sign-on (SSO) to Kerberos applications using F5's BIG-IP advanced configuration.
+In this article, you'll learn how to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications by using F5's BIG-IP advanced configuration.
-Integrating a BIG-IP with Azure AD provides many benefits, including:
+Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
-* Improved zero-trust governance through Azure AD pre-authentication and authorization
+* Improved Zero Trust governance through Azure AD pre-authentication and authorization.
-* Full Single Sign-on (SSO) between Azure AD and BIG-IP published services
+* Full SSO between Azure AD and BIG-IP published services.
-* Manage Identities and access from a single control plane - [The Azure portal](https://portal.azure.com/)
+* Management of identities and access from a single control plane, the [Azure portal](https://portal.azure.com/).
-To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+To learn about all of the benefits, see [Integrate F5 BIG-IP with Azure Active Directory](./f5-aad-integration.md) and [What is single sign-on in Azure Active Directory?](/azure/active-directory/active-directory-appssoaccess-whatis).
## Scenario description
-For this scenario, you will configure a critical line of business (LOB) application for **Kerberos authentication**, also known as **Integrated Windows Authentication (IWA)**.
+For this scenario, you'll configure a critical line-of-business application for *Kerberos authentication*, also known as *Integrated Windows Authentication*.
-To integrate the application directly with Azure AD, it'd need to support some form of federation-based protocol such as Security Assertion Markup Language (SAML), or better. But as modernizing the application introduces risk of potential downtime, there are other options. While using Kerberos Constrained Delegation (KCD) for SSO, you can use [Azure AD Application Proxy](../app-proxy/application-proxy.md) to access the application remotely.
+For you to integrate the application directly with Azure AD, it would need to support some form of federation-based protocol, such as Security Assertion Markup Language (SAML). But because modernizing the application introduces the risk of potential downtime, there are other options.
-In this arrangement, you can achieve the protocol transitioning required to bridge the legacy application to the modern identity control plane. Another approach is to use an F5 BIG-IP Application Delivery Controller (ADC). This enables overlay of the application with Azure AD pre-authentication and KCD SSO, and significantly improves the overall Zero Trust posture of the application.
+While you're using Kerberos Constrained Delegation (KCD) for SSO, you can use [Azure AD Application Proxy](../app-proxy/application-proxy.md) to access the application remotely. In this arrangement, you can achieve the protocol transitioning that's required to bridge the legacy application to the modern identity control plane.
+
+Another approach is to use an F5 BIG-IP Application Delivery Controller. This approach enables overlay of the application with Azure AD pre-authentication and KCD SSO. It significantly improves the overall Zero Trust posture of the application.
## Scenario architecture
-The secure hybrid access solution for this scenario is made up of the following:
+The SHA solution for this scenario consists of the following elements:
-**Application:** The backend Kerberos-based service that gets externally published by the BIG-IP and is protected by SHA.
+- **Application**: Back-end Kerberos-based service that's externally published by BIG-IP and protected by SHA.
-**BIG-IP:** Reverse proxy functionality enables publishing backend applications. The APM then overlays published applications with SAML Service Provider (SP) and SSO functionality.
+- **BIG-IP**: Reverse proxy functionality that enables publishing back-end applications. The Access Policy Manager (APM) then overlays published applications with SAML service provider (SP) and SSO functionality.
-**Azure AD:** Identity Provider (IdP) responsible for verifying user credentials, Conditional Access (CA), and SSO to the BIG-IP APM through SAML.
+- **Azure AD**: Identity provider (IdP) responsible for verifying user credentials, Azure AD Conditional Access, and SSO to the BIG-IP APM through SAML.
-**KDC:** Key Distribution Center role on a Domain Controller (DC), issuing Kerberos tickets.
+- **KDC**: Key Distribution Center role on a domain controller (DC). It issues Kerberos tickets.
-The following image illustrates the SAML SP initiated flow for this scenario, but IdP initiated flow is also supported.
+The following image illustrates the SAML SP-initiated flow for this scenario, but IdP-initiated flow is also supported.
-![Scenario architecture](./media/f5-big-ip-kerberos-advanced/scenario-architecture.png)
+![Diagram of the scenario architecture.](./media/f5-big-ip-kerberos-advanced/scenario-architecture.png)
-| Steps| Description |
+| Step| Description |
| -- |-|
-| 1| User connects to application endpoint (BIG-IP) |
-| 2| BIG-IP access policy redirects user to Azure AD (SAML IdP) |
-| 3| Azure AD pre-authenticates user and applies any enforced CA policies |
-| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
-| 5| BIG-IP authenticates user and requests Kerberos ticket from KDC |
-| 6| BIG-IP sends request to backend application, along with Kerberos ticket for SSO |
-| 7| Application authorizes request and returns payload |
+| 1| User connects to the application endpoint (BIG-IP). |
+| 2| BIG-IP access policy redirects the user to Azure AD (SAML IdP). |
+| 3| Azure AD pre-authenticates the user and applies any enforced Conditional Access policies. |
+| 4| User is redirected to BIG-IP (SAML SP), and SSO is performed via the issued SAML token. |
+| 5| BIG-IP authenticates the user and requests a Kerberos ticket from KDC. |
+| 6| BIG-IP sends the request to the back-end application, along with the Kerberos ticket for SSO. |
+| 7| Application authorizes the request and returns the payload. |
## Prerequisites
-Prior BIG-IP experience isn't necessary, but you will need:
+Prior BIG-IP experience isn't necessary, but you will need:
-* An Azure AD free subscription or above
+* An Azure AD free subscription or higher-tier subscription.
-* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](../manage-apps/f5-bigip-deployment-guide.md)
+* An existing BIG-IP, or [deploy BIG-IP Virtual Edition in Azure](../manage-apps/f5-bigip-deployment-guide.md).
-* Any of the following F5 BIG-IP license offers
+* Any of the following F5 BIG-IP license offers:
- * F5 BIG-IP® Best bundle
+ * F5 BIG-IP Best bundle
* F5 BIG-IP APM standalone license
- * F5 BIG-IP APM add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+ * F5 BIG-IP APM add-on license on an existing BIG-IP Local Traffic Manager
- * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php)
-* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD or created directly within Azure AD and flowed back to your on-premises directory
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD, or created directly within Azure AD and flowed back to your on-premises directory.
-* An account with Azure AD Application admin [permissions](../users-groups-roles/directory-assign-admin-roles.md)
+* An account with Azure AD Application Administrator [permissions](../users-groups-roles/directory-assign-admin-roles.md).
-* Web server [certificate](../manage-apps/f5-bigip-deployment-guide.md) for publishing services over HTTPS or use default BIG-IP certs while testing
+* A web server [certificate](../manage-apps/f5-bigip-deployment-guide.md) for publishing services over HTTPS, or use default BIG-IP certificates while testing.
-* An existing Kerberos application or [setup an IIS (Internet Information Services) app](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html) for KCD SSO
+* An existing Kerberos application, or [set up an Internet Information Services (IIS) app](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html) for KCD SSO.
## Configuration methods
-There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the advanced approach that provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for scenarios not covered by the guided configuration templates.
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This article covers the advanced approach, which provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for scenarios that the guided configuration templates don't cover.
>[!NOTE]
-> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+> All example strings or values in this article should be replaced with those for your actual environment.
## Register F5 BIG-IP in Azure AD
-Before a BIG-IP can hand off pre-authentication to Azure AD, it must be registered in your tenant. This is the first step in establishing SSO between both entities and is no different to making any IDP aware of a SAML Relying Party (RP). In this case, the app you create from the F5 BIG-IP gallery template is the RP representing the SAML SP for the BIG-IP published application.
+Before BIG-IP can hand off pre-authentication to Azure AD, it must be registered in your tenant. This is the first step in establishing SSO between both entities. It's no different from making any IdP aware of a SAML relying party. In this case, the app that you create from the F5 BIG-IP gallery template is the relying party that represents the SAML SP for the BIG-IP published application.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com) by using an account with Application Administrator permissions.
-1. Sign-in to the [Azure AD portal](https://portal.azure.com) using an account with Application Admin rights.
+2. From the left pane, select the **Azure Active Directory** service.
-2. From the left navigation pane, select the **Azure Active Directory** service
+3. On the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant.
-3. In the left menu, select **Enterprise applications.** The **All applications** pane opens and displays a list of the applications in your Azure AD tenant.
+4. On the **Enterprise applications** pane, select **New application**.
-4. In the **Enterprise applications** pane, select **New application**.
+5. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons that indicate whether they support federated SSO and provisioning.
-5. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning. Search for **F5** in the Azure gallery and select **F5 BIG-IP APM Azure AD integration**
+ Search for **F5** in the Azure gallery, and select **F5 BIG-IP APM Azure AD integration**.
6. Provide a name for the new application so that you can recognize this instance of it. Select **Add/Create** to add it to your tenant.
-## Enable SSO to the F5 BIG-IP
+## Enable SSO to F5 BIG-IP
-Next, configure the BIG-IP registration to fulfill SAML tokens requested by the BIG-IP APM.
+Next, configure the BIG-IP registration to fulfill SAML tokens that the BIG-IP APM requests:
1. In the **Manage** section of the left menu, select **Single sign-on** to open the **Single sign-on** pane for editing.
-2. On the **Select a single sign-on method** page, select **SAML** followed by **No, I'll save later** to skip the prompt.
+2. On the **Select a single sign-on method** page, select **SAML** followed by **No, I'll save later** to skip the prompt.
3. On the **Set up single sign-on with SAML** pane, select the pen icon to edit **Basic SAML Configuration**. Make these edits:
- 1. Replace the pre-defined **Identifier** with the full URL for the BIG-IP published application
+ 1. Replace the predefined **Identifier** value with the full URL for the BIG-IP published application.
- 2. Replace the **Reply URL** but retain the path for the application's SAML SP endpoint.
+ 2. Replace the **Reply URL** value but retain the path for the application's SAML SP endpoint.
- In this configuration, the SAML flow would operate in IdP initiated mode, where Azure AD issues a SAML assertion before the user is redirected to the BIG-IP endpoint for the application.
-
+ In this configuration, the SAML flow would operate in IdP-initiated mode. In that mode, Azure AD issues a SAML assertion before the user is redirected to the BIG-IP endpoint for the application.
- 3. To use SP initiated mode, populate the **Sign on URL** with the application URL.
+ 3. To use SP-initiated mode, populate **Sign on URL** with the application URL.
- 4. For the **Logout URI**, enter the BIG-IP APM single logout (SLO) endpoint pre-pended by the host header of the service being published. It ensures the user's BIG-IP APM session is also terminated after being signed out of Azure AD.
+ 4. For **Logout Url**, enter the BIG-IP APM single logout (SLO) endpoint prepended by the host header of the service that's being published. This step ensures that the user's BIG-IP APM session ends after the user is signed out of Azure AD.
- ![Screenshot for editing basic SAML configuration](./media/f5-big-ip-kerberos-advanced/edit-basic-saml-configuration.png)
+ ![Screenshot for editing basic SAML configuration.](./media/f5-big-ip-kerberos-advanced/edit-basic-saml-configuration.png)
> [!NOTE]
- > From TMOS v16 the SAML SLO endpoint has changed to **/saml/sp/profile/redirect/slo**
+ > From TMOS v16, the SAML SLO endpoint has changed to **/saml/sp/profile/redirect/slo**.
-4. Select **Save** before exiting the SAML configuration pane and skip the SSO test prompt.
+4. Select **Save** before closing the SAML configuration pane and skip the SSO test prompt.
-5. Note the properties of the **User Attributes & Claims** section, as these are what Azure AD will issue users for BIG-IP APM authentication and SSO to the backend application.
+5. Note the properties of the **User Attributes & Claims** section. Azure AD will issue these properties to users for BIG-IP APM authentication and for SSO to the back-end application.
-6. In the **SAML Signing Certificate** pane, select the **Download** button to save the **Federation Metadata XML** file to your computer.
+6. On the **SAML Signing Certificate** pane, select **Download** to save the **Federation Metadata XML** file to your computer.
- ![Edit SAML signing certificate](./media/f5-big-ip-kerberos-advanced/edit-saml-signing-certificate.png)
+ ![Screenshot that shows selections for editing a SAML signing certificate.](./media/f5-big-ip-kerberos-advanced/edit-saml-signing-certificate.png)
-SAML signing certificates created by Azure AD have a lifespan of 3 years. For more information, see [Managed certificates for federated single sign-on](./manage-certificates-for-federated-single-sign-on.md).
+SAML signing certificates that Azure AD creates have a lifespan of three years. For more information, see [Managed certificates for federated single sign-on](./manage-certificates-for-federated-single-sign-on.md).
## Assign users and groups
-By default, Azure AD will issue tokens only for users that have been granted access to an application. To provide specific users and groups access to the application:
+By default, Azure AD will issue tokens only for users who have been granted access to an application. To grant specific users and groups access to the application:
-1. In the **F5 BIG-IP application's overview** blade, select **Assign Users and groups**
+1. On the **F5 BIG-IP application's overview** pane, select **Assign Users and groups**.
- ![Screenshot for assigning users and groups](./media/f5-big-ip-kerberos-advanced/authorize-users-groups.png)
+2. Select **+ Add user/group**.
-2. Select **+ Add user/group** to add the groups authorized to access the internal application followed by **Select > Assign** to assign the users/ groups to your application
+ ![Screenshot that shows the button for assigning users and groups.](./media/f5-big-ip-kerberos-advanced/authorize-users-groups.png)
-## Active Directory KCD configurations
+3. Select users and groups, and then select **Assign** to assign them to your application.
-For the BIG-IP APM to perform SSO to the backend application on behalf of users, KCD must be configured in the target AD domain. Delegating authentication also requires that the BIG-IP APM be provisioned with a domain service account.
+## Configure Active Directory KCD
-For our scenario, the application is hosted on server **APP-VM-01** and is running in the context of a service account named **web_svc_account**, not the computer's identity. The delegating service account assigned to the APM will be called **F5-BIG-IP**.
+For the BIG-IP APM to perform SSO to the back-end application on behalf of users, KCD must be configured in the target Active Directory domain. Delegating authentication also requires that the BIG-IP APM is provisioned with a domain service account.
+
+For the scenario in this article, the application is hosted on server **APP-VM-01** and is running in the context of a service account named **web_svc_account**, not the computer's identity. The delegating service account assigned to the APM is **F5-BIG-IP**.
### Create a BIG-IP APM delegation account
-As the BIG-IP doesn't support group managed service accounts (gMSA), create a standard user account to use as the APM service account:
+Because BIG-IP doesn't support group managed service accounts, create a standard user account to use as the APM service account:
+1. Enter the following PowerShell command. Replace the `UserPrincipalName` and `SamAccountName` values with those for your environment.
-1. Replace the **UserPrincipalName** and **SamAccountName** values with those for your environment in these PowerShell commands:
+    ```New-ADUser -Name "F5 BIG-IP Delegation Account" -UserPrincipalName host/f5-big-ip.contoso.com@contoso.com -SamAccountName "f5-big-ip" -PasswordNeverExpires $true -Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password") ```
-    ```New-ADUser -Name "F5 BIG-IP Delegation Account" -UserPrincipalName host/f5-big-ip.contoso.com@contoso.com -SamAccountName "f5-big-ip" -PasswordNeverExpires $true -Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password") ```
+2. Create a service principal name (SPN) for the APM service account to use when you're performing delegation to the web application's service account:
-2. Create a **Service Principal Name (SPN)** for the APM service account to use when performing delegation to the web application's service account.
+    ```Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @{Add="host/f5-big-ip.contoso.com"} ```
-    ```Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @{Add="host/f5-big-ip.contoso.com"} ```
+3. Ensure that the SPN now shows against the APM service account:
-3. Ensure the SPN now shows against the APM service account.
+ ```Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
- ```Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
+ 4. Before you specify the target SPN that the APM service account should delegate to for the web application, view its existing SPN configuration:
+
+ 1. Check whether your web application is running in the computer context or a dedicated service account.
+ 2. Use the following command to query the account object in Active Directory to see its defined SPNs. Replace `<name_of_account>` with the account for your environment.
- 4. Before specifying the target SPN that the APM service account should delegate to for the web application, you need to view its existing SPN config. Check whether your web application is running in the computer context or a dedicated service account. Next, query that account object in AD to see its defined SPNs. Replace <name_of_account> with the account for your environment.
+ ```Get-ADUser -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
- ```Get-ADUser -identity <name_of _account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
+5. You can use any SPN that you see defined against a web application's service account. But in the interest of security, it's best to use a dedicated SPN that matches the host header of the application.
-5. You can use any SPN you see defined against a web application's service account, but in the interest of security it's best to use a dedicated SPN matching the host header of the application. For example, as our web application host header is myexpenses.contoso.com we would add HTTP/myexpenses.contoso.com to the application's service account object in AD.
+ For example, because the web application host header in this example is **myexpenses.contoso.com**, you would add `HTTP/myexpenses.contoso.com` to the application's service account object in Active Directory:
- ```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
+ ```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
- Or if the app ran in the machine context, we would add the SPN to the object of the computer account in AD.
+ Or if the app ran in the machine context, you would add the SPN to the object of the computer account in Active Directory:
```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
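    Duplicate SPNs break Kerberos authentication. Before you add an SPN, you can check whether it's already registered anywhere in the forest by using the built-in **setspn** tool. This is a sketch that reuses the example host name from this article; substitute your own:

    ```setspn -Q http/myexpenses.contoso.com ```

    If the query returns an existing registration against a different account, remove or correct that entry before proceeding.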
-With the SPNs defined, the APM service account now needs trusting to delegate to that service. The configuration will vary depending on the topology of your BIG-IP and application server.
+With the SPNs defined, you now need to establish trust for the APM service account to delegate to that service. The configuration will vary depending on the topology of your BIG-IP instance and application server.
-### Configure BIG-IP and target application in same domain
+### Configure BIG-IP and the target application in the same domain
-1. Set trust for the APM service account to delegate authentication
+1. Set trust for the APM service account to delegate authentication:
- ```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true ```
+ ```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true ```
-2. The APM service account then needs to know which target SPN it's trusted to delegate to, Or in other words which service is it allowed to request a Kerberos ticket for. Set target SPN to the service account running your web application.
+2. The APM service account then needs to know which target SPN it's trusted to delegate to. In other words, the APM service account needs to know which service it's allowed to request a Kerberos ticket for. Set the target SPN to the service account that's running your web application:
- ```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')} ```
+ ```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')} ```
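To confirm that both delegation settings landed on the APM service account, you can query them back. This is a sketch; both properties are standard Active Directory attributes, and the account name is the example from this article:

```Get-ADUser -Identity f5-big-ip -Properties TrustedToAuthForDelegation, msDS-AllowedToDelegateTo | Select-Object Name, TrustedToAuthForDelegation, msDS-AllowedToDelegateTo ```

`TrustedToAuthForDelegation` should return `True`, and `msDS-AllowedToDelegateTo` should list the target SPN (for example, `HTTP/myexpenses.contoso.com`).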
-If preferred, you can also complete these tasks through the Active Directory Users and Computers Microsoft Management Console (MMC) on a domain controller.
+If you prefer, you can complete these tasks through the **Active Directory Users and Computers** Microsoft Management Console (MMC) snap-in on a domain controller.
-### BIG-IP and application in different domains
+### Configure BIG-IP and the target application in different domains
-Starting with Windows Server 2012, cross domain KCD uses Resource-based constrained delegation (RCD). The constraints for a service have been transferred from the domain administrator to the service administrator. This allows the back-end service administrator to allow or deny SSO. This also introduces a different approach at configuration delegation, which is only possible using either PowerShell or ADSIEdit.
+Starting with Windows Server 2012, cross-domain KCD uses resource-based constrained delegation. The constraints for a service have been transferred from the domain administrator to the service administrator. This delegation allows the back-end service administrator to allow or deny SSO. It also introduces a different approach to configuring delegation, which is possible only through PowerShell or ADSI Edit.
-The PrincipalsAllowedToDelegateToAccount property of the applications service account (computer or dedicated service account) can be used to grant delegation from the BIG-IP. For this scenario, use the following PowerShell command on a Domain Controller DC (2012 R2+) within the same domain as the application.
+You can use the `PrincipalsAllowedToDelegateToAccount` property of the application's service account (computer or dedicated service account) to grant delegation from BIG-IP. For this scenario, use the following PowerShell command on a domain controller (Windows Server 2012 R2 or later) within the same domain as the application.
-If the **web_svc_account** service runs in context of a user account:
+If the **web_svc_account** service runs in the context of a user account, use these commands:
```$bigip = Get-ADComputer -Identity f5-big-ip -server dc.contoso.com``` ```Set-ADUser -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount $bigip``` ```Get-ADUser web_svc_account -Properties PrincipalsAllowedToDelegateToAccount```
-If the **web_svc_account** service runs in context of a computer account:
+If the **web_svc_account** service runs in the context of a computer account, use these commands:
```$bigip = Get-ADComputer -Identity f5-big-ip -server dc.contoso.com``` ```Set-ADComputer -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount $bigip``` ```Get-ADComputer web_svc_account -Properties PrincipalsAllowedToDelegateToAccount``` For more information, see [Kerberos Constrained Delegation across domains](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831477(v=ws.11)).
-## BIG-IP advanced configuration
-Now we can proceed with setting up the BIG-IP configurations.
+## Make BIG-IP advanced configurations
+
+Now you can proceed with setting up the BIG-IP configurations.
+
+### Configure SAML service provider settings
-### Configure SAML Service Provider settings
+SAML service provider settings define the SAML SP properties that the APM will use for overlaying the legacy application with SAML pre-authentication. To configure them:
-These settings define the SAML SP properties that the APM will use for overlaying the legacy application with SAML pre-authentication.
+1. From a browser, sign in to the F5 BIG-IP management console.
-1. From a browser, sign-in to the F5 BIG-IP management console
+2. Select **Access** > **Federation** > **SAML Service Provider** > **Local SP Services** > **Create**.
-2. Select **Access > Federation > SAML Service Provider > Local SP Services > Create**
+ ![Screenshot that shows the button for creating a local SAML service provider service.](./media/f5-big-ip-kerberos-advanced/create-local-services-saml-service-provider.png)
- ![Create local service SAML service provider](./media/f5-big-ip-kerberos-advanced/create-local-services-saml-service-provider.png)
+3. Provide the **Name** and **Entity ID** values that you saved when you configured SSO for Azure AD earlier.
-3. Provide a **Name** and the **Entity ID** saved when you configured SSO for Azure AD earlier.
+ ![Screenshot that shows name and entity I D values entered for a new SAML service provider service.](./media/f5-big-ip-kerberos-advanced/create-new-saml-sp-service.png)
- ![Create a new SAML SP service](./media/f5-big-ip-kerberos-advanced/create-new-saml-sp-service.png)
+4. You don't need to specify **SP Name Settings** information if the SAML entity ID is an exact match with the URL for the published application.
-4. You need not specify **SP Name Settings** if the SAML entity ID is an exact match with the URL for the published application. For example, if the entity ID were urn:myexpenses:contosoonline then you would need to provide the **Scheme** and **Host** as https myexpenses.contoso.com. Whereas if the entity ID was `https://myexpenses.contoso.com` then not.
+ For example, if the entity ID is **urn:myexpenses:contosoonline**, you need to provide the **Scheme** and **Host** values as **https** and **myexpenses.contoso.com**. But if the entity ID is `https://myexpenses.contoso.com`, you don't need to provide this information.
-### Configure external IdP connector
+### Configure an external IdP connector
-A SAML IdP connector defines the settings required for the BIG-IP APM to trust Azure AD as its SAML IdP. These settings will map the SAML SP to a SAML IdP, establishing the federation trust between the APM and Azure AD.
+A SAML IdP connector defines the settings that are required for the BIG-IP APM to trust Azure AD as its SAML IdP. These settings will map the SAML SP to a SAML IdP, establishing the federation trust between the APM and Azure AD. To configure the connector:
-1. Scroll down to select the new SAML SP object and select **Bind/Unbind IdP Connectors**
+1. Scroll down to select the new SAML SP object, and then select **Bind/Unbind IdP Connectors**.
- ![Screenshot for select new SAML object](./media/f5-big-ip-kerberos-advanced/bind-unbind-idp-connectors.png)
+ ![Screenshot that shows the button for binding or unbinding identity provider connectors.](./media/f5-big-ip-kerberos-advanced/bind-unbind-idp-connectors.png)
-2. Select **Create New IdP Connector**, choose **From Metadata**
+2. Select **Create New IdP Connector** > **From Metadata**.
- ![Screenshot for creating new IdP connector from metadata](./media/f5-big-ip-kerberos-advanced/create-new-idp-connector-from-metadata.png)
+ ![Screenshot that shows selections for creating new identity provider connector from metadata.](./media/f5-big-ip-kerberos-advanced/create-new-idp-connector-from-metadata.png)
-3. Browse to the federation metadata XML file you downloaded earlier and provide an **Identity Provider Name** for the APM object that'll represent the external SAML IdP. For example, MyExpenses_AzureAD
+3. Browse to the federation metadata XML file that you downloaded earlier, and provide an **Identity Provider Name** value for the APM object that will represent the external SAML IdP. The following example shows **MyExpenses_AzureAD**.
- ![Screenshot for browse to federation metadata XML](./media/f5-big-ip-kerberos-advanced/browse-federation-metadata-xml.png)
+ ![Screenshot that shows example values for the federation metadata X M L file and the identity provider name.](./media/f5-big-ip-kerberos-advanced/browse-federation-metadata-xml.png)
-4. Select **Add New Row** to choose the new **SAML IdP Connector**, and then select **Update**
+4. Select **Add New Row** to choose the new **SAML IdP Connector** value, and then select **Update**.
- ![Screenshot to choose new IdP connector](./media/f5-big-ip-kerberos-advanced/choose-new-saml-idp-connector.png)
+ ![Screenshot that shows selections for choosing a new identity provider connector.](./media/f5-big-ip-kerberos-advanced/choose-new-saml-idp-connector.png)
-5. Select **OK** to save the settings
+5. Select **OK** to save the settings.
### Configure Kerberos SSO
-In this section, you create an APM SSO object for performing KCD SSO to backend applications. You will need the APM delegation account created earlier to complete this step.
+In this section, you create an APM SSO object for performing KCD SSO to back-end applications. To complete this step, you need the APM delegation account that you created earlier.
-Select **Access > Single Sign-on > Kerberos > Create** and provide the following:
+Select **Access** > **Single Sign-on** > **Kerberos** > **Create** and provide the following information:
-* **Name:** You can use a descriptive name. Once created, the Kerberos SSO APM object can be used by other published applications as well. For example, *Contoso_KCD_sso* can be used for multiple published applications for the entire Contoso domain, whereas *MyExpenses_KCD_sso* can be used for a single application only.
+* **Name**: You can use a descriptive name. After you create it, other published applications can also use the Kerberos SSO APM object. For example, **Contoso_KCD_sso** can be used for multiple published applications for the entire Contoso domain. But **MyExpenses_KCD_sso** can be used for a single application only.
-* **Username Source:** Specifies the preferred source of user ID. You can specify any APM session variable as the source, but *session.saml.last.identity* is typically best as it contains the logged in user ID derived from the Azure AD claim.
+* **Username Source**: Specify the preferred source for user ID. You can specify any APM session variable as the source, but **session.saml.last.identity** is typically best because it contains the logged-in user's ID derived from the Azure AD claim.
-* **User Realm Source:** Required in scenarios where the user domain is different to the Kerberos realm that will be used for KCD. If users were in a separate trusted domain, then you make the APM aware by specifying the APM session variable containing the logged-in user domain. For example, session.saml.last.attr.name.domain. You would also do this in scenarios where UPN of users is based on an alternative suffix.
+* **User Realm Source**: This source is required in scenarios where the user domain is different from the Kerberos realm that will be used for KCD. If users are in a separate trusted domain, you make the APM aware by specifying the APM session variable that contains the logged-in user's domain. An example is **session.saml.last.attr.name.domain**. You also do this in scenarios where the UPN of users is based on an alternative suffix.
-* **Kerberos Realm:** Enter users domain suffix in uppercase
+* **Kerberos Realm**: Enter the user's domain suffix in uppercase.
-* **KDC:** IP of a Domain Controller (Or FQDN if DNS is configured and efficient)
+* **KDC**: Enter the IP address of a domain controller. (Or enter a fully qualified domain name if DNS is configured and efficient.)
-* **UPN Support:** Enable if specified source of username is in UPN format, such as if using session.saml.last.identity variable
+* **UPN Support**: Select this checkbox if the specified source for username is in UPN format, such as if you're using the **session.saml.last.identity** variable.
-* **Account Name and Account Password:** APM service account credentials to perform KCD
+* **Account Name** and **Account Password**: Provide APM service account credentials to perform KCD.
-* **SPN Pattern:** If you use HTTP/%h, APM then uses the host header of the client request to build the SPN that it's requesting a Kerberos token for
+* **SPN Pattern**: If you use **HTTP/%h**, APM then uses the host header of the client request to build the SPN that it's requesting a Kerberos token for.
-* **Send Authorization:** Disable for applications that prefer negotiating authentication, instead of receiving the Kerberos token in the first request. For example, *Tomcat*.
+* **Send Authorization**: Disable this option for applications that prefer negotiating authentication, instead of receiving the Kerberos token in the first request (for example, Tomcat).
- ![Screenshot to configure kerberos S S O](./media/f5-big-ip-kerberos-advanced/configure-kerberos-sso.png)
+![Screenshot that shows selections for configuring Kerberos single sign-on.](./media/f5-big-ip-kerberos-advanced/configure-kerberos-sso.png)
-You can leave *KDC* undefined if the user realm is different to the backend server realm. This applies for multi-domain realm scenarios as well. When left blank, BIG-IP will attempt to discover a Kerberos realm through a DNS lookup of SRV records for the backend server's domain, so it expects the domain name to be the same as the realm name. If the domain name is different from the realm name, it must be specified in the [/etc/krb5.conf](https://support.f5.com/csp/article/K17976428) file.
+You can leave KDC undefined if the user realm is different from the back-end server realm. This rule also applies for multiple-domain realm scenarios. If you leave KDC undefined, BIG-IP will try to discover a Kerberos realm through a DNS lookup of SRV records for the back-end server's domain. So it expects the domain name to be the same as the realm name. If the domain name is different from the realm name, it must be specified in the [/etc/krb5.conf](https://support.f5.com/csp/article/K17976428) file.
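As a hypothetical illustration, if the back-end servers live in the DNS domain internal.contoso.net but their Kerberos realm is CONTOSO.COM, the mapping in /etc/krb5.conf would look something like this (all names are examples only):

```
[realms]
    CONTOSO.COM = {
        kdc = dc.contoso.com
    }

[domain_realm]
    .internal.contoso.net = CONTOSO.COM
    internal.contoso.net = CONTOSO.COM
```

The leading-dot entry maps every host under the domain to the realm; the entry without the dot maps the domain name itself.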
-Kerberos SSO processing is fastest when a KDC is specified by IP, slower when specified by host name, and due to additional DNS queries, even slower when left undefined. For this reason, you should ensure your DNS is performing optimally before moving a proofs of concept (POC) into production. Note that if backend servers are in multiple realms, you must create a separate SSO configuration object for each realm.
+Kerberos SSO processing is fastest when a KDC is specified by IP address. Kerberos SSO processing is slower when a KDC is specified by host name. Because of additional DNS queries, processing is even slower when a KDC is left undefined. For this reason, you should ensure that your DNS is performing optimally before moving a proof of concept into production.
-You can inject headers as part of the SSO request to the backend application. Simply change **General Properties** setting from **Basic** to **Advanced**.
+> [!NOTE]
+> If back-end servers are in multiple realms, you must create a separate SSO configuration object for each realm.
+
+You can inject headers as part of the SSO request to the back-end application. Simply change the **General Properties** setting from **Basic** to **Advanced**.
-For more information on configuring an APM for KCD SSO, refer to the F5 article on [Overview of Kerberos constrained delegation](https://support.f5.com/csp/article/K17976428).
+For more information on configuring an APM for KCD SSO, see the F5 article [Overview of Kerberos constrained delegation](https://support.f5.com/csp/article/K17976428).
-### Configure Access Profile
+### Configure an access profile
-An *Access Profile* binds many APM elements managing access to BIG-IP virtual servers, including access policies, SSO configuration, and UI settings.
+An *access profile* binds many APM elements that manage access to BIG-IP virtual servers. These elements include access policies, SSO configuration, and UI settings.
-1. Select **Access > Profiles / Policies > Access Profiles (Per-Session Policies) > Create** and provide these general properties:
+1. Select **Access** > **Profiles / Policies** > **Access Profiles (Per-Session Policies)** > **Create** and provide these general properties:
- * **Name:** For example, MyExpenses
+ * **Name**: For example, enter **MyExpenses**.
- * **Profile Type:** All
+ * **Profile Type:** Select **All**.
- * **SSO Configuration:** The KCD SSO configuration object you just created
+ * **SSO Configuration:** Select the KCD SSO configuration object that you just created.
- * **Accepted Language:** Add at least one language
+ * **Accepted Language:** Add at least one language.
- ![Screenshot to create new access profile](./media/f5-big-ip-kerberos-advanced/create-new-access-profile.png)
+ ![Screenshot that shows selections for creating an access profile.](./media/f5-big-ip-kerberos-advanced/create-new-access-profile.png)
-2. Select **Edit** for the per-session profile you just created
+2. Select **Edit** for the per-session profile that you just created.
- ![Screenshot to edit per session profile](./media/f5-big-ip-kerberos-advanced/edit-per-session-profile.png)
+ ![Screenshot that shows the button for editing a per-session profile.](./media/f5-big-ip-kerberos-advanced/edit-per-session-profile.png)
-3. Once the Visual Policy Editor (VPE) has launched, select the **+** sign next to the fallback
+3. When the visual policy editor opens, select the plus sign (**+**) next to the fallback.
- ![Select plus sign next to fallback](./media/f5-big-ip-kerberos-advanced/select-plus-fallback.png)
+ ![Screenshot that shows the plus sign next to fallback.](./media/f5-big-ip-kerberos-advanced/select-plus-fallback.png)
-4. In the pop-up select **Authentication > SAML Auth > Add Item**
+4. In the pop-up dialog, select **Authentication** > **SAML Auth** > **Add Item**.
- ![Screenshot popup to add Saml authentication item](./media/f5-big-ip-kerberos-advanced/add-item-saml-auth.png)
+ ![Screenshot that shows selections for adding a SAML authentication item.](./media/f5-big-ip-kerberos-advanced/add-item-saml-auth.png)
-5. In the **SAML authentication SP** configuration, set the **AAA Server** option to use the SAML SP object you created earlier
+5. In the **SAML authentication SP** configuration, set the **AAA Server** option to use the SAML SP object that you created earlier.
- ![Screenshot to configure A A A server](./media/f5-big-ip-kerberos-advanced/configure-aaa-server.png)
+ ![Screenshot that shows the list box for configuring an A A A server.](./media/f5-big-ip-kerberos-advanced/configure-aaa-server.png)
-6. Select the link in the upper **Deny** box to change the **Successful** branch to **Allow**
+6. Select the link in the upper **Deny** box to change the **Successful** branch to **Allow**.
- ![Change successful branch to allow](./media/f5-big-ip-kerberos-advanced/select-allow-successful-branch.png)
+ ![Screenshot that shows changing the successful branch to Allow.](./media/f5-big-ip-kerberos-advanced/select-allow-successful-branch.png)
-### Configure Attribute Mappings
+### Configure attribute mappings
-Although optional, adding a *LogonID_Mapping configuration* enables the BIG-IP active sessions list to display the UPN of the logged-in user instead of a session number. This is useful when you analyze logs, or while troubleshooting.
+Although it's optional, adding a **LogonID_Mapping** configuration enables the BIG-IP active sessions list to display the UPN of the logged-in user instead of a session number. This information is useful when you're analyzing logs or troubleshooting.
-1. Click the **+** symbol for the SAML Auth Successful branch
+1. Select the **+** symbol for the **SAML Auth Successful** branch.
-2. In the pop-up select **Assignment > Variable Assign > Add Item**
+2. In the pop-up dialog, select **Assignment** > **Variable Assign** > **Add Item**.
- ![Screenshot to configure variable assign](./media/f5-big-ip-kerberos-advanced/configure-variable-assign.png)
+ ![Screenshot that shows the option for assigning custom variables.](./media/f5-big-ip-kerberos-advanced/configure-variable-assign.png)
3. Enter **Name**.
-4. In the **Variable Assign** pane, select **Add new entry > change.** For example, *LogonID_Mapping*
+4. On the **Variable Assign** pane, select **Add new entry** > **change**. The following example shows **LogonID_Mapping** in the **Name** box.
- ![Screenshot to add new entry for variable assign](./media/f5-big-ip-kerberos-advanced/add-new-entry-variable-assign.png)
+ ![Screenshot that shows selections for adding an entry for variable assignment.](./media/f5-big-ip-kerberos-advanced/add-new-entry-variable-assign.png)
-5. Set both variables.
+5. Set both variables:
- * **Custom Variable:** session.logon.last.username
- * **Session Variable:** session.saml.last.identity
+ * **Custom Variable**: Enter **session.logon.last.username**.
+ * **Session Variable**: Enter **session.saml.last.identity**.
-6. Select **Finished > Save:**
+6. Select **Finished** > **Save**.
-7. Select the **Deny** terminal of the Access PolicyΓÇÖs **Successful** branch and change it to **Allow,** followed by **Save**
+7. Select the **Deny** terminal of the access policy's **Successful** branch and change it to **Allow**. Then select **Save**.
-8. Commit those settings by selecting **Apply Access Policy** and close the visual policy editor
+8. Commit those settings by selecting **Apply Access Policy**, and close the visual policy editor.
- ![Screenshot to commit apply access policy](./media/f5-big-ip-kerberos-advanced/apply-access-policy.png)
+ ![Screenshot of the button for applying an access policy.](./media/f5-big-ip-kerberos-advanced/apply-access-policy.png)
-### Configure Backend Pool
+### Configure the back-end pool
-For the BIG-IP to know where to forward client traffic, you need to create a BIG-IP node object representing the backend server hosting your application, and place that node in a BIG-IP server pool.
+For BIG-IP to know where to forward client traffic, you need to create a BIG-IP node object that represents the back-end server that hosts your application. Then, place that node in a BIG-IP server pool.
-1. Select **Local Traffic > Pools > Pool List > Create** and provide a name for a server pool object. For example *MyApps_VMs*
+1. Select **Local Traffic** > **Pools** > **Pool List** > **Create** and provide a name for a server pool object. For example, enter **MyApps_VMs**.
- ![Screenshot to create new advanced backend pool](./media/f5-big-ip-kerberos-advanced/create-new-backend-pool.png)
 + ![Screenshot that shows selections for creating an advanced back-end pool.](./media/f5-big-ip-kerberos-advanced/create-new-backend-pool.png)
2. Add a pool member object with the following resource details:
- * **Node Name:** Optional display name for the server hosting the backend web application
- * **Address:** IP address of the server hosting the application
- * **Service Port:** The HTTP/S port the application is listening on
+ * **Node Name**: Optional display name for the server that hosts the back-end web application.
+ * **Address**: IP address of the server that hosts the application.
+ * **Service Port**: HTTP/S port that the application is listening on.
- ![Screenshot to add a pool member object](./media/f5-big-ip-kerberos-advanced/add-pool-member-object.png)
+ ![Screenshot that shows entries for adding a pool member object.](./media/f5-big-ip-kerberos-advanced/add-pool-member-object.png)
> [!NOTE]
-> The Health Monitors require [additional configuration](https://support.f5.com/csp/article/K13397) that is not covered in this tutorial.
+> The health monitors require [additional configuration](https://support.f5.com/csp/article/K13397) that this article doesn't cover.
-### Configure Virtual Server
-A *Virtual Server* is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM access profile associated with the virtual server, before being directed according to the policy results and settings. To configure a Virtual Server:
+### Configure the virtual server
-1. Select **Local Traffic > Virtual Servers > Virtual Server List > Create**
+A *virtual server* is a BIG-IP data plane object that's represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM access profile that's associated with the virtual server, before being directed according to the policy results and settings.
-2. Provide the virtual server with a **Name** and IP IPv4/IPv6 that isnΓÇÖt already allocated to an existing BIG-IP object or device on the connected network. The IP will be dedicated to receiving client traffic for the published backend application. Then set the **Service Port** to **443**
+To configure a virtual server:
- ![Screenshot to configure new virtual server](./media/f5-big-ip-kerberos-advanced/configure-new-virtual-server.png)
+1. Select **Local Traffic** > **Virtual Servers** > **Virtual Server List** > **Create**.
-3. Set the HTTP Profile: to **http**
+2. Provide the virtual server with a **Name** value and an IPv4/IPv6 address that isn't already allocated to an existing BIG-IP object or device on the connected network. The IP address will be dedicated to receiving client traffic for the published back-end application. Then set **Service Port** to **443**.
-4. Enable a virtual server for Transport Layer Security (TLS), allowing services to be published over HTTPS. Select the **client SSL profile** you created as part of the prerequisites or leave the default if testing
+ ![Screenshot that shows selections and entries for configuring a virtual server.](./media/f5-big-ip-kerberos-advanced/configure-new-virtual-server.png)
- ![Screenshot to update http profile client](./media/f5-big-ip-kerberos-advanced/update-http-profile-client.png)
+3. Set **HTTP Profile (Client)** to **http**.
-5. Change the **Source Address Translation** to **Auto Map**
+4. Enable a virtual server for Transport Layer Security to allow services to be published over HTTPS. For **SSL Profile (Client)**, select the profile that you created as part of the prerequisites. (Or leave the default if you're testing.)
- ![Screenshot to change source address translation](./media/f5-big-ip-kerberos-advanced/change-auto-map.png)
+ ![Screenshot that shows selections for H T T P profile and S S L profile for the client.](./media/f5-big-ip-kerberos-advanced/update-http-profile-client.png)
-6. Under **Access Policy**, set the **Access Profile** created earlier. This binds the Azure AD SAML pre-authentication profile & KCD SSO policy to the virtual server.
+5. Change **Source Address Translation** to **Auto Map**.
+
+ ![Screenshot that shows the setting for changing source address translation.](./media/f5-big-ip-kerberos-advanced/change-auto-map.png)
+6. Under **Access Policy**, set **Access Profile** based on the profile that you created earlier. This step binds the Azure AD SAML pre-authentication profile and KCD SSO policy to the virtual server.
- ![Screenshot to set access profile for access policy](./media/f5-big-ip-kerberos-advanced/set-access-profile-for-access-policy.png)
+ ![Screenshot that shows the box for setting an access profile for an access policy.](./media/f5-big-ip-kerberos-advanced/set-access-profile-for-access-policy.png)
-7. Finally, set the **Default Pool** to use the backend pool objects created in the previous section, then select **Finished**.
+7. Set **Default Pool** to use the back-end pool objects that you created in the previous section. Then select **Finished**.
- ![Screenshot to set default pool](./media/f5-big-ip-kerberos-advanced/set-default-pool-use-backend-object.png)
+ ![Screenshot that shows selecting a default pool.](./media/f5-big-ip-kerberos-advanced/set-default-pool-use-backend-object.png)
-### Configure Session Management settings
+### Configure session management settings
-BIG-IP's session management settings define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. You can create your own policy here. Navigate to **Access Policy > Access Profiles > Access Profile** and select your application from the list.
+BIG-IP's session management settings define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. You can create your own policy here. Go to **Access Policy** > **Access Profiles** > **Access Profile** and select your application from the list.
-If you have defined a **Single Log-out URI** in Azure AD, itΓÇÖll ensure an IdP initiated sign-out from the MyApps portal also terminates the session between the client and the BIG-IP APM. The imported applicationΓÇÖs federation metadata.xml provides the APM with the Azure AD SAML log-out endpoint for SP initiated sign-outs. But for this to be truly effective, the APM needs to know exactly when a user signs-out.
+If you've defined a **Single Logout URI** value in Azure AD, it will ensure that an IdP-initiated sign-out from the MyApps portal also ends the session between the client and the BIG-IP APM. The imported application's federation metadata XML file provides the APM with the Azure AD SAML logout endpoint for SP-initiated sign-outs. But for this to be truly effective, the APM needs to know exactly when a user signs out.
-Consider a scenario where a BIG-IP web portal is not used. The user has no way of instructing the APM to sign-out. Even if the user signs-out of the application itself, the BIG-IP is technically oblivious to this, so the application session could easily be re-instated through SSO. For this reason, SP initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required.
+Consider a scenario where a BIG-IP web portal is not used. The user has no way of instructing the APM to sign out. Even if the user signs out of the application itself, BIG-IP is technically oblivious to this, so the application session could easily be reinstated through SSO. For this reason, SP-initiated sign-out needs careful consideration to ensure that sessions are securely terminated when they're no longer required.
-One way to achieve this will be by adding an SLO function to your applications sign-out button. It can redirect your client to the Azure AD SAML sign-out endpoint. You can find this SAML sign-out endpoint at **App Registrations > Endpoints.**
+One way to achieve this is by adding an SLO function to your application's sign-out button. This function can redirect your client to the Azure AD SAML sign-out endpoint. You can find this SAML sign-out endpoint at **App Registrations** > **Endpoints**.
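As a hedged sketch of the redirect approach described above (the function name and tenant ID are placeholders; confirm the exact SAML sign-out endpoint for your tenant under **App Registrations** > **Endpoints**), a sign-out button handler might build the Azure AD sign-out URL like this:

```javascript
// Sketch only: build a tenant-specific Azure AD SAML sign-out URL.
// The endpoint shape is an assumption; verify it under
// App Registrations > Endpoints in your own tenant.
function buildSignOutUrl(tenantId) {
  return `https://login.microsoftonline.com/${tenantId}/saml2`;
}

// A sign-out button could then redirect the browser, for example:
// document.querySelector("#signout").addEventListener("click", () => {
//   window.location.href = buildSignOutUrl("00000000-0000-0000-0000-000000000000");
// });
```

Redirecting the client this way lets Azure AD end the IdP session, so the APM's logout detection (described next) can terminate the BIG-IP session as well.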
-If unable to change the app, consider having the BIG-IP listen for the app's sign-out call, and upon detecting the request, it should trigger SLO.
+If you can't change the app, consider having BIG-IP listen for the app's sign-out call. When it detects the request, it should trigger SLO.
-For more details, see this F5 article on [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+For more information, see the F5 articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
## Summary
-Your application should now be published and accessible via SHA, either directly via its URL or through MicrosoftΓÇÖs application portals. The application should also be visible as a target resource in [Azure AD Conditional Access](../conditional-access/concept-conditional-access-policies.md).
+Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals. The application should also be visible as a target resource in [Azure AD Conditional Access](../conditional-access/concept-conditional-access-policies.md).
-For increased security, organizations using this pattern could also consider blocking all direct access to the application, forcing a strict path through the BIG-IP.
+For increased security, organizations that use this pattern can also consider blocking all direct access to the application. Blocking all direct access forces a strict path through BIG-IP.
## Next steps
-As a user, launch a browser and connect to the applicationΓÇÖs external URL. You can also select the applicationΓÇÖs icon from the [Microsoft MyApps portal](https://myapps.microsoft.com/). Once you authenticate against your Azure AD tenant, you will be redirected to the BIG-IP endpoint for the application and automatically signed in via SSO.
+As a user, open a browser and connect to the application's external URL. You can also select the application's icon from the [Microsoft MyApps portal](https://myapps.microsoft.com/). After you authenticate against your Azure AD tenant, you'll be redirected to the BIG-IP endpoint for the application and automatically signed in via SSO.
- ![Screenshot of app view](./media/f5-big-ip-kerberos-advanced/app-view.png)
+![Screenshot of an example application's website.](./media/f5-big-ip-kerberos-advanced/app-view.png)
### Azure AD B2B guest access
-SHA also supports [Azure AD B2B guest access](../external-identities/hybrid-cloud-to-on-premises.md). Guest identities are synchronized from your Azure AD tenant to your target Kerberos domain. It is necessary to have a local representation of guest objects for BIG-IP to perform KCD SSO to the backend application.
+SHA also supports [Azure AD B2B guest access](../external-identities/hybrid-cloud-to-on-premises.md). Guest identities are synchronized from your Azure AD tenant to your target Kerberos domain. It's necessary to have a local representation of guest objects for BIG-IP to perform KCD SSO to the back-end application.
-## Troubleshooting
+## Troubleshoot
-There can be many reasons for failure to access a SHA protected application, including a misconfiguration. Consider the following points while troubleshooting any issue.
+There can be many reasons for failure to access a SHA-protected application, including a misconfiguration. Consider the following points while troubleshooting any problem:
-* Kerberos is time sensitive, so requires that servers and clients be set to the correct time and where possible synchronized to a reliable time source
+* Kerberos is time sensitive. It requires that servers and clients are set to the correct time and, where possible, synchronized to a reliable time source.
-* Ensure the hostnames for the domain controller and web application are resolvable in DNS
+* Ensure that the host names for the domain controller and web application are resolvable in DNS.
-* Ensure there are no duplicate SPNs in your environment by executing the following query at the command line: setspn -q HTTP/my_target_SPN
+* Ensure that there are no duplicate SPNs in your environment by running the following query at the command line: `setspn -q HTTP/my_target_SPN`.
> [!NOTE]
-> You can refer to our [App Proxy guidance to validate an IIS application ](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md)is configured appropriately for KCD. F5ΓÇÖs article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
+> To validate that an IIS application is configured appropriately for KCD, see [Troubleshoot Kerberos constrained delegation configurations for Application Proxy](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md). F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
-### Authentication and SSO issues
+### Authentication and SSO problems
BIG-IP logs are a reliable source of information. To increase the log verbosity level:
-1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+1. Go to **Access Policy** > **Overview** > **Event Logs** > **Settings**.
-2. Select the row for your published application, then **Edit > Access System Logs**
+2. Select the row for your published application. Then, select **Edit** > **Access System Logs**.
-3. Select **Debug** from the SSO list, and then select OK. Reproduce your issue before looking at the logs but remember to switch this back when finished.
+3. Select **Debug** from the SSO list, and then select **OK**. Reproduce your problem before you look at the logs, but remember to switch this back when finished.
-If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, itΓÇÖs possible the issue relates to SSO from Azure AD to the BIG-IP.
+If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible that the problem relates to SSO from Azure AD to BIG-IP. To find out:
-1. Navigate to **Access > Overview > Access reports**
+1. Go to **Access** > **Overview** > **Access reports**.
-2. Run the report for the last hour to see logs provide any clues. The **View session variables** link for your session will also help understand if the APM is receiving the expected claims from Azure AD.
+2. Run the report for the last hour to see if logs provide any clues. The **View session variables** link for your session will also help you understand if the APM is receiving the expected claims from Azure AD.
-If you donΓÇÖt see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+If you don't see a BIG-IP error page, the problem is probably more related to the back-end request or related to SSO from BIG-IP to the application. To find out:
-1. Navigate to **Access Policy > Overview > Active Sessions**
+1. Go to **Access Policy** > **Overview** > **Active Sessions**.
-2. Select the link for your active session. The **View Variables** link in this location may also help determine root cause KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers.
+2. Select the link for your active session. The **View Variables** link in this location might also help you determine root-cause KCD problems, particularly if the BIG-IP APM fails to get the right user and domain identifiers.
-F5 provides a great BIG-IP specific paper to help diagnose KCD related issues, see the deployment guide on [Configuring Kerberos Constrained Delegation](https://www.f5.com/pdf/deployment-guides/kerberos-constrained-delegation-dg.pdf).
+For help with diagnosing KCD-related problems, see the F5 BIG-IP deployment guide [Configuring Kerberos Constrained Delegation](https://www.f5.com/pdf/deployment-guides/kerberos-constrained-delegation-dg.pdf).
## Additional resources
-* [BIG-IP Advanced configuration](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/2.html)
+* [Active Directory Authentication](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/2.html) (F5 article about BIG-IP advanced configuration)
-* [The end of passwords, go password-less](https://www.microsoft.com/security/business/identity/passwordless)
+* [Forget passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless)
* [What is Conditional Access?](../conditional-access/overview.md)
-* [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Managed identity type | All Generally Available<br>Global Azure Regions | Azure
For more information, see [Use managed identities with Azure Machine Learning](../../machine-learning/how-to-use-managed-identities.md).
+### Azure Maps
+
+Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
+| | :-: | :-: | :-: | :-: |
+| System assigned | Preview | Preview | Not available | Not available |
+| User assigned | Preview | Preview | Not available | Not available |
+
+For more information, see [Authentication on Azure Maps](../../azure-maps/azure-maps-authentication.md).
++
### Azure Media Services
| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
In Azure Active Directory (Azure AD), if another administrator or non-administrator needs to manage Azure AD resources, you assign them an Azure AD role that provides the permissions they need. For example, you can assign roles to allow adding or changing users, resetting user passwords, managing user licenses, or managing domain names.
-This article lists the Azure AD built-in roles you can assign to allow management of Azure AD resources. For information about how to assign roles, see [Assign Azure AD roles to users](manage-roles-portal.md).
+This article lists the Azure AD built-in roles you can assign to allow management of Azure AD resources. For information about how to assign roles, see [Assign Azure AD roles to users](manage-roles-portal.md). If you are looking for roles to manage Azure resources, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
## All roles
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-portal.md
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
6. On the **Authentication** page, configure the following options:
    - Create a new cluster identity by either:
- * Leaving the **Authentication** field with **System-assinged managed identity**, or
+ * Leaving the **Authentication** field with **System-assigned managed identity**, or
* Choosing **Service Principal** to use a service principal. * Select *(new) default service principal* to create a default service principal, or * Select *Configure service principal* to use an existing one. You will need to provide the existing principal's SPN client ID and secret.
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/quotas-skus-regions.md
The list of supported VM sizes in AKS is evolving with the release of new VM SKU
VM sizes with less than 2 CPUs may not be used with AKS.
-Each node in an AKS cluster contains a fixed amount of compute resources such as vCPU and memory. If an AKS node contains insufficient compute resources, pods might fail to run correctly. To ensure the required *kube-system* pods and your applications can be reliably scheduled, AKS requires nodes use VM sizes with > 2 CPUs.
+Each node in an AKS cluster contains a fixed amount of compute resources such as vCPU and memory. If an AKS node contains insufficient compute resources, pods might fail to run correctly. To ensure the required *kube-system* pods and your applications can be reliably scheduled, AKS requires nodes use VM sizes with at least 2 CPUs.
For more information on VM types and their compute resources, see [Sizes for virtual machines in Azure][vm-skus].
aks Servicemesh About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/servicemesh-about.md
Title: About service meshes
description: Obtain an overview of service meshes, supported scenarios, selection criteria, and next steps to explore. Previously updated : 07/29/2021 Last updated : 01/04/2022
A service mesh provides capabilities like traffic management, resiliency, policy
These are some of the scenarios that can be enabled for your workloads when you use a service mesh:
-- **Encrypt all traffic in cluster** - Enable mutual TLS between specified services in the cluster. This can be extended to ingress and egress at the network perimeter. Provides a secure by default option with no changes needed for application code and infrastructure.
+- **Encrypt all traffic in cluster** - Enable mutual TLS between specified services in the cluster. This can be extended to ingress and egress at the network perimeter, and provides a secure by default option with no changes needed for application code and infrastructure.
- **Canary and phased rollouts** - Specify conditions for a subset of traffic to be routed to a set of new services in the cluster. On successful test of canary release, remove conditional routing and phase gradually increasing % of all traffic to new service. Eventually all traffic will be directed to new service.
-- **Traffic management and manipulation** - Create a policy on a service that will rate limit all traffic to a version of a service from a specific origin. Or a policy that applies a retry strategy to classes of failures between specified services. Mirror live traffic to new versions of services during a migration or to debug issues. Inject faults between services in a test environment to test resiliency.
+- **Traffic management and manipulation** - Create a policy on a service that will rate limit all traffic to a version of a service from a specific origin, or a policy that applies a retry strategy to classes of failures between specified services. Mirror live traffic to new versions of services during a migration or to debug issues. Inject faults between services in a test environment to test resiliency.
-- **Observability** - Gain insight into how your services are connected the traffic that flows between them. Obtain metrics, logs, and traces for all traffic in cluster, and ingress/egress. Add distributed tracing abilities to your applications.
+- **Observability** - Gain insight into how your services are connected and the traffic that flows between them. Obtain metrics, logs, and traces for all traffic in cluster, including ingress/egress. Add distributed tracing abilities to your applications.
## Selection criteria
-Before you select a service mesh, ensure that you understand your requirements and the reasons for installing a service mesh. Ask the following questions.
+Before you select a service mesh, ensure that you understand your requirements and the reasons for installing a service mesh. Ask the following questions:
-- **Is an Ingress Controller sufficient for my needs?** - Sometimes having a capability like a/b testing or traffic splitting at the ingress is sufficient to support the required scenario. Don't add complexity to your environment with no upside.
+- **Is an Ingress Controller sufficient for my needs?** - Sometimes having a capability like A/B testing or traffic splitting at the ingress is sufficient to support the required scenario. Don't add complexity to your environment with no upside.
-- **Can my workloads and environment tolerate the additional overheads?** - All the additional components required to support the service mesh require additional resources like cpu and memory. In addition, all the proxies and their associated policy checks add latency to your traffic. If you have workloads that are very sensitive to latency or cannot provide the additional resources to cover the service mesh components, then re-consider.
+- **Can my workloads and environment tolerate the additional overheads?** - All the additional components required to support the service mesh require additional resources like CPU and memory. In addition, all the proxies and their associated policy checks add latency to your traffic. If you have workloads that are very sensitive to latency, or cannot provide the additional resources to cover the service mesh components, then reconsider.
- **Is this adding additional complexity unnecessarily?** - If the reason for installing a service mesh is to gain a capability that is not necessarily critical to the business or operational teams, then consider whether the additional complexity of installation, maintenance, and configuration is worth it.
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-container-github-action.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
-* Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
-* Set a value for `CREDENTIAL-NAME` to reference later.
-* Set the `subject`. The value of this is defined by GitHub depending on your workflow:
- * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
- * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
- * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
-
-```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
-```
-
-To learn how to create a Create an active directory application, service principal, and federated credentials in Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
-
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/node_express:ref:refs/heads/my-branch` or `repo:n-username/node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
+
+ To learn how to create an Active Directory application, a service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
+
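As a sketch of how the pieces above fit together, the request body can be assembled from shell variables before it's passed to `az rest`. Every value here (object ID, credential name, subject) is a hypothetical placeholder to replace with your own:

```shell
# Hypothetical values -- substitute your own application object ID,
# credential name, and GitHub subject.
APP_OBJECT_ID="00000000-0000-0000-0000-000000000000"
CREDENTIAL_NAME="github-deploy"
SUBJECT="repo:my-org/my-repo:ref:refs/heads/main"

# Build the federated identity credential body used by the az rest call.
BODY=$(cat <<EOF
{"name":"${CREDENTIAL_NAME}","issuer":"https://token.actions.githubusercontent.com","subject":"${SUBJECT}","description":"GitHub Actions OIDC","audiences":["api://AzureADTokenExchange"]}
EOF
)
echo "$BODY"

# Then pass it along (commented out here, since it requires an Azure login):
# az rest --method POST \
#   --uri "https://graph.microsoft.com/beta/applications/${APP_OBJECT_ID}/federatedIdentityCredentials" \
#   --body "$BODY"
```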
## Configure the GitHub secret for authentication
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-github-actions.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
-* Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
-* Set a value for `CREDENTIAL-NAME` to reference later.
-* Set the `subject`. The value of this is defined by GitHub depending on your workflow:
- * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
- * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
- * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
-
-```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
-```
-
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/node_express:ref:refs/heads/my-branch` or `repo:n-username/node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
+
To learn how to create an Active Directory application, a service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
app-service Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/creation.md
# Create an App Service Environment
-> [!NOTE]
-> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans
->
-The [App Service Environment (ASE)][Intro] is a single tenant deployment of the App Service that injects into your Azure Virtual Network (VNet). A deployment of an ASE will require use of one subnet. This subnet can't be used for anything else other than the ASE.
+[App Service Environment][Intro] is a single-tenant deployment of Azure App Service. You use it with an Azure virtual network. You need one subnet for a deployment of App Service Environment, and this subnet can't be used for anything else.
+
+> [!NOTE]
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
-## Before you create your ASE
+## Before you create your App Service Environment
-After your ASE is created, you can't change:
+Be aware that after you create your App Service Environment, you can't change any of the following:
- Location
- Subscription
- Resource group
-- Azure Virtual Network (VNet) used
-- Subnets used
+- Azure virtual network
+- Subnets
- Subnet size
-- Name of your ASE
+- Name of your App Service Environment
-The subnet needs to be large enough to hold the maximum size that you'll scale your ASE. Pick a large enough subnet to support your maximum scale needs since it can't be changed after creation. The recommended size is a /24 with 256 addresses.
+Make your subnet large enough to hold the maximum size that you'll scale your App Service Environment. The recommended size is a /24 with 256 addresses.
## Deployment considerations
-There are two important things that need to be thought out before you deploy your ASE.
-- VIP type
-- deployment type
-
-There are two different VIP types, internal and external. With an internal VIP, your apps will be reached on the ASE at an address in your ASE subnet and your apps are not on public DNS. During creation in the portal, there is an option to create an Azure private DNS zone for your ASE. With an external VIP, your apps will be on a public internet facing address and your apps are in public DNS.
+Before you deploy your App Service Environment, think about the virtual IP (VIP) type and the deployment type.
-There are three different deployment types;
+With an *internal VIP*, an address in your App Service Environment subnet reaches your apps. Your apps aren't on a public DNS. When you create your App Service Environment in the Azure portal, you have an option to create an Azure private DNS zone for your App Service Environment. With an *external VIP*, your apps are on an address facing the public internet, and they're in a public DNS.
-- single zone
-- zone redundant
-- host group
+For the deployment type, you can choose *single zone*, *zone redundant*, or *host group*. The single zone deployment type is available in all regions where App Service Environment v3 is available. With this deployment type, your App Service plan has a minimum charge of one instance of Windows Isolated v2. As soon as you have one or more instances, that charge goes away; it isn't an additive charge.
-The single zone ASE is available in all regions where ASEv3 is available. When you have a single zone ASE, you have a minimum App Service plan instance charge of one instance of Windows Isolated v2. As soon as you have one or more instances, then that charge goes away. It is not an additive charge.
+In a zone redundant App Service Environment, your apps spread across three zones in the same region. Zone redundant is available in regions that support availability zones. With this deployment type, the smallest size for your App Service plan is three instances. That ensures that there is an instance in each availability zone. App Service plans can be scaled up one or more instances at a time. Scaling doesn't need to be in units of three, but the app is only balanced across all availability zones when the total instances are multiples of three.
-In a zone redundant ASE, your apps spread across three zones in the same region. The zone redundant ASE is available in a subset of ASE capable regions primarily limited by the regions that support availability zones. When you have zone redundant ASE, the smallest size for your App Service plan is three instances. That ensures that there is an instance in each availability zone. App Service plans can be scaled up one or more instances at a time. Scaling does not need to be in units of three, but the app is only balanced across all availability zones when the total instances are multiples of three. A zone redundant ASE has triple the infrastructure and is made with zone redundant components so that if even two of the three zones go down for whatever reason, your workloads remain available. Due to the increased system need, the minimum charge for a zone redundant ASE is nine instances. If you have less than nine total App Service plan instances in your ASEv3, the difference will be charged as Windows I1v2. If you have nine or more instances, there is no added charge to have a zone redundant ASE. To learn more about zone redundancy, read [Regions and Availability zones](./overview-zone-redundancy.md).
+A zone redundant deployment has triple the infrastructure, and ensures that even if two of the three zones go down, your workloads remain available. Due to the increased system need, the minimum charge for a zone redundant App Service Environment is nine instances. If you have fewer than this number of instances, the difference is charged as Windows I1v2. If you have nine or more instances, there is no added charge to have a zone redundant App Service Environment. To learn more about zone redundancy, see [Regions and availability zones](./overview-zone-redundancy.md).
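To make the minimum-charge rule concrete, here's a small arithmetic sketch (of the rule as stated above, not official billing logic):

```shell
# Zone redundant deployments bill a minimum of nine instances; any
# shortfall below that is charged as Windows I1v2 (per the rule above).
INSTANCES=5
MIN_ZR=9
SHORTFALL=$(( INSTANCES < MIN_ZR ? MIN_ZR - INSTANCES : 0 ))
echo "plan instances: $INSTANCES, additional I1v2 charges: $SHORTFALL"
```

With five instances you'd pay for four extra I1v2 instances; at nine or more, the shortfall is zero.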
-In a host group deployment, your apps are deployed onto a dedicated host group. The dedicated host group is not zone redundant. Dedicated host group deployment enables your ASE to be deployed on dedicated hardware. There is no minimum instance charge for use of an ASE on a dedicated host group, but you do have to pay for the host group when provisioning the ASE. On top of that you pay a discounted App Service plan rate as you create your plans and scale out. There are a finite number of cores available with a dedicated host deployment that are used by both the App Service plans and the infrastructure roles. Dedicated host deployments of the ASE can't reach the 200 total instance count normally available in an ASE. The number of total instances possible is related to the total number of App Service plan instances plus the load based number of infrastructure roles.
+In a host group deployment, your apps are deployed onto a dedicated host group. The dedicated host group isn't zone redundant. With this type of deployment, you can install and use your App Service Environment on dedicated hardware. There is no minimum instance charge for using App Service Environment on a dedicated host group, but you do have to pay for the host group when you're provisioning the App Service Environment. You also pay a discounted App Service plan rate as you create your plans and scale out.
-## Creating an ASE in the portal
+With a dedicated host group deployment, there are a finite number of cores available that are used by both the App Service plans and the infrastructure roles. This type of deployment can't reach the 200 total instance count normally available in App Service Environment. The number of total instances possible is related to the total number of App Service plan instances, plus the load-based number of infrastructure roles.
-1. To create an ASE, search the marketplace for **App Service Environment v3**.
+## Create an App Service Environment in the portal
-2. Basics: Select the Subscription, select or create the Resource Group, and enter the name of your ASE. Select the type of Virtual IP type. If you select Internal, your inbound ASE address will be an address in your ASE subnet. If you select External, your inbound ASE address will be a public internet facing address. The ASE name will be also used for the domain suffix of your ASE. If your ASE name is *contoso* and you have an Internal VIP ASE, then the domain suffix will be *contoso.appserviceenvironment.net*. If your ASE name is *contoso* and you have an external VIP, the domain suffix will be *contoso.p.azurewebsites.net*.
+Here's how:
- ![App Service Environment create basics tab](./media/creation/creation-basics.png)
+1. Search Azure Marketplace for *App Service Environment v3*.
-3. Hosting: Select *Enabled* or *Disabled* for Host Group deployment. Host Group deployment is used to select dedicated hardware deployment. If you select Enabled, your ASE will be deployed onto dedicated hardware. When you deploy onto dedicated hardware, you are charged for the entire dedicated host during ASE creation and then a reduced price for your App Service plan instances.
+1. From the **Basics** tab, for **Subscription**, select the subscription. For **Resource Group**, select or create the resource group, and enter the name of your App Service Environment. For **Virtual IP**, select **Internal** if you want your inbound address to be an address in your subnet. Select **External** if you want your inbound address to face the public internet. For **App Service Environment Name**, enter a name. The name you choose will also be used for the domain suffix. For example, if the name you choose is *contoso*, and you have an internal VIP, the domain suffix will be `contoso.appserviceenvironment.net`. If the name you choose is *contoso*, and you have an external VIP, the domain suffix will be `contoso.p.azurewebsites.net`.
- ![App Service Environment hosting selections](./media/creation/creation-hosting.png)
+ ![Screenshot that shows the App Service Environment basics tab.](./media/creation/creation-basics.png)
-4. Networking: Select or create your Virtual Network, select or create your subnet. If you are creating an internal VIP ASE, you can configure Azure DNS private zones to point your domain suffix to your ASE. Details on how to manually configure DNS are in the DNS section under [Using an App Service Environment][UsingASE].
+1. From the **Hosting** tab, for **Host group deployment**, select **Enabled** or **Disabled**. If you enable this option, you can deploy onto dedicated hardware. If you do so, you're charged for the entire dedicated host during the creation of the App Service Environment, and then you're charged a reduced price for your App Service plan instances.
- ![App Service Environment networking selections](./media/creation/creation-networking.png)
+ ![Screenshot that shows the App Service Environment hosting selections.](./media/creation/creation-hosting.png)
-5. Review and Create: Check that your configuration is correct and select create. Your ASE can take up to nearly two hours to create.
+1. From the **Networking** tab, for **Virtual Network**, select or create your virtual network. For **Subnet**, select or create your subnet. If you're creating an App Service Environment with an internal VIP, you can configure Azure DNS private zones to point your domain suffix to your App Service Environment. For more details, see the DNS section in [Use an App Service Environment][UsingASE].
-After your ASE creation completes, you can select it as a location when creating your apps. To learn more about creating apps in your new ASE or managing your ASE, read [Using an App Service Environment][UsingASE]
+ ![Screenshot that shows App Service Environment networking selections.](./media/creation/creation-networking.png)
-## Dedicated hosts
+1. From the **Review + create** tab, check that your configuration is correct, and select **Create**. Your App Service Environment can take up to two hours to create.
-The ASE is normally deployed on VMs that are provisioned on a multi-tenant hypervisor. If you need to deploy on dedicated systems, including the hardware, you can provision your ASE onto dedicated hosts. Dedicated hosts come in a pair to ensure redundancy. Dedicated host-based ASE deployments are priced differently than normal. There is a charge for the dedicated host and then another charge for each App Service plan instance. Deployments on host groups are not zone redundant. To deploy onto dedicated hosts, select **enable** for host group deployment on the Hosting tab.
+When your App Service Environment has been successfully created, you can select it as a location when you're creating your apps.
<!--Links-->
[Intro]: ./overview.md
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/network-info.md
Title: Networking considerations
-description: Learn about the ASE network traffic and how to set network security groups and user defined routes with your ASE.
+description: Learn about App Service Environment network traffic, and how to set network security groups and user-defined routes.
Last updated 11/15/2021
-# Networking considerations for an App Service Environment v2
+# Networking considerations for App Service Environment
-> [!NOTE]
-> This article is about the App Service Environment v2 which is used with Isolated App Service plans
->
+[App Service Environment][Intro] is a deployment of Azure App Service into a subnet in your Azure virtual network. There are two deployment types for an App Service Environment:
-## Overview
+- **External:** This type of deployment exposes the hosted apps by using an IP address that is accessible on the internet. For more information, see [Create an external App Service Environment][MakeExternalASE].
+- **Internal load balancer:** This type of deployment exposes the hosted apps on an IP address inside your virtual network. The internal endpoint is an internal load balancer. For more information, see [Create and use an internal load balancer App Service Environment][MakeILBASE].
- Azure [App Service Environment][Intro] is a deployment of Azure App Service into a subnet in your Azure virtual network. There are two deployment types for an App Service environment (ASE):
+> [!NOTE]
+> This article is about App Service Environment v2, which is used with isolated App Service plans.
+>
-- **External ASE**: Exposes the ASE-hosted apps on an internet-accessible IP address. For more information, see [Create an External ASE][MakeExternalASE].
-- **ILB ASE**: Exposes the ASE-hosted apps on an IP address inside your virtual network. The internal endpoint is an internal load balancer (ILB), which is why it's called an ILB ASE. For more information, see [Create and use an ILB ASE][MakeILBASE].
+Regardless of the deployment type, all App Service Environments have a public virtual IP (VIP). This VIP is used for inbound management traffic, and as the address when you're making calls from the App Service Environment to the internet. Such calls leave the virtual network through the VIP assigned for the App Service Environment.
-All ASEs, External, and ILB, have a public VIP that is used for inbound management traffic and as the from address when making calls from the ASE to the internet. The calls from an ASE that go to the internet leave the virtual network through the VIP assigned for the ASE. The public IP of this VIP is the source IP for all calls from the ASE that go to the internet. If the apps in your ASE make calls to resources in your virtual network or across a VPN, the source IP is one of the IPs in the subnet used by your ASE. Because the ASE is within the virtual network, it can also access resources within the virtual network without any additional configuration. If the virtual network is connected to your on-premises network, apps in your ASE also have access to resources there without additional configuration.
+If the apps make calls to resources in your virtual network or across a VPN, the source IP is one of the IPs in the subnet. Because the App Service Environment is within the virtual network, it can also access resources within the virtual network without any additional configuration. If the virtual network is connected to your on-premises network, apps also have access to resources there without additional configuration.
-![External ASE][1] 
+![Diagram that shows the elements of an external deployment.][1] 
-If you have an External ASE, the public VIP is also the endpoint that your ASE apps resolve to for:
+If you have an App Service Environment with an external deployment, the public VIP is also the endpoint to which your apps resolve for the following:
* HTTP/S
* FTP/S
* Web deployment
* Remote debugging
-![ILB ASE][2]
+![Diagram that shows the elements of an internal load balancer deployment.][2]
+
+If you have an App Service Environment with an internal load balancer deployment, the address of the internal load balancer is the endpoint for HTTP/S, FTP/S, web deployment, and remote debugging.
-If you have an ILB ASE, the address of the ILB address is the endpoint for HTTP/S, FTP/S, web deployment, and remote debugging.
+## Subnet size
-## ASE subnet size
+After the App Service Environment is deployed, you can't alter the size of the subnet used to host it. App Service Environment uses an address for each infrastructure role, as well as for each Isolated App Service plan instance. Additionally, Azure networking uses five addresses for every subnet that is created.
-The size of the subnet used to host an ASE cannot be altered after the ASE is deployed. The ASE uses an address for each infrastructure role as well as for each Isolated App Service plan instance. Additionally, there are five addresses used by Azure Networking for every subnet that is created. An ASE with no App Service plans at all will use 12 addresses before you create an app. If it is an ILB ASE, then it will use 13 addresses before you create an app in that ASE. As you scale out your ASE, infrastructure roles are added every multiple of 15 and 20 of your App Service plan instances.
+An App Service Environment with no App Service plans at all will use 12 addresses before you create an app. If you use the internal load balancer deployment, then it will use 13 addresses before you create an app. As you scale out, be aware that infrastructure roles are added at every multiple of 15 and 20 of your App Service plan instances.
- > [!NOTE]
- > Nothing else can be in the subnet but the ASE. Be sure to choose an address space that allows for future growth. You can't change this setting later. We recommend a size of `/24` with 256 addresses.
+> [!IMPORTANT]
+> Nothing else can be in the subnet but the App Service Environment. Be sure to choose an address space that allows for future growth. You can't change this setting later. We recommend a size of `/24` with 256 addresses.
-When you scale up or down, new roles of the appropriate size are added and then your workloads are migrated from the current size to the target size. The original VMs are removed only after the workloads have been migrated. If you had an ASE with 100 ASP instances, there would be a period where you need double the number of VMs. It is for this reason that we recommend the use of a '/24' to accommodate any changes you might require.
+When you scale up or down, new roles of the appropriate size are added, and then your workloads are migrated from the current size to the target size. The original VMs are removed only after the workloads have been migrated. For example, if you had an App Service Environment with 100 App Service plan instances, there's a period of time in which you need double the number of VMs.
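As a rough sanity check of subnet headroom (using only the numbers quoted above: five reserved addresses, 13 base addresses for an internal load balancer deployment, and doubled instances during a scale operation; this isn't an official sizing formula):

```shell
PREFIX_BITS=24        # the recommended /24
PLAN_INSTANCES=100

TOTAL=$(( 1 << (32 - PREFIX_BITS) ))   # 256 addresses in a /24
USABLE=$(( TOTAL - 5 ))                # Azure networking reserves 5 per subnet
NEEDED=$(( 13 + 2 * PLAN_INSTANCES ))  # base overhead + doubled instances while scaling

echo "usable=$USABLE needed=$NEEDED"
[ "$USABLE" -ge "$NEEDED" ] && echo "subnet fits" || echo "subnet too small"
```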
-## ASE dependencies
+## Inbound and outbound dependencies
-### ASE inbound dependencies
+The following sections cover dependencies to be aware of for your App Service Environment. Another section discusses DNS settings.
-Just for the ASE to operate, the ASE requires the following ports to be open:
+### Inbound dependencies
+
+Just for the App Service Environment to operate, the following ports must be open:
| Use | From | To |
|-|-|-|
-| Management | App Service management addresses | ASE subnet: 454, 455 |
-| ASE internal communication | ASE subnet: All ports | ASE subnet: All ports
-| Allow Azure load balancer inbound | Azure load balancer | ASE subnet: 16001
-
-There are 2 other ports that can show as open on a port scan, 7654 and 1221. They reply with an IP address and nothing more. They can be blocked if desired.
+| Management | App Service management addresses | App Service Environment subnet: 454, 455 |
+| App Service Environment internal communication | App Service Environment subnet: All ports | App Service Environment subnet: All ports
+| Allow Azure load balancer inbound | Azure load balancer | App Service Environment subnet: 16001
-The inbound management traffic provides command and control of the ASE in addition to system monitoring. The source addresses for this traffic are listed in the [ASE Management addresses][ASEManagement] document. The network security configuration needs to allow access from the ASE management addresses on ports 454 and 455. If you block access from those addresses, your ASE will become unhealthy and then become suspended. The TCP traffic that comes in on ports 454 and 455 must go back out from the same VIP or you will have an asymmetric routing problem.
+Ports 7654 and 1221 can show as open on a port scan. They reply with an IP address, and nothing more. You can block them if you want to.
-Within the ASE subnet, there are many ports used for internal component communication and they can change. This requires all of the ports in the ASE subnet to be accessible from the ASE subnet.
+The inbound management traffic provides command and control of the App Service Environment, in addition to system monitoring. The source addresses for this traffic are listed in [App Service Environment management addresses][ASEManagement]. The network security configuration needs to allow access from the App Service Environment management addresses on ports 454 and 455. If you block access from those addresses, your App Service Environment will become unhealthy and then become suspended. The TCP traffic that comes in on ports 454 and 455 must go back out from the same VIP, or you will have an asymmetric routing problem.
-For the communication between the Azure load balancer and the ASE subnet the minimum ports that need to be open are 454, 455 and 16001. The 16001 port is used for keep alive traffic between the load balancer and the ASE. If you are using an ILB ASE, then you can lock traffic down to just the 454, 455, 16001 ports. If you are using an External ASE, then you need to take into account the normal app access ports.
+Within the subnet, there are many ports used for internal component communication, and they can change. This requires all of the ports in the subnet to be accessible from the subnet.
-The other ports you need to concern yourself with are the application ports:
+For communication between the Azure load balancer and the App Service Environment subnet, the minimum ports that need to be open are 454, 455, and 16001. If you're using an internal load balancer deployment, then you can lock traffic down to just the 454, 455, 16001 ports. If you're using an external deployment, then you need to take into account the normal app access ports. Specifically, these are:
| Use | Ports |
|-|-|
| HTTP/HTTPS | 80, 443 |
| FTP/FTPS | 21, 990, 10001-10020 |
| Visual Studio remote debugging | 4020, 4022, 4024 |
-| Web Deploy service | 8172 |
+| Web Deploy service | 8172 |
-If you block the application ports, your ASE can still function but your app might not. If you are using app assigned IP addresses with an External ASE, you will need to allow traffic from the IPs assigned to your apps to the ASE subnet on the ports shown in the ASE portal > IP Addresses page.
+If you block the application ports, your App Service Environment can still function, but your app might not. If you're using app-assigned IP addresses with an external deployment, you need to allow traffic from the IPs assigned to your apps to the subnet. From the App Service Environment portal, go to **IP addresses**, and see the ports from which you need to allow traffic.
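For reference, an inbound network security group rule allowing the management traffic described above might be sketched as follows. The resource group, NSG name, and priority are placeholders; `AppServiceManagement` is the service tag covering the App Service management addresses:

```azurecli
az network nsg rule create \
  --resource-group my-ase-rg \
  --nsg-name my-ase-nsg \
  --name AllowAppServiceManagement \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AppServiceManagement \
  --destination-port-ranges 454-455
```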
-### ASE outbound dependencies
+### Outbound dependencies
-For outbound access, an ASE depends on multiple external systems. Many of those system dependencies are defined with DNS names and don't map to a fixed set of IP addresses. Thus, the ASE requires outbound access from the ASE subnet to all external IPs across a variety of ports.
+For outbound access, an App Service Environment depends on multiple external systems. Many of those system dependencies are defined with DNS names, and don't map to a fixed set of IP addresses. Thus, the App Service Environment requires outbound access from the subnet to all external IPs, across a variety of ports.
-The ASE communicates out to internet accessible addresses on the following ports:
+App Service Environment communicates out to internet accessible addresses on the following ports:
| Uses | Ports |
|--|--|
| Azure SQL | 1433 |
| Monitoring | 12000 |
-The outbound dependencies are listed in the document that describes [Locking down App Service Environment outbound traffic](./firewall-integration.md). If the ASE loses access to its dependencies, it stops working. When that happens long enough, the ASE is suspended.
+The outbound dependencies are listed in [Locking down an App Service Environment](./firewall-integration.md). If the App Service Environment loses access to its dependencies, it stops working. When that happens for a long enough period of time, it's suspended.
-### Customer DNS ###
+### Customer DNS
-If the virtual network is configured with a customer-defined DNS server, the tenant workloads use it. The ASE uses Azure DNS for management purposes. If the virtual network is configured with a customer-selected DNS server, the DNS server must be reachable from the subnet that contains the ASE.
+If the virtual network is configured with a customer-defined DNS server, the tenant workloads use it. The App Service Environment uses Azure DNS for management purposes. If the virtual network is configured with a customer-selected DNS server, the DNS server must be reachable from the subnet.
- > [!NOTE]
- > Storage mounts or container images pulls in ASEv2 will not be able to use customer DNS defined in the virtual network or through the `WEBSITE_DNS_SERVER` app setting.
+> [!NOTE]
+> Storage mounts or container image pulls in App Service Environment v2 aren't able to use customer-defined DNS in the virtual network, or through the `WEBSITE_DNS_SERVER` app setting.
-To test DNS resolution from your web app, you can use the console command *nameresolver*. Go to the debug window in your scm site for your app or go to the app in the portal and select console. From the shell prompt you can issue the command *nameresolver* along with the DNS name you wish to look up. The result you get back is the same as what your app would get while making the same lookup. If you use nslookup, you will do a lookup using Azure DNS instead.
+To test DNS resolution from your web app, you can use the console command `nameresolver`. Go to the debug window in your `scm` site for your app, or go to the app in the portal and select console. From the shell prompt, you can issue the command `nameresolver`, along with the DNS name you wish to look up. The result you get back is the same as what your app would get while making the same lookup. If you use `nslookup`, you do a lookup by using Azure DNS instead.
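As an illustrative sketch, the two lookups described above might look like the following in the app's console. The host name is a placeholder, not a real endpoint, and `nameresolver` is only available inside the App Service console:

```shell
# Run from the Kudu debug console of an app (not from a local shell).
# "internal-db.contoso.local" is a hypothetical name to look up.
nameresolver internal-db.contoso.local

# For comparison, this lookup resolves through Azure DNS instead of the
# virtual network's configured DNS server.
nslookup internal-db.contoso.local
```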
-If you change the DNS setting of the virtual network that your ASE is in, you will need to reboot your ASE. To avoid rebooting your ASE, it is highly recommended that you configure your DNS settings for your virtual network before you create your ASE.
+If you change the DNS setting of the virtual network that your App Service Environment is in, you will need to reboot. To avoid rebooting, it's a good idea to configure your DNS settings for your virtual network before you create your App Service Environment.
<a name="portaldep"></a>

## Portal dependencies
-In addition to the ASE functional dependencies, there are a few extra items related to the portal experience. Some of the capabilities in the Azure portal depend on direct access to _SCM site_. For every app in Azure App Service, there are two URLs. The first URL is to access your app. The second URL is to access the SCM site, which is also called the _Kudu console_. Features that use the SCM site include:
+In addition to the dependencies described in the previous sections, there are a few extra considerations you should be aware of that are related to the portal experience. Some of the capabilities in the Azure portal depend on direct access to the source control manager (SCM) site. For every app in Azure App Service, there are two URLs. The first URL is to access your app. The second URL is to access the SCM site, which is also called the _Kudu console_. Features that use the SCM site include:
-- Web jobs-- Functions-- Log streaming-- Kudu-- Extensions-- Process Explorer-- Console
+- Web jobs
+- Functions
+- Log streaming
+- Kudu
+- Extensions
+- Process Explorer
+- Console
-When you use an ILB ASE, the SCM site isn't accessible from outside the virtual network. Some capabilities will not work from the app portal because they require access to the SCM site of an app. You can connect to the SCM site directly instead of using the portal.
+When you use an internal load balancer, the SCM site isn't accessible from outside the virtual network. Some capabilities don't work from the app portal because they require access to the SCM site of an app. You can connect to the SCM site directly, instead of by using the portal.
-If your ILB ASE is the domain name *contoso.appserviceenvironment.net* and your app name is *testapp*, the app is reached at *testapp.contoso.appserviceenvironment.net*. The SCM site that goes with it is reached at *testapp.scm.contoso.appserviceenvironment.net*.
+If your internal load balancer is the domain name `contoso.appserviceenvironment.net`, and your app name is *testapp*, the app is reached at `testapp.contoso.appserviceenvironment.net`. The SCM site that goes with it is reached at `testapp.scm.contoso.appserviceenvironment.net`.
-## ASE IP addresses ##
+## IP addresses
-An ASE has a few IP addresses to be aware of. They are:
+An App Service Environment has a few IP addresses to be aware of. They are:
-- **Public inbound IP address**: Used for app traffic in an External ASE, and management traffic in both an External ASE and an ILB ASE.-- **Outbound public IP**: Used as the "from" IP for outbound connections from the ASE that leave the virtual network, which aren't routed down a VPN.-- **ILB IP address**: The ILB IP address only exists in an ILB ASE.-- **App-assigned IP-based TLS/SSL addresses**: Only possible with an External ASE and when IP-based TLS/SSL binding is configured.
+- **Public inbound IP address:** Used for app traffic in an external deployment, and management traffic in both internal and external deployments.
+- **Outbound public IP:** Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN.
+- **Internal load balancer IP address:** This address only exists in an internal deployment.
+- **App-assigned IP-based TLS/SSL addresses:** These addresses are only possible with an external deployment, and when IP-based TLS/SSL binding is configured.
-All these IP addresses are visible in the Azure portal from the ASE UI. If you have an ILB ASE, the IP for the ILB is listed.
+All these IP addresses are visible in the Azure portal from the App Service Environment UI. If you have an internal deployment, the IP for the internal load balancer is listed.
- > [!NOTE]
- > These IP addresses will not change so long as your ASE stays up and running. If your ASE becomes suspended and restored, the addresses used by your ASE will change. The normal cause for an ASE to become suspended is if you block inbound management access or block access to an ASE dependency.
+> [!NOTE]
+> These IP addresses don't change, as long as your App Service Environment is running. If your App Service Environment becomes suspended and is then restored, the addresses used will change. The normal cause for a suspension is if you block inbound management access, or you block access to a dependency.
-![IP addresses][3]
+![Screenshot that shows IP addresses.][3]
-### App-assigned IP addresses ###
+### App-assigned IP addresses
-With an External ASE, you can assign IP addresses to individual apps. You can't do that with an ILB ASE. For more information on how to configure your app to have its own IP address, see [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../configure-ssl-bindings.md).
+With an external deployment, you can assign IP addresses to individual apps. You can't do that with an internal deployment. For more information on how to configure your app to have its own IP address, see [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../configure-ssl-bindings.md).
-When an app has its own IP-based SSL address, the ASE reserves two ports to map to that IP address. One port is for HTTP traffic, and the other port is for HTTPS. Those ports are listed in the ASE UI in the IP addresses section. Traffic must be able to reach those ports from the VIP or the apps are inaccessible. This requirement is important to remember when you configure Network Security Groups (NSGs).
+When an app has its own IP-based SSL address, the App Service Environment reserves two ports to map to that IP address. One port is for HTTP traffic, and the other port is for HTTPS. Those ports are listed in the **IP addresses** section of your App Service Environment portal. Traffic must be able to reach those ports from the VIP. Otherwise, the apps are inaccessible. This requirement is important to remember when you configure network security groups (NSGs).
-## Network Security Groups ##
+## Network security groups
-[Network Security Groups][NSGs] provide the ability to control network access within a virtual network. When you use the portal, there's an implicit deny rule at the lowest priority to deny everything. What you build are your allow rules.
+[NSGs][NSGs] provide the ability to control network access within a virtual network. When you use the portal, there's an implicit *deny rule* at the lowest priority to deny everything. What you build are your *allow rules*.
-In an ASE, you don't have access to the VMs used to host the ASE itself. They're in a Microsoft-managed subscription. If you want to restrict access to the apps on the ASE, set NSGs on the ASE subnet. In doing so, pay careful attention to the ASE dependencies. If you block any dependencies, the ASE stops working.
+You don't have access to the VMs used to host the App Service Environment itself. They're in a subscription that Microsoft manages. If you want to restrict access to the apps, set NSGs on the subnet. In doing so, pay careful attention to the dependencies. If you block any dependencies, the App Service Environment stops working.
-NSGs can be configured through the Azure portal or via PowerShell. The information here shows the Azure portal. You create and manage NSGs in the portal as a top-level resource under **Networking**.
+You can configure NSGs through the Azure portal or via PowerShell. The information here shows the Azure portal. You create and manage NSGs in the portal as a top-level resource under **Networking**.
-The required entries in an NSG, for an ASE to function, are to allow traffic:
+The required entries in an NSG are to allow traffic:
**Inbound**
-* TCP from the IP service tag AppServiceManagement on ports 454,455
+
+* TCP from the IP service tag `AppServiceManagement` on ports 454, 455
* TCP from the load balancer on port 16001
-* from the ASE subnet to the ASE subnet on all ports
+* From the App Service Environment subnet to the App Service Environment subnet on all ports
**Outbound**
+
* UDP to all IPs on port 53
* UDP to all IPs on port 123
* TCP to all IPs on ports 80, 443
-* TCP to the IP service tag `Sql` on ports 1433
+* TCP to the IP service tag `Sql` on port 1433
* TCP to all IPs on port 12000
-* to the ASE subnet on all ports
+* To the App Service Environment subnet on all ports
-These ports do not include the ports that your apps require for successful use. As an example, your app may need to call a MySQL server on port 3306. Network Time Protocol (NTP) on port 123 is the time synchronization protocol used by the operating system. The NTP endpoints are not specific to App Services, can vary with the operating system, and are not in a well defined list of addresses. To prevent time synchronization issues, you then need to allow UDP traffic to all addresses on port 123. The outbound TCP to port 12000 traffic is for system support and analysis. The endpoints are dynamic and are not in a well defined set of addresses.
+These ports don't include the ports that your apps require for successful use. For example, suppose your app needs to call a MySQL server on port 3306. Network Time Protocol (NTP) on port 123 is the time synchronization protocol used by the operating system. The NTP endpoints aren't specific to App Service, can vary with the operating system, and aren't in a well-defined list of addresses. To prevent time synchronization issues, you then need to allow UDP traffic to all addresses on port 123. The outbound TCP to port 12000 traffic is for system support and analysis. The endpoints are dynamic, and aren't in a well-defined set of addresses.
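As a sketch of how the required entries above can be created with the Azure CLI, the following creates one inbound and one outbound rule. The resource group and NSG names (`my-rg`, `ase-nsg`) are placeholders, and this isn't a complete rule set:

```shell
# Inbound: allow TCP 454-455 from the AppServiceManagement service tag.
az network nsg rule create \
  --resource-group my-rg --nsg-name ase-nsg \
  --name AllowAseManagement --priority 100 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes AppServiceManagement \
  --destination-port-ranges 454 455

# Outbound: allow TCP 1433 to the Sql service tag.
az network nsg rule create \
  --resource-group my-rg --nsg-name ase-nsg \
  --name AllowSqlOutbound --priority 100 \
  --direction Outbound --access Allow --protocol Tcp \
  --destination-address-prefixes Sql \
  --destination-port-ranges 1433
```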
The normal app access ports are:

| Use | Ports |
|-|-|
| Visual Studio remote debugging | 4020, 4022, 4024 |
| Web Deploy service | 8172 |
-When the inbound and outbound requirements are taken into account, the NSGs should look similar to the NSGs shown in this example.
+When the inbound and outbound requirements are taken into account, the NSGs should look similar to the NSGs shown in the following screenshot:
+
+![Screenshot that shows inbound security rules.][4]
-![Inbound security rules][4]
+A default rule enables the IPs in the virtual network to talk to the subnet. Another default rule enables the load balancer, also known as the public VIP, to communicate with the App Service Environment. To see the default rules, select **Default rules** (next to the **Add** icon).
-A default rule enables the IPs in the virtual network to talk to the ASE subnet. Another default rule enables the load balancer, also known as the public VIP, to communicate with the ASE. To see the default rules, select **Default rules** next to the **Add** icon. If you put a deny everything else rule before the default rules, you prevent traffic between the VIP and the ASE. To prevent traffic coming from inside the virtual network, add your own rule to allow inbound. Use a source equal to AzureLoadBalancer with a destination of **Any** and a port range of **\***. Because the NSG rule is applied to the ASE subnet, you don't need to be specific in the destination.
+If you put a *deny everything else* rule before the default rules, you prevent traffic between the VIP and the App Service Environment. To prevent traffic coming from inside the virtual network, add your own rule to allow inbound. Use a source equal to `AzureLoadBalancer`, with a destination of **Any** and a port range of **\***. Because the NSG rule is applied to the subnet, you don't need to be specific in the destination.
If you assigned an IP address to your app, make sure you keep the ports open. To see the ports, select **App Service Environment** > **IP addresses**.  
-All the items shown in the following outbound rules are needed, except for the last item. They enable network access to the ASE dependencies that were noted earlier in this article. If you block any of them, your ASE stops working. The last item in the list enables your ASE to communicate with other resources in your virtual network.
+All the items shown in the following outbound rules are needed, except for the last item. They enable network access to the App Service Environment dependencies that were noted earlier in this article. If you block any of them, your App Service Environment stops working. The last item in the list enables your App Service Environment to communicate with other resources in your virtual network.
+
+![Screenshot that shows outbound security rules.][5]
-![Outbound security rules][5]
+After your NSGs are defined, assign them to the subnet. If you don't remember the virtual network or subnet, you can see it from the App Service Environment portal. To assign the NSG to your subnet, go to the subnet UI and select the NSG.
-After your NSGs are defined, assign them to the subnet that your ASE is on. If you don't remember the ASE virtual network or subnet, you can see it from the ASE portal page. To assign the NSG to your subnet, go to the subnet UI and select the NSG.
+## Routes
-## Routes ##
+*Forced tunneling* is when you set routes in your virtual network so the outbound traffic doesn't go directly to the internet. Instead, the traffic goes somewhere else, like an Azure ExpressRoute gateway or a virtual appliance. If you need to configure your App Service Environment in such a manner, see [Configuring your App Service Environment with forced tunneling][forcedtunnel].
-Forced tunneling is when you set routes in your virtual network so the outbound traffic doesn't go directly to the internet but somewhere else like an ExpressRoute gateway or a virtual appliance. If you need to configure your ASE in such a manner, then read the document on [Configuring your App Service Environment with Forced Tunneling][forcedtunnel]. This document will tell you the options available to work with ExpressRoute and forced tunneling.
+When you create an App Service Environment in the portal, you automatically create a set of route tables on the subnet. Those routes simply say to send outbound traffic directly to the internet.
-When you create an ASE in the portal we also create a set of route tables on the subnet that is created with the ASE. Those routes simply say to send outbound traffic directly to the internet.
To create the same routes manually, follow these steps:
-1. Go to the Azure portal. Select **Networking** > **Route Tables**.
+1. Go to the Azure portal, and select **Networking** > **Route Tables**.
2. Create a new route table in the same region as your virtual network. 3. From within your route table UI, select **Routes** > **Add**.
-4. Set the **Next hop type** to **Internet** and the **Address prefix** to **0.0.0.0/0**. Select **Save**.
+4. Set the **Next hop type** to **Internet**, and the **Address prefix** to **0.0.0.0/0**. Select **Save**.
You then see something like the following:
- ![Functional routes][6]
+ ![Screenshot that shows functional routes.][6]
-5. After you create the new route table, go to the subnet that contains your ASE. Select your route table from the list in the portal. After you save the change, you should then see the NSGs and routes noted with your subnet.
+5. After you create the new route table, go to the subnet. Select your route table from the list in the portal. After you save the change, you should then see the NSGs and routes noted with your subnet.
- ![NSGs and routes][7]
+ ![Screenshot that shows NSGs and routes.][7]
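The same routes can be created with the Azure CLI instead of the portal. This is a hedged sketch; the resource names (`my-rg`, `my-vnet`, `ase-subnet`, `ase-rt`) are placeholders:

```shell
# Create a route table and a default route that sends traffic to the internet.
az network route-table create --resource-group my-rg --name ase-rt
az network route-table route create --resource-group my-rg \
  --route-table-name ase-rt --name direct-internet \
  --address-prefix 0.0.0.0/0 --next-hop-type Internet

# Associate the route table with the subnet.
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name ase-subnet --route-table ase-rt
```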
-## Service Endpoints ##
+## Service endpoints
-Service Endpoints enable you to restrict access to multi-tenant services to a set of Azure virtual networks and subnets. You can read more about Service Endpoints in the [Virtual Network Service Endpoints][serviceendpoints] documentation.
+Service endpoints enable you to restrict access to multi-tenant services to a set of Azure virtual networks and subnets. For more information, see [Virtual Network service endpoints][serviceendpoints].
-When you enable Service Endpoints on a resource, there are routes created with higher priority than all other routes. If you use Service Endpoints on any Azure service, with a forced tunneled ASE, the traffic to those services will not be forced tunneled.
+When you enable service endpoints on a resource, there are routes created with higher priority than all other routes. If you use service endpoints on any Azure service, with a force-tunneled App Service Environment, the traffic to those services isn't force-tunneled.
-When Service Endpoints is enabled on a subnet with an Azure SQL instance, all Azure SQL instances connected to from that subnet must have Service Endpoints enabled. if you want to access multiple Azure SQL instances from the same subnet, you can't enable Service Endpoints on one Azure SQL instance and not on another. No other Azure service behaves like Azure SQL with respect to Service Endpoints. When you enable Service Endpoints with Azure Storage, you lock access to that resource from your subnet but can still access other Azure Storage accounts even if they do not have Service Endpoints enabled.
+When service endpoints are enabled on a subnet with an instance of Azure SQL, all Azure SQL instances connected to from that subnet must have service endpoints enabled. If you want to access multiple Azure SQL instances from the same subnet, you can't enable service endpoints on one Azure SQL instance and not on another. No other Azure service behaves like Azure SQL with respect to service endpoints. When you enable service endpoints with Azure Storage, you lock access to that resource from your subnet. You can still access other Azure Storage accounts, however, even if they don't have service endpoints enabled.
-![Service Endpoints][8]
+![Diagram that shows service endpoints.][8]
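For illustration, enabling service endpoints on the subnet with the Azure CLI might look like this. The resource names are placeholders, and you only need the endpoints for the services you actually use:

```shell
# Enable the Azure SQL and Azure Storage service endpoints on the subnet.
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name ase-subnet --service-endpoints Microsoft.Sql Microsoft.Storage
```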
<!--Image references-->
[1]: ./media/network_considerations_with_an_app_service_environment/networkase-overflow.png
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/networking.md
Title: App Service Environment Networking
+ Title: App Service Environment networking
description: App Service Environment networking details
# App Service Environment networking
-> [!NOTE]
-> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans
->
+App Service Environment is a single-tenant deployment of Azure App Service that hosts Windows and Linux containers, web apps, API apps, logic apps, and function apps. When you install an App Service Environment, you pick the Azure virtual network that you want it to be deployed in. All of the inbound and outbound application traffic is inside the virtual network you specify. You deploy into a single subnet in your virtual network, and nothing else can be deployed into that subnet.
-The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that hosts Windows and Linux containers, web apps, api apps, logic apps, and function apps. When you install an ASE, you pick the Azure Virtual Network that you want it to be deployed in. All of the inbound and outbound application traffic will be inside the virtual network you specify. The ASE is deployed into a single subnet in your virtual network. Nothing else can be deployed into that same subnet.
+> [!NOTE]
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
## Subnet requirements
-The subnet must be delegated to Microsoft.Web/hostingEnvironments and must be empty.
+You must delegate the subnet to `Microsoft.Web/hostingEnvironments`, and the subnet must be empty.
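As a sketch, the delegation can be set when you create the subnet with the Azure CLI. The names and address range here are placeholders:

```shell
# Create an empty subnet delegated to App Service Environment.
az network vnet subnet create --resource-group my-rg --vnet-name my-vnet \
  --name ase-subnet --address-prefixes 10.0.1.0/24 \
  --delegations Microsoft.Web/hostingEnvironments
```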
-The size of the subnet can affect the scaling limits of the App Service plan instances within the ASE. We recommend using a `/24` address space (256 addresses) for your subnet to ensure enough addresses to support production scale.
+The size of the subnet can affect the scaling limits of the App Service plan instances within the App Service Environment. It's a good idea to use a `/24` address space (256 addresses) for your subnet, to ensure enough addresses to support production scale.
-To use a smaller subnet, you should be aware of the following details of the ASE and network setup.
+If you use a smaller subnet, be aware of the following:
-Any given subnet has five addresses reserved for management purposes. On top of the management addresses, ASE will dynamically scale the supporting infrastructure and will use between 4 and 27 addresses depending on configuration and load. The remaining addresses can be used for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses).
+- Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure, and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses).
-If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the ASE or you can experience increased latency during intensive traffic load if we are not able scale the supporting infrastructure.
+- If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the App Service Environment. Another possibility is that you can experience increased latency during intensive traffic load, if Microsoft isn't able to scale the supporting infrastructure.
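The address math above can be checked with a small shell calculation. This sketch assumes the worst case of 27 infrastructure addresses on top of the 5 reserved management addresses:

```shell
# Addresses left for App Service plan instances in a given subnet, worst case.
prefix=24                                # subnet size, for example /24
total=$(( 2 ** (32 - prefix) ))          # 256 addresses in a /24
worst_case_free=$(( total - 5 - 27 ))    # minus management and infrastructure
echo "/$prefix subnet: at least $worst_case_free addresses for plan instances"
```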
## Addresses
-The ASE has the following network information at creation:
+App Service Environment has the following network information at creation:
-| Address type | description |
+| Address type | Description |
|--|-|
-| ASE virtual network | The virtual network the ASE is deployed into |
-| ASE subnet | The subnet that the ASE is deployed into |
-| Domain suffix | The domain suffix that is used by the apps made in this ASE |
-| Virtual IP | This setting is the VIP type used by the ASE. The two possible values are internal and external |
-| Inbound address | The inbound address is the address your apps on this ASE are reached at. If you have an internal VIP, it is an address in your ASE subnet. If the address is external, it will be a public facing address |
-| Default outbound addresses | The apps in this ASE will use this address, by default, when making outbound calls to the internet. |
+| App Service Environment virtual network | The virtual network deployed into. |
+| App Service Environment subnet | The subnet deployed into. |
+| Domain suffix | The domain suffix that is used by the apps made. |
+| Virtual IP (VIP) | The VIP type used. The two possible values are internal and external. |
+| Inbound address | The inbound address is the address at which your apps are reached. If you have an internal VIP, it's an address in your App Service Environment subnet. If the address is external, it's a public-facing address. |
+| Default outbound addresses | The apps use this address, by default, when making outbound calls to the internet. |
-The ASEv3 has details on the addresses used by the ASE in the **IP Addresses** portion of the ASE portal.
+You can find details in the **IP Addresses** portion of the portal, as shown in the following screenshot:
-![ASE addresses UI](./media/networking/networking-ip-addresses.png)
+![Screenshot that shows details about IP addresses.](./media/networking/networking-ip-addresses.png)
-As you scale your App Service plans in your ASE, you'll use more addresses out of your ASE subnet. The number of addresses used will vary based on the number of App Service plan instances you have, and how much traffic your ASE is receiving. Apps in the ASE don't have dedicated addresses in the ASE subnet. The specific addresses used by an app in the ASE subnet by an app will change over time.
+As you scale your App Service plans in your App Service Environment, you'll use more addresses out of your subnet. The number of addresses you use varies, based on the number of App Service plan instances you have, and how much traffic there is. Apps in the App Service Environment don't have dedicated addresses in the subnet. The specific addresses used by an app in the subnet will change over time.
## Ports and network restrictions
-For your app to receive traffic, you need to ensure that inbound Network Security Groups (NSGs) rules allow the ASE subnet to receive traffic from the required ports. In addition to any ports you'd like to receive traffic on, you should ensure the AzureLoadBalancer is able to connect to the ASE subnet on port 80. This is used for internal VM health checks. You can still control port 80 traffic from the virtual network to you ASE subnet.
+For your app to receive traffic, ensure that inbound network security group (NSG) rules allow the App Service Environment subnet to receive traffic from the required ports. In addition to any ports you'd like to receive traffic on, you should ensure that Azure Load Balancer is able to connect to the subnet on port 80. This is used for health checks of the internal virtual machine. You can still control port 80 traffic from the virtual network to your subnet.
-The general recommendation is to configure the following inbound NSG rule:
+It's a good idea to configure the following inbound NSG rule:
|Port|Source|Destination|
|-|-|-|
-|80,443|VirtualNetwork|ASE subnet range|
+|80,443|Virtual network|App Service Environment subnet range|
-The minimal requirement for ASE to be operational is:
+The minimal requirement for App Service Environment to be operational is:
|Port|Source|Destination|
|-|-|-|
-|80|AzureLoadBalancer|ASE subnet range|
+|80|Azure Load Balancer|App Service Environment subnet range|
-If you use the minimum required rule you may need one or more rules for your application traffic, and if you are using any of the deployment or debugging options, you will also have to allow this traffic to the ASE subnet. The source of these rules can be VirtualNetwork or one or more specific client IPs or IP ranges. The destination will always be the ASE subnet range.
+If you use the minimum required rule, you might need one or more rules for your application traffic. If you're using any of the deployment or debugging options, you must also allow this traffic to the App Service Environment subnet. The source of these rules can be the virtual network, or one or more specific client IPs or IP ranges. The destination is always the App Service Environment subnet range.
-The normal app access ports are:
+The normal app access ports are as follows:
|Use|Ports|
|-|-|
## Network routing
-You can set Route Tables (UDRs) without restriction. You can force tunnel all of the outbound application traffic from your ASE to an egress firewall device, such as the Azure Firewall, and not have to worry about anything other than your application dependencies. You can put WAF devices, such as the Application Gateway, in front of inbound traffic to your ASE to expose specific apps on that ASE. If you'd like to customize the outbound address of your applications on an ASE, you can add a NAT Gateway to your ASE subnet.
+You can set route tables without restriction. You can tunnel all of the outbound application traffic from your App Service Environment to an egress firewall device, such as Azure Firewall. In this scenario, the only thing you have to worry about is your application dependencies.
+
+You can put your web application firewall devices, such as Azure Application Gateway, in front of inbound traffic. Doing so exposes specific apps on that App Service Environment. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet.
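As a hedged sketch (resource names are placeholders), attaching a NAT gateway to the subnet with the Azure CLI looks like this:

```shell
# A NAT gateway needs a standard-SKU public IP for its outbound address.
az network public-ip create --resource-group my-rg --name nat-ip --sku Standard
az network nat gateway create --resource-group my-rg --name ase-nat \
  --public-ip-addresses nat-ip

# Attach the NAT gateway to the App Service Environment subnet.
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name ase-subnet --nat-gateway ase-nat
```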
## DNS
-The following sections describe the DNS considerations and configuration inbound to your ASE and outbound from your ASE.
+The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment.
-### DNS configuration to your ASE
+### DNS configuration to your App Service Environment
-If your ASE is made with an external VIP, your apps are automatically put into public DNS. If your ASE is made with an internal VIP, you may need to configure DNS for it. If you selected having Azure DNS private zones configured automatically during ASE creation, then DNS is configured in your ASE virtual network. If you selected Manually configuring DNS, you need to either use your own DNS server or configure Azure DNS private zones. To find the inbound address of your ASE, go to the **ASE portal > IP Addresses** UI.
+If your App Service Environment is made with an external VIP, your apps are automatically put into public DNS. If your App Service Environment is made with an internal VIP, you might need to configure DNS for it. When you created your App Service Environment, if you selected the option to have Azure DNS private zones configured automatically, then DNS is configured in your virtual network. If you chose to configure DNS manually, you need to either use your own DNS server or configure Azure DNS private zones. To find the inbound address, go to the App Service Environment portal, and select **IP Addresses**.
-If you want to use your own DNS server, you need to add the following records:
+If you want to use your own DNS server, add the following records:
-1. create a zone for `<ASE-name>.appserviceenvironment.net`
-1. create an A record in that zone that points * to the inbound IP address used by your ASE
-1. create an A record in that zone that points @ to the inbound IP address used by your ASE
-1. create a zone in `<ASE-name>.appserviceenvironment.net` named scm
-1. create an A record in the scm zone that points * to the IP address used by your ASE private endpoint
+1. Create a zone for `<App Service Environment-name>.appserviceenvironment.net`.
+1. Create an A record in that zone that points * to the inbound IP address used by your App Service Environment.
+1. Create an A record in that zone that points @ to the inbound IP address used by your App Service Environment.
+1. Create a zone in `<App Service Environment-name>.appserviceenvironment.net` named `scm`.
+1. Create an A record in the `scm` zone that points * to the IP address used by the private endpoint of your App Service Environment.
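The record layout the steps above describe can be sketched as a small data structure. This is a hypothetical helper for illustration only (the names and IP addresses are placeholders, not values from your environment):

```python
def required_dns_records(ase_name: str, inbound_ip: str, private_endpoint_ip: str):
    """Sketch of the zones and A records described in the steps above:
    two wildcard/apex records in the main zone, plus a wildcard record
    in the scm sub-zone pointing at the private endpoint."""
    zone = f"{ase_name}.appserviceenvironment.net"
    return {
        zone: [("*", inbound_ip), ("@", inbound_ip)],
        f"scm.{zone}": [("*", private_endpoint_ip)],
    }

records = required_dns_records("my-ase", "10.0.0.11", "10.0.0.12")
print(records["scm.my-ase.appserviceenvironment.net"])  # [('*', '10.0.0.12')]
```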
-To configure DNS in Azure DNS Private zones:
+To configure DNS in Azure DNS private zones:
-1. create an Azure DNS private zone named `<ASE-name>.appserviceenvironment.net`
-1. create an A record in that zone that points * to the inbound IP address
-1. create an A record in that zone that points @ to the inbound IP address
-1. create an A record in that zone that points *.scm to the inbound IP address
+1. Create an Azure DNS private zone named `<App Service Environment-name>.appserviceenvironment.net`.
+1. Create an A record in that zone that points * to the inbound IP address.
+1. Create an A record in that zone that points @ to the inbound IP address.
+1. Create an A record in that zone that points *.scm to the inbound IP address.
-In addition to the default domain provided when an app is created, you can also add a custom domain to your app. You can set a custom domain name without any validation on your apps in an ILB ASE. If you are using custom domains, you will need to ensure they have DNS records configured. You can follow the guidance above to configure DNS zones and records for a custom domain name by replacing the default domain name with the custom domain name. The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
+In addition to the default domain provided when an app is created, you can also add a custom domain to your app. You can set a custom domain name without any validation on your apps. If you're using custom domains, you need to ensure they have DNS records configured. You can follow the preceding guidance to configure DNS zones and records for a custom domain name (simply replace the default domain name with the custom domain name). The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
-### DNS configuration from your ASE
+### DNS configuration from your App Service Environment
-The apps in your ASE will use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server than what your virtual network is configured with, you can manually set it on a per app basis with the app settings WEBSITE_DNS_SERVER and WEBSITE_DNS_ALT_SERVER. The app setting WEBSITE_DNS_ALT_SERVER configures the secondary DNS server. The secondary DNS server is only used when there is no response from the primary DNS server.
+The apps in your App Service Environment use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per-app basis with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server, which is only used when there is no response from the primary DNS server.
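As a sketch, the two app settings named above might be defined like this before you apply them to an app (the IP addresses here are placeholders, not real DNS servers):

```python
# Per-app DNS override via app settings (placeholder IP addresses):
app_settings = {
    "WEBSITE_DNS_SERVER": "10.0.0.4",      # primary DNS server for the app
    "WEBSITE_DNS_ALT_SERVER": "10.0.0.5",  # secondary; used only when the primary doesn't respond
}
print(sorted(app_settings))
```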
## Limitations
-While the ASE does deploy into a customer virtual network, there are a few networking features that aren't available with ASE:
+While App Service Environment does deploy into your virtual network, there are a few networking features that aren't available:
-* Send SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25
-* Use of Network Watcher or NSG Flow to monitor outbound traffic
+* Sending SMTP traffic. Although you can still have email-triggered alerts, your app can't send outbound traffic on port 25.
+* Using Azure Network Watcher or NSG flow to monitor outbound traffic.
## More resources
app-service Overview Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/overview-zone-redundancy.md
Title: Zone-redundancy in App Service Environment
+ Title: Zone redundancy in App Service Environment
description: Overview of zone redundancy in an App Service Environment.
Last updated 11/15/2021
-# Availability Zone support for App Service Environments
+# Availability zone support for App Service Environment
+
+You can deploy App Service Environment across [availability zones](../../availability-zones/az-overview.md). This architecture is also known as zone redundancy. When you configure your App Service Environment to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across all three zones in the selected region. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones.
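The spreading rule above can be sketched as a small calculation. This is a hypothetical helper, not an Azure API:

```python
def spread_across_zones(capacity: int, zones: int = 3) -> list[int]:
    """Sketch of how instances are spread: evenly when capacity is divisible
    by the zone count; otherwise the remainder lands in one or two zones."""
    base, remainder = divmod(capacity, zones)
    return [base + (1 if z < remainder else 0) for z in range(zones)]

print(spread_across_zones(9))  # [3, 3, 3] — divisible by three, spread evenly
print(spread_across_zones(4))  # [2, 1, 1] — the extra instance lands in one zone
```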
> [!NOTE]
-> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans
->
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
-App Service Environment (ASE) can be deployed across [Availability Zones (AZ)](../../availability-zones/az-overview.md). This architecture is also known as zone redundancy. When an ASE is configured to be zone redundant, the platform automatically spreads the App Service plan instances in the ASE across all three zones in the selected region. If a capacity larger than three is specified and the number of instances is divisible by three, the instances will be spread evenly. Otherwise, instance counts beyond 3*N will get spread across the remaining one or two zones.
+You configure zone redundancy when you create your App Service Environment, and all App Service plans created in that App Service Environment will be zone redundant. You can only specify zone redundancy when you're creating a new App Service Environment. Zone redundancy is only supported in a [subset of regions](./overview.md#regions).
-You configure zone redundancy when you create your ASE and all App Service plans created in that ASE will be zone redundant. Zone redundancy can only be specified when creating a *new* App Service Environment. Zone redundancy is only supported in a [subset of regions](./overview.md#regions).
+When a zone goes down, the App Service platform detects lost instances and automatically attempts to find new, replacement instances. If you also have autoscale configured, and if it determines that more instances are needed, autoscale also issues a request to App Service to add more instances. Autoscale behavior is independent of App Service platform behavior.
-In the case when a zone goes down, the App Service platform will detect lost instances and automatically attempt to find new replacement instances. If you also have autoscale configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances (autoscale behavior is independent of App Service platform behavior). It's important to note there's no guarantee that requests for instances in a zone-down scenario will succeed since back filling lost instances occur on a best-effort basis. The recommended solution is to scale your App Service plans to account for losing a zone.
+There's no guarantee that requests for instances in a zone-down scenario will succeed, because back-filling lost instances occurs on a best effort basis. It's a good idea to scale your App Service plans to account for losing a zone.
-Applications deployed in a zone redundant ASE will continue to run and serve traffic even if other zones in the same region suffer an outage. However it's possible that non-runtime behaviors including App Service plan scaling, application creation, application configuration, and application publishing may still be impacted from an outage in other Availability Zones. Zone redundancy for App Service Environment only ensures continued uptime for deployed applications.
+Applications deployed in a zone redundant App Service Environment continue to run and serve traffic, even if other zones in the same region suffer an outage. It's possible, however, that non-runtime behaviors might still be affected by an outage in other availability zones. These behaviors might include the following: App Service plan scaling, application creation, application configuration, and application publishing. Zone redundancy for App Service Environment only ensures continued uptime for deployed applications.
-When the App Service platform allocates instances to a zone redundant App Service plan in an ASE, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan will be "balanced" if each zone has either the same number of instances, or +/- 1 instance in all of the other zones used by the App Service plan.
+When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan is considered balanced if each zone has either the same number of instances, or +/- 1 instance in all of the other zones used by the App Service plan.
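The balance condition above reduces to a one-line check. A minimal sketch (hypothetical helper, not part of the platform):

```python
def is_balanced(instances_per_zone: list[int]) -> bool:
    """An App Service plan is balanced when every zone has the same
    instance count, or is within +/- 1 of every other zone."""
    return max(instances_per_zone) - min(instances_per_zone) <= 1

print(is_balanced([3, 3, 2]))  # True
print(is_balanced([4, 2, 2]))  # False
```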
## Pricing
- There is a minimum charge of nine App Service plan instances in a zone redundant ASE. There is no added charge for availability zone support if you have nine or more App Service plan instances. If you have less than nine instances (of any size) across App Service plans in the zone redundant ASE, the difference between nine and the running instance count is charged as additional Windows I1v2 instances.
+ There is a minimum charge of nine App Service plan instances in a zone redundant App Service Environment. There is no added charge for availability zone support if you have nine or more instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, you're charged for the difference between nine and the running instance count. This charge is for additional Windows I1v2 instances.
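The minimum-charge rule above can be expressed as simple arithmetic. A sketch, assuming the nine-instance minimum stated in the text (hypothetical function name):

```python
def charged_extra_instances(running_instances: int, minimum: int = 9) -> int:
    """Difference between the nine-instance minimum and the running count;
    charged as additional Windows I1v2 instances when you run fewer than nine."""
    return max(0, minimum - running_instances)

print(charged_extra_instances(6))   # 3 additional I1v2 instances are charged
print(charged_extra_instances(12))  # 0 — no added charge at nine or more
```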
## Next steps
-* Read more about [Availability Zones](../../availability-zones/az-overview.md)
+* Read more about [availability zones](../../availability-zones/az-overview.md).
app-service Using An Ase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using-an-ase.md
Last updated 8/5/2021
-# Use an App Service Environment
+# Manage an App Service Environment
> [!NOTE]
> This article is about the App Service Environment v2 which is used with Isolated App Service plans
>
app-service Using https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using.md
Last updated 07/06/2021
-# Using an App Service Environment
+
+# Use an App Service Environment
+
+App Service Environment is a single-tenant deployment of Azure App Service. You use it with an Azure virtual network, and you're the only user of this system. Apps deployed there are subject to the networking features that are applied to the subnet. You don't need to enable any additional features on your apps for those networking features to apply.
> [!NOTE]
-> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans
->
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
-The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that injects directly into an Azure Virtual Network (VNet) of your choosing. It's a system that is only used by one customer. Apps deployed into the ASE are subject to the networking features that are applied to the ASE subnet. There aren't any additional features that need to be enabled on your apps to be subject to those networking features.
+## Create an app
-## Create an app in an ASE
+To create an app in your App Service Environment, you use the same process as when you normally create an app, but with a few small differences. When you create a new App Service plan:
-To create an app in an ASE, you use the same process as when you normally create an app, but with a few small differences. When you create a new App Service plan:
+- Instead of choosing a geographic location in which to deploy your app, you choose an App Service Environment as your location.
+- All App Service plans created in an App Service Environment can only be in an isolated v2 pricing tier.
-- Instead of choosing a geographic location in which to deploy your app, you choose an ASE as your location.
-- All App Service plans created in an ASE can only be in an Isolated v2 pricing tier.
+If you don't yet have one, [create an App Service Environment][MakeASE].
-If you don't have an ASE, you can create one by following the instructions in [Create an App Service Environment][MakeASE].
-To create an app in an ASE:
+To create an app in an App Service Environment:
1. Select **Create a resource** > **Web + Mobile** > **Web App**.
1. Select a subscription.
-1. Enter a name for a new resource group, or select **Use existing** and select one from the drop-down list.
-1. Enter a name for the app. If you already selected an App Service plan in an ASE, the domain name for the app reflects the domain name of the ASE
-1. Select your Publish type, Stack, and Operating System.
-1. Select region. Here you need to select a pre-existing App Service Environment v3. You can't make an ASEv3 during app creation:
-![create an app in an ASE][1]
-1. Select an existing App Service plan in your ASE, or create a new one. If creating a new app, select the size that you want for your App Service plan. The only SKU you can select for your app is an Isolated v2 pricing SKU. Making a new App Service plan will normally take less than 20 minutes.
-![Isolated v2 pricing tiers][2]
-1. Select **Next: Monitoring** If you want to enable App Insights with your app, you can do it here during the creation flow.
-1. Select **Next: Tags** Add any tags you want to the app
-1. Select **Review + create**, make sure the information is correct, and then select **Create**.
-
-Windows and Linux apps can be in the same ASE but cannot be in the same App Service plan.
+1. Enter a name for a new resource group, or select **Use existing** and select one from the dropdown list.
+1. Enter a name for the app. If you already selected an App Service plan in an App Service Environment, the domain name for the app reflects the domain name of the App Service Environment.
+1. For **Publish**, **Runtime stack**, and **Operating System**, make your selections as appropriate.
+1. For **Region**, select a pre-existing App Service Environment v3. You can't make a new one when you're creating your app.
+ ![Screenshot that shows how to create an app in an App Service Environment.][1]
+1. Select an existing App Service plan, or create a new one. If you're creating a new plan, select the size that you want. The only SKU you can select for your app is an isolated v2 pricing SKU. Creating a new App Service plan normally takes less than 20 minutes.
+ ![Screenshot that shows pricing tiers and their features and hardware.][2]
+1. Select **Next: Monitoring**. If you want to enable Application Insights with your app, you can do it here during the creation flow.
+1. Select **Next: Tags**, and add any tags you want to the app.
+1. Select **Review + create**. Make sure that the information is correct, and then select **Create**.
+
+Windows and Linux apps can be in the same App Service Environment, but can't be in the same App Service plan.
## How scale works
Every App Service app runs in an App Service plan. App Service Environments hold App Service plans, and App Service plans hold apps. When you scale an app, you also scale the App Service plan and all the apps in that same plan.
-When you scale an App Service plan, the needed infrastructure is added automatically. There's a time delay to scale operations while the infrastructure is being added. When you scale an App Service plan, and you have another scale operation of the same OS and size running, there might be a slight delay of a few minutes until the requested scale starts. A scale operation on one size and OS won't affect scaling of the other combinations of size and OS. For example, if you are scaling a Windows I2v2 App Service plan then, any other requests to scale Windows I2v2 might be slightly delayed, but a scale operation to a Windows I3v2 App Service plan will start immediately. Scaling will normally take less than 20 minutes.
+When you scale an App Service plan, the needed infrastructure is added automatically. Be aware that there's a time delay to scale operations while the infrastructure is being added. For example, when you scale an App Service plan, and you have another scale operation of the same operating system and size running, there might be a delay of a few minutes until the requested scale starts.
-In the multitenant App Service, scaling is immediate because a pool of *shared* resources is readily available to support it. ASE is a single-tenant service, so there's no shared buffer, and resources are allocated based on need.
+A scale operation on one size and operating system won't affect scaling of the other combinations of size and operating system. For example, if you're scaling a Windows I2v2 App Service plan, other requests to scale Windows I2v2 might be slightly delayed, but a scale operation on a Windows I3v2 App Service plan starts immediately. Scaling normally takes less than 20 minutes.
-## App access
+In a multi-tenant App Service, scaling is immediate, because a pool of shared resources is readily available to support it. App Service Environment is a single-tenant service, so there's no shared buffer, and resources are allocated based on need.
-In an ASE with an internal VIP, the domain suffix used for app creation is *.&lt;asename&gt;.appserviceenvironment.net*. If your ASE is named _my-ase_ and you host an app called _contoso_ in that ASE, you reach it at these URLs:
+## App access
-- contoso.my-ase.appserviceenvironment.net
-- contoso.scm.my-ase.appserviceenvironment.net
+In an App Service Environment with an internal virtual IP (VIP), the domain suffix used for app creation is *.&lt;asename&gt;.appserviceenvironment.net*. If your App Service Environment is named _my-ase_, and you host an app called _contoso_, you reach it at these URLs:
-The apps that are hosted on an ASE that uses an internal VIP will only be accessible if you are in the same virtual network as the ASE or are connected somehow to that virtual network. Publishing is also restricted to being only possible if you are in the same virtual network or are connected somehow to that virtual network.
+- `contoso.my-ase.appserviceenvironment.net`
+- `contoso.scm.my-ase.appserviceenvironment.net`
-In an ASE with an external VIP, the domain suffix used for app creation is *.&lt;asename&gt;.p.azurewebsites.net*. If your ASE is named _my-ase_ and you host an app called _contoso_ in that ASE, you reach it at these URLs:
+Apps hosted on an App Service Environment that uses an internal VIP are only accessible if you're in the same virtual network, or are connected to that virtual network. Similarly, publishing is only possible if you're in the same virtual network or are connected to that virtual network.
-- contoso.my-ase.p.azurewebsites.net
-- contoso.scm.my-ase.p.azurewebsites.net
+In an App Service Environment with an external VIP, the domain suffix used for app creation is *.&lt;asename&gt;.p.azurewebsites.net*. If your App Service Environment is named _my-ase_, and you host an app called _contoso_, you reach it at these URLs:
-For information about how to create an ASE, see [Create an App Service Environment][MakeASE].
+- `contoso.my-ase.p.azurewebsites.net`
+- `contoso.scm.my-ase.p.azurewebsites.net`
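The two domain suffixes above follow a fixed pattern, sketched here as a hypothetical helper (the app and environment names are the examples from the text):

```python
def app_urls(app: str, ase: str, internal_vip: bool) -> list[str]:
    """Build the app and scm hostnames for an internal-VIP or
    external-VIP App Service Environment."""
    suffix = "appserviceenvironment.net" if internal_vip else "p.azurewebsites.net"
    return [f"{app}.{ase}.{suffix}", f"{app}.scm.{ase}.{suffix}"]

print(app_urls("contoso", "my-ase", internal_vip=True))
# ['contoso.my-ase.appserviceenvironment.net', 'contoso.scm.my-ase.appserviceenvironment.net']
```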
-The SCM URL is used to access the Kudu console or for publishing your app by using Web Deploy. For information on the Kudu console, see [Kudu console for Azure App Service][Kudu]. The Kudu console gives you a web UI for debugging, uploading files, editing files, and much more.
+You use the `scm` URL to access the Kudu console, or for publishing your app by using web deploy. For more information, see [Kudu console for Azure App Service][Kudu]. The Kudu console gives you a web UI for debugging, uploading files, and editing files.
### DNS configuration
-If your ASE is made with an external VIP, your apps are automatically put into public DNS. If your ASE is made with an internal VIP, you may need to configure DNS for it. If you selected having Azure DNS private zones configured automatically during ASE creation then DNS is configured in your ASE VNet. If you selected Manually configuring DNS, you need to either use your own DNS server or configure Azure DNS private zones. To find the inbound address of your ASE, go to the **ASE portal > IP Addresses** UI.
+If your App Service Environment is made with an external VIP, your apps are automatically put into public DNS. If your App Service Environment is made with an internal VIP, you might need to configure DNS for it.
-![IP addresses UI][6]
+If you selected the option to have Azure DNS private zones configured automatically, DNS is configured in the virtual network of your App Service Environment. If you chose to configure DNS manually, you need to use your own DNS server or configure Azure DNS private zones.
-If you want to use your own DNS server, you need to add the following records:
+To find the inbound address, in the App Service Environment portal, select **IP addresses**.
-1. create a zone for &lt;ASE name&gt;.appserviceenvironment.net
-1. create an A record in that zone that points * to the inbound IP address used by your ASE
-1. create an A record in that zone that points @ to the inbound IP address used by your ASE
-1. create a zone in &lt;ASE name&gt;.appserviceenvironment.net named scm
-1. create an A record in the scm zone that points * to the inbound address used by your ASE
+![Screenshot that shows how to find the inbound address.][6]
-To configure DNS in Azure DNS Private zones:
+If you want to use your own DNS server, add the following records:
-1. create an Azure DNS private zone named &lt;ASE name&gt;.appserviceenvironment.net
-1. create an A record in that zone that points * to the inbound IP address
-1. create an A record in that zone that points @ to the inbound IP address
-1. create an A record in that zone that points *.scm to the inbound IP address
+1. Create a zone for `<App Service Environment-name>.appserviceenvironment.net`.
+1. Create an A record in that zone that points * to the inbound IP address used by your App Service Environment.
+1. Create an A record in that zone that points @ to the inbound IP address used by your App Service Environment.
+1. Create a zone in `<App Service Environment-name>.appserviceenvironment.net` named `scm`.
+1. Create an A record in the `scm` zone that points * to the inbound address used by your App Service Environment.
-The DNS settings for your ASE default domain suffix don't restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an ASE. If you then want to create a zone named *contoso.net*, you could do so and point it to the inbound IP address. The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
+To configure DNS in Azure DNS private zones:
+
+1. Create an Azure DNS private zone named `<App Service Environment-name>.appserviceenvironment.net`.
+1. Create an A record in that zone that points * to the inbound IP address.
+1. Create an A record in that zone that points @ to the inbound IP address.
+1. Create an A record in that zone that points *.scm to the inbound IP address.
+
+The DNS settings for the default domain suffix of your App Service Environment don't restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an App Service Environment. If you then want to create a zone named `contoso.net`, you can do so and point it to the inbound IP address. The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
## Publishing
-In an ASE, as with the multitenant App Service, you can publish by these methods:
+You can publish by any of the following methods:
- Web deployment
- Continuous integration (CI)
-- Drag and drop in the Kudu console
-- An IDE, such as Visual Studio, Eclipse, or IntelliJ IDEA
+- Drag-and-drop in the Kudu console
+- An integrated development environment (IDE), such as Visual Studio, Eclipse, or IntelliJ IDEA
-With an internal VIP ASE, the publishing endpoints are only available through the inbound address. If you don't have network access to the inbound address, you can't publish any apps on that ASE. Your IDEs must also have network access to the inbound address on the ASE to publish directly to it.
+With an internal VIP App Service Environment, the publishing endpoints are only available through the inbound address. If you don't have network access to the inbound address, you can't publish any apps on that App Service Environment. Your IDEs must also have network access to the inbound address on the App Service Environment to publish directly to it.
-Without additional changes, internet-based CI systems like GitHub and Azure DevOps don't work with an internal VIP ASE because the publishing endpoint isn't internet accessible. You can enable publishing to an internal VIP ASE from Azure DevOps by installing a self-hosted release agent in the virtual network that contains the ASE.
+Without additional changes, internet-based CI systems like GitHub and Azure DevOps don't work with an internal VIP App Service Environment. The publishing endpoint isn't internet accessible. You can enable publishing to an internal VIP App Service Environment from Azure DevOps, by installing a self-hosted release agent in the virtual network.
## Storage
-An ASE has 1 TB of storage for all the apps in the ASE. An App Service plan in the Isolated pricing SKU has a limit of 250 GB. In an ASE, 250 GB of storage is added per App Service plan up to the 1 TB limit. You can have more App Service plans than just four, but there is no more storage added beyond the 1 TB limit.
+You have 1 TB of storage for all the apps in your App Service Environment. An App Service plan in the isolated pricing SKU has a limit of 250 GB. In an App Service Environment, 250 GB of storage is added per App Service plan, up to the 1 TB limit. You can have more App Service plans than just four, but there is no additional storage beyond the 1 TB limit.
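The storage limit above can be expressed as a small calculation. A sketch, assuming 1 TB = 1,024 GB (hypothetical function name):

```python
def usable_storage_gb(app_service_plans: int) -> int:
    """250 GB is added per App Service plan, capped at the 1 TB
    (1,024 GB) limit for the whole App Service Environment."""
    return min(app_service_plans * 250, 1024)

print(usable_storage_gb(2))  # 500
print(usable_storage_gb(6))  # 1024 — capped; no storage added beyond 1 TB
```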
## Logging
-You can integrate your ASE with Azure Monitor to send logs about the ASE to Azure Storage, Azure Event Hubs, or Log Analytics. These items are logged today:
+You can integrate with Azure Monitor to send logs to Azure Storage, Azure Event Hubs, or Azure Monitor Logs. The following table shows the situations and messages you can log:
|Situation |Message |
|-|--|
-|ASE subnet is almost out of space | The specified ASE is in a subnet that is almost out of space. There are {0} remaining addresses. Once these addresses are exhausted, the ASE will not be able to scale. |
-|ASE is approaching total instance limit | The specified ASE is approaching the total instance limit of the ASE. It currently contains {0} App Service Plan instances of a maximum 200 instances. |
-|ASE is suspended | The specified ASE is suspended. The ASE suspension may be due to an account shortfall or an invalid virtual network configuration. Resolve the root cause and resume the ASE to continue serving traffic. |
-|ASE upgrade has started | A platform upgrade to the specified ASE has begun. Expect delays in scaling operations. |
-|ASE upgrade has completed | A platform upgrade to the specified ASE has finished. |
-|App Service plan creation has started | An App Service plan ({0}) creation has started. Desired state: {1} I{2}v2 workers.
-|Scale operations have completed | An App Service plan ({0}) creation has finished. Current state: {1} I{2}v2 workers. |
-|Scale operations have failed | An App Service plan ({0}) creation has failed. This may be due to the ASE operating at peak number of instances, or run out of subnet addresses. |
-|Scale operations have started | An App Service plan ({0}) has begun scaling. Current state: {1} I(2)v2. Desired state: {3} I{4}v2 workers.|
-|Scale operations have completed | An App Service plan ({0}) has finished scaling. Current state: {1} I{2}v2 workers. |
-|Scale operations were interrupted | An App Service plan ({0}) was interrupted while scaling. Previous desired state: {1} I{2}v2 workers. New desired state: {3} I{4}v2 workers. |
-|Scale operations have failed | An App Service plan ({0}) has failed to scale. Current state: {1} I{2}v2 workers. |
-
-To enable logging on your ASE:
-
-1. In the portal, go to **Diagnostics settings**.
+|App Service Environment subnet is almost out of space. | The specified App Service Environment is in a subnet that is almost out of space. There are {0} remaining addresses. Once these addresses are exhausted, the App Service Environment will not be able to scale. |
+|App Service Environment is approaching total instance limit. | The specified App Service Environment is approaching the total instance limit of the App Service Environment. It currently contains {0} App Service Plan instances of a maximum 200 instances. |
+|App Service Environment is suspended. | The specified App Service Environment is suspended. The App Service Environment suspension may be due to an account shortfall or an invalid virtual network configuration. Resolve the root cause and resume the App Service Environment to continue serving traffic. |
+|App Service Environment upgrade has started. | A platform upgrade to the specified App Service Environment has begun. Expect delays in scaling operations. |
+|App Service Environment upgrade has completed. | A platform upgrade to the specified App Service Environment has finished. |
+|App Service plan creation has started. | An App Service plan ({0}) creation has started. Desired state: {1} I{2}v2 workers. |
+|Scale operations have completed. | An App Service plan ({0}) creation has finished. Current state: {1} I{2}v2 workers. |
+|Scale operations have failed. | An App Service plan ({0}) creation has failed. This may be due to the App Service Environment operating at its peak number of instances, or running out of subnet addresses. |
+|Scale operations have started. | An App Service plan ({0}) has begun scaling. Current state: {1} I{2}v2 workers. Desired state: {3} I{4}v2 workers.|
+|Scale operations have completed. | An App Service plan ({0}) has finished scaling. Current state: {1} I{2}v2 workers. |
+|Scale operations were interrupted. | An App Service plan ({0}) was interrupted while scaling. Previous desired state: {1} I{2}v2 workers. New desired state: {3} I{4}v2 workers. |
+|Scale operations have failed. | An App Service plan ({0}) has failed to scale. Current state: {1} I{2}v2 workers. |
+
+To enable logging, follow these steps:
+
+1. In the portal, go to **Diagnostic settings**.
1. Select **Add diagnostic setting**.
1. Provide a name for the log integration.
1. Select and configure the log destinations that you want.
1. Select **AppServiceEnvironmentPlatformLogs**.
-![ASE diagnostic log settings][4]
+![Screenshot that shows how to enable logging.][4]
+
+If you integrate with Azure Monitor Logs, you can see the logs by selecting **Logs** from the App Service Environment portal, and creating a query against **AppServiceEnvironmentPlatformLogs**. Logs are only emitted when your App Service Environment has an event that triggers the logs. If your App Service Environment doesn't have such an event, there won't be any logs. To quickly see an example of logs, perform a scale operation with an App Service plan. You can then run a query against **AppServiceEnvironmentPlatformLogs** to see those logs.
-If you integrate with Log Analytics, you can see the logs by selecting **Logs** from the ASE portal and creating a query against **AppServiceEnvironmentPlatformLogs**. Logs are only emitted when your ASE has an event that will trigger it. If your ASE doesn't have such an event, there won't be any logs. To quickly see an example of logs in your Log Analytics workspace, perform a scale operation with an App Service plan in your ASE. You can then run a query against **AppServiceEnvironmentPlatformLogs** to see those logs.
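As a sketch of the query mentioned above, a minimal Kusto query to list recent platform log entries might look like the following (`TimeGenerated` is a standard Azure Monitor Logs column, `ResultDescription` is the column referenced later in this article, and the one-hour window is an arbitrary example):

```kusto
// List App Service Environment platform log entries from the last hour
AppServiceEnvironmentPlatformLogs
| where TimeGenerated > ago(1h)
| project TimeGenerated, ResultDescription
```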
+### Create an alert
-### Creating an alert
+To create an alert against your logs, follow the instructions in [Create, view, and manage log alerts by using Azure Monitor](../../azure-monitor/alerts/alerts-log.md). In brief:
-To create an alert against your logs, follow the instructions in [Create, view, and manage log alerts using Azure Monitor](../../azure-monitor/alerts/alerts-log.md). In brief:
+1. Open the **Alerts** page in your App Service Environment portal.
+1. Select **New alert rule**.
+1. For **Resource**, select your Azure Monitor Logs workspace.
+1. Set your condition with a custom log search to use a query. For example, you might set the following: **AppServiceEnvironmentPlatformLogs | where ResultDescription contains *has begun scaling***. Set the threshold as appropriate.
+1. Add or create an action group (optional). The action group is where you define the response to the alert, such as sending an email or an SMS message.
+1. Name your alert and save it.
-* Open the Alerts page in your ASE portal
-* Select **New alert rule**
-* Select your Resource to be your Log Analytics workspace
-* Set your condition with a custom log search to use a query like, "AppServiceEnvironmentPlatformLogs | where ResultDescription contains "has begun scaling" or whatever you want. Set the threshold as appropriate.
-* Add or create an action group as desired. The action group is where you define the response to the alert such as sending an email or an SMS message
-* Name your alert and save it.
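The custom log search condition in the steps above can be expressed as a Kusto query. This sketch uses the example filter string from the step; adjust the string to match the event you want to alert on:

```kusto
// Match scale-start events emitted by the App Service Environment
AppServiceEnvironmentPlatformLogs
| where ResultDescription contains "has begun scaling"
```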
+## Internal encryption
-## Internal Encryption
+You can't see the internal components or the communication within the App Service Environment system. To enable higher throughput, encryption isn't enabled by default between internal components. The system is secure because the traffic can't be monitored or accessed. If you have a compliance requirement for complete encryption of the data path, you can enable it. Select **Configuration**, as shown in the following screenshot.
-The App Service Environment operates as a black box system where you cannot see the internal components or the communication within the system. To enable higher throughput, encryption is not enabled by default between internal components. The system is secure as the traffic is inaccessible to being monitored or accessed. If you have a compliance requirement though that requires complete encryption of the data path from end to end encryption, you can enable this in the ASE **Configuration** UI.
+![Screenshot that shows how to enable internal encryption.][5]
-![Enable internal encryption][5]
+This option encrypts internal network traffic, and also encrypts the pagefile and the worker disks. Be aware that this option can affect your system performance. Your App Service Environment will be in an unstable state until the change is fully propagated. Complete propagation of the change can take a few hours, depending on how many instances you have.
-This will encrypt internal network traffic in your ASE between the front ends and workers, encrypt the pagefile and also encrypt the worker disks. After the InternalEncryption clusterSetting is enabled, there can be an impact to your system performance. When you make the change to enable InternalEncryption, your ASE will be in an unstable state until the change is fully propagated. Complete propagation of the change can take a few hours to complete, depending on how many instances you have in your ASE. We highly recommend that you do not enable this on an ASE while it is in use. If you need to enable this on an actively used ASE, we highly recommend that you divert traffic to a backup environment until the operation completes.
+Avoid enabling this option while your App Service Environment is in use. If you must, it's a good idea to divert traffic to a backup environment until the operation finishes.
## Upgrade preference
-If you have multiple ASEs, you might want some ASEs to be upgraded before others. This behavior can be enabled through your ASE portal. Under **Configuration** you have the option to set **Upgrade preference**. The three possible values are:
+If you have multiple App Service Environments, you might want some of them to be upgraded before others. You can enable this behavior through your App Service Environment portal. Under **Configuration**, you have the option to set **Upgrade preference**. The possible values are:
-- **None**: Azure will upgrade your ASE in no particular batch. This value is the default.
-- **Early**: Your ASE will be upgraded in the first half of the App Service upgrades.
-- **Late**: Your ASE will be upgraded in the second half of the App Service upgrades.
+- **None**: Azure upgrades in no particular batch. This value is the default.
+- **Early**: Upgrade in the first half of the App Service upgrades.
+- **Late**: Upgrade in the second half of the App Service upgrades.
-Select the value desired and select **Save**. The default for any ASE is **None**.
+Select the value you want, and then select **Save**.
-![ASE configuration portal][5]
+![Screenshot that shows the App Service Environment configuration portal.][5]
-The **upgradePreferences** feature makes the most sense when you have multiple ASEs because your "Early" ASEs will be upgraded before your "Late" ASEs. When you have multiple ASEs, you should set your development and test ASEs to be "Early" and your production ASEs to be "Late".
+This feature makes the most sense when you have multiple App Service Environments, and you might benefit from sequencing the upgrades. For example, you might set your development and test App Service Environments to be early, and your production App Service Environments to be late.
-## Delete an ASE
+## Delete an App Service Environment
-To delete an ASE:
+To delete an App Service Environment, follow these steps:
-1. Select **Delete** at the top of the **App Service Environment** pane.
-1. Enter the name of your ASE to confirm that you want to delete it. When you delete an ASE, you also delete all the content within it.
-![ASE deletion][3]
+1. At the top of the **App Service Environment** pane, select **Delete**.
+1. Enter the name of your App Service Environment to confirm that you want to delete it. When you delete an App Service Environment, you also delete all the content within it.
+ ![Screenshot that shows how to delete.][3]
1. Select **OK**.

<!--Image references-->
To delete an ASE:
[AppDeploy]: ../deploy-local-git.md
[ASEWAF]: ./integrate-with-application-gateway.md
[AppGW]: ../../web-application-firewall/ag/ag-overview.md
-[logalerts]: ../../azure-monitor/alerts/alerts-log.md
+[logalerts]: ../../azure-monitor/alerts/alerts-log.md
azure-arc Concepts Distributed Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/concepts-distributed-postgres-hyperscale.md
It is important to know about a following concepts to benefit the most from Azur
- Types of tables: distributed tables, reference tables and local tables
- Shards
-See more information at [Nodes and tables in Azure Database for PostgreSQL ΓÇô Hyperscale (Citus)](../../postgresql/concepts-hyperscale-nodes.md).
+See more information at [Nodes and tables in Azure Database for PostgreSQL ΓÇô Hyperscale (Citus)](../../postgresql/hyperscale/concepts-nodes.md).
## Determine the application type

Clearly identifying the type of application you are building is important. Why?
The recommended distribution varies by the type of application and its query pat
The first step in data modeling is to identify which of them more closely resembles your application.
-See details at [Determining application type](../../postgresql/concepts-hyperscale-app-type.md).
+See details at [Determining application type](../../postgresql/hyperscale/concepts-app-type.md).
## Choose a distribution column
Why choose a distributed column?
This is one of the most important modeling decisions you'll make. Azure Arc-enabled PostgreSQL Hyperscale stores rows in shards based on the value of the rows' distribution column. The correct choice groups related data together on the same physical nodes, which makes queries fast and adds support for all SQL features. An incorrect choice makes the system run slowly and won't support all SQL features across nodes. This article gives distribution column tips for the two most common hyperscale scenarios.
-See details at [Choose distribution columns](../../postgresql/concepts-hyperscale-choose-distribution-column.md).
+See details at [Choose distribution columns](../../postgresql/hyperscale/concepts-choose-distribution-column.md).
## Table colocation
See details at [Choose distribution columns](../../postgresql/concepts-hyperscal
Colocation is about storing related information together on the same nodes. Queries can go fast when all the necessary data is available without any network traffic. Colocating related data on different nodes allows queries to run efficiently in parallel on each node.
-See details at [Table colocation](../../postgresql/concepts-hyperscale-colocation.md).
+See details at [Table colocation](../../postgresql/hyperscale/concepts-colocation.md).
## Next steps
azure-arc Create Postgresql Hyperscale Server Group Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-data-studio.md
While indicating 1 worker works, we do not recommend you use it. This deployment
- [Manage your server group using Azure Data Studio](manage-postgresql-hyperscale-server-group-with-azure-data-studio.md)
- [Monitor your server group](monitor-grafana-kibana.md)
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for Postgres Hyperscale:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
While indicating 1 worker works, we do not recommend you use it. This deployment
- Connect to your Azure Arc-enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from better performances potentially:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655
- Connect to your Azure Arc-enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from better performances potentially:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc Migrate Postgresql Data Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/migrate-postgresql-data-into-postgresql-hyperscale-server-group.md
Within your Arc setup you can use `psql` to connect to your Postgres instance, s
## Next steps

- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for PostgreSQL Hyperscale:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> *In these documents, skip the sections **Sign in to the Azure portal**, and **Create an Azure Database for Postgres - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc Restore Adventureworks Sample Db Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group.md
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --use
## Suggested next steps

- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for PostgreSQL Hyperscale:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc Scale Out In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/scale-out-in-postgresql-hyperscale-server-group.md
You scale in when you remove Postgres instances (Postgres Hyperscale worker node
## Get started

If you are already familiar with the scaling model of Azure Arc-enabled PostgreSQL Hyperscale or Azure Database for PostgreSQL Hyperscale (Citus), you may skip this paragraph. If you are not, it is recommended you start by reading about this scaling model in the documentation page of Azure Database for PostgreSQL Hyperscale (Citus). Azure Database for PostgreSQL Hyperscale (Citus) is the same technology that is hosted as a service in Azure (Platform As A Service also known as PAAS) instead of being offered as part of Azure Arc-enabled Data
-- [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
-- [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
-- [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
-- [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
-- [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
-- [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
-- [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+- [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+- [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+- [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+- [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+- [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+- [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+- [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
The scale-in operation is an online operation. Your applications continue to acc
- Read about how to [scale up and down (memory, vCores) your Azure Arc-enabled PostgreSQL Hyperscale server group](scale-up-down-postgresql-hyperscale-server-group-using-cli.md)
- Read about how to set server parameters in your Azure Arc-enabled PostgreSQL Hyperscale server group
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for Postgres Hyperscale:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc What Is Azure Arc Enabled Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/what-is-azure-arc-enabled-postgres-hyperscale.md
With the Direct connectivity mode offered by Azure Arc-enabled data services you
- **Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to potentially benefit from better performances**:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-release-notes-archive.md
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues
- Bug fixes
-## June 2021
+## Version 1.9 - July 2021
-Version 1.7
+### New features
+
+Added support for the Indonesian language
+
+### Fixed
+
+Fixed a bug that prevented extension management in the West US 3 region
+
+## Version 1.8 - July 2021
+
+### New features
+
+- Improved reliability when installing the Azure Monitor Agent extension on Red Hat and CentOS systems
+- Added agent-side enforcement of max resource name length (54 characters)
+- Guest Configuration policy improvements:
+ - Added support for PowerShell-based Guest Configuration policies on Linux operating systems
+ - Added support for multiple assignments of the same Guest Configuration policy on the same server
+ - Upgraded PowerShell Core to version 7.1 on Windows operating systems
+
+### Fixed
+
+- The agent will continue running if it is unable to write service start/stop events to the Windows application event log
+
+## Version 1.7 - June 2021
### New features
Version 1.7
- Onboarding continues instead of aborting if OS information cannot be obtained
- Improved reliability when installing the Log Analytics agent for Linux extension on Red Hat and CentOS systems
-## May 2021
-
-Version 1.6
+## Version 1.6 - May 2021
### New features
Version 1.6
- Added V2 signature support for extension validation.
- Minor update to data logging.
-## April 2021
-
-Version 1.5
+## Version 1.5 - April 2021
### New features
Version 1.5
- New `-json` parameter to direct output results in JSON format (when used with -useStderr).
- Collect other instance metadata - Manufacturer, model, and cluster resource ID (for Azure Stack HCI nodes).
-## March 2021
-
-Version 1.4
+## Version 1.4 - March 2021
### New features
Version 1.4
Network endpoint checks are now faster.
-## December 2020
-
-Version: 1.3
+## Version 1.3 - December 2020
### New features
Added support for Windows Server 2008 R2 SP1.
Resolved issue preventing the Custom Script Extension on Linux from installing successfully.
-## November 2020
-
-Version: 1.2
+## Version 1.2 - November 2020
### Fixed

Resolved issue where proxy configuration could be lost after upgrade on RPM-based distributions.
-## October 2020
-
-Version: 1.1
+## Version 1.1 - October 2020
### Fixed
Version: 1.1
- GuestConfig agent support for US Gov Virginia region.
- GuestConfig agent extension report messages to be more verbose if there is a failure.
-## September 2020
+## Version 1.0 - September 2020
-Version: 1.0 (General Availability)
+This version is the first generally available release of the Azure Connected Machine Agent.
### Plan for change
Version: 1.0 (General Availability)
- Resolved issues when attempting to install agent on server running Windows Server 2012 R2.
- Improvements to extension installation reliability
-## August 2020
-
-Version: 0.11
-- This release previously announced support for Ubuntu 20.04. Because some Azure VM extensions don't support Ubuntu 20.04, support for this version of Ubuntu is being removed.
-- Reliability improvements for extension deployments.
-### Known issues
-
-If you are using an older version of the Linux agent and it's configured to use a proxy server, you need to reconfigure the proxy server setting after the upgrade. To do this, run `sudo azcmagent_proxy add http://proxyserver.local:83`.
## Next steps

- Before evaluating or enabling Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-release-notes.md
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
-## November 2021
+## Version 1.14 - January 2022
-Version 1.13
+### Fixed
+
+- A state corruption issue in the extension manager that could cause extension operations to get stuck in transient states has been fixed. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Version 1.13 - November 2021
+
+### Known issues
+
+- Extensions may get stuck in transient states (creating, deleting, updating) on Windows machines running the 1.13 agent in certain conditions. Microsoft recommends upgrading to agent version 1.14 as soon as possible to resolve this issue.
### Fixed
Version 1.13
- Extension operations will execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](agent-overview.md#networking-configuration)). The extension manager will fall back to the existing behavior of checking every 5 minutes when the notification service cannot be reached.
- Detection of the AWS account ID, instance ID, and region information for servers running in Amazon Web Services.
-## October 2021
-
-Version 1.12
+## Version 1.12 - October 2021
### Fixed
Version 1.12
- `azcmagent_proxy remove` command on Linux now correctly removes environment variables on Red Hat Enterprise Linux and related distributions.
- `azcmagent logs` now includes the computer name and timestamp to help disambiguate log files.
-## September 2021
-
-Version 1.11
+## Version 1.11 - September 2021
### Fixed
Version 1.11
- The guest configuration policy agent will now automatically retry if an error is encountered during service start or restart events.
- Fixed an issue that prevented guest configuration audit policies from successfully executing on Linux machines.
-## August 2021
-
-Version 1.10
+## Version 1.10 - August 2021
### Fixed

- The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/policy/concepts/guest-configuration-policy-effects.md).
- The guest configuration policy agent now restarts every 48 hours instead of every 6 hours.
-## July 2021
-
-Version 1.9
-
-## New features
-
-Added support for the Indonesian language
-
-### Fixed
-
-Fixed a bug that prevented extension management in the West US 3 region
-
-Version 1.8
-
-### New features
-
-- Improved reliability when installing the Azure Monitor Agent extension on Red Hat and CentOS systems
-- Added agent-side enforcement of max resource name length (54 characters)
-- Guest Configuration policy improvements:
- - Added support for PowerShell-based Guest Configuration policies on Linux operating systems
- - Added support for multiple assignments of the same Guest Configuration policy on the same server
- - Upgraded PowerShell Core to version 7.1 on Windows operating systems
-
-### Fixed
-- The agent will continue running if it is unable to write service start/stop events to the Windows application event log

## Next steps

- Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
Active geo-replication groups up to five Enterprise Azure Cache for Redis instan
1. In the **Advanced** tab of **New Redis Cache** creation UI, select **Enterprise** for **Clustering Policy**.
- ![Configure active geo-replication](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-not-configured.png)
+ For more information on choosing **Clustering policy**, see [Clustering Policy](quickstart-create-redis-enterprise.md#clustering-policy).
+
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-not-configured.png" alt-text="Configure active geo-replication":::
1. Select **Configure** to set up **Active geo-replication**.

1. Create a new replication group, for a first cache instance, or select an existing one from the list.
- ![Link caches](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-new-group.png)
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-new-group.png" alt-text="Link caches":::
1. Select **Configure** to finish.
- ![Active geo-replication configured](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-configured.png)
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-configured.png" alt-text="Active geo-replication configured":::
1. Wait for the first cache to be created successfully. Repeat the above steps for each additional cache instance in the geo-replication group.
azure-cache-for-redis Quickstart Create Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
Last updated 02/08/2021
# Quickstart: Create a Redis Enterprise cache
-Azure Cache for Redis' Enterprise tiers provide fully integrated and managed [Redis Enterprise](https://redislabs.com/redis-enterprise/) on Azure. These new tiers are:
+The Azure Cache for Redis Enterprise tiers provide fully integrated and managed [Redis Enterprise](https://redislabs.com/redis-enterprise/) on Azure. These new tiers are:
* Enterprise, which uses volatile memory (DRAM) on a virtual machine to store data
* Enterprise Flash, which uses both volatile and non-volatile memory (NVMe or SSD) to store data.
You'll need an Azure subscription before you begin. If you don't have one, creat
1. Select **Next: Networking** and skip.
-1. Select **Next: Advanced** and set **Clustering policy** to **Enterprise** for a non-clustered cache. Enable **Non-TLS access only** if you plan to connect to the new cache without using TLS. Disabling TLS is **not** recommended, however.
+1. Select **Next: Advanced**.
+
+ Enable **Non-TLS access only** if you plan to connect to the new cache without using TLS. Disabling TLS is **not** recommended, however.
+
+ Set **Clustering policy** to **Enterprise** for a non-clustered cache. For more information on choosing **Clustering policy**, see [Clustering Policy](#clustering-policy).
   :::image type="content" source="media/cache-create/enterprise-tier-advanced.png" alt-text="Screenshot that shows the Enterprise tier Advanced tab.":::

   > [!NOTE]
- > Redis Enterprise supports two clustering policies. Use the **Enterprise** policy to access
- > your cache using the regular Redis API, and **OSS** the OSS Cluster API.
+ > Redis Enterprise supports two clustering policies. Use the **Enterprise** policy to access your cache using the regular Redis API, and the **OSS** policy to access the OSS Cluster API.
>
> [!NOTE]
You'll need an Azure subscription before you begin. If you don't have one, creat
It takes some time for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
+## Clustering Policy
+
+The OSS Cluster mode allows clients to communicate with Redis using the same Redis Cluster API as open-source Redis. This mode provides optimal latency and near-linear scalability improvements when scaling the cluster. Your client library must support clustering to use the OSS Cluster mode.
+
+The Enterprise Cluster mode is a simpler configuration that exposes a single endpoint for client connections. This mode allows an application designed to use a standalone, or non-clustered, Redis server to seamlessly operate with a scalable, multi-node, Redis implementation. Enterprise Cluster mode abstracts the Redis Cluster implementation from the client by internally routing requests to the correct node in the cluster. Clients are not required to support OSS Cluster mode.
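As background on why the client library matters in OSS Cluster mode: open-source Redis Cluster clients map every key to one of 16,384 hash slots (CRC16 of the key, with `{hash tag}` handling) and route each command to the node that owns that slot, while the Enterprise policy performs equivalent routing behind its single endpoint. A stdlib-only sketch of the slot computation a cluster-aware client performs (illustrative background, not part of the Azure quickstart):

```python
def _crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM, the checksum named by the open-source Redis Cluster spec."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Map a key to one of 16,384 hash slots, honoring {hash tag} syntax."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # only a non-empty tag is hashed
            key = key[start + 1:end]
    return _crc16_xmodem(key.encode()) % 16384
```

Keys sharing the same `{hash tag}` land on the same slot, which is how multi-key operations stay on one node; with the Enterprise policy none of this is visible to the client.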
## Next steps

In this quickstart, you learned how to create an Enterprise tier instance of Azure Cache for Redis.
azure-functions Durable Functions Http Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-http-features.md
The "call HTTP" API can automatically implement the client side of the polling c
Durable Functions natively supports calls to APIs that accept Azure Active Directory (Azure AD) tokens for authorization. This support uses [Azure managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to acquire these tokens.
-The following code is an example of a .NET orchestrator function. The function makes authenticated calls to restart a virtual machine by using the Azure Resource Manager [virtual machines REST API](/rest/api/compute/virtualmachines).
+The following code is an example of an orchestrator function. The function makes authenticated calls to restart a virtual machine by using the Azure Resource Manager [virtual machines REST API](/rest/api/compute/virtualmachines).
# [C#](#tab/csharp)
azure-functions Functions Compare Logic Apps Ms Flow Webjobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md
Power Automate and Logic Apps are both *designer-first* integration services tha
Power Automate is built on top of Logic Apps. They share the same workflow designer and the same [connectors](../connectors/apis-list.md).
-Power Automate empowers any office worker to perform simple integrations (for example, an approval process on a SharePoint Document Library) without going through developers or IT. Logic Apps can also enable advanced integrations (for example, B2B processes) where enterprise-level Azure DevOps and security practices are required. It's typical for a business workflow to grow in complexity over time. Accordingly, you can start with a flow at first, and then convert it to a logic app as needed.
+Power Automate empowers any office worker to perform simple integrations (for example, an approval process on a SharePoint Document Library) without going through developers or IT. Logic Apps can also enable advanced integrations (for example, B2B processes) where enterprise-level Azure DevOps and security practices are required. It's typical for a business workflow to grow in complexity over time.
The following table helps you determine whether Power Automate or Logic Apps is best for a particular integration:
The following table helps you determine whether Power Automate or Logic Apps is
| --- | --- | --- |
| **Users** |Office workers, business users, SharePoint administrators |Pro integrators and developers, IT pros |
| **Scenarios** |Self-service |Advanced integrations |
-| **Design tool** |In-browser and mobile app, UI only |In-browser and [Visual Studio](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md), [Code view](../logic-apps/logic-apps-author-definitions.md) available |
+| **Design tool** |In-browser and mobile app, UI only |In-browser, [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md), and [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) with code view available |
| **Application lifecycle management (ALM)** |Design and test in non-production environments, promote to production when ready |Azure DevOps: source control, testing, support, automation, and manageability in [Azure Resource Manager](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md) |
| **Admin experience** |Manage Power Automate environments and data loss prevention (DLP) policies, track licensing: [Admin center](https://admin.flow.microsoft.com) |Manage resource groups, connections, access management, and logging: [Azure portal](https://portal.azure.com) |
| **Security** |Microsoft 365 security audit logs, DLP, [encryption at rest](https://wikipedia.org/wiki/Data_at_rest#Encryption) for sensitive data |Security assurance of Azure: [Azure security](https://www.microsoft.com/en-us/trustcenter/Security/AzureSecurity), [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/), [audit logs](https://azure.microsoft.com/blog/azure-audit-logs-ux-refresh/) |
You can mix and match services when you build an orchestration, calling function
| | Durable Functions | Logic Apps |
| --- | --- | --- |
| **Development** | Code-first (imperative) | Designer-first (declarative) |
-| **Connectivity** | [About a dozen built-in binding types](functions-triggers-bindings.md#supported-bindings), write code for custom bindings | [Large collection of connectors](../connectors/apis-list.md), [Enterprise Integration Pack for B2B scenarios](../logic-apps/logic-apps-enterprise-integration-overview.md), [build custom connectors](../logic-apps/custom-connector-overview.md) |
-| **Actions** | Each activity is an Azure function; write code for activity functions |[Large collection of ready-made actions](../logic-apps/logic-apps-workflow-actions-triggers.md)|
-| **Monitoring** | [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [Azure Monitor logs](../logic-apps/monitor-logic-apps.md)|
+| **Connectivity** | [About a dozen built-in binding types](functions-triggers-bindings.md#supported-bindings), write code for custom bindings | [Large collection of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors), [Enterprise Integration Pack for B2B scenarios](../logic-apps/logic-apps-enterprise-integration-overview.md), [build custom connectors](/connectors/custom-connectors/) |
+| **Actions** | Each activity is an Azure function; write code for activity functions |[Large collection of ready-made actions](/connectors/connector-reference/connector-reference-logicapps-connectors)|
+| **Monitoring** | [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [Azure Monitor logs](../logic-apps/monitor-logic-apps-log-analytics.md), [Microsoft Defender for Cloud](../logic-apps/healthy-unhealthy-resource.md) |
| **Management** | [REST API](durable/durable-functions-http-api.md), [Visual Studio](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [REST API](/rest/api/logic/), [PowerShell](/powershell/module/az.logicapp), [Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md) |
| **Execution context** | Can run [locally](./functions-kubernetes-keda.md) or in the cloud | Runs only in the cloud|
azure-functions Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/ip-addresses.md
You can control the IP address of outbound traffic from your functions by using
### App Service Environments
-For full control over the IP addresses, both inbound and outbound, we recommend [App Service Environments](../app-service/environment/intro.md) (the [Isolated tier](https://azure.microsoft.com/pricing/details/app-service/) of App Service plans). For more information, see [App Service Environment IP addresses](../app-service/environment/network-info.md#ase-ip-addresses) and [How to control inbound traffic to an App Service Environment](../app-service/environment/app-service-app-service-environment-control-inbound-traffic.md).
+For full control over the IP addresses, both inbound and outbound, we recommend [App Service Environments](../app-service/environment/intro.md) (the [Isolated tier](https://azure.microsoft.com/pricing/details/app-service/) of App Service plans). For more information, see [App Service Environment IP addresses](../app-service/environment/network-info.md#ip-addresses) and [How to control inbound traffic to an App Service Environment](../app-service/environment/app-service-app-service-environment-control-inbound-traffic.md).
To find out if your function app runs in an App Service Environment:
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/azure-maps-authentication.md
# Authentication with Azure Maps
-Azure Maps supports two ways to authenticate requests: Shared Key authentication and [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) authentication. This article explains both authentication methods to help guide your implementation of Azure Maps services.
+Azure Maps supports three ways to authenticate requests: Shared Key authentication, [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) authentication, and Shared Access Signature (SAS) Token authentication. This article explains these authentication methods to help guide your implementation of Azure Maps services. The article also describes additional account controls, such as disabling local authentication with Azure Policy and configuring Cross-Origin Resource Sharing (CORS).
> [!NOTE]
> To improve secure communication with Azure Maps, we now support Transport Layer Security (TLS) 1.2, and we're retiring support for TLS 1.0 and 1.1. If you currently use TLS 1.x, evaluate your TLS 1.2 readiness and develop a migration plan with the testing described in [Solving the TLS 1.0 Problem](/security/solving-tls1-problem).

## Shared Key authentication
- Primary and secondary keys are generated after the Azure Maps account is created. You're encouraged to use the primary key as the subscription key when calling Azure Maps with shared key authentication. Shared Key authentication passes a key generated by an Azure Maps account to an Azure Maps service. For each request to Azure Maps services, add the *subscription key* as a parameter to the URL. The secondary key can be used in scenarios like rolling key changes.
+For information about viewing your keys in the Azure portal, see [Manage authentication](./how-to-manage-authentication.md#view-authentication-details).
+
+Primary and secondary keys are generated after the Azure Maps account is created. You're encouraged to use the primary key as the subscription key when calling Azure Maps with shared key authentication. Shared Key authentication passes a key generated by an Azure Maps account to an Azure Maps service. For each request to Azure Maps services, add the _subscription key_ as a parameter to the URL. The secondary key can be used in scenarios like rolling key changes.
-Example using the *subscription key* as a parameter in your URL:
+Example using the _subscription key_ as a parameter in your URL:
```http
https://atlas.microsoft.com/mapData/upload?api-version=1.0&dataFormat=zip&subscription-key={Azure-Maps-Primary-Subscription-key}
-```
-
-For information about viewing your keys in the Azure portal, see [Manage authentication](./how-to-manage-authentication.md#view-authentication-details).
+```
-> [!NOTE]
-> Primary and Secondary keys should be treated as sensitive data. The shared key is used to authenticate all Azure Maps REST APIs. Users who use a shared key should abstract the API key away, either through environment variables or secure secret storage, where it can be managed centrally.
+> [!IMPORTANT]
+> Primary and Secondary keys should be treated as sensitive data. The shared key is used to authenticate all Azure Maps REST APIs. Users who use a shared key should abstract the API key away, either through environment variables or secure secret storage, where it can be managed centrally.
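One minimal way to follow that guidance is to resolve the key from an environment variable at request time rather than embedding it in source. A sketch (the variable name `AZURE_MAPS_KEY` is an assumption, not an Azure convention; the URL mirrors the upload example above):

```python
import os
from urllib.parse import urlencode

def maps_upload_url(api_version: str = "1.0", data_format: str = "zip") -> str:
    # Pull the shared key from the environment instead of source code,
    # as the note above recommends. AZURE_MAPS_KEY is an assumed name.
    key = os.environ["AZURE_MAPS_KEY"]
    query = urlencode({
        "api-version": api_version,
        "dataFormat": data_format,
        "subscription-key": key,
    })
    return f"https://atlas.microsoft.com/mapData/upload?{query}"
```

The same pattern applies to any secret store; swap `os.environ` for a call to your vault client so the key is still managed centrally.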
## Azure AD authentication
Azure Subscriptions are provided with an Azure AD tenant to enable fine grained
Azure Maps accepts **OAuth 2.0** access tokens for Azure AD tenants associated with an Azure subscription that contains an Azure Maps account. Azure Maps also accepts tokens for:
-* Azure AD users
-* Partner applications that use permissions delegated by users
-* Managed identities for Azure resources
+- Azure AD users
+- Partner applications that use permissions delegated by users
+- Managed identities for Azure resources
-Azure Maps generates a *unique identifier (client ID)* for each Azure Maps account. You can request tokens from Azure AD when you combine this client ID with additional parameters.
+Azure Maps generates a _unique identifier_ (client ID) for each Azure Maps account. You can request tokens from Azure AD when you combine this client ID with additional parameters.
For more information about how to configure Azure AD and request tokens for Azure Maps, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md).
-For general information about authenticating with Azure AD, see [What is authentication?](../active-directory/develop/authentication-vs-authorization.md).
+For general information about authenticating with Azure AD, see [Authentication vs. authorization](../active-directory/develop/authentication-vs-authorization.md).
-### Managed identities for Azure resources and Azure Maps
+## Managed identities for Azure resources and Azure Maps
-[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) provide Azure services with an automatically managed application based security principal that can authenticate with Azure AD. With Azure role-based access control (Azure RBAC), the managed identity security principal can be authorized to access Azure Maps services. Some examples of managed identities include: Azure App Service, Azure Functions, and Azure Virtual Machines. For a list of managed identities, see [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
+[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) provide Azure services with an automatically managed application based security principal that can authenticate with Azure AD. With Azure role-based access control (Azure RBAC), the managed identity security principal can be authorized to access Azure Maps services. Some examples of managed identities include: Azure App Service, Azure Functions, and Azure Virtual Machines. For a list of managed identities, see [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md). To add and remove managed identities, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md).
-### Configuring application Azure AD authentication
+### Configure application Azure AD authentication
Applications will authenticate with the Azure AD tenant using one or more supported scenarios provided by Azure AD. Each Azure AD application scenario represents different requirements based on business needs. Some applications may require user sign-in experiences and other applications may require an application sign-in experience. For more information, see [Authentication flows and application scenarios](../active-directory/develop/authentication-flows-app-scenarios.md).
After the application receives an access token, the SDK and/or application sends
| x-ms-client-id | 30d7cc….9f55 |
| Authorization | Bearer eyJ0e….HNIVN |
-> [!NOTE]
+> [!NOTE]
> `x-ms-client-id` is the Azure Maps account-based GUID that appears on the Azure Maps authentication page.

Here's an example of an Azure Maps route request that uses an Azure AD OAuth Bearer token:
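As a hedged sketch of how such a request could be assembled with the Python standard library (the token and client ID are placeholders you would obtain via Azure AD, and the route query coordinates are illustrative; nothing is sent until the request is opened):

```python
from urllib.request import Request

def authed_request(url: str, access_token: str, client_id: str) -> Request:
    # Attach the two headers from the table above. Values here are
    # placeholders; urlopen() would be needed to actually send it.
    return Request(url, headers={
        "x-ms-client-id": client_id,
        "Authorization": f"Bearer {access_token}",
    })

req = authed_request(
    "https://atlas.microsoft.com/route/directions/json"
    "?api-version=1.0&query=52.50931,13.42936:52.50274,13.43872",
    access_token="<token>",
    client_id="<client-id>",
)
```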
For information about viewing your client ID, see [View authentication details](
## Authorization with role-based access control
-Azure Maps supports access to all principal types for [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) including: individual Azure AD users, groups, applications, Azure resources, and Azure Managed identities. Principal types are granted a set of permissions, also known as a role definition. A role definition provides permissions to REST API actions. Applying access to one or more Azure Maps accounts is known as a scope. When applying a principal, role definition, and scope then a role assignment is created.
+### Prerequisites
+
+If you are new to Azure RBAC, see the [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) overview. Principal types are granted a set of permissions, also known as a role definition. A role definition provides permissions to REST API actions. Azure Maps supports access for all principal types, including individual Azure AD users, groups, applications, Azure resources, and Azure managed identities. Applying access to one or more Azure Maps accounts is known as a scope. When a principal, role definition, and scope are applied, a role assignment is created.
+
+### Overview
The next sections discuss concepts and components of Azure Maps integration with Azure RBAC. As part of the process to set up your Azure Maps account, an Azure AD directory is associated to the Azure subscription, which the Azure Maps account resides.
When you configure Azure RBAC, you choose a security principal and apply it to a
The following role definition types exist to support application scenarios.
-| Azure Role Definition | Description |
-| :-- | :- |
-| Azure Maps Data Reader | Provides access to immutable Azure Maps REST APIs. |
-| Azure Maps Data Contributor | Provides access to mutable Azure Maps REST APIs. Mutability is defined by the actions: write and delete. |
-| Custom Role Definition | Create a crafted role to enable flexible restricted access to Azure Maps REST APIs. |
+| Azure Role Definition | Description |
+| :--- | :--- |
+| Azure Maps Search and Render Data Reader | Provides access to only search and render Azure Maps REST APIs to limit access to basic web browser use cases. |
+| Azure Maps Data Reader | Provides access to immutable Azure Maps REST APIs. |
+| Azure Maps Data Contributor | Provides access to mutable Azure Maps REST APIs. Mutability is defined by the actions: write and delete. |
+| Custom Role Definition | Create a crafted role to enable flexible restricted access to Azure Maps REST APIs. |
Some Azure Maps services may require elevated privileges to perform write or delete actions on Azure Maps REST APIs. Azure Maps Data Contributor role is required for services, which provide write or delete actions. The following table describes what services Azure Maps Data Contributor is applicable when using write or delete actions. When only read actions are required, the Azure Maps Data Reader role can be used in place of the Azure Maps Data Contributor role.
-| Azure Maps Service | Azure Maps Role Definition |
-| :-- | :-- |
-| Data | Azure Maps Data Contributor |
-| Creator | Azure Maps Data Contributor |
-| Spatial | Azure Maps Data Contributor |
+| Azure Maps Service | Azure Maps Role Definition |
+| :--- | :--- |
+| [Data](/rest/api/maps/data) | Azure Maps Data Contributor |
+| [Creator](/rest/api/maps-creator/) | Azure Maps Data Contributor |
+| [Spatial](/rest/api/maps/spatial) | Azure Maps Data Contributor |
+| Batch [Search](/rest/api/maps/search) and [Route](/rest/api/maps/route) | Azure Maps Data Contributor |
For information about viewing your Azure RBAC settings, see [How to configure Azure RBAC for Azure Maps](./how-to-manage-authentication.md).
The custom role definition can then be used in a role assignment for any securit
Here are some example scenarios where custom roles can improve application security.
-| Scenario | Custom Role Data Action(s) |
-| :-- | : |
-| A public facing or interactive sign-in web page with base map tiles and no other REST APIs. | `Microsoft.Maps/accounts/services/render/read` |
-| An application, which only requires reverse geocoding and no other REST APIs. | `Microsoft.Maps/accounts/services/search/read` |
-| A role for a security principal, which requests reading of Azure Maps Creator based map data and base map tile REST APIs. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/render/read` |
+| Scenario | Custom Role Data Action(s) |
+| :--- | :--- |
+| A public facing or interactive sign-in web page with base map tiles and no other REST APIs. | `Microsoft.Maps/accounts/services/render/read` |
+| An application, which only requires reverse geocoding and no other REST APIs. | `Microsoft.Maps/accounts/services/search/read` |
+| A role for a security principal, which requests reading of Azure Maps Creator based map data and base map tile REST APIs. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/render/read` |
| A role for a security principal, which requires reading, writing, and deleting of Creator based map data. This can be defined as a map data editor role, but does not allow access to other REST APIs like base map tiles. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/data/write`, `Microsoft.Maps/accounts/services/data/delete` |
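For example, the second scenario above (reverse geocoding only) could be expressed as a custom role definition along these lines; this is a sketch, and the role name and subscription ID placeholder are illustrative:

```json
{
  "Name": "Azure Maps Reverse Geocode Reader (example)",
  "IsCustom": true,
  "Description": "Read-only access to Azure Maps Search APIs.",
  "Actions": [],
  "DataActions": [
    "Microsoft.Maps/accounts/services/search/read"
  ],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```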
-### Understanding scope
+### Understand scope
When creating a role assignment, it is defined within the Azure resource hierarchy. At the top of the hierarchy is a [management group](../governance/management-groups/overview.md) and the lowest is an Azure resource, like an Azure Maps account. Assigning a role assignment to a resource group can enable access to multiple Azure Maps accounts or resources in the group.
> [!TIP]
> Microsoft's general recommendation is to assign access to the Azure Maps account scope because it prevents **unintended access to other Azure Maps accounts** existing in the same Azure subscription.
-## Next steps
+## Disable local authentication
+
+Azure Maps accounts support the standard Azure property in the [Azure Maps Management REST API](/rest/api/maps-management/) for `Microsoft.Maps/accounts` called `disableLocalAuth`. When `true`, all authentication to the Azure Maps data-plane REST API is disabled, except [Azure AD authentication](./azure-maps-authentication.md#azure-ad-authentication). This is configured using Azure Policy to control distribution and management of shared keys and SAS tokens. For more information, see [What is Azure Policy?](../governance/policy/overview.md).
+
+Disabling local authentication doesn't take effect immediately. Allow a few minutes for the service to block future authentication requests. To re-enable local authentication, set the property to `false` and after a few minutes local authentication will resume.
+
+```json
+{
+ // omitted other properties for brevity.
+ "properties": {
+ "disableLocalAuth": true
+ }
+}
+```
+
+## Shared access signature token authentication
+Shared Access Signature token authentication is in preview.
+
+Shared access signature (SAS) tokens are authentication tokens created using the JSON Web token (JWT) format and are cryptographically signed to prove authentication for an application to the Azure Maps REST API. A SAS token is created by first integrating a [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview) with an Azure Maps account in your Azure subscription. The user-assigned managed identity is given authorization to the Azure Maps account through Azure RBAC using one of the built-in or custom role definitions.
+
+Key functional differences between SAS tokens and Azure AD access tokens:
+
+- Token lifetime with a maximum expiration of one year (365 days).
+- Azure location and geography access control per token.
+- Rate limits per token of approximately 1 to 500 requests per second.
+- Private keys of the token are the primary and secondary keys of an Azure Maps account resource.
+- The service principal object for authorization is supplied by a user-assigned managed identity.
+
+SAS tokens are immutable: once a token is created, it's valid until its expiry, and its allowed regions, rate limits, and user-assigned managed identity can't be changed. For SAS token revocation and changes to access control, see [Understand SAS token access control](./azure-maps-authentication.md#understand-sas-token-access-control).
+
+### Understand SAS token rate limits
+
+#### SAS token maximum rate limit can control billing for an Azure Maps resource
+
+By specifying a maximum rate limit on the token (`maxRatePerSecond`), the excess rate isn't billed to the account, allowing you to set an upper limit of billable transactions for the account when using the token. However, the application will receive `429 (TooManyRequests)` client error responses for all transactions once that limit is reached. It's the responsibility of the application to manage retries and the distribution of SAS tokens. There's no limit on how many SAS tokens can be created for an account. To increase or decrease an existing token's limit, a new SAS token must be created; remember that the old SAS token remains valid until its expiration.
+
+Estimated example:
+
+| Approximate Maximum Rate Per Second | Actual Rate Per Second | Duration of sustained rate in seconds | Total billable transactions |
+| :- | : | : | :-- |
+| 10 | 20 | 600 | 6000 |
+
+This is an estimate; actual rate limits vary slightly based on Azure Maps' ability to enforce consistency within a span of time. However, this allows for preventive control of billing costs.
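The billing arithmetic in the example above can be sketched in a few lines. This is an illustrative model only; the function and its names aren't part of any Azure Maps SDK, and actual enforcement varies slightly as noted.

```python
def billable_transactions(max_rate_per_second, actual_rate_per_second, duration_seconds):
    """Estimate billed vs. throttled requests when a SAS token caps
    throughput at max_rate_per_second (excess requests get HTTP 429)."""
    total = actual_rate_per_second * duration_seconds
    billed = min(actual_rate_per_second, max_rate_per_second) * duration_seconds
    throttled = total - billed  # rejected with 429 (TooManyRequests), not billed
    return billed, throttled

# The table above: max rate 10, actual rate 20, sustained for 600 seconds.
billed, throttled = billable_transactions(10, 20, 600)
print(billed, throttled)  # 6000 6000
```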
+
+#### Rate limits are enforced per Azure location, not globally or geographically
+
+For example, a single SAS token with a `maxRatePerSecond` of 10 can be used to limit the throughput in the `East US` location. If that same token is used in `West US 2`, a new counter is created to limit the throughput to 10 in that location, independent of the `East US` location. To control costs and improve predictability, we recommend that you:
+
+1. Create SAS tokens with designated allowed Azure locations for the targeted geography. For more information, see [Create SAS tokens](./azure-maps-authentication.md#create-sas-tokens).
+1. Use the geographic data-plane REST API endpoints, `https://us.atlas.microsoft.com` or `https://eu.atlas.microsoft.com`.
+
+Consider the application topology where the endpoint `https://us.atlas.microsoft.com` routes to the same US locations where the Azure Maps services are hosted, such as `East US`, `West Central US`, or `West US 2`. The same idea applies to other geographic endpoints such as `https://eu.atlas.microsoft.com` between `West Europe` and `North Europe`. To prevent unexpected authorization denials, use a SAS token that allows the same Azure locations that the application consumes. The endpoint location is defined using the Azure Maps Management REST API.
+
+#### Default rate limits take precedence over SAS token rate limits
+
+As described in [Azure Maps rate limits](./azure-maps-qps-rate-limits.md), individual service offerings have varying rate limits which are enforced as an aggregate of the account.
+
+Consider the case of **Search Service - Non-Batch Reverse**, which has a limit of 250 queries per second (QPS), in the following tables. Each table represents estimated total successful transactions from example usage.
+
+The first table shows one token that has a maximum of 500 requests per second, while the actual usage of the application was 500 requests per second for a duration of 60 seconds. Because **Search Service - Non-Batch Reverse** has a rate limit of 250, only 15,000 of the total 30,000 requests made in those 60 seconds are billable transactions. The remaining requests result in status code `429 (TooManyRequests)`.
+
+| Name | Approximate Maximum Rate Per Second | Actual Rate Per Second | Duration of sustained rate in seconds | Approximate total successful transactions |
+| :- | :- | : | : | :- |
+| token | 500 | 500 | 60 | ~15000 |
+
+For example, if two SAS tokens are created in, and use, the same location as an Azure Maps account, each token now shares the default rate limit of 250 QPS. If both tokens are used at the same time with the same throughput, token 1 and token 2 would each grant approximately 7,500 successful transactions.
+
+| Name | Approximate Maximum Rate Per Second | Actual Rate Per Second | Duration of sustained rate in seconds | Approximate total successful transactions |
+| : | :- | : | : | :- |
+| token 1 | 250 | 250 | 60 | ~7500 |
+| token 2 | 250 | 250 | 60 | ~7500 |
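The sharing behavior in the two tables above can be modeled as follows. This is a simplified, hypothetical model (the even split only holds while every token sends at or above its share); it isn't an Azure Maps API.

```python
def successful_transactions(service_limit_qps, token_rates, duration_seconds):
    """Estimate per-token successful transactions when several SAS tokens
    share one aggregate service rate limit, assuming the limit splits
    evenly while all tokens send at or above their share."""
    share = service_limit_qps / len(token_rates)
    return [min(rate, share) * duration_seconds for rate in token_rates]

# One token sending 500 QPS against the 250 QPS service limit:
print(successful_transactions(250, [500], 60))       # [15000.0]
# Two tokens, each sending 250 QPS, share the same 250 QPS limit:
print(successful_transactions(250, [250, 250], 60))  # [7500.0, 7500.0]
```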
+
+### Understand SAS token access control
+
+SAS tokens use RBAC to control access to the REST API. When you create a SAS token, the prerequisite managed identity on the Map Account is assigned an Azure RBAC role which grants access to specific REST API actions. See [Picking a role definition](./azure-maps-authentication.md#picking-a-role-definition) to determine which API should be allowed by the application.
+
+If you want to grant temporary access and remove that access before the SAS token expires, revoke the token. Other reasons to revoke access include a token unintentionally distributed with the `Azure Maps Data Contributor` role assignment: anyone holding that SAS token can read and write data to the Azure Maps REST APIs, which may expose sensitive data or incur unexpected financial cost from usage.
+
+There are two options to revoke access for SAS tokens:
+
+1. Regenerate the key that was used to sign the SAS token: the `primaryKey` or `secondaryKey` of the map account.
+1. Remove the role assignment for the managed identity on the associated map account.
+
+> [!WARNING]
+> Deleting a managed identity used by a SAS token, or revoking the managed identity's access control, will cause instances of your application that use the SAS token and managed identity to receive `401 Unauthorized` or `403 Forbidden` from Azure Maps REST APIs, which will disrupt the application.
+>
+> To avoid disruption:
+>
+> 1. Add a second managed identity to the Map Account and grant the new managed identity the correct role assignment.
+> 1. Create a SAS token using `secondaryKey` as the `signingKey` and distribute the new SAS token to the application.
+> 1. Regenerate the primary key, remove the managed identity from the account, and remove the role assignment for the managed identity.
+### Create SAS tokens
+
+To create SAS tokens, you must have the `Contributor` role to manage both Azure Maps accounts and user-assigned identities in the Azure subscription.
+
+> [!IMPORTANT]
+> Existing Azure Maps accounts created in the Azure location `global` don't support managed identities.
-To learn more about Azure RBAC, see
-> [!div class="nextstepaction"]
-> [Azure role-based access control](../role-based-access-control/overview.md)
+First, you should [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) in the same location as the Azure Maps account.
+
+> [!TIP]
+> You should use the same location for both the managed identity and the Azure Maps account.
+
+Once a managed identity is created, you can create or update the Azure Maps account and attach it. See [Manage your Azure Maps account](./how-to-manage-account-keys.md) for more information.
+
+After the account has been successfully created or updated with the managed identity, assign role-based access control for the managed identity to an Azure Maps data role at the account scope. This gives the managed identity access to the Azure Maps REST API for your map account.
+
+Next, create a SAS token using the Azure Management SDK tooling, the List SAS operation on the Account Management API, or the **Shared Access Signature** page of the Map account resource in the Azure portal.
+
+SAS token parameters:
+
+| Parameter Name | Example Value | Description |
+| : | :-- | :- |
+| signingKey | `primaryKey` | Required, the string enum value for the signingKey either `primaryKey` or `secondaryKey` is used to create the signature of the SAS. |
+| principalId | `<GUID>` | Required, the principalId is the Object (principal) id of the user-assigned managed identity attached to the map account. |
+| regions | `[ "eastus", "westus2", "westcentralus" ]` | Optional, the default value is `null`. The regions control where the SAS token is allowed to be used in the Azure Maps REST [data-plane](../azure-resource-manager/management/control-plane-and-data-plane.md) API. Omitting the regions parameter allows the SAS token to be used without any constraints. When used in combination with an Azure Maps data-plane geographic endpoint like `us.atlas.microsoft.com` or `eu.atlas.microsoft.com`, the application can control usage within the specified geography, preventing usage in other geographies. |
+| maxRatePerSecond | 500 | Required, the approximate maximum number of requests per second that the SAS token is granted. Once the limit is reached, additional throughput is rate limited with HTTP status code `429 (TooManyRequests)`. |
+| start | `2021-05-24T10:42:03.1567373Z` | Required, a UTC date that specifies the date and time the token becomes active. |
+| expiry | `2021-05-24T11:42:03.1567373Z` | Required, a UTC date that specifies the date and time the token expires. The duration between start and expiry cannot be more than 365 days. |
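As a sketch, a List SAS request body combining the parameters above might look like the following; the principal ID is a placeholder, and the exact request shape is defined by the Account Management API reference:

```json
{
  "signingKey": "primaryKey",
  "principalId": "00000000-0000-0000-0000-000000000000",
  "regions": [ "eastus", "westus2", "westcentralus" ],
  "maxRatePerSecond": 500,
  "start": "2021-05-24T10:42:03.1567373Z",
  "expiry": "2021-05-24T11:42:03.1567373Z"
}
```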
+
+### Configure an application with a SAS token
+
+After the application receives a SAS token, the Azure Maps SDK and/or applications send an HTTPS request with the following required HTTP header in addition to other REST API HTTP headers:
+
+| Header Name | Value |
+| : | :- |
+| Authorization | jwt-sas eyJ0e….HNIVN |
+
+> [!NOTE]
> `jwt-sas` is the authentication scheme that denotes use of a SAS token. Don't include `x-ms-client-id`, other `Authorization` headers, or the `subscription-key` query string parameter.
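A minimal sketch of building the request headers, assuming `sas_token` holds a token returned by the List SAS operation (the token value below is a placeholder, not a real SAS token):

```python
def sas_auth_header(sas_token):
    """Build the Authorization header for a SAS token request.
    Note the 'jwt-sas' scheme rather than the usual 'Bearer'."""
    return {"Authorization": "jwt-sas " + sas_token}

headers = sas_auth_header("eyJ0eXAiOiJKV1QifQ.example.signature")
print(headers["Authorization"])  # jwt-sas eyJ0eXAiOiJKV1QifQ.example.signature
```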
+
+## Cross origin resource sharing (CORS)
+Cross Origin Resource Sharing (CORS) is in preview.
+
+### Prerequisites
+
+To prevent malicious code execution on the client, modern browsers block requests from web applications to resources running in a separate domain.
+
+If you're unfamiliar with CORS, see [Cross-origin resource sharing (CORS)](https://developer.mozilla.org/docs/Web/HTTP/CORS): it lets an `Access-Control-Allow-Origin` header declare which origins are allowed to call endpoints of an Azure Maps account. The CORS protocol isn't specific to Azure Maps.
+
+### Account CORS
+
+[CORS](https://fetch.spec.whatwg.org/#http-cors-protocol) is an HTTP protocol that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as the [same-origin policy](https://www.w3.org/Security/wiki/Same_Origin_Policy) that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain. The Azure Maps account resource supports configuring the allowed origins from which your app can access the Azure Maps REST API.
+
+> [!IMPORTANT]
+> CORS is not an authorization mechanism. Any request made to a map account using REST API, when CORS is enabled, also needs a valid map account authentication scheme such as Shared Key, Azure AD, or SAS token.
+>
+> CORS is supported for all map account pricing tiers, data-plane endpoints, and locations.
+
+### CORS requests
+
+A CORS request from an origin domain may consist of two separate requests:
+
+- A preflight request, which queries the CORS restrictions imposed by the service. The preflight request is required unless the request uses a standard method (GET, HEAD, or POST) or contains the `Authorization` request header.
+
+- The actual request, made against the desired resource.
+
+### Preflight request
+
+The preflight request serves two purposes. As a security measure, it ensures that the server understands the method and headers that will be sent in the actual request, and that the server knows and trusts the source of the request. It also queries the CORS restrictions that have been established for the map account. The web browser (or other user agent) sends an OPTIONS request that includes the request headers, method, and origin domain. The map account service tries to fetch any CORS rules if account authentication is possible through the CORS preflight protocol.
+
+If authentication isn't possible, the maps service evaluates a pre-configured set of CORS rules that specify which origin domains, request methods, and request headers may be specified on an actual request against the maps service. By default, a maps account is configured to allow all origins to enable seamless integration into web browsers.
+
+The service will respond to the preflight request with the required Access-Control headers if the following criteria are met:
+
+1. The OPTIONS request contains the required CORS headers (the `Origin` and `Access-Control-Request-Method` headers).
+1. Either authentication succeeded and a CORS rule enabled for the account matches the preflight request, or authentication was skipped because the required `Authorization` request headers can't be specified on a preflight request.
+
+When the preflight request is successful, the service responds with status code `200 (OK)` and includes the required Access-Control headers in the response.
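As an illustrative sketch (the request path and origin are placeholders, not prescribed by this article), a successful preflight exchange might look like:

```http
OPTIONS /search/address/json?api-version=1.0 HTTP/1.1
Host: us.atlas.microsoft.com
Origin: https://www.azure.com
Access-Control-Request-Method: GET

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://www.azure.com
Access-Control-Allow-Methods: GET
```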
+
+The service will reject preflight requests if the following conditions occur:
+
+1. If the OPTIONS request doesn't contain the required CORS headers (the `Origin` and `Access-Control-Request-Method` headers), the service will respond with status code `400 (Bad request)`.
+1. If authentication was successful on preflight request and no CORS rule matches the preflight request, the service will respond with status code `403 (Forbidden)`. This may occur if the CORS rule is configured to accept an origin which does not match the current browser client origin request header.
+
+> [!NOTE]
+> A preflight request is evaluated against the service and not against the requested resource. The account owner must have enabled CORS by setting the appropriate account properties in order for the request to succeed.
+
+### Actual request
+
+Once the preflight request is accepted and the response is returned, the browser will dispatch the actual request against the map service. The browser will deny the actual request immediately if the preflight request is rejected.
+
+The actual request is treated as a normal request against the map service. The presence of the `Origin` header indicates that the request is a CORS request and the service will then validate against the CORS rules. If a match is found, the Access-Control headers are added to the response and sent back to the client. If a match is not found, the response will return a `403 (Forbidden)` indicating a CORS origin error.
+
+### Enable CORS policy
+
+When you create or update a Map account, its properties can specify the allowed origins to be configured. You can set a CORS rule on the Azure Maps account properties through the Azure Maps Management SDK, the Azure Maps Management REST API, or the portal. Once you set the CORS rule for the service, a properly authorized request made to the service from a different domain is evaluated to determine whether it's allowed according to the rule you've specified. For example:
+
+```json
+{
+ "location": "eastus",
+ "sku": {
+ "name": "G2"
+ },
+ "kind": "Gen2",
+ "properties": {
+ "cors": {
+ "corsRules": [
+ {
+ "allowedOrigins": [
+ "https://www.azure.com",
+ "https://www.microsoft.com"
+ ]
+ }
+ ]
+ }
+ }
+}
+```
+
+Only one CORS rule, with its list of allowed origins, can be specified. Each allowed origin permits web browsers on that origin to make HTTP requests to the Azure Maps REST API.
+
+### Remove CORS policy
+
+You can remove CORS manually in the Azure portal, or programmatically using the Azure Maps SDK, Azure Maps management REST API or an [ARM template](/azure/azure-resource-manager/templates/overview).
+
+> [!TIP]
+> If you use the Azure Maps management REST API, use `PUT` or `PATCH` with an empty `corsRules` list in the request body.
+
+```json
+{
+ "location": "eastus",
+ "sku": {
+ "name": "G2"
+ },
+ "kind": "Gen2",
+ "properties": {
+ "cors": {
+ "corsRules": []
+ }
+ }
+}
+```
+
+## Understand billing transactions
+
+Azure Maps does not count billing transactions for:
+
+- 5xx HTTP Status Codes
+- 401 (Unauthorized)
+- 403 (Forbidden)
+- 429 (TooManyRequests)
+- CORS preflight requests
+
+For more information on billing transactions and other Azure Maps pricing, see [Azure Maps pricing](https://azure.microsoft.com/pricing/details/azure-maps).
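The exclusion list above can be summarized as a small helper. This is an illustrative sketch of the stated rules, not an official billing calculator; CORS preflight requests are also excluded but can't be detected from a status code alone.

```python
NON_BILLABLE_STATUS = {401, 403, 429}

def is_billable(status_code):
    """Apply the rules above: 5xx responses and 401/403/429
    are not counted as billable transactions."""
    if 500 <= status_code <= 599:
        return False
    return status_code not in NON_BILLABLE_STATUS

print([is_billable(s) for s in (200, 404, 429, 500)])  # [True, True, False, False]
```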
+
+## Next steps
To learn more about authenticating an application with Azure AD and Azure Maps, see
-> [!div class="nextstepaction"]
+
+> [!div class="nextstepaction"]
> [Manage authentication in Azure Maps](./how-to-manage-authentication.md)

To learn more about authenticating the Azure Maps Map Control with Azure AD, see
-> [!div class="nextstepaction"]
-> [Use the Azure Maps Map Control](./how-to-use-map-control.md)
+
+> [!div class="nextstepaction"]
+> [Use the Azure Maps Map Control](./how-to-use-map-control.md)
azure-maps How To Manage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-manage-account-keys.md
You can manage your Azure Maps account through the Azure portal. After you have an account, you can implement the APIs in your website or mobile application.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+## Prerequisites
+
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you continue.
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](../active-directory/managed-identities-azure-resources/overview.md) before picking an account location.
+
+## Account location
+
+Picking a location for your Azure Maps account that aligns with other resources in your subscription, like managed identities, may help to improve the level of service for [control-plane](../azure-resource-manager/management/control-plane-and-data-plane.md) operations.
+
+As an example, the managed identity infrastructure communicates with and notifies the Azure Maps management services of changes to the identity resource, such as credential renewal or deletion. Sharing the same Azure location enables consistent infrastructure provisioning for all resources.
+
+Azure Maps REST APIs on the endpoint `atlas.microsoft.com`, `*.atlas.microsoft.com`, or other endpoints belonging to the Azure data-plane aren't affected by the choice of the Azure Maps account location.
+
+For more information about data-plane service coverage for Azure Maps services, see [geographic coverage](./geographic-coverage.md).
## Create a new account
azure-maps How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-manage-authentication.md
custom.ms: subject-rbac-steps
# Manage authentication in Azure Maps
-When you create an Azure Maps account, keys and a client ID are generated. The keys and client ID are used to support Azure Active Directory (Azure AD) authentication and Shared Key authentication.
+When you create an Azure Maps account, your client ID is automatically generated along with primary and secondary keys that are required for authentication when using [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) or [Shared Key authentication](./azure-maps-authentication.md#shared-key-authentication).
+
+## Prerequisites
+
+Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- Familiarity with [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). Be sure to understand the two [managed identity types](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and how they differ.
+- [An Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
+- Familiarity with [Azure Maps authentication](./azure-maps-authentication.md).
## View authentication details
- > [!IMPORTANT]
- > We recommend that you use the primary key as the subscription key when you use [Shared Key authentication](./azure-maps-authentication.md#shared-key-authentication) to call Azure Maps. It's best to use the secondary key in scenarios like rolling key changes. For more information, see [Authentication with Azure Maps](./azure-maps-authentication.md).
+> [!IMPORTANT]
+> We recommend that you use the primary key as the subscription key when you use Shared Key authentication to call Azure Maps. It's best to use the secondary key in scenarios like rolling key changes.
To view your Azure Maps authentication details: 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Navigate to the Azure portal menu. Select **All resources**, and then select your Azure Maps account.
+2. Select **All resources** in the **Azure services** section, then select your Azure Maps account.
- :::image type="content" border="true" source="./media/how-to-manage-authentication/select-all-resources.png" alt-text="Select Azure Maps account.":::
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/select-all-resources.png" alt-text="Select Azure Maps account.":::
-3. Under **Settings** in the left pane, select **Authentication**.
+3. Select **Authentication** in the settings section of the left pane.
- :::image type="content" border="true" source="./media/how-to-manage-authentication/view-authentication-keys.png" alt-text="Authentication details.":::
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/view-authentication-keys.png" alt-text="Authentication details.":::
## Choose an authentication category Depending on your application needs, there are specific pathways to application security. Azure AD defines specific authentication categories to support a wide range of authentication flows. To choose the best category for your application, see [application categories](../active-directory/develop/authentication-flows-app-scenarios.md#application-categories). > [!NOTE]
-> Even if you use shared key authentication, understanding categories and scenarios helps you to secure the application.
+> Understanding categories and scenarios will help you secure your Azure Maps application, whether you use Azure Active Directory or shared key authentication.
+
+## How to add and remove managed identities
+
+To enable [Shared access signature (SAS) token authentication](./azure-maps-authentication.md#shared-access-signature-token-authentication) with the Azure Maps REST API you need to add a user-assigned managed identity to your Azure Maps account.
+
+### Create a managed identity
+
+You can create a user-assigned managed identity before or after creating a map account. You can add the managed identity through the portal, Azure management SDKs, or the Azure Resource Manager (ARM) template. To add a user-assigned managed identity through an ARM template, specify the resource identifier of the user-assigned managed identity. See example below:
+
+```json
+"identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example/providers/Microsoft.ManagedIdentity/userAssignedIdentities/exampleidentity": {}
+ }
+}
+```
+
+### Remove a managed identity
+
+You can remove a system-assigned identity by disabling the feature through the portal or the Azure Resource Manager template in the same way that it was created. User-assigned identities can be removed individually. To remove all identities, set the identity type to `"None"`.
+
+Removing a system-assigned identity in this way will also delete it from Azure AD. System-assigned identities are also automatically removed from Azure AD when the Azure Maps account is deleted.
+
+To remove all identities by using the Azure Resource Manager template, update this section:
+
+```json
+"identity": {
+ "type": "None"
+}
+```
## Choose an authentication and authorization scenario
-This table outlines common authentication and authorization scenarios in Azure Maps. Use the links to learn detailed configuration information for each scenario.
+This table outlines common authentication and authorization scenarios in Azure Maps. Each scenario describes a type of app that can be used to access the Azure Maps REST API. Use the links to learn detailed configuration information for each scenario.
> [!IMPORTANT] > For production applications, we recommend implementing Azure AD with Azure role-based access control (Azure RBAC).
-| Scenario | Authentication | Authorization | Development effort | Operational effort |
-| - | -- | - | | |
-| [Trusted daemon / non-interactive client application](./how-to-secure-daemon-app.md) | Shared Key | N/A | Medium | High |
-| [Trusted daemon / non-interactive client application](./how-to-secure-daemon-app.md) | Azure AD | High | Low | Medium |
-| [Web single page application with interactive single-sign-on](./how-to-secure-spa-users.md) | Azure AD | High | Medium | Medium |
-| [Web single page application with non-interactive sign-on](./how-to-secure-spa-app.md) | Azure AD | High | Medium | Medium |
-| [Web application with interactive single-sign-on](./how-to-secure-webapp-users.md) | Azure AD | High | High | Medium |
-| [IoT device / input constrained device](./how-to-secure-device-code.md) | Azure AD | High | Medium | Medium |
+| Scenario | Authentication | Authorization | Development effort | Operational effort |
+| -- | -- | - | | |
+| [Trusted daemon app or non-interactive client app](./how-to-secure-daemon-app.md) | Shared Key | N/A | Medium | High |
+| [Trusted daemon or non-interactive client app](./how-to-secure-daemon-app.md) | Azure AD | High | Low | Medium |
+| [Web single page app with interactive single-sign-on](./how-to-secure-spa-users.md) | Azure AD | High | Medium | Medium |
+| [Web single page app with non-interactive sign-on](./how-to-secure-spa-app.md) | Azure AD | High | Medium | Medium |
+| [Web app, daemon app, or non-interactive sign-on app](./how-to-secure-sas-app.md) | SAS Token | High | Medium | Low |
+| [Web application with interactive single-sign-on](./how-to-secure-webapp-users.md) | Azure AD | High | High | Medium |
+| [IoT device or an input constrained application](./how-to-secure-device-code.md) | Azure AD | High | Medium | Medium |
## View built-in Azure Maps role definitions
Request a token from the Azure AD token endpoint. In your Azure AD request, use
| Azure public cloud | `https://login.microsoftonline.com` | `https://atlas.microsoft.com/` | | Azure Government cloud | `https://login.microsoftonline.us` | `https://atlas.microsoft.com/` |
-For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md). To view specific scenarios, see [the table of scenarios](./how-to-manage-authentication.md#choose-an-authentication-and-authorization-scenario).
+For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md). To view specific scenarios, see [the table of scenarios](./how-to-manage-authentication.md#choose-an-authentication-and-authorization-scenario).
## Manage and rotate shared keys
To rotate your Azure Maps subscription keys in the Azure portal:
## Next steps

Find the API usage metrics for your Azure Maps account:

> [!div class="nextstepaction"]
> [View usage metrics](how-to-view-api-usage.md)
azure-maps How To Secure Sas App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-secure-sas-app.md
+
+ Title: How to secure an application in Microsoft Azure Maps with SAS token
+
+description: This article describes how to configure an application to be secured with SAS token authentication.
+Last updated: 01/05/2022
+custom.ms: subject-rbac-steps
+# Secure an application with SAS token
+
+This article describes how to create an Azure Maps account with a SAS token that can be used to call the Azure Maps REST API.
+
+## Prerequisites
+
+This scenario assumes:
+
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you continue.
+- The current user has subscription `Owner` role permissions on the Azure subscription, which are needed to create an [Azure Key Vault](/azure/key-vault/general/basic-concepts) and a user-assigned managed identity, assign the managed identity a role, and create an Azure Maps account.
+- Azure CLI is installed to deploy the resources. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+- The current user is signed in to Azure CLI with an active Azure subscription using `az login`.
+
+## Scenario: SAS token
+
+Applications that use SAS token authentication should store the keys in a secure store. A SAS token is a credential that grants the level of access specified during its creation to anyone who holds it, until the token expires or access is revoked. This scenario describes how to safely store your SAS token as a secret in Azure Key Vault and distribute the SAS token into a public client. Events in an application's lifecycle may generate new SAS tokens without interrupting active connections that use existing tokens. To understand how to configure Azure Key Vault, see the [Azure Key Vault developer's guide](../key-vault/general/developers-guide.md).
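An Azure Maps SAS token is JWT-formatted and is presented with the `jwt-sas` authorization scheme. As a minimal illustration of the expiry behavior described above (a hypothetical Node.js helper, not part of the deployment in this article), a client could inspect the token's standard `exp` claim to decide when to fetch a rotated secret from Key Vault; this sketch does not validate the token's signature:

```javascript
// Decode a JWT payload without verifying the signature (inspection only).
function decodeJwtPayload(token) {
  const parts = token.split('.');
  if (parts.length !== 3) throw new Error('Not a JWT-formatted token');
  return JSON.parse(Buffer.from(parts[1], 'base64url').toString('utf8'));
}

// True when the token's 'exp' claim (seconds since the UNIX epoch) has passed.
function isExpired(token, nowSeconds = Math.floor(Date.now() / 1000)) {
  const { exp } = decodeJwtPayload(token);
  return typeof exp === 'number' && exp <= nowSeconds;
}
```

A client that finds `isExpired(sasToken)` true could retrieve the rotated secret from Key Vault instead of continuing to use the stale token.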
+
+The following sample scenario uses two Azure Resource Manager (ARM) template deployments to perform these steps:
+
+- Create an Azure Key Vault.
+- Create a user-assigned managed identity.
+- Assign Azure RBAC `Azure Maps Data Reader` role to the user-assigned managed identity.
+- Create a map account with a CORS configuration and attach the user-assigned managed identity.
+- Create and save a SAS token into the Azure Key Vault.
+- Retrieve the SAS token secret from Azure Key Vault.
+- Create an Azure Maps REST API request using the SAS token.
+
+When completed, you should see the results of the Azure Maps `Search Address (Non-Batch)` REST API call in your PowerShell session. The Azure resources will be deployed with permissions to connect to the Azure Maps account, with controls for maximum rate limit, allowed regions, a `localhost`-configured CORS policy, and Azure RBAC.
+
+### Azure resource deployment with Azure CLI
+
+The following steps describe how to create and configure an Azure Maps account with SAS token authentication. The Azure CLI is assumed to be running in a PowerShell instance.
+
+1. Register the Key Vault, Managed Identity, and Azure Maps resource providers for your subscription.
+
+ ```azurecli
+ az provider register --namespace Microsoft.KeyVault
+ az provider register --namespace Microsoft.ManagedIdentity
+ az provider register --namespace Microsoft.Maps
+ ```
+
+1. Retrieve your Azure AD object ID.
+
+ ```azurecli
+ $id = $(az rest --method GET --url 'https://graph.microsoft.com/v1.0/me?$select=id' --headers 'Content-Type=application/json' --query "id")
+ ```
+
+1. Create a template file `prereq.azuredeploy.json` with the following content.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Specifies the location for all the resources."
+ }
+ },
+ "keyVaultName": {
+ "type": "string",
+ "defaultValue": "[concat('vault', uniqueString(resourceGroup().id))]",
+ "metadata": {
+ "description": "Specifies the name of the key vault."
+ }
+ },
+ "userAssignedIdentityName": {
+ "type": "string",
+ "defaultValue": "[concat('identity', uniqueString(resourceGroup().id))]",
+ "metadata": {
+ "description": "The name for your managed identity resource."
+ }
+ },
+ "objectId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the object ID of a user, service principal or security group in the Azure Active Directory tenant for the vault. The object ID must be unique for the set of access policies. Get it by using Get-AzADUser or Get-AzADServicePrincipal cmdlets."
+ }
+ },
+ "secretsPermissions": {
+ "type": "array",
+ "defaultValue": [
+ "list",
+ "get",
+ "set"
+ ],
+ "metadata": {
+ "description": "Specifies the permissions to secrets in the vault. Valid values are: all, get, list, set, delete, backup, restore, recover, and purge."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
+ "name": "[parameters('userAssignedIdentityName')]",
+ "apiVersion": "2018-11-30",
+ "location": "[parameters('location')]"
+ },
+ {
+ "apiVersion": "2021-04-01-preview",
+ "type": "Microsoft.KeyVault/vaults",
+ "name": "[parameters('keyVaultName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "tenantId": "[subscription().tenantId]",
+ "sku": {
+ "name": "Standard",
+ "family": "A"
+ },
+ "enabledForTemplateDeployment": true,
+ "accessPolicies": [
+ {
+ "objectId": "[parameters('objectId')]",
+ "tenantId": "[subscription().tenantId]",
+ "permissions": {
+ "secrets": "[parameters('secretsPermissions')]"
+ }
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "userIdentityResourceId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedIdentityName'))]"
+ },
+ "userAssignedIdentityPrincipalId": {
+ "type": "string",
+ "value": "[reference(parameters('userAssignedIdentityName')).principalId]"
+ },
+ "keyVaultName": {
+ "type": "string",
+ "value": "[parameters('keyVaultName')]"
+ }
+ }
+ }
+
+ ```
+
+1. Deploy the prerequisite resources. Make sure to pick a location in which Azure Maps is available.
+
+ ```azurecli
+ az group create --name {group-name} --location "East US"
+ $outputs = $(az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
+ ```
+
+1. Create a template file `azuredeploy.json` to provision the Map account, role assignment, and SAS token.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Specifies the location for all the resources."
+ }
+ },
+ "keyVaultName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the resourceId of the key vault."
+ }
+ },
+ "accountName": {
+ "type": "string",
+ "defaultValue": "[concat('map', uniqueString(resourceGroup().id))]",
+ "metadata": {
+ "description": "The name for your Azure Maps account."
+ }
+ },
+ "userAssignedIdentityResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the resourceId for the user assigned managed identity resource."
+ }
+ },
+ "userAssignedIdentityPrincipalId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the resourceId for the user assigned managed identity resource."
+ }
+ },
+ "pricingTier": {
+ "type": "string",
+ "allowedValues": [
+ "S0",
+ "S1",
+ "G2"
+ ],
+ "defaultValue": "G2",
+ "metadata": {
+ "description": "The pricing tier for the account. Use S0 for small-scale development. Use S1 or G2 for large-scale applications."
+ }
+ },
+ "kind": {
+ "type": "string",
+ "allowedValues": [
+ "Gen1",
+ "Gen2"
+ ],
+ "defaultValue": "Gen2",
+ "metadata": {
+ "description": "The pricing tier for the account. Use Gen1 for small-scale development. Use Gen2 for large-scale applications."
+ }
+ },
+ "guid": {
+ "type": "string",
+ "defaultValue": "[guid(resourceGroup().id)]",
+ "metadata": {
+ "description": "Input string for new GUID associated with assigning built in role types"
+ }
+ },
+ "startDateTime": {
+ "type": "string",
+ "defaultValue": "[utcNow('u')]",
+ "metadata": {
+ "description": "Current Universal DateTime in ISO 8601 'u' format to be used as start of the SAS token."
+ }
+ },
+ "duration" : {
+ "type": "string",
+ "defaultValue": "P1Y",
+ "metadata": {
+ "description": "The duration of the SAS token, P1Y is maximum, ISO 8601 format is expected."
+ }
+ },
+ "maxRatePerSecond": {
+ "type": "int",
+ "defaultValue": 500,
+ "minValue": 1,
+ "maxValue": 500,
+ "metadata": {
+ "description": "The approximate maximum rate per second the SAS token can be used."
+ }
+ },
+ "signingKey": {
+ "type": "string",
+ "defaultValue": "primaryKey",
+ "allowedValues": [
+ "primaryKey",
+        "secondaryKey"
+ ],
+ "metadata": {
+ "description": "The specified signing key which will be used to create the SAS token."
+ }
+ },
+ "allowedOrigins": {
+ "type": "array",
+ "defaultValue": [],
+ "maxLength": 10,
+ "metadata": {
+ "description": "The specified application's web host header origins (example: https://www.azure.com) which the Maps account allows for Cross Origin Resource Sharing (CORS)."
+ }
+ },
+ "allowedRegions": {
+ "type": "array",
+ "defaultValue": [],
+ "metadata": {
+ "description": "The specified SAS token allowed locations which the token may be used."
+ }
+ }
+ },
+ "variables": {
+ "accountId": "[resourceId('Microsoft.Maps/accounts', parameters('accountName'))]",
+ "Azure Maps Data Reader": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '423170ca-a8f6-4b0f-8487-9e4eb8f49bfa')]",
+ "sasParameters": {
+ "signingKey": "[parameters('signingKey')]",
+ "principalId": "[parameters('userAssignedIdentityPrincipalId')]",
+ "maxRatePerSecond": "[parameters('maxRatePerSecond')]",
+ "start": "[parameters('startDateTime')]",
+ "expiry": "[dateTimeAdd(parameters('startDateTime'), parameters('duration'))]",
+ "regions": "[parameters('allowedRegions')]"
+ }
+ },
+ "resources": [
+ {
+ "name": "[parameters('accountName')]",
+ "type": "Microsoft.Maps/accounts",
+ "apiVersion": "2021-12-01-preview",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "[parameters('pricingTier')]"
+ },
+ "kind": "[parameters('kind')]",
+ "properties": {
+ "cors": {
+ "corsRules": [
+ {
+ "allowedOrigins": "[parameters('allowedOrigins')]"
+ }
+ ]
+ }
+ },
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "[parameters('userAssignedIdentityResourceId')]": {}
+ }
+ }
+ },
+ {
+ "apiVersion": "2020-04-01-preview",
+ "name": "[concat(parameters('accountName'), '/Microsoft.Authorization/', parameters('guid'))]",
+ "type": "Microsoft.Maps/accounts/providers/roleAssignments",
+ "dependsOn": [
+ "[parameters('accountName')]"
+ ],
+ "properties": {
+ "roleDefinitionId": "[variables('Azure Maps Data Reader')]",
+ "principalId": "[parameters('userAssignedIdentityPrincipalId')]",
+ "principalType": "ServicePrincipal"
+ }
+ },
+ {
+ "apiVersion": "2021-04-01-preview",
+ "type": "Microsoft.KeyVault/vaults/secrets",
+ "name": "[concat(parameters('keyVaultName'), '/', parameters('accountName'))]",
+ "dependsOn": [
+ "[variables('accountId')]"
+ ],
+ "tags": {
+ "signingKey": "[variables('sasParameters').signingKey]",
+ "start" : "[variables('sasParameters').start]",
+ "expiry" : "[variables('sasParameters').expiry]"
+ },
+ "properties": {
+ "value": "[listSas(variables('accountId'), '2021-12-01-preview', variables('sasParameters')).accountSasToken]"
+ }
+ }
+ ]
+ }
+ ```
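The template's `expiry` variable is computed with `dateTimeAdd(startDateTime, duration)`, where the duration is an ISO 8601 string (default `P1Y`, the maximum). As a rough illustration of that arithmetic outside of ARM (a hypothetical helper, not template code):

```javascript
// Sketch of offsetting a start date by an ISO 8601 duration such as "P1Y"
// or "PT12H", mirroring what the ARM dateTimeAdd() function computes.
function addIsoDuration(startIso, duration) {
  const m = duration.match(
    /^P(?:(\d+)Y)?(?:(\d+)M)?(?:(\d+)D)?(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?$/
  );
  if (!m) throw new Error('Invalid ISO 8601 duration: ' + duration);
  // Missing components default to zero.
  const [y, mo, d, h, mi, s] = m.slice(1).map(v => Number(v) || 0);
  const date = new Date(startIso);
  date.setUTCFullYear(date.getUTCFullYear() + y, date.getUTCMonth() + mo, date.getUTCDate() + d);
  date.setUTCHours(date.getUTCHours() + h, date.getUTCMinutes() + mi, date.getUTCSeconds() + s);
  return date.toISOString();
}
```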
+
+1. Deploy the template using ID parameters from the Azure Key Vault and managed identity resources created in the previous step. Note that when creating the SAS token, the `allowedRegions` parameter is set to `eastus`, `westus2`, and `westcentralus`. We use these locations because we plan to make HTTP requests to the `us.atlas.microsoft.com` endpoint.
+
+ > [!IMPORTANT]
+ > We save the SAS token into the Azure Key Vault to prevent its credentials from appearing in the Azure deployment logs. The Azure Key Vault SAS token secret's `tags` also contain the start, expiry, and signing key name to help understand when the SAS token will expire.
+
+ ```azurecli
+ az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
+ ```
+
+1. Locate, then save a copy of the single SAS token secret from Azure Key Vault.
+
+ ```azurecli
+ $secretId = $(az keyvault secret list --vault-name $outputs[0] --query "[? contains(name,'map')].id" --output tsv)
+ $sasToken = $(az keyvault secret show --id "$secretId" --query "value" --output tsv)
+ ```
+
+1. Test the SAS token by making a request to an Azure Maps endpoint. We specify `us.atlas.microsoft.com` to ensure that the request is routed to the US geography, because the SAS token's allowed regions are all within that geography.
+
+ ```azurecli
+ az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=15127 NE 24th Street, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
+ ```
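The same request can also be issued from application code. The sketch below only builds the request pieces; the endpoint, API version, and `jwt-sas` scheme match the `az rest` call above, while the function name is illustrative:

```javascript
// Illustrative request builder for the Search Address (Non-Batch) call,
// using the jwt-sas authorization scheme with the retrieved SAS token.
function buildSearchRequest(sasToken, query) {
  const url = new URL('https://us.atlas.microsoft.com/search/address/json');
  url.searchParams.set('api-version', '1.0');
  url.searchParams.set('query', query);
  return { url: url.toString(), headers: { Authorization: `jwt-sas ${sasToken}` } };
}

// Usage (uncomment to actually call the service):
// const { url, headers } = buildSearchRequest(sasToken, '15127 NE 24th Street, Redmond, WA 98052');
// fetch(url, { headers }).then(r => r.json()).then(d => console.log(d.results));
```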
+
+## Complete example
+
+In the current directory of the PowerShell session, you should have:
+
+- `prereq.azuredeploy.json`: Creates the Azure Key Vault and user-assigned managed identity.
+- `azuredeploy.json`: Creates the Azure Maps account, configures the role assignment and managed identity, then stores the SAS token in the Azure Key Vault.
+
+```powershell
+az login
+az provider register --namespace Microsoft.KeyVault
+az provider register --namespace Microsoft.ManagedIdentity
+az provider register --namespace Microsoft.Maps
+
+$id = $(az rest --method GET --url 'https://graph.microsoft.com/v1.0/me?$select=id' --headers 'Content-Type=application/json' --query "id")
+az group create --name {group-name} --location "East US"
+$outputs = $(az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
+az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
+$secretId = $(az keyvault secret list --vault-name $outputs[0] --query "[? contains(name,'map')].id" --output tsv)
+$sasToken = $(az keyvault secret show --id "$secretId" --query "value" --output tsv)
+
+az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=15127 NE 24th Street, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
+```
+
+## Clean up resources
+
+When you no longer need the Azure resources, you can delete them:
+
+```azurecli
+az group delete --name {group-name}
+```
+
+## Next steps
+
+For more detailed examples:
+> [!div class="nextstepaction"]
+> [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md)
+
+Find the API usage metrics for your Azure Maps account:
+> [!div class="nextstepaction"]
+> [View usage metrics](how-to-view-api-usage.md)
+
+Explore samples that show how to integrate Azure AD with Azure Maps:
+> [!div class="nextstepaction"]
+> [Azure Maps samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-create-store-locator.md
Title: 'Tutorial: Use Microsoft Azure Maps to create store locator web applications'+ description: Tutorial on how to use Microsoft Azure Maps to create store locator web applications. Previously updated : 06/07/2021 Last updated : 01/03/2022 - # Tutorial: Use Azure Maps to create a store locator
-This tutorial guides you through the process of creating a simple store locator using Azure Maps. In this tutorial, you'll learn how to:
+This tutorial guides you through the process of creating a simple store locator using Azure Maps.
+
+In this tutorial, you'll learn how to:
> [!div class="checklist"]
+>
> * Create a new webpage by using the Azure Map Control API.
> * Load custom data from a file and display it on a map.
> * Use the Azure Maps Search service to find an address or enter a query.
This tutorial guides you through the process of creating a simple store locator
## Prerequisites
-1. [Make an Azure Maps account in Gen 1 (S1) or Gen 2 pricing tier](quick-demo-map-app.md#create-an-azure-maps-account).
-2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
+1. An [Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) using the Gen 1 (S1) or Gen 2 pricing tier.
+2. An [Azure Maps primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account).
-For more information about Azure Maps authentication, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+For more information about Azure Maps authentication, see [Manage authentication in Azure Maps](how-to-manage-authentication.md).
-This tutorial uses the [Visual Studio Code](https://code.visualstudio.com/) application, but you can use a different coding environment.
+[Visual Studio Code](https://code.visualstudio.com/) is recommended for this tutorial, but you can use any suitable integrated development environment (IDE).
## Sample code
-In this tutorial, we'll create a store locator for a fictional company called Contoso Coffee. Also, the tutorial includes some tips to help you learn about extending the store locator with other optional functionalities.
+In this tutorial, you'll create a store locator for a fictional company named *Contoso Coffee*. Also, this tutorial includes some tips to help you learn about extending the store locator with other optional functionality.
-You can view the [Live store locator sample here](https://azuremapscodesamples.azurewebsites.net/?sample=Simple%20Store%20Locator).
+To see a live sample of what you will create in this tutorial, see [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Simple%20Store%20Locator) on the **Azure Maps Code Samples** site.
To more easily follow and engage this tutorial, you'll need to download the following resources:
-* [Full source code for simple store locator sample](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator)
-* [Store location data to import into the store locator dataset](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data)
-* [Map images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/images)
+* Full source code for the [Simple Store Locator](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator) on GitHub.
+* [Store location data](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data) that you'll import into the store locator dataset.
+* The [Map images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/images).
## Store locator features
-This section lists the features that are supported in the Contoso Coffee store locator application.
+This section lists the Azure Maps features that are demonstrated in the Contoso Coffee store locator application created in this tutorial.
### User interface features
-* Store logo on the header
-* Map supports panning and zooming
-* A My Location button to search over the user's current location.
-* Page layout adjusts based on the width of the device screen
+* A store logo on the header
+* A map that supports panning and zooming
+* A **My Location** button to search over the user's current location
+* A page layout that adjusts based on the width of the device's screen
* A search box and a search button

### Functionality features

* A `keypress` event added to the search box triggers a search when the user presses **Enter**.
-* When the map moves, the distance to each location from the center of the map calculates. The results list updates to display the closest locations at the top of the map.
+* When the map moves, the distance from the center of the map to each location is recalculated. The results list updates to display the closest locations at the top of the map.
* When the user selects a result in the results list, the map is centered over the selected location and information about the location appears in a pop-up window.
* When the user selects a specific location, the map triggers a pop-up window.
* When the user zooms out, locations are grouped in clusters. Each cluster is represented by a circle with a number inside the circle. Clusters form and separate as the user changes the zoom level.
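The distance-from-center behavior described above boils down to a great-circle calculation. A haversine sketch in miles (illustrative only; the sample itself may use the map SDK's built-in math helpers instead):

```javascript
// Great-circle (haversine) distance in miles between the map center
// and a store location, given WGS84 latitude/longitude pairs.
function distanceInMiles(lat1, lon1, lat2, lon2) {
  const toRad = d => d * Math.PI / 180;
  const R = 3959; // mean Earth radius in miles
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}
```

Sorting the results list is then just ordering locations by this value, smallest first.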
This section lists the features that are supported in the Contoso Coffee store l
## Store locator design
-The following figure shows a wireframe of the general layout of our store locator. You can view the live wireframe [here](https://azuremapscodesamples.azurewebsites.net/?sample=Simple%20Store%20Locator).
+The following screenshot shows the general layout of the Contoso Coffee store locator application. To view and interact with the live sample, see the [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Simple%20Store%20Locator) sample application on the **Azure Maps Code Samples** site.
-To maximize the usefulness of this store locator, we include a responsive layout that adjusts when a user's screen width is smaller than 700 pixels wide. A responsive layout makes it easy to use the store locator on a small screen, like on a mobile device. Here's a wireframe of the small-screen layout:
+To maximize the usefulness of this store locator, we include a responsive layout that adjusts when a user's screen width is smaller than 700 pixels wide. A responsive layout makes it easy to use the store locator on a small screen, like on a mobile device. Here's a screenshot showing a sample of the small-screen layout:
<a id="create a data-set"></a>
This section describes how to create a dataset of the stores that you want to di
:::image type="content" source="./media/tutorial-create-store-locator/store-locator-data-spreadsheet.png" alt-text="Screenshot of the store locator data in an Excel workbook.":::
-To view the full dataset, [download the Excel workbook here](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data).
+The Excel file containing the full dataset for the Contoso Coffee locator sample application can be downloaded from the [data](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data) folder of the _Azure Maps code samples_ repository on GitHub.
-Looking at the screenshot of the data, we can make the following observations:
+From the above screenshot of the data, we can make the following observations:
-* Location information is stored by using the **AddressLine**, **City**, **Municipality** (county), **AdminDivision** (state/province), **PostCode** (postal code), and **Country** columns.
-* The **Latitude** and **Longitude** columns contain the coordinates for each Contoso Coffee location. If you don't have coordinates information, you can use the Search services in Azure Maps to determine the location coordinates.
+* Location information is stored in the following six columns: **AddressLine**, **City**, **Municipality** (county), **AdminDivision** (state/province), **PostCode** (postal code), and **Country**.
+* The **Latitude** and **Longitude** columns contain the coordinates for each Contoso Coffee location. If you don't have coordinate information, you can use the Azure Maps [Search service](/rest/api/maps/search) to determine the location coordinates.
* Some other columns contain metadata that's related to the coffee shops: a phone number, Boolean columns, and store opening and closing times in 24-hour format. The Boolean columns are for Wi-Fi and wheelchair accessibility. You can create your own columns that contain metadata that's more relevant to your location data. > [!NOTE]
-> Azure Maps renders data in the spherical Mercator projection "EPSG:3857" but reads data in "EPSG:4326" that use the WGS84 datum.
+> Azure Maps renders data in the [Spherical Mercator projection](glossary.md#spherical-mercator-projection) "[EPSG:3857](https://epsg.io/3857)" but reads data in "[EPSG:4326](https://epsg.io/4326)" that use the WGS84 datum.
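The projection note above can be made concrete: a WGS84 (EPSG:4326) longitude/latitude pair maps to spherical Mercator (EPSG:3857) meters as in this sketch (for illustration; the map control performs this conversion internally):

```javascript
// Convert a WGS84 longitude/latitude pair (EPSG:4326, degrees) into
// spherical Mercator coordinates (EPSG:3857, meters).
function wgs84ToMercator(lon, lat) {
  const R = 6378137; // WGS84 semi-major axis in meters
  const x = R * lon * Math.PI / 180;
  const y = R * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI / 180) / 2));
  return { x, y };
}
```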
-## Load the store location dataset
+## Load Contoso Coffee shop locator dataset
The Contoso Coffee shop locator dataset is small, so we'll convert the Excel worksheet into a tab-delimited text file. This file can then be downloaded by the browser when the application loads.
- >[!TIP]
->If your dataset is too large for client download, or is updated frequently, you might consider storing your dataset in a database. After your data is loaded into a database, you can then set up a web service that accepts queries for the data, and then sends the results to the user's browser.
+> [!TIP]
+> If your dataset is too large for client download, or is updated frequently, you might consider storing your dataset in a database. After your data is loaded into a database, you can then set up a web service that accepts queries for the data, then sends the results to the user's browser.
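For a small dataset like this one, the browser can parse the downloaded tab-delimited text directly. A minimal sketch (not the sample's actual parser), assuming the first line holds the column names:

```javascript
// Parse tab-delimited text into an array of row objects keyed by the
// header row's column names (all values are kept as strings).
function parseTabDelimited(text) {
  const lines = text.trim().split(/\r?\n/);
  const headers = lines[0].split('\t');
  return lines.slice(1).map(line => {
    const cells = line.split('\t');
    return Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
  });
}
```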
### Convert data to tab-delimited text file
-To convert the Contoso Coffee shop location data from an Excel workbook into a flat text file:
-
-1. [Download the Excel workbook](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data).
-
-2. Save the workbook to your hard drive.
+To convert the Contoso Coffee shop location data from an Excel workbook into a tab-delimited text file:
-3. Load the Excel app.
+1. Download the Excel workbook [ContosoCoffee.xlsx](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data) and open it in Excel.
-4. Open the downloaded workbook.
+1. Select **File > Save As...**.
-5. Select **Save As**.
+1. In the **Save as type** drop-down list, select **Text (Tab delimited)(*.txt)**.
-6. In the **Save as type** drop-down list, select **Text (Tab delimited)(*.txt)**.
-
-7. Name the file *ContosoCoffee*.
+1. Name the file *ContosoCoffee*.
:::image type="content" source="./media/tutorial-create-store-locator/data-delimited-text.png" alt-text="Screenshot of the Save as type dialog box.":::
If you open the text file in Notepad, it looks similar to the following text:
## Set up the project
-1. Open the Visual Studio Code app.
+1. Open [Visual Studio Code](https://code.visualstudio.com/), or your development environment of choice.
-2. Select **File**, and then select **Open Workspace...**.
+2. Select **File > Open Workspace...**.
-3. Create a new folder and name it "ContosoCoffee".
+3. Create a new folder named *ContosoCoffee*.
-4. Select **CONTOSOCOFFEE** in the explorer.
+4. Select **ContosoCoffee** in the explorer.
5. Create the following three files that define the layout, style, and logic for the application:
If you open the text file in Notepad, it looks similar to the following text:
6. Create a folder named *data*.
-7. Add *ContosoCoffee.txt* to the *data* folder.
+7. Add the *ContosoCoffee.txt* file that you previously created from the Excel workbook _ContosoCoffee.xlsx_ to the *data* folder.
8. Create another folder named *images*.
-9. If you haven't already, [download these 10 images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/images).
-
-10. Add the downloaded images to the *images* folder.
+9. If you haven't already, download the 10 [map images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/images) from the images directory in the GitHub repository and add them to the *images* folder.
Your workspace folder should now look like the following screenshot:
- :::image type="content" source="./media/tutorial-create-store-locator/store-locator-workspace.png" alt-text="Screenshot of the Simple Store Locator workspace folder.":::
+ :::image type="content" source="./media/tutorial-create-store-locator/store-locator-workspace.png" alt-text="Screenshot of the images folder in the Contoso Coffee directory.":::
## Create the HTML
To create the HTML:
2. Add references to the Azure Maps web control JavaScript and CSS files: ```HTML
+ <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> ```
-3. Add a reference to the Azure Maps Services module. The module is a JavaScript library that wraps the Azure Maps REST services and makes them easy to use in JavaScript. The module is useful for powering search functionality.
+3. Next, add a reference to the Azure Maps Services module. This module is a JavaScript library that wraps the Azure Maps REST services, making them easy to use in JavaScript. The Services module is useful for powering search functionality.
```HTML
+ <!-- Add a reference to the Azure Maps Services Module JavaScript file. -->
<script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script> ``` 4. Add references to *index.js* and *index.css*. ```HTML
+ <!-- Add references to the store locator JavaScript and CSS files. -->
<link rel="stylesheet" href="index.css" type="text/css"> <script src="index.js"></script> ```
To create the HTML:
After you finish, *index.html* should look like [this example index.html file](https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/index.html).
-## Define the CSS Styles
+## Define the CSS styles
The next step is to define the CSS styles. CSS styles define how the application components are laid out and the application's appearance.
The next step is to define the CSS styles. CSS styles define how the application
} ```
-Run the application. You'll see the header, search box, and search button. However, the map isn't visible because it hasn't been loaded yet. If you try to do a search, nothing happens. We need to set up the JavaScript logic, which is described in the next section. This logic accesses all the functionality of the store locator.
+Run the application. You'll see the header, search box, and search button. However, the map isn't visible because it hasn't been loaded yet. If you try to do a search, nothing happens. We need to add the JavaScript logic described in the next section. This logic accesses all the functionality of the store locator.
## Add JavaScript code
The JavaScript code in the Contoso Coffee shop locator app enables the following
1. Adds an [event listener](/javascript/api/azure-maps-control/atlas.map#events) called `ready` to wait until the page has completed its loading process. When the page loading is complete, the event handler creates more event listeners to monitor the loading of the map, and give functionality to the search and **My location** buttons.
-2. When the user selects the search button, or types a location in the search box then presses enter, a fuzzy search against the user's query is started. The code passes in an array of country/region ISO 2 values to the `countrySet` option to limit the search results to those countries/regions. Limiting the countries/regions to search helps increase the accuracy of the results that are returned.
+2. When the user selects the search button, or types a location in the search box then presses enter, a fuzzy search against the user's query begins. The code passes in an array of country/region ISO 2 values to the `countrySet` option to limit the search results to those countries/regions. Limiting the countries/regions to search helps increase the accuracy of the results that are returned.
-3. Once the search is finished, the first location result is used as the center focus of the map camera. When the user selects the My Location button, the code retrieves the user's location using the *HTML5 Geolocation API* that's built into the browser. After retrieving the location, the code centers the map over the user's location.
+3. Once the search completes, the first location result is used as the center focus of the map. When the user selects the My Location button, the code retrieves the user's location using the *HTML5 Geolocation API* that's built into the browser. After retrieving the location, the code centers the map over the user's location.
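The effect of the `countrySet` restriction can be illustrated with a small standalone sketch. This uses mock data only; it is not the Azure Maps SDK call itself, which applies the restriction server-side:

```javascript
// Illustrative only: filter mocked fuzzy-search results to an ISO2 country set,
// mimicking what the countrySet option does for the real search service.
const countrySet = ['US', 'CA'];

const mockResults = [
  { name: 'Springfield, IL', country: 'US' },
  { name: 'Springfield, MB', country: 'CA' },
  { name: 'Springfield, UK', country: 'GB' }
];

// Only results from the allowed countries/regions remain.
const filtered = mockResults.filter(r => countrySet.includes(r.country));
console.log(filtered.length); // 2
```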
To add the JavaScript:
```JavaScript //The maximum zoom level to cluster point data on the map. var maxClusterZoomLevel = 11;
+
//The URL to the store location data. var storeLocationDataUrl = 'data/ContosoCoffee.txt';
- //The URL to the icon image.
+
+ //The URL to the icon image.
var iconImageUrl = 'images/CoffeeIcon.png';
+
+ //An array of country region ISO2 values to limit searches to.
+ var countrySet = ['US', 'CA', 'GB', 'FR','DE','IT','ES','NL','DK'];
+
+ //
var map, popup, datasource, iconLayer, centerMarker, searchURL;
+ // Used in function updateListItems
+ var listItemTemplate = '<div class="listItem" onclick="itemSelected(\'{id}\')"><div class="listItem-title">{title}</div>{city}<br />Open until {closes}<br />{distance} miles away</div>';
+ ``` 3. Add the following initialization code. Make sure to replace `<Your Azure Maps Key>` with your primary subscription key.
To add the JavaScript:
//Create a pop-up window, but leave it closed so we can update it and display it later. popup = new atlas.Popup();
- //Use SubscriptionKeyCredential with a subscription key
- const subscriptionKeyCredential = new atlas.service.SubscriptionKeyCredential(atlas.getSubscriptionKey());
-
- //Use subscriptionKeyCredential to create a pipeline
- const pipeline = atlas.service.MapsURL.newPipeline(subscriptionKeyCredential, {
- retryOptions: { maxTries: 4 } // Retry options
- });
+ //Use MapControlCredential to share authentication between a map control and the service module.
+ var pipeline = atlas.service.MapsURL.newPipeline(new atlas.service.MapControlCredential(map));
//Create an instance of the SearchURL client. searchURL = new atlas.service.SearchURL(pipeline);
To add the JavaScript:
} };
- //If the user selects the My Location button, use the Geolocation API (Preview) to get the user's location. Center and zoom the map on that location.
+ //If the user selects the My Location button, use the Geolocation API to get the user's location. Center and zoom the map on that location.
document.getElementById('myLocationBtn').onclick = setMapToUserLocation; //Wait until the map resources are ready. map.events.add('ready', function() {
- //Add your post-map load functionality.
+ //Add your map's post-load functionality.
}); }
- //Create an array of country/region ISO 2 values to limit searches to.
- var countrySet = ['US', 'CA', 'GB', 'FR','DE','IT','ES','NL','DK'];
- function performSearch() { var query = document.getElementById('searchTbx').value; //Perform a fuzzy search on the users query. searchURL.searchFuzzy(atlas.service.Aborter.timeout(3000), query, { //Pass in the array of country/region ISO2 for which we want to limit the search to.
- countrySet: countrySet
+ countrySet: countrySet,
+ view: 'Auto'
}).then(results => { //Parse the response into GeoJSON so that the map can understand. var data = results.geojson.getFeatures();
To add the JavaScript:
function setMapToUserLocation() { //Request the user's location. navigator.geolocation.getCurrentPosition(function(position) {
- //Convert the Geolocation API (Preview) position to a longitude and latitude position value that the map can interpret and center the map over it.
+ //Convert the geolocation API position into a longitude/latitude position value the map can understand and center the map over it.
map.setCamera({ center: [position.coords.longitude, position.coords.latitude], zoom: maxClusterZoomLevel + 1 }); }, function(error) {
- //If an error occurs when the API tries to access the user's position information, display an error message.
+ //If an error occurs when trying to access the user's position information, display an error message.
switch (error.code) { case error.PERMISSION_DENIED: alert('User denied the request for geolocation.');
To add the JavaScript:
window.onload = initialize; ```
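The position-to-camera conversion shown above boils down to reordering the Geolocation API's latitude/longitude into the `[longitude, latitude]` array the map expects. A minimal sketch with a mocked position object:

```javascript
// Convert a Geolocation API position into the [longitude, latitude] array
// format used for map coordinates. The position object here is mocked; in
// the browser it comes from navigator.geolocation.getCurrentPosition.
function toMapPosition(position) {
  return [position.coords.longitude, position.coords.latitude];
}

const mockPosition = { coords: { latitude: 47.6062, longitude: -122.3321 } };
console.log(toMapPosition(mockPosition)); // [ -122.3321, 47.6062 ]
```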
-4. In the map's `ready` event listener, add a zoom control and an HTML marker to display the center of a search area.
+4. In the map's `ready` event handler, add a zoom control and an HTML marker to display the center of a search area.
```JavaScript //Add a zoom control to the map.
To add the JavaScript:
map.markers.add(centerMarker); ```
-5. In the map's `ready` event listener, add a data source. Then, make a call to load and parse the dataset. Enable clustering on the data source. Clustering on the data source groups overlapping points together in a cluster. As the user zooms in, the clusters separate into individual points. This behavior provides a better user experience and improves performance.
+5. In the map's `ready` event handler, add a data source. Then, make a call to load and parse the dataset. Enable clustering on the data source. Clustering on the data source groups overlapping points together in a cluster. As the user zooms in, the clusters separate into individual points. This behavior provides a better user experience and improves performance.
```JavaScript //Create a data source, add it to the map, and then enable clustering.
To add the JavaScript:
map.sources.add(datasource);
- //Load all the store data now that the data source is defined.
+ //Load all the store data now that the data source has been defined.
loadStoreData(); ```
-6. After the dataset loads in the map's `ready` event listener, define a set of layers to render the data. A bubble layer renders clustered data points. A symbol layer renders the number of points in each cluster above the bubble layer. A second symbol layer renders a custom icon for individual locations on the map.
+6. After the dataset loads in the map's `ready` event handler, define a set of layers to render the data. A bubble layer renders clustered data points. A symbol layer renders the number of points in each cluster above the bubble layer. A second symbol layer renders a custom icon for individual locations on the map.
Add `mouseover` and `mouseout` events to the bubble and icon layers to change the mouse cursor when the user hovers over a cluster or icon on the map. Add a `click` event to the cluster bubble layer. This `click` event zooms in the map two levels and centers the map over a cluster when the user selects any cluster. Add a `click` event to the icon layer. This `click` event displays a pop-up window that shows the details of a coffee shop when a user selects an individual location icon. Add an event to the map to monitor when the map is finished moving. When this event fires, update the items in the list panel.
To add the JavaScript:
showPopup(e.shapes[0]); });
- //Add an event to monitor when the map is finished rendering the map after it has moved.
+ //Add an event to monitor when the map has finished rendering.
map.events.add('render', function() { //Update the data in the list. updateListItems();
To add the JavaScript:
}); ```
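The cluster `click` behavior described in step 6 (zoom in two levels and recenter) can be sketched as a pure function. The helper name and the `maxZoom` default are illustrative, not SDK names:

```javascript
// Hypothetical helper: compute camera options for a cluster click by zooming
// in two levels (clamped to a maximum) and centering on the cluster position.
function clusterClickCamera(currentZoom, clusterPosition, maxZoom = 22) {
  return {
    center: clusterPosition,
    zoom: Math.min(currentZoom + 2, maxZoom)
  };
}

const cam = clusterClickCamera(11, [-122.33, 47.6]);
console.log(cam.zoom); // 13
```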
-7. When the coffee shop dataset is loaded, it must first be downloaded. Then, the text file must be split into lines. The first line contains the header information. To make the code easier to follow, we parse the header into an object, which we can then use to look up the cell index of each property. After the first line, loop through the remaining lines and create a point feature. Add the point feature to the data source. Finally, update the list panel.
+7. When the coffee shop dataset is needed, it must first be downloaded. Once downloaded, the file must be split into lines. The first line contains the header information. To make the code easier to follow, we parse the header into an object, which we can then use to look up the cell index of each property. After the first line, loop through the remaining lines and create a point feature. Add the point feature to the data source. Finally, update the list panel.
```JavaScript function loadStoreData() {
To add the JavaScript:
var camera = map.getCamera(); var listPanel = document.getElementById('listPanel');
- //Check to see whether the user is zoomed out a substantial distance. If they are, tell the user to zoom in and to perform a search or select the My Location button.
+ //Check to see if the user is zoomed out a substantial distance. If they are, tell them to zoom in and to perform a search or select the My Location button.
if (camera.zoom < maxClusterZoomLevel) { //Close the pop-up window; clusters might be displayed on the map. popup.close();
To add the JavaScript:
} ```
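The download-and-parse flow from step 7 can be sketched in isolation. This assumes a tab-delimited file whose first row is a header; the column names below are illustrative, not necessarily those in *ContosoCoffee.txt*:

```javascript
// Simplified sketch of the step-7 parsing approach: split the file into lines,
// parse the header row into a name -> cell-index lookup, then turn each
// remaining line into a GeoJSON point feature.
function parseStoreData(text) {
  const lines = text.split('\n');

  // Build the column-name to cell-index lookup from the header row.
  const header = {};
  lines[0].split('\t').forEach((name, i) => { header[name] = i; });

  const features = [];
  for (let i = 1; i < lines.length; i++) {
    const cells = lines[i].split('\t');
    if (cells.length < 3) continue; // Skip blank or malformed lines.
    features.push({
      type: 'Feature',
      geometry: {
        type: 'Point',
        coordinates: [
          parseFloat(cells[header.Longitude]),
          parseFloat(cells[header.Latitude])
        ]
      },
      properties: { Name: cells[header.Name] }
    });
  }
  return features;
}

const sample = 'Name\tLatitude\tLongitude\nContoso Coffee\t47.6\t-122.33';
console.log(parseStoreData(sample).length); // 1
```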
-Now, you have a fully functional store locator. In a web browser, open the *https://docsupdatetracker.net/index.html* file for the store locator. When the clusters appear on the map, you can search for a location by using the search box, by selecting the My Location button, by selecting a cluster, or by zooming in on the map to see individual locations.
+Now, you have a fully functional store locator. Open the *https://docsupdatetracker.net/index.html* file in a web browser. When the clusters appear on the map, you can search for a location using any of the following methods:
+
+1. Using the search box.
+1. Selecting the **My Location** button.
+1. Selecting a cluster.
+1. Zooming in on the map to see individual locations.
The first time a user selects the My Location button, the browser displays a security warning that asks for permission to access the user's location. If the user agrees to share their location, the map zooms in on the user's location, and nearby coffee shops are shown.
If you resize the browser window to fewer than 700 pixels wide or open the appli
![Screenshot of the small-screen version of the store locator](./media/tutorial-create-store-locator/finished-simple-store-locator-mobile.png)
-In this tutorial, you learned how to create a basic store locator by using Azure Maps. The store locator you create in this tutorial might have all the functionality you need. You can add features to your store locator or use more advance features for a more custom user experience:
-
- * Enable [suggestions as you type](https://azuremapscodesamples.azurewebsites.net/?sample=Search%20Autosuggest%20and%20JQuery%20UI) in the search box.
- * Add [support for multiple languages](https://azuremapscodesamples.azurewebsites.net/?sample=Map%20Localization).
- * Allow the user to [filter locations along a route](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Data%20Along%20Route).
- * Add the ability to [set filters](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Symbols%20by%20Property).
- * Add support to specify an initial search value by using a query string. When you include this option in your store locator, users are then able to bookmark and share searches. It also provides an easy method for you to pass searches to this page from another page.
- * Deploy your store locator as an [Azure App Service Web App](../app-service/quickstart-html.md).
- * Store your data in a database and search for nearby locations. To learn more, see the [SQL Server spatial data types overview](/sql/relational-databases/spatial/spatial-data-types-overview?preserve-view=true&view=sql-server-2017) and [Query spatial data for the nearest neighbor](/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor?preserve-view=true&view=sql-server-2017).
+In this tutorial, you learned how to create a basic store locator by using Azure Maps. The store locator you create in this tutorial might have all the functionality you need. You can add features to your store locator or use more advanced features for a more customized user experience:
-You can [view full source code here](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator). [View the live sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Simple%20Store%20Locator) and learn more about the coverage and capabilities of Azure Maps by using [Zoom levels and tile grid](zoom-levels-and-tile-grid.md). You can also [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md) to apply to your business logic.
+* Enable [suggestions as you type](https://azuremapscodesamples.azurewebsites.net/?sample=Search%20Autosuggest%20and%20JQuery%20UI) in the search box.
+* Add [support for multiple languages](https://azuremapscodesamples.azurewebsites.net/?sample=Map%20Localization).
+* Allow the user to [filter locations along a route](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Data%20Along%20Route).
+* Add the ability to [set filters](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Symbols%20by%20Property).
+* Add support to specify an initial search value by using a query string. When you include this option in your store locator, users are then able to bookmark and share searches. It also provides an easy method for you to pass searches to this page from another page.
+* Deploy your store locator as an [Azure App Service Web App](../app-service/quickstart-html.md).
+* Store your data in a database and search for nearby locations. To learn more, see the [SQL Server spatial data types overview](/sql/relational-databases/spatial/spatial-data-types-overview?preserve-view=true&view=sql-server-2017) and [Query spatial data for the nearest neighbor](/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor?preserve-view=true&view=sql-server-2017).
-## Clean up resources
+## Additional information
-There are no resources that require cleanup.
+* For the completed code used in this tutorial, see [Simple Store Locator](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator) on GitHub.
+* To view this sample live, see [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Simple%20Store%20Locator) on the **Azure Maps Code Samples** site.
+* To learn more about the coverage and capabilities of Azure Maps, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
+* You can also apply [data-driven style expressions](data-driven-style-expressions-web-sdk.md) to your business logic.
## Next steps
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The Azure Monitor agent replaces the [legacy agents for Azure Monitor](agents-ov
If the Azure Monitor agent has all the core capabilities you require, consider transitioning to it. If there are critical features that you require, continue with the current agent until the Azure Monitor agent reaches parity. - **Tolerance for rework:** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If the setup will take a significant amount of work, consider setting up your new environment with the new agent as it's now generally available.
- Azure Monitor's Log Analytics agent is retiring on 31 August 2024. The current agents will be supported for several years after deprecation begins.
+ Azure Monitor's Log Analytics agent is retiring on 31 August 2024. The current agents will be supported until the retirement date.
## Supported resource types Azure virtual machines, virtual machine scale sets, and Azure Arc-enabled servers are currently supported. Azure Kubernetes Service and other compute resource types aren't currently supported.
The following table shows the current support for the Azure Monitor agent with o
| Azure service | Current support | More information | |:|:|:| | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Private preview | [Sign-up link](https://aka.ms/AMAgent) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Event Forwarding (WEF): Private preview</li><li>Windows Security Events: [Public preview](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>[Sign-up link](https://aka.ms/AMAgent) </li><li>No sign-up needed</li></ul> |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Event Forwarding (WEF): [Public preview](/azure/sentinel/data-connectors-reference#windows-forwarded-events-preview)</li><li>Windows Security Events: [GA](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
The following table shows the current support for the Azure Monitor agent with Azure Monitor features.
azure-monitor Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-activity-log.md
The following fields are the options that you can use in the Azure Resource Mana
1. `level`: Level of the activity in the activity log event that the alert should be generated on. For example: `Critical`, `Error`, `Warning`, `Informational`, or `Verbose`. 1. `operationName`: The name of the operation in the activity log event. For example: `Microsoft.Resources/deployments/write`. 1. `resourceGroup`: Name of the resource group for the impacted resource in the activity log event.
-1. `resourceProvider`: For more information, see [Azure resource providers and types](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fazure-resource-manager%2Fmanagement%2Fresource-providers-and-types&data=02%7C01%7CNoga.Lavi%40microsoft.com%7C90b7c2308c0647c0347908d7c9a2918d%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637199572373543634&sdata=4RjpTkO5jsdOgPdt%2F%2FDOlYjIFE2%2B%2BuoHq5%2F7lHpCwQw%3D&reserved=0). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fazure-resource-manager%2Fmanagement%2Fazure-services-resource-providers&data=02%7C01%7CNoga.Lavi%40microsoft.com%7C90b7c2308c0647c0347908d7c9a2918d%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637199572373553639&sdata=0ZgJPK7BYuJsRifBKFytqphMOxMrkfkEwDqgVH1g8lw%3D&reserved=0).
+1. `resourceProvider`: For more information, see [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](/azure/azure-resource-manager/management/azure-services-resource-providers).
1. `status`: String describing the status of the operation in the activity event. For example: `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, or `Resolved`. 1. `subStatus`: Usually, this field is the HTTP status code of the corresponding REST call. But it can also include other strings describing a substatus. Examples of HTTP status codes include `OK` (HTTP Status Code: 200), `No Content` (HTTP Status Code: 204), and `Service Unavailable` (HTTP Status Code: 503), among many others. 1. `resourceType`: The type of the resource that was affected by the event. For example: `Microsoft.Resources/deployments`.
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Resource type |Dimensions Supported |Multi-resource alerts| Metrics Available| |||--|-|
-|Microsoft.Aadiam/azureADMetrics | Yes | No | [Azure AD](../essentials/metrics-supported.md#microsoftaadiamazureadmetrics) |
+|Microsoft.Aadiam/azureADMetrics | Yes | No | Azure Active Directory (metrics in private preview) |
|Microsoft.ApiManagement/service | Yes | No | [API Management](../essentials/metrics-supported.md#microsoftapimanagementservice) | |Microsoft.AppConfiguration/configurationStores |Yes | No | [App Configuration](../essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | |Microsoft.AppPlatform/spring | Yes | No | [Azure Spring Cloud](../essentials/metrics-supported.md#microsoftappplatformspring) |
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analyti
This latest update adds a new column and reorders the metrics to be alphabetical. The additional information means that the tables might have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you seem to be missing information, use the scroll bar to see the entirety of the table.
-## microsoft.aadiam/azureADMetrics
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|ThrottledRequests|No|ThrottledRequests|Count|Average|azureADMetrics type metric|No Dimensions|
- ## Microsoft.AnalysisServices/servers
This latest update adds a new column and reorders the metrics to be alphabetical
- [Read about metrics in Azure Monitor](../data-platform.md) - [Create alerts on metrics](../alerts/alerts-overview.md)-- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
+- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-categories.md
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |AmlComputeClusterEvent|AmlComputeClusterEvent|No|
-|AmlComputeClusterNodeEvent|AmlComputeClusterNodeEvent|No|
+|AmlComputeClusterNodeEvent (deprecated) |AmlComputeClusterNodeEvent|No|
|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No| |AmlComputeJobEvent|AmlComputeJobEvent|No| |AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|RunEvent|RunEvent|Yes| |RunReadEvent|RunReadEvent|Yes|
+> [!NOTE]
+> Effective February 2022, the AmlComputeClusterNodeEvent category will be deprecated. We recommend that you instead use the AmlComputeClusterEvent category.
+ ## Microsoft.Media/mediaservices
If you think something is missing, you can open a GitHub comment at the bottom o
* [Learn more about resource logs](../essentials/platform-logs-overview.md) * [Stream resource logs to **Event Hubs**](./resource-logs.md#send-to-azure-event-hubs) * [Change resource log diagnostic settings using the Azure Monitor REST API](/rest/api/monitor/diagnosticsettings)
-* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
+* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
azure-monitor Stream Monitoring Data Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/stream-monitoring-data-event-hubs.md
Routing your monitoring data to an event hub with Azure Monitor enables you to e
| Tool | Hosted in Azure | Description | |:|:| :|
-| IBM QRadar | No | The Microsoft Azure DSM and Microsoft Azure Event Hub Protocol are available for download from [the IBM support website](https://www.ibm.com/support). You can learn more about the integration with Azure at [QRadar DSM configuration](https://www.ibm.com/docs/en/dsm?topic=options-configuring-microsoft-azure-event-hubs-communicate-qradar). |
+| IBM QRadar | No | The Microsoft Azure DSM and Microsoft Azure Event Hub Protocol are available for download from [the IBM support website](https://www.ibm.com/support). |
| Splunk | No | [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/) is an open source project available in Splunkbase. <br><br> If you cannot install an add-on in your Splunk instance, if for example you're using a proxy or running on Splunk Cloud, you can forward these events to the Splunk HTTP Event Collector using [Azure Function For Splunk](https://github.com/Microsoft/AzureFunctionforSplunkVS), which is triggered by new messages in the event hub. | | SumoLogic | No | Instructions for setting up SumoLogic to consume data from an event hub are available at [Collect Logs for the Azure Audit App from Event Hub](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure-Audit/02Collect-Logs-for-Azure-Audit-from-Event-Hub). | | ArcSight | No | The ArcSight Azure Event Hub smart connector is available as part of [the ArcSight smart connector collection](https://community.microfocus.com/cyberres/arcsight/f/arcsight-product-announcements/163662/announcing-general-availability-of-arcsight-smart-connectors-7-10-0-8114-0). |
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
If you have configured your storage account to allow access from selected networ
[![Storage account firewalls and networks](media/logs-data-export/storage-account-network.png "Screenshot of allow trusted Microsoft services.")](media/logs-data-export/storage-account-network.png#lightbox)
-### Create or update data export rule
-A data export rule defines the tables for which data is exported and destination. You can have 10 enabled rules in your workspace, more rules can be added in 'disable' state. Storage account must be unique across all export rules in workspace, but you can use the same event hub namespace in multiple rules.
-
-> [!NOTE]
-> - If export rule includes unsupported tables, no data will be exported for that tables until the tables becomes supported.
-> - A separate container is created for tables in storage account export.
-> - If event hub name isn't provided in rule, a separate event hub is created for tables in event hub namespace. The [number of supported event hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
+### Destinations monitoring
> [!IMPORTANT] > Export destinations have limits and should be monitored to minimize throttling, failures, and latency. See [storage accounts scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts) and [event hub namespace quota](../../event-hubs/event-hubs-quotas.md).
-#### Monitoring storage account
+**Monitoring storage account**
1. Use separate storage account for export
-1. Configure alert on the metric below:
+2. Configure alert on the metric below:
| Scope | Metric Namespace | Metric | Aggregation | Threshold | |:|:|:|:|:| | storage-name | Account | Ingress | Sum | 80% of max ingress per alert evaluation period. For example: limit is 60 Gbps for general-purpose v2 in West US. Threshold is 14,400 Gb per 5-minutes evaluation period |
-1. Alert remediation actions
+3. Alert remediation actions
- Use separate storage account for export that isn't shared with non-monitoring data. - Azure Storage standard accounts support higher ingress limit by request. To request an increase, contact [Azure Support](https://azure.microsoft.com/support/faq/). - Split tables between more storage accounts.
-#### Monitoring event hub
+**Monitoring event hub**
1. Configure alerts on the [metrics](../../event-hubs/monitor-event-hubs-reference.md) below:
A data export rule defines the tables for which data is exported and destination
| namespaces-name | Event Hub standard metrics | Incoming requests | Count | 80% of max events per alert evaluation period. For example, limit is 1000/s per unit (TU or PU) and five units used. Threshold is 1200000 per 5-minutes evaluation period | | namespaces-name | Event Hub standard metrics | Quota Exceeded Errors | Count | Between 1% of request. For example, requests per 5 minutes is 600000. Threshold is 6000 per 5-minutes evaluation period |
-1. Alert remediation actions
+2. Alert remediation actions
- Use separate event hub namespace for export that isn't shared with non-monitoring data. - Configure [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) feature to automatically scale up and increase the number of throughput units to meet usage needs - Verify increase of throughput units to accommodate data volume - Split tables between more namespaces - Use 'Premium' or 'Dedicated' tiers for higher throughput
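The 80% thresholds quoted in both monitoring tables follow the same arithmetic: the per-second limit, times the evaluation window in seconds, times 0.8. A worked sketch (the function name is illustrative):

```javascript
// Compute an 80%-of-limit alert threshold for a given evaluation window.
function alertThreshold(limitPerSecond, windowMinutes = 5, fraction = 0.8) {
  return limitPerSecond * windowMinutes * 60 * fraction;
}

// Storage ingress: 60 Gbps limit -> 14,400 Gb per 5-minute window.
console.log(alertThreshold(60)); // 14400

// Event hub incoming requests: 1000/s per unit x 5 units -> 1,200,000.
console.log(alertThreshold(1000 * 5)); // 1200000
```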
+### Create or update data export rule
+A data export rule defines the destination and the tables for which data is exported. You can create up to 10 rules in the 'enabled' state in your workspace; more rules are allowed in the 'disabled' state. The storage account destination must be unique across all export rules in a workspace, but multiple rules can export to the same event hub namespace in separate event hubs.
+
+> [!NOTE]
+> - You can include tables that aren't yet supported in export, and no data will be exported for these until the tables are supported.
+> - The current custom log tables won't be supported in export. The next generation of custom logs, available in preview in early 2022, is supported.
+> - Export to storage account - a separate container is created in storage account for each table.
+> - Export to event hub - if event hub name isn't provided, a separate event hub is created for each table. The [number of supported event hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
+ # [Azure portal](#tab/portal) In the **Log Analytics workspace** menu in the Azure portal, select **Data Export** from the **Settings** section and click **New export rule** from the top of the middle pane.
Follow the steps, then click **Create**.
Use the following command to create a data export rule to a storage account using PowerShell. A separate container is created for each table. ```powershell
-$storageAccountResourceId = 'subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Storage/storageAccounts/storage-account-name'
-New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -ResourceId $storageAccountResourceId
+$storageAccountResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Storage/storageAccounts/storage-account-name'
+New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -ResourceId $storageAccountResourceId
``` Use the following command to create a data export rule to a specific event hub using PowerShell. All tables are exported to the provided event hub name and can be filtered by "Type" field to separate tables. ```powershell
-$eventHubResourceId = 'subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name'
-New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -ResourceId $eventHubResourceId -EventHubName EventhubName
+$eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name'
+New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -ResourceId $eventHubResourceId -EventHubName EventhubName
``` Use the following command to create a data export rule to an event hub using PowerShell. When a specific event hub name isn't provided, a separate event hub is created for each table, up to the [number of supported event hubs for your event hub tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you have more tables to export, provide an event hub name to export any number of tables to it, or set another rule to export the remaining tables to another event hub namespace. ```powershell
-$eventHubResourceId = 'subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name'
-New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -ResourceId $eventHubResourceId
+$eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name'
+New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -ResourceId $eventHubResourceId
``` # [Azure CLI](#tab/azure-cli)
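As noted above, when all tables are exported to a single event hub, downstream consumers separate records by the "Type" field. A minimal consumer-side sketch of that demultiplexing (the sample records are invented for illustration):

```python
# Exported records carry a "Type" field naming the source table; a consumer
# reading from a shared event hub can bucket records by it.
from collections import defaultdict

def split_by_type(records):
    """Group exported records by their source table ("Type" field)."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec["Type"]].append(rec)
    return buckets

records = [
    {"Type": "SecurityEvent", "Computer": "vm1"},
    {"Type": "Heartbeat", "Computer": "vm1"},
    {"Type": "SecurityEvent", "Computer": "vm2"},
]
buckets = split_by_type(records)
print(sorted(buckets))                 # ['Heartbeat', 'SecurityEvent']
print(len(buckets["SecurityEvent"]))   # 2
```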
Export rules can be disabled to let you stop the export for a certain period suc
Export rules can be disabled to let you stop the export for a certain period such as when testing is being held. Use the following command to disable or update rule parameters using PowerShell. ```powershell
-Update-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -Enable: $false
+Update-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -Enable:$false
``` # [Azure CLI](#tab/azure-cli)
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/monitor-reference.md
The following table lists Azure services and the data they collect into Azure Mo
| Service | Resource Provider Namespace | Has Metrics | Has Logs | Insight | Notes |||-|--|-|--| | [Azure Active Directory Domain Services](../active-directory-domain-services/index.yml) | Microsoft.AAD/DomainServices | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftaaddomainservices) | | |
- | [Azure Active Directory](../active-directory/index.yml) | Microsoft.Aadiam/azureADMetrics | [**Yes**](./essentials/metrics-supported.md#microsoftaadiamazureadmetrics) | No | [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | |
 + | [Azure Active Directory](../active-directory/index.yml) | Microsoft.Aadiam/azureADMetrics | No | No | [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | |
| [Azure Analysis Services](../analysis-services/index.yml) | Microsoft.AnalysisServices/servers | [**Yes**](./essentials/metrics-supported.md#microsoftanalysisservicesservers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftanalysisservicesservers) | | | | [API Management](../api-management/index.yml) | Microsoft.ApiManagement/service | [**Yes**](./essentials/metrics-supported.md#microsoftapimanagementservice) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftapimanagementservice) | | | | [Azure App Configuration](../azure-app-configuration/index.yml) | Microsoft.AppConfiguration/configurationStores | [**Yes**](./essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftappconfigurationconfigurationstores) | | |
azure-netapp-files Backup Configure Policy Based https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/backup-configure-policy-based.md
na Previously updated : 10/13/2021 Last updated : 01/05/2022 # Configure policy-based backups for Azure NetApp Files
You need to create a snapshot policy and associate the snapshot policy to the vo
Currently, the backup functionality can back up only daily, weekly, and monthly snapshots; hourly backups are not supported.
- * For a daily snapshot configuration, specify the time of the day when you want the snapshot created.
- * For a weekly snapshot configuration, specify the day of the week and time of the day when you want the snapshot created.
- * For a monthly snapshot configuration, specify the day of the month and time of the day when you want the snapshot created.
+ * For a *daily* snapshot configuration, specify the time of the day when you want the snapshot created.
+ * For a *weekly* snapshot configuration, specify the day of the week and time of the day when you want the snapshot created.
+ * For a *monthly* snapshot configuration, specify the day of the month and time of the day when you want the snapshot created.
+
+ > [!IMPORTANT]
+ > Be sure to specify a day that exists in all intended months. If you intend for the monthly snapshot configuration to work for all months in the year, pick a day of the month between 1 and 28. For example, if you specify `31` (day of the month), the monthly snapshot configuration is skipped for the months that have fewer than 31 days.
+
* For each snapshot configuration, specify the number of snapshots that you want to keep.
- For example, if you want to have daily backups, you must configure a snapshot policy with a daily snapshot schedule and snapshot count, and then apply that daily snapshot policy to the volume. If you change the snapshot policy or delete the daily snapshot configuration, new daily snapshots will not be created, resulting in daily backups not taking place. The same process and behavior apply to weekly, and monthly backups.
+ For example, if you want to have daily backups, you must configure a snapshot policy with a daily snapshot schedule and snapshot count, and then apply that daily snapshot policy to the volume. If you change the snapshot policy or delete the daily snapshot configuration, new daily snapshots will not be created, resulting in daily backups not taking place. The same process and behavior apply to weekly and monthly backups.
Ensure that each snapshot has a unique snapshot schedule configuration. By design, Azure NetApp Files prevents you from deleting the latest backup. If multiple snapshots have the same time (for example, the same daily and weekly schedule configuration), Azure NetApp Files considers them the latest snapshots and prevents you from deleting those backups.
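The day-of-month caveat above can be checked programmatically. A short sketch (Python standard library only; the helper name is hypothetical, and a non-leap year is assumed) showing which months a given schedule day would skip:

```python
import calendar

def skipped_months(day, year=2021):
    """Months in `year` shorter than `day`, where a monthly snapshot is skipped."""
    return [m for m in range(1, 13) if calendar.monthrange(year, m)[1] < day]

print(skipped_months(28))  # [] -- a day between 1 and 28 works for every month
print(skipped_months(31))  # [2, 4, 6, 9, 11] -- Feb, Apr, Jun, Sep, Nov skipped
```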
azure-netapp-files Snapshots Manage Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/snapshots-manage-policy.md
na Previously updated : 09/16/2021 Last updated : 01/05/2022
A snapshot policy enables you to specify the snapshot creation frequency in hour
3. Click the **Hourly**, **Daily**, **Weekly**, or **Monthly** tab to create hourly, daily, weekly, or monthly snapshot policies. Specify the **Number of snapshots to keep**.
+ > [!IMPORTANT]
+ > For a *monthly* snapshot policy definition, be sure to specify a day that exists in all intended months. If you intend for the monthly snapshot configuration to work for all months in the year, pick a day of the month between 1 and 28. For example, if you specify `31` (day of the month), the monthly snapshot configuration is skipped for the months that have fewer than 31 days.
+ >
See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) about the maximum number of snapshots allowed for a volume. The following example shows hourly snapshot policy configuration.
You can delete a snapshot policy that you no longer want to keep.
* [Troubleshoot snapshot policies](troubleshoot-snapshot-policies.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
-* [Learn more about snapshots](snapshots-introduction.md)
+* [Learn more about snapshots](snapshots-introduction.md)
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/overview.md
Last updated 01/03/2022
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner.
-Bicep provides concise syntax, reliable type safety, and support for code reuse. We believe Bicep offers the best authoring experience for your infrastructure-as-code solutions in Azure.
+Bicep provides concise syntax, reliable type safety, and support for code reuse. We believe Bicep offers the best authoring experience for your [infrastructure-as-code](/devops/deliver/what-is-infrastructure-as-code) solutions in Azure.
## Benefits of Bicep versus other tools
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) | | Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) | | Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) |
-| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> |
+| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> |
| Microsoft.Notebooks | [Azure Notebooks](https://notebooks.azure.com/help/introduction) | | Microsoft.NotificationHubs | [Notification Hubs](../../notification-hubs/index.yml) | | Microsoft.ObjectStore | Object Store |
azure-sql Azure Sql Iaas Vs Paas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/azure-sql-iaas-vs-paas-what-is-overview.md
Azure SQL is built upon the familiar SQL Server engine, so you can migrate appli
Learn how each product fits into Microsoft's Azure SQL data platform to match the right option for your business requirements. Whether you prioritize cost savings or minimal administration, this article can help you decide which approach delivers against the business requirements you care about most.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- If you're new to Azure SQL, check out the *What is Azure SQL* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner): > [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/What-is-Azure-SQL-3-of-61/player]
azure-sql Authentication Aad Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-configure.md
Last updated 12/15/2021
This article shows you how to create and populate an Azure Active Directory (Azure AD) instance, and then use Azure AD with [Azure SQL Database](sql-database-paas-overview.md), [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md), and [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md). For an overview, see [Azure Active Directory authentication](authentication-aad-overview.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Azure AD authentication methods Azure AD authentication supports the following authentication methods:
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
The auto-failover groups feature allows you to manage the replication and failov
> [!NOTE] > Auto-failover groups support geo-replication of all databases in the group to only one secondary server or instance in a different region. If you need to create multiple Azure SQL Database geo-secondary replicas (in the same or different regions) for the same primary replica, use [active geo-replication](active-geo-replication-overview.md). >
-> Auto-failover groups are not currently supported in the [Hyperscale](service-tier-hyperscale.md) service tier. For geographic failover of a Hyperscale database, use [active geo-replication](active-geo-replication-overview.md).
When you are using auto-failover groups with automatic failover policy, an outage that impacts one or several of the databases in the group will result in an automatic geo-failover. Typically, these are outages that cannot be automatically mitigated by the built-in high availability infrastructure. Examples of geo-failover triggers include an incident caused by a SQL Database tenant ring or control ring being down due to an OS kernel memory leak on compute nodes, or an incident caused by one or more tenant rings being down because a wrong network cable was accidentally cut during routine hardware decommissioning. For more information, see [SQL Database High Availability](high-availability-sla.md).
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
Last updated 08/28/2021
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-intro-sentence.md)]
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## What is a database backup? Database backups are an essential part of any business continuity and disaster recovery strategy, because they protect your data from corruption or deletion. These backups enable database restore to a point in time within the configured retention period. If your data protection rules require that your backups are available for an extended time (up to 10 years), you can configure [long-term retention](long-term-retention-overview.md) for both single and pooled databases.
azure-sql Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connectivity-architecture.md
This article explains architecture of various components that direct network tra
This article does *not* apply to **Azure SQL Managed Instance**. Refer to [Connectivity architecture for a managed instance](../managed-instance/connectivity-architecture-overview.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Connectivity architecture The following diagram provides a high-level overview of the connectivity architecture.
azure-sql Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/cost-management.md
This article describes how you plan for and manage costs for Azure SQL Database.
First, you use the Azure pricing calculator to add Azure resources, and review the estimated costs. After you've started using Azure SQL Database resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure SQL Database are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure SQL Database, you're billed for all Azure services and resources used in your Azure subscription, including any third-party services.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Prerequisites Cost analysis supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account.
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
This article summarizes the documentation changes associated with new features a
For Azure SQL Managed Instance, see [What's new](../managed-instance/doc-changes-updates-release-notes-whats-new.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
-- ## Preview
azure-sql Elastic Pool Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-overview.md
Last updated 06/23/2021
Azure SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single server and share a set number of resources at a set price. Elastic pools in Azure SQL Database enable SaaS developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## What are SQL elastic pools SaaS developers build applications on top of large scale data-tiers consisting of multiple databases. A common application pattern is to provision a single database for each customer. But different customers often have varying and unpredictable usage patterns, and it's difficult to predict the resource requirements of each individual database user. Traditionally, you had two options:
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
If you need more details about the differences, you can find them in the separat
- [Azure SQL Database vs. SQL Server differences](transact-sql-tsql-differences-sql-server.md) - [Azure SQL Managed Instance vs. SQL Server differences](../managed-instance/transact-sql-tsql-differences-sql-server.md)
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
-- ## Features of SQL Database and SQL Managed Instance The following table lists the major features of SQL Server and provides information about whether the feature is partially or fully supported in Azure SQL Database and Azure SQL Managed Instance, with a link to more information about the feature.
azure-sql Firewall Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/firewall-configure.md
When you create a new server in Azure SQL Database or Azure Synapse Analytics na
> Azure Synapse only supports server-level IP firewall rules. It doesn't support database-level IP firewall rules.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## How the firewall works Connection attempts from the internet and Azure must pass through the firewall before they reach your server or database, as the following diagram shows.
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-availability-sla.md
There are two high availability architectural models:
SQL Database and SQL Managed Instance both run on the latest stable version of the SQL Server database engine and Windows operating system, and most users would not notice that upgrades are performed continuously.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Basic, Standard, and General Purpose service tier locally redundant availability The Basic, Standard, and General Purpose service tiers leverage the standard availability architecture for both serverless and provisioned compute. The following figure shows four different nodes with the separated compute and storage layers.
azure-sql Logins Create Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/logins-create-manage.md
In this article, you learn about:
> [!IMPORTANT] > Databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse are referred to collectively in the remainder of this article as databases, and the server is referring to the [server](logical-servers.md) that manages databases for Azure SQL Database and Azure Synapse.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Authentication and authorization [**Authentication**](security-overview.md#authentication) is the process of proving the user is who they claim to be. A user connects to a database using a user account.
azure-sql Monitor Tune Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/monitor-tune-overview.md
Azure SQL Database and Azure SQL Managed Instance provide advanced monitoring an
SQL Server has its own monitoring and diagnostic capabilities that SQL Database and SQL Managed Instance leverage, such as [query store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store) and [dynamic management views (DMVs)](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views). See [Monitoring using DMVs](monitoring-with-dmvs.md) for scripts to monitor for a variety of performance issues.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Monitoring and tuning capabilities in the Azure portal In the Azure portal, Azure SQL Database and Azure SQL Managed Instance provide monitoring of resource metrics. Azure SQL Database provides database advisors, and Query Performance Insight provides query tuning recommendations and query performance analysis. In the Azure portal, you can enable automatic tuning for [logical SQL servers](logical-servers.md) and their single and pooled databases.
azure-sql Purchasing Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/purchasing-models.md
Azure SQL Database and Azure SQL Managed Instance let you easily purchase a full
- [Database transaction unit (DTU)-based purchasing model](service-tiers-dtu.md). This purchasing model provides bundled compute and storage packages balanced for common workloads.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
-- There are two purchasing models: - [vCore-based purchasing model](service-tiers-vcore.md) is available for both [Azure SQL Database](sql-database-paas-overview.md) and [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md). The [Hyperscale service tier](service-tier-hyperscale.md) is available for single databases that are using the [vCore-based purchasing model](service-tiers-vcore.md).
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
The Hyperscale service tier in Azure SQL Database is the newest service tier in
> - For details on the General Purpose and Business Critical service tiers in the vCore-based purchasing model, see [General Purpose](service-tier-general-purpose.md) and [Business Critical](service-tier-business-critical.md) service tiers. For a comparison of the vCore-based purchasing model with the DTU-based purchasing model, see [Azure SQL Database purchasing models and resources](purchasing-models.md). > - The Hyperscale service tier is currently only available for Azure SQL Database, and not Azure SQL Managed Instance. -
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## What are the Hyperscale capabilities The Hyperscale service tier in Azure SQL Database provides the following additional capabilities:
These are the current limitations to the Hyperscale service tier as of GA. We'r
| Elastic Pools | Elastic Pools aren't currently supported with Hyperscale.| | Migration to Hyperscale is currently a one-way operation | Once a database is migrated to Hyperscale, it can't be migrated directly to a non-Hyperscale service tier. At present, the only way to migrate a database from Hyperscale to non-Hyperscale is to export/import using a bacpac file or other data movement technologies (Bulk Copy, Azure Data Factory, Azure Databricks, SSIS, etc.) Bacpac export/import from Azure portal, from PowerShell using [New-AzSqlDatabaseExport](/powershell/module/az.sql/new-azsqldatabaseexport) or [New-AzSqlDatabaseImport](/powershell/module/az.sql/new-azsqldatabaseimport), from Azure CLI using [az sql db export](/cli/azure/sql/db#az_sql_db_export) and [az sql db import](/cli/azure/sql/db#az_sql_db_import), and from [REST API](/rest/api/sql/) is not supported. Bacpac import/export for smaller Hyperscale databases (up to 200 GB) is supported using SSMS and [SqlPackage](/sql/tools/sqlpackage) version 18.4 and later. For larger databases, bacpac export/import may take a long time, and may fail for various reasons.| | Migration of databases with In-Memory OLTP objects | Hyperscale supports a subset of In-Memory OLTP objects, including memory-optimized table types, table variables, and natively compiled modules. However, when any kind of In-Memory OLTP objects are present in the database being migrated, migration from Premium and Business Critical service tiers to Hyperscale is not supported. To migrate such a database to Hyperscale, all In-Memory OLTP objects and their dependencies must be dropped. After the database is migrated, these objects can be recreated. Durable and non-durable memory-optimized tables are not currently supported in Hyperscale, and must be changed to disk tables.|
-| Geo-replication | [Geo-replication](active-geo-replication-overview.md) on Hyperscale is now in public preview. |
+| Geo-replication | [Geo-replication](active-geo-replication-overview.md) and [auto-failover groups](auto-failover-group-overview.md) on Hyperscale are now in public preview. |
| Intelligent Database Features | With the exception of the "Force Plan" option, all other Automatic Tuning options aren't yet supported on Hyperscale: options may appear to be enabled, but there won't be any recommendations or actions made. | | Query Performance Insights | Query Performance Insights is currently not supported for Hyperscale databases. | | Shrink Database | DBCC SHRINKDATABASE or DBCC SHRINKFILE isn't currently supported for Hyperscale databases. |
azure-sql Service Tiers Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-vcore.md
The virtual core (vCore) purchasing model used by Azure SQL Database and Azure S
For more information on choosing between the vCore and DTU purchase models, see [Choose between the vCore and DTU purchasing models](purchasing-models.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Service tiers The following articles provide specific information on the vCore purchase model in each product.
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-create-quickstart.md
Last updated 12/09/2021
In this quickstart, you create a [single database](single-database-overview.md) in Azure SQL Database using either the Azure portal, a PowerShell script, or an Azure CLI script. You then query the database using **Query editor** in the Azure portal.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Prerequisites - An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
azure-sql Single Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-overview.md
The single database resource type creates a database in Azure SQL Database with
Single database is a deployment model for Azure SQL Database. The other is [elastic pools](elastic-pool-overview.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
-- ## Dynamic scalability You can build your first app on a small, single database at low cost in the serverless compute tier or a small compute size in the provisioned compute tier. You change the [compute or service tier](single-database-scale.md) manually or programmatically at any time to meet the needs of your solution. You can adjust performance without downtime to your app or to your customers. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements and enables you to only pay for the resources that you need when you need them.
azure-sql Sql Database Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-paas-overview.md
Azure SQL Database is based on the latest stable version of the [Microsoft SQL S
SQL Database enables you to easily define and scale performance within two different purchasing models: a [vCore-based purchasing model](service-tiers-vcore.md) and a [DTU-based purchasing model](service-tiers-dtu.md). SQL Database is a fully managed service that has built-in high availability, backups, and other common maintenance operations. Microsoft handles all patching and updating of the SQL and operating system code. You don't have to manage the underlying infrastructure.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- If you're new to Azure SQL Database, check out the *Azure SQL Database Overview* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner): > [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/Azure-SQL-Database-Overview-7-of-61/player]
azure-sql Troubleshoot Common Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-common-errors-issues.md
The Azure infrastructure has the ability to dynamically reconfigure servers when
| 49919 |16 |Cannot process create or update request. Too many create or update operations in progress for subscription "%ld".<br/><br/>The service is busy processing multiple create or update requests for your subscription or server. Requests are currently blocked for resource optimization. Query [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) for pending operations. Wait until pending create or update requests are complete or delete one of your pending requests and retry your request later. For more information, see: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md). | | 49920 |16 |Cannot process request. Too many operations in progress for subscription "%ld".<br/><br/>The service is busy processing multiple requests for this subscription. Requests are currently blocked for resource optimization. Query [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) for operation status. Wait until pending requests are complete or delete one of your pending requests and retry your request later. 
For more information, see: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md). | | 4221 |16 |Login to read-secondary failed due to long wait on 'HADR_DATABASE_WAIT_FOR_TRANSITION_TO_VERSIONING'. The replica is not available for login because row versions are missing for transactions that were in-flight when the replica was recycled. The issue can be resolved by rolling back or committing the active transactions on the primary replica. Occurrences of this condition can be minimized by avoiding long write transactions on the primary. |
+| 615 | 21 | Could not find database ID %d, name '%.&#x2a;ls'. Error Code 615. <br/> This means the in-memory cache is not in sync with the SQL Server instance, and lookups are retrieving a stale database ID. <br/> <br/>SQL logins use an in-memory cache to get the database name-to-ID mapping. The cache should be in sync with the backend database and updated whenever a database is attached to or detached from the SQL Server instance. <br/>You receive this error when the detach workflow fails to clean up the in-memory cache in time, and subsequent lookups to the database point to a stale database ID. <br/><br/>Try reconnecting to SQL Database until the resource is available and the connection is established again. For more information, see [Transient errors](troubleshoot-common-connectivity-issues.md#transient-errors-transient-faults).|
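A retry loop with exponential backoff is the standard response to transient errors such as 615 and 4221 above; a minimal Python sketch illustrates the pattern (the `connect` callable and the exception's `number` attribute are illustrative assumptions, not any specific driver's API):

```python
import random
import time

# Error numbers from the tables above that indicate a transient condition.
TRANSIENT_ERRORS = {615, 4221, 49919, 49920}

def connect_with_retry(connect, max_attempts=5, base_delay=1.0):
    """Call `connect` until it succeeds, backing off on transient errors.

    `connect` is a hypothetical callable standing in for opening a SQL
    Database connection; on failure it raises an exception carrying the
    SQL error number in a `number` attribute (a sketch-only convention).
    """
    for attempt in range(max_attempts):
        try:
            return connect()
        except Exception as exc:
            number = getattr(exc, "number", None)
            if number not in TRANSIENT_ERRORS or attempt == max_attempts - 1:
                raise  # permanent error, or retries exhausted
            # Exponential backoff with jitter to spread out reconnect attempts.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

Real drivers surface the error number in driver-specific ways, so treat the `getattr` extraction as a placeholder to adapt.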
### Steps to resolve transient connectivity issues
azure-sql Connectivity Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/connectivity-architecture-overview.md
SQL Managed Instance is placed inside the Azure virtual network and the subnet t
- The ability to connect SQL Managed Instance to a linked server or another on-premises data store. - The ability to connect SQL Managed Instance to Azure resources.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Communication overview The following diagram shows entities that connect to SQL Managed Instance. It also shows the resources that need to communicate with a managed instance. The communication process at the bottom of the diagram represents customer applications and tools that connect to SQL Managed Instance as data sources.
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 11/30/2021 Last updated : 01/05/2022 # What's new in Azure SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqlmi.md)]
This article summarizes the documentation changes associated with new features a
For Azure SQL Database, see [What's new](../database/doc-changes-updates-release-notes-whats-new.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
-- ## Preview The following table lists the features of Azure SQL Managed Instance that are currently in preview:
The following changes were added to SQL Managed Instance and the documentation i
| **TDE-encrypted backup performance improvements** | It's now possible to set the point-in-time restore (PITR) backup retention period, and automated compression of backups encrypted with transparent data encryption (TDE) are now 30 percent more efficient in consuming backup storage space, saving costs for the end user. See [Change PITR](../database/automated-backups-overview.md?tabs=managed-instance#change-the-short-term-retention-policy) to learn more. | | **Azure AD authentication improvements** | Automate user creation using Azure AD applications and create individual Azure AD guest users (preview). To learn more, see [Directory readers in Azure AD](../database/authentication-aad-directory-readers-role.md)| | **Global VNet peering support** | Global virtual network peering support has been added to SQL Managed Instance, improving the geo-replication experience. See [geo-replication between managed instances](../database/auto-failover-group-overview.md?tabs=azure-powershell#enabling-geo-replication-between-managed-instances-and-their-vnets). |
-| **Hosting SSRS catalog databases** | SQL Managed Instance can now host catalog databases for all supported versions of SQL Server Reporting Services (SSRS). |
+| **Hosting SSRS catalog databases** | SQL Managed Instance can now host catalog databases of SQL Server Reporting Services (SSRS) for versions 2017 and newer. |
| **Major performance improvements** | Introducing improvements to SQL Managed Instance performance, including improved transaction log write throughput, improved data and log IOPS for business critical instances, and improved TempDB performance. See the [improved performance](https://techcommunity.microsoft.com/t5/azure-sql/announcing-major-performance-improvements-for-azure-sql-database/ba-p/1701256) tech community blog to learn more. | **Enhanced management experience** | Using the new [OPERATIONS API](/rest/api/sql/2021-02-01-preview/managed-instance-operations), it's now possible to check the progress of long-running instance operations. To learn more, see [Management operations](management-operations-overview.md?tabs=azure-portal). | **Machine learning support** | Machine Learning Services with support for R and Python languages now include preview support on Azure SQL Managed Instance (Preview). To learn more, see [Machine learning with SQL Managed Instance](machine-learning-services-overview.md). |
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
Last updated 01/14/2021
Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the broadest SQL Server database engine compatibility with all the benefits of a fully managed and evergreen platform as a service. SQL Managed Instance has near 100% compatibility with the latest SQL Server (Enterprise Edition) database engine, providing a native [virtual network (VNet)](../../virtual-network/virtual-networks-overview.md) implementation that addresses common security concerns, and a [business model](https://azure.microsoft.com/pricing/details/sql-database/) favorable for existing SQL Server customers. SQL Managed Instance allows existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal application and database changes. At the same time, SQL Managed Instance preserves all PaaS capabilities (automatic patching and version updates, [automated backups](../database/automated-backups-overview.md), [high availability](../database/high-availability-sla.md)) that drastically reduce management overhead and TCO.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- If you're new to Azure SQL Managed Instance, check out the *Azure SQL Managed Instance* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner): > [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/Azure-SQL-Managed-Instance-Overview-6-of-61/player]
azure-video-analyzer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md
Title: What is Azure Video Analyzer for Media (formerly Video Indexer)?- description: This article gives an overview of the Azure Video Analyzer for Media (formerly Video Indexer) service. Last updated 12/10/2021
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-build-chat.md
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
```bash dotnet add package Microsoft.Extensions.Azure
- dotnet user-secrets init
- dotnet user-secrets set Azure:WebPubSub:ConnectionString "<connection-string>"
``` 2. DI the service client inside `ConfigureServices` and don't forget to replace `<connection_string>` with the one of your services.
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
{ services.AddAzureClients(builder => {
- builder.AddWebPubSubServiceClient(Configuration["Azure:WebPubSub:ConnectionString"], "chat");
+ builder.AddWebPubSubServiceClient("<connection_string>", "chat");
}); } ```
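The `<connection_string>` placeholder above typically has the `Endpoint=...;AccessKey=...;Version=...` shape; a minimal Python sketch (all values are placeholders, not real credentials) shows how the segments break apart, which helps when moving the pieces into a secret store:

```python
def parse_connection_string(conn_str):
    """Split an `Endpoint=...;AccessKey=...`-style connection string.

    Uses `partition` so values that themselves contain '=' (for example,
    base64-padded access keys) survive intact.
    """
    parts = {}
    for segment in conn_str.strip(";").split(";"):
        key, _, value = segment.partition("=")
        parts[key] = value
    return parts

# Placeholder values for illustration only.
example = "Endpoint=https://contoso.webpubsub.azure.com;AccessKey=abc123=;Version=1.0"
parsed = parse_connection_string(example)
```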
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
await context.Response.WriteAsync("missing user id"); return; }
- var serviceClient = context.RequestServices.GetRequiredService<WebPubSubServiceClient>();
+ var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
await context.Response.WriteAsync(serviceClient.GetClientAccessUri(userId: id).AbsoluteUri); }); });
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
// abuse protection endpoints.Map("/eventhandler/{*path}", async context => {
- var serviceClient = context.RequestServices.GetRequiredService<WebPubSubServiceClient>();
+ var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
if (context.Request.Method == "OPTIONS") { if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
```csharp app.UseEndpoints(endpoints => {
+ var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
// abuse protection endpoints.Map("/eventhandler/{*path}", async context => {
azure-web-pubsub Tutorial Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-subprotocol.md
Now let's create a web application using the `json.webpubsub.azure.v1` subprotoc
# [Java](#tab/java) Create an HTML page with the below content and save it to */src/main/resources/public/index.html*: ```html <html>
backup Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-overview.md
Title: What is Azure Backup? description: Provides an overview of the Azure Backup service, and how it contributes to your business continuity and disaster recovery (BCDR) strategy. Previously updated : 07/28/2021 Last updated : 01/04/2022 # What is the Azure Backup service?
Azure Backup delivers these key benefits:
- [Geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage) is the default and recommended replication option. GRS replicates your data to a secondary region (hundreds of miles away from the primary location of the source data). GRS costs more than LRS, but GRS provides a higher level of durability for your data, even if there's a regional outage. - [Zone-redundant storage (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) replicates your data in [availability zones](../availability-zones/az-overview.md#availability-zones), guaranteeing data residency and resiliency in the same region. ZRS has no downtime. So your critical workloads that require [data residency](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/), and must have no downtime, can be backed up in ZRS.
+## How does Azure Backup protect against ransomware?
+
+Azure Backup helps protect your critical business systems and backup data against ransomware attacks by implementing preventive measures and providing tools that guard against each step attackers take to infiltrate your systems. It secures your backup environment, both when your data is in transit and at rest. [Learn more](/azure/security/fundamentals/backup-plan-to-protect-against-ransomware).
+ ## Next steps - [Review](backup-architecture.md) the architecture and components for different backup scenarios.
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-rbac-rs-vault.md
The following table captures the Backup management actions and corresponding min
| Create Recovery Services vault | Backup Contributor | Resource group containing the vault | | | Enable backup of Azure VMs | Backup Operator | Resource group containing the vault | | | | Virtual Machine Contributor | VM resource | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read |
+| Enable backup of Azure VMs (from VM blade) | Backup Operator | Resource group containing the vault | |
+| | Backup Operator | Resource group containing the virtual machine | |
+| | Virtual Machine Contributor | VM resource | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/instanceView/read |
| On-demand backup of VM | Backup Operator | Recovery Services vault | | | Restore VM | Backup Operator | Recovery Services vault | | | | Contributor | Resource group in which VM will be deployed | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.DomainRegistration/domains/write, Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action |
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
The following table describes the features of Recovery Services vaults:
**Move vaults** | You can [move vaults](./backup-azure-move-recovery-services-vault.md) across subscriptions or between resource groups in the same subscription. However, moving vaults across regions isn't supported. **Move data between vaults** | Moving backed-up data between vaults isn't supported. **Modify vault storage type** | You can modify the storage replication type (either geo-redundant storage or locally redundant storage) for a vault before backups are stored. After backups begin in the vault, the replication type can't be modified.
-**Zone-redundant storage (ZRS)** | Supported in preview in UK South, South East Asia, Australia East, North Europe, Central US, East US 2, Brazil South, South Central US, Korea Central, Norway East, France Central, West Europe, East Asia, Sweden Central, Canada Central and Japan East.
+**Zone-redundant storage (ZRS)** | Supported in preview in UK South, South East Asia, Australia East, North Europe, Central US, East US 2, Brazil South, South Central US, Korea Central, Norway East, France Central, West Europe, East Asia, Sweden Central, Canada Central, Japan East and West US 3.
**Private Endpoints** | See [this section](./private-endpoints.md#before-you-start) for requirements to create private endpoints for a recovery service vault. ## On-premises backup support
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sql-support-matrix.md
Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 10/22/2021 Last updated : 01/04/2022 +++ # Support matrix for SQL Server Backup in Azure VMs
You can use Azure Backup to back up SQL Server databases in Azure VMs hosted on
| **Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server manually installed) VMs are supported. **Supported regions** | Azure Backup for SQL Server databases is available in all regions, except France South (FRS), UK North (UKN), UK South 2 (UKS2), UG IOWA (UGI), and Germany (Black Forest).
-**Supported operating systems** | Windows Server 2019, Windows Server 2016, Windows Server 2012, Windows Server 2008 R2 SP1 <br/><br/> Linux isn't currently supported.
+**Supported operating systems** | Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012, Windows Server 2008 R2 SP1 <br/><br/> Linux isn't currently supported.
**Supported SQL Server versions** | SQL Server 2019, SQL Server 2017 as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202017), SQL Server 2016 and SPs as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202016%20service%20pack), SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008 <br/><br/> Enterprise, Standard, Web, Developer, Express.<br><br>Express Local DB versions aren't supported. **Supported .NET versions** | .NET Framework 4.5.2 or later installed on the VM **Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server that is manually installed) VMs are supported. Standalone instances and Always On [availability groups](backup-sql-server-on-availability-groups.md) are supported.
_*The database size limit depends on the data transfer rate that we support and
* TDE - enabled database backup is supported. To restore a TDE-encrypted database to another SQL Server, you need to first [restore the certificate to the destination server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server). The backup compression for TDE-enabled databases for SQL Server 2016 and newer versions is available, but at lower transfer size as explained [here](https://techcommunity.microsoft.com/t5/sql-server/backup-compression-for-tde-enabled-databases-important-fixes-in/ba-p/385593). * The backup and restore operations for mirror databases and database snapshots aren't supported. * SQL Server **Failover Cluster Instance (FCI)** isn't supported.
+* Azure Backup supports backing up only database files with the following extensions: _.ad_, _.cs_, and _.master_. Database files with other extensions, such as _.dll_, aren't backed up because the IIS server performs [file extension request filtering](/iis/configuration/system.webserver/security/requestfiltering/fileextensions).
## Backup throughput performance
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/connect-native-client-windows.md
Currently, this feature has the following limitations:
Before you begin, verify that you have met the following criteria:
-* The latest version of the CLI commands (version 2.30 or later) is installed. For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli).
+* The latest version of the CLI commands (version 2.32 or later) is installed. For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli).
* An Azure virtual network. * A virtual machine in the virtual network. * If you plan to sign in to your virtual machine using your Azure AD credentials, make sure your virtual machine is set up using one of the following methods:
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/quickstart-host-portal.md
# Quickstart: Configure Azure Bastion from VM settings
-This quickstart article shows you how to configure Azure Bastion based on your VM settings in the Azure portal, and then connect to a VM via private IP address. Once the service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network. The VM doesn't need a public IP address, client software, agent, or a special configuration. If you don't need the public IP address on your VM for anything else, you can remove it. You then connect to your VM through the portal using the private IP address. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+This quickstart article shows you how to configure Azure Bastion based on your VM settings, and then connect to the VM via private IP address using the Azure portal. Once the Bastion service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network.
+
+When connecting via Azure Bastion, your VM doesn't need a public IP address, client software, agent, or a special configuration. Additionally, if you don't need the public IP address on your VM for anything else, you can remove it and connect to your VM through the portal using the private IP address. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
## <a name="prereq"></a>Prerequisites
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/chaos-studio/chaos-studio-fault-library.md
description: Understand the available actions you can use with Chaos Studio incl
Previously updated : 11/10/2021 Last updated : 01/05/2022
Known issues on Linux:
"value": "{\"action\":\"delay\",\"mode\":\"one\",\"selector\":{\"namespaces\":[\"default\"],\"labelSelectors\":{\"app\":\"web-show\"}},\"delay\":{\"latency\":\"10ms\",\"correlation\":\"100\",\"jitter\":\"0ms\"}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"action\":\"pod-failure\",\"mode\":\"one\",\"duration\":\"30s\",\"selector\":{\"labelSelectors\":{\"app.kubernetes.io\/component\":\"tikv\"}}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"app1\"}},\"stressors\":{\"memory\":{\"workers\":4,\"size\":\"256MB\"}}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"action\":\"latency\",\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"etcd\"}},\"volumePath\":\"\/var\/run\/etcd\",\"path\":\"\/var\/run\/etcd\/**\/*\",\"delay\":\"100ms\",\"percent\":50,\"duration\":\"400s\"}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"app1\"}},\"timeOffset\":\"-10m100ns\"}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"mode\":\"one\",\"selector\":{\"namespaces\":[\"chaos-mount\"]},\"failKernRequest\":{\"callchain\":[{\"funcname\":\"__x64_sys_mount\"}],\"failtype\":0}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"mode\":\"all\",\"selector\":{\"labelSelectors\":{\"app\":\"nginx\"}},\"target\":\"Request\",\"port\":80,\"method\":\"GET\",\"path\":\"\/api\",\"abort\":true,\"duration\":\"5m\",\"scheduler\":{\"cron\":\"@every 10m\"}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"action\":\"random\",\"mode\":\"all\",\"patterns\":[\"google.com\",\"chaos-mesh.*\",\"github.?om\"],\"selector\":{\"namespaces\":[\"busybox\"]}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
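Each `value` field in these experiment actions is a Chaos Mesh spec serialized as an escaped JSON string; a minimal Python sketch (reusing the delay fault from the first example above, with `jsonSpec` as the assumed parameter key) shows how `json.dumps` produces that escaping:

```python
import json

# The Chaos Mesh fault spec as a plain dictionary (delay fault from above).
spec = {
    "action": "delay",
    "mode": "one",
    "selector": {"namespaces": ["default"], "labelSelectors": {"app": "web-show"}},
    "delay": {"latency": "10ms", "correlation": "100", "jitter": "0ms"},
}

# Serializing once yields the spec as a compact JSON string ...
value = json.dumps(spec, separators=(",", ":"))

# ... and embedding that string in the experiment parameters escapes the
# inner quotes, matching the backslash-escaped `value` fields shown above.
parameters = json.dumps([{"key": "jsonSpec", "value": value}])
```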
cognitive-services Howtocallvisionapi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/HowToCallVisionAPI.md
Previously updated : 09/09/2019 Last updated : 01/05/2022
This article demonstrates how to call the Image Analysis API to return information about an image's visual features.
-This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">create a Computer Vision resource </a> and obtained a subscription key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
+This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a subscription key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
## Submit data to the service
The [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/compu
|URL parameter | Value | Description| |||--| |`visualFeatures`|`Adult` | detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (aka racy content) is also detected.|
-||`Brands` | detects various brands within an image, including the approximate location. The Brands argument is only available in English.|
-||`Categories` | categorizes image content according to a taxonomy defined in documentation. This is the default value of `visualFeatures`.|
-||`Color` | determines the accent color, dominant color, and whether an image is black&white.|
-||`Description` | describes the image content with a complete sentence in supported languages.|
-||`Faces` | detects if faces are present. If present, generate coordinates, gender and age.|
-||`ImageType` | detects if image is clip art or a line drawing.|
-||`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
-||`Tags` | tags the image with a detailed list of words related to the image content.|
+|`visualFeatures`|`Brands` | detects various brands within an image, including the approximate location. The Brands argument is only available in English.|
+|`visualFeatures`|`Categories` | categorizes image content according to a taxonomy defined in documentation. This is the default value of `visualFeatures`.|
+|`visualFeatures`|`Color` | determines the accent color, dominant color, and whether an image is black&white.|
+|`visualFeatures`|`Description` | describes the image content with a complete sentence in supported languages.|
+|`visualFeatures`|`Faces` | detects if faces are present. If present, generate coordinates, gender and age.|
+|`visualFeatures`|`ImageType` | detects if image is clip art or a line drawing.|
+|`visualFeatures`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
+|`visualFeatures`|`Tags` | tags the image with a detailed list of words related to the image content.|
|`details`| `Celebrities` | identifies celebrities if detected in the image.|
-||`Landmarks` |identifies landmarks if detected in the image.|
+|`details`|`Landmarks` |identifies landmarks if detected in the image.|
A populated URL might look like the following:
You can also specify the language of the returned data. The following URL query
|URL parameter | Value | Description| |||--| |`language`|`en` | English|
-||`es` | Spanish|
-||`ja` | Japanese|
-||`pt` | Portuguese|
-||`zh` | Simplified Chinese|
+|`language`|`es` | Spanish|
+|`language`|`ja` | Japanese|
+|`language`|`pt` | Portuguese|
+|`language`|`zh` | Simplified Chinese|
A populated URL might look like the following:
description.captions[].confidence | `number` | The confidence score for th
See the following list of possible errors and their causes: * 400
- * InvalidImageUrl - Image URL is badly formatted or not accessible.
- * InvalidImageFormat - Input data is not a valid image.
- * InvalidImageSize - Input image is too large.
- * NotSupportedVisualFeature - Specified feature type is not valid.
- * NotSupportedImage - Unsupported image, e.g. child pornography.
- * InvalidDetails - Unsupported `detail` parameter value.
- * NotSupportedLanguage - The requested operation is not supported in the language specified.
- * BadArgument - Additional details are provided in the error message.
+ * `InvalidImageUrl` - Image URL is badly formatted or not accessible.
+ * `InvalidImageFormat` - Input data is not a valid image.
+ * `InvalidImageSize` - Input image is too large.
+ * `NotSupportedVisualFeature` - Specified feature type is not valid.
+ * `NotSupportedImage` - Unsupported image, for example child pornography.
+ * `InvalidDetails` - Unsupported `detail` parameter value.
+ * `NotSupportedLanguage` - The requested operation is not supported in the language specified.
+ * `BadArgument` - Additional details are provided in the error message.
* 415 - Unsupported media type error. The Content-Type is not in the allowed types:
- * For an image URL: Content-Type should be application/json
- * For a binary image data: Content-Type should be application/octet-stream or multipart/form-data
+ * For an image URL, Content-Type should be `application/json`
+ * For a binary image data, Content-Type should be `application/octet-stream` or `multipart/form-data`
* 500
- * FailedToProcess
- * Timeout - Image processing timed out.
- * InternalServerError
+ * `FailedToProcess`
+ * `Timeout` - Image processing timed out.
+ * `InternalServerError`
> [!TIP]
> While working with Computer Vision, you might encounter transient failures caused by [rate limits](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see [Retry pattern](/azure/architecture/patterns/retry) in the Cloud Design Patterns guide, and the related [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker).
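The Retry pattern referenced above can be sketched as follows. This is a minimal illustration, not the service's prescribed logic — the status codes treated as transient and the backoff delays are assumptions:

```python
import time

# Illustrative set of transient HTTP status codes (rate limit, server errors).
TRANSIENT_STATUSES = {429, 500, 503}

def call_with_retry(request_fn, max_attempts=4, base_delay=1.0):
    """Invoke request_fn(), retrying with exponential backoff on transient statuses.

    request_fn is any zero-argument callable returning (status_code, body).
    """
    for attempt in range(max_attempts):
        status, body = request_fn()
        if status not in TRANSIENT_STATUSES:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body
```

In production you would typically honor a `Retry-After` header when present and cap the total retry budget, as described in the Retry pattern guidance.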
cognitive-services Concept Brand Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-brand-detection.md
Previously updated : 08/08/2019 Last updated : 01/05/2022
Brand detection is a specialized mode of [object detection](concept-object-detection.md) that uses a database of thousands of global logos to identify commercial brands in images or video. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement.
-The Computer Vision service detects whether there are brand logos in a given image; if so, it returns the brand name, a confidence score, and the coordinates of a bounding box around the logo.
+The Computer Vision service detects whether there are brand logos in a given image; if there are, it returns the brand name, a confidence score, and the coordinates of a bounding box around the logo.
-The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the brand you're looking for is not detected by the Computer Vision service, you may be better served creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.
+The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the brand you're looking for is not detected by the Computer Vision service, you could also try creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.
## Brand detection example
cognitive-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-describing-images.md
Previously updated : 02/11/2019 Last updated : 01/05/2022

# Describe images with human-readable language
-Computer Vision can analyze an image and generate a human-readable sentence that describes its contents. The algorithm actually returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
+Computer Vision can analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
+
+At this time, English is the only supported language for image description.
## Image description example
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
Previously updated : 04/17/2019 Last updated : 01/05/2022

# Face detection with Computer Vision
-Computer Vision can detect human faces within an image and generate the age, gender, and rectangle for each detected face.
+Computer Vision can detect human faces within an image and generate rectangle coordinates for each detected face.
> [!NOTE]
-> This feature is also offered by the Azure [Face](../face/index.yml) service. See this alternative for more detailed face analysis, including face identification and pose detection.
+> This feature is also offered by the Azure [Face](../face/index.yml) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
## Face detection examples
cognitive-services Concept Tagging Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-tagging-images.md
Previously updated : 02/08/2019 Last updated : 01/05/2022
-# Applying content tags to images
+# Apply content tags to images
-Computer Vision returns tags based on thousands of recognizable objects, living beings, scenery, and actions. When tags are ambiguous or not common knowledge, the API response provides 'hints' to clarify the meaning of the tag in context of a known setting. Tags are not organized as a taxonomy and no inheritance hierarchies exist. A collection of content tags forms the foundation for an image 'description' displayed as human readable language formatted in complete sentences. Note, that at this point English is the only supported language for image description.
+Computer Vision can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tags are not organized as a taxonomy and do not have inheritance hierarchies. A collection of content tags forms the foundation for an image [description](./concept-describing-images.md) displayed as human readable language formatted in complete sentences. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
-After uploading an image or specifying an image URL, Computer Vision algorithms output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets etc.
+After you upload an image or specify an image URL, the Computer Vision algorithm can output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
## Image tagging example
cognitive-services Export Model Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/export-model-python.md
Previously updated : 11/23/2020 Last updated : 01/05/2022 ms.devlang: python
-# Tutorial: Run TensorFlow model in Python
+# Tutorial: Run a TensorFlow model in Python
After you have [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this quickstart will show you how to use this model locally to classify images.
After you have [exported your TensorFlow model](./export-your-model.md) from the
## Prerequisites
-To use the tutorial, you need to do the following:
+To use the tutorial, first do the following:
- Install either Python 2.7+ or Python 3.6+.
- Install pip.
pip install opencv-python
## Load your model and tags
-The downloaded .zip file contains a _model.pb_ and a _labels.txt_ file. These files represent the trained model and the classification labels. The first step is to load the model into your project. Add the following code to a new Python script.
+The downloaded _.zip_ file contains a _model.pb_ and a _labels.txt_ file. These files represent the trained model and the classification labels. The first step is to load the model into your project. Add the following code to a new Python script.
```Python import tensorflow as tf
with open(labels_filename, 'rt') as lf:
## Prepare an image for prediction
-There are a few steps you need to take to prepare the image for prediction. These steps mimic the image manipulation performed during training:
+There are a few steps you need to take to prepare the image for prediction. These steps mimic the image manipulation performed during training.
### Open the file and create an image in the BGR color space
def update_orientation(image):
## Classify an image
-Once the image is prepared as a tensor, we can send it through the model for a prediction:
+Once the image is prepared as a tensor, we can send it through the model for a prediction.
```Python
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites - A set of images with which to train your detector model. You can use the set of [sample images](https://github.com/Azure-Samples/cognitive-services-python-sdk-samples/tree/master/samples/vision/images) on GitHub. Or, you can choose your own images using the tips below.-- A [supported web browser](overview.md#supported-browsers-for-custom-vision-website)
+- A [supported web browser](overview.md#supported-browsers-for-custom-vision-web-portal)
## Create Custom Vision resources
cognitive-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites - A set of images with which to train your classifier. See below for tips on choosing images.-- A [supported web browser](overview.md#supported-browsers-for-custom-vision-website)
+- A [supported web browser](overview.md#supported-browsers-for-custom-vision-web-portal)
## Create Custom Vision resources
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/overview.md
Additionally, you can choose from several variations of the Custom Vision algori
The Custom Vision Service is available as a set of native SDKs as well as through a web-based interface on the [Custom Vision website](https://customvision.ai/). You can create, test, and train a model through either interface or use both together.
-### Supported browsers for Custom Vision website
+### Supported browsers for Custom Vision web portal
The Custom Vision web interface can be used by the following web browsers:

- Microsoft Edge (latest version)
cognitive-services Select Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/select-domain.md
Previously updated : 03/06/2020 Last updated : 01/05/2022

# Select a domain for a Custom Vision project
-From the settings tab of your Custom Vision project, you can select a domain for your project. Choose the domain that is closest to your scenario. If you're accessing Custom Vision through a client library or REST API, you'll need to specify a domain ID when creating the project. You can get a list of domain IDs with [Get Domains](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeab), or use the table below.
+From the **settings** tab of your project on the Custom Vision web portal, you can select a model domain for your project. You'll want to choose the domain that's closest to your scenario. If you're accessing Custom Vision through a client library or REST API, you'll need to specify a domain ID when creating the project. You can get a list of domain IDs with [Get Domains](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeab), or use the table below.
## Image Classification
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-gpu.md
When deploying GPU resources, set CPU and memory resources appropriate for the w
* **CUDA drivers** - Container instances with GPU resources are pre-provisioned with NVIDIA CUDA drivers and container runtimes, so you can use container images developed for CUDA workloads.
- We support only CUDA 9.0 at this stage. For example, you can use the following base images for your Dockerfile:
- * [nvidia/cuda:9.0-base-ubuntu16.04](https://hub.docker.com/r/nvidia/cuda/)
- * [tensorflow/tensorflow: 1.12.0-gpu-py3](https://hub.docker.com/r/tensorflow/tensorflow)
+ We support up through CUDA 11 at this stage. For example, you can use the following base images for your Dockerfile:
+ * [nvidia/cuda:11.4.2-base-ubuntu20.04](https://hub.docker.com/r/nvidia/cuda/)
+ * [tensorflow/tensorflow:devel-gpu](https://hub.docker.com/r/tensorflow/tensorflow)
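A minimal Dockerfile sketch using the first base image listed above — the installed packages and the `app.py` entry point are hypothetical, for illustration only:

```dockerfile
# CUDA 11 base image pre-matched to the drivers ACI provisions for GPU workloads.
FROM nvidia/cuda:11.4.2-base-ubuntu20.04

# Illustrative dependencies; replace with what your workload actually needs.
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Hypothetical application entry point.
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```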
> [!NOTE]
> To improve reliability when using a public container image from Docker Hub, import and manage the image in a private Azure container registry, and update your Dockerfile to use your privately managed base image. [Learn more about working with public images](../container-registry/buffer-gate-public-content.md).
One way to add GPU resources is to deploy a container group by using a [YAML fil
```yaml
additional_properties: {}
-apiVersion: '2019-12-01'
+apiVersion: '2021-09-01'
name: gpucontainergroup
properties:
  containers:
Another way to deploy a container group with GPU resources is by using a [Resour
{
  "name": "[parameters('containerGroupName')]",
  "type": "Microsoft.ContainerInstance/containerGroups",
- "apiVersion": "2019-12-01",
+ "apiVersion": "2021-09-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "containers": [
cosmos-db Troubleshoot Nohostavailable Exception https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/troubleshoot-nohostavailable-exception.md
Title: Troubleshooting NoHostAvailableException and NoNodeAvailableException
-description: This article discusses the different possible reasons for having a NoHostException and ways to handle it.
+ Title: Troubleshoot NoHostAvailableException and NoNodeAvailableException
+description: This article discusses the various reasons for having a NoHostException and ways to handle it.
ms.devlang: csharp, java
-# Troubleshooting NoHostAvailableException and NoNodeAvailableException
-The NoHostAvailableException is a top-level wrapper exception with many possible causes and inner exceptions, many of which can be client-related. This exception tends to occur if there are some issues with cluster, connection settings or one or more Cassandra nodes is unavailable. Here we explore possible reasons for this exception along with details specific to the client driver being used.
+# Troubleshoot NoHostAvailableException and NoNodeAvailableException
+NoHostAvailableException is a top-level wrapper exception with many possible causes and inner exceptions, many of which can be client-related. This exception tends to occur if there are some issues with the cluster or connection settings, or if one or more Cassandra nodes are unavailable.
-## Driver Settings
-One of the most common causes of a NoHostAvailableException is because of the default driver settings. We advised the following [settings](#code-sample).
+This article explores possible reasons for this exception, and it discusses specific details about the client driver that's being used.
-- The default value of the connections per host is 1, which is not recommended for CosmosDB, a minimum value of 10 is advised. While more aggregated RUs are provisioned, increase connection count. The general guideline is 10 connections per 200k RU.-- Use cosmos retry policy to handle intermittent throttling responses, please reference [cosmosdb extension library](https://github.com/Azure/azure-cosmos-cassandra-extensions)(https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1)-- For multi-region account, CosmosDB load-balancing policy in the extension should be used.-- Read request timeout should be set greater than 1 minute. We recommend 90 seconds.
+## Driver settings
+One of the most common causes of NoHostAvailableException is the default driver settings. We recommend that you use the [settings](#code-sample) listed at the end of this article. Here is some explanatory information:
-## Exception Messages
-If exception still persists after the recommended settings, review the exception messages below. Follow the recommendation, if your error log contains any of these messages.
+- The default value of the connections per host is 1, which we don't recommend for Azure Cosmos DB. We do recommend a minimum value of 10. Although more aggregated Request Units (RU) are provided, increase the connection count. The general guideline is 10 connections per 200,000 RU.
+- Use the Azure Cosmos DB retry policy to handle intermittent throttling responses. For more information, see the Azure Cosmos DB extension libraries:
+ - [Driver 3 extension library](https://github.com/Azure/azure-cosmos-cassandra-extensions)
+ - [Driver 4 extension library](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1)
+- For multi-region accounts, use the Azure Cosmos DB load-balancing policy in the extension.
+- The read request timeout should be set at greater than 1 minute. We recommend 90 seconds.
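The connection-count guideline above can be sketched numerically. This helper is hypothetical — it simply encodes the article's rule of thumb of 10 connections per 200,000 RU with a floor of 10:

```python
import math

def recommended_connections_per_host(provisioned_ru):
    """Connections-per-host suggestion: 10 per 200,000 RU, minimum 10."""
    return max(10, math.ceil(provisioned_ru / 200_000) * 10)

for ru in (100_000, 400_000, 1_000_000):
    print(ru, "RU ->", recommended_connections_per_host(ru), "connections per host")
```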
+
+## Exception messages
+If the exception persists after you've made the recommended changes, review the exception messages in the next three sections. If your error log contains any of these exception messages, follow the recommendation for that exception.
### BusyPoolException
-This client-side error indicates that the maximum number of request connections for a host has been reached. If unable to remove, request from the queue, you might see this error. If the connection per host has been set to minimum of 10, this could be caused by high server-side latency.
+This client-side error indicates that the maximum number of request connections for a host has been reached. If you're unable to remove the request from the queue, you might see this error. If the connections per host have been set to a minimum of 10, the exception could be caused by high server-side latency.
```
-Java driver v3 exception:
+Java driver v3 exception:
All host(s) tried for query failed (tried: :10350 (com.datastax.driver.core.exceptions.BusyPoolException: [:10350] Pool is busy (no available connection and the queue has reached its max size 256)))
All host(s) tried for query failed (tried: :10350 (com.datastax.driver.core.exceptions.BusyPoolException: [:10350] Pool is busy (no available connection and timed out after 5000 MILLISECONDS)))
```
C# driver 3:
All hosts tried for query failed (tried :10350: BusyPoolException 'All connections to host :10350 are busy, 2048 requests are in-flight on each 10 connection(s)')
```

#### Recommendation
-Instead of tuning the `max requests per connection`, we advise making sure the `connections per host` is set to a minimum of 10. See the [code sample section](#code-sample).
+Instead of tuning `max requests per connection`, make sure that `connections per host` is set to a minimum of 10. See the [code sample section](#code-sample).
### TooManyRequest(429)
-OverloadException is thrown when the request rate is too large. Which may be because of insufficient throughput being provisioned for the table and the RU budget being exceeded. Learn more about [large request](../sql/troubleshoot-request-rate-too-large.md#request-rate-is-large) and [server-side retry](prevent-rate-limiting-errors.md)
+OverloadException is thrown when the request rate is too great, which might happen when insufficient throughput is provisioned for the table and the RU budget is exceeded. For more information, see [large request](../sql/troubleshoot-request-rate-too-large.md#request-rate-is-large) and [server-side retry](prevent-rate-limiting-errors.md).
#### Recommendation
-We recommend using either of the following options:
-- If throttling is persistent, increase provisioned RU.-- If throttling is intermittent, use the CosmosRetryPolicy.-- If the extension library cannot be referenced [enable server side retry](prevent-rate-limiting-errors.md).
+Apply one of the following options:
+- If throttling is persistent, increase the provisioned RU.
+- If throttling is intermittent, use the Azure Cosmos DB retry policy.
+- If the extension library can't be referenced, [enable server-side retry](prevent-rate-limiting-errors.md).
### All hosts tried for query failed
-When the client is set to connect to a different region other than the primary contact point region, you will get below exception during the initial a few seconds upon start-up.
+When the client is set to connect to a region other than the primary contact point region, during the initial few seconds at startup, you'll get one of the following exception messages:
-Exception message with a Java driver 3: `Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)at cassandra.driver.core@3.10.2/com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:83)`
+- For Java driver 3: `Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)at cassandra.driver.core@3.10.2/com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:83)`
-Exception message with a Java driver 4: `No node was available to execute the query`
+- For Java driver 4: `No node was available to execute the query`
-Exception message with a C# driver 3: `System.ArgumentException: Datacenter West US does not match any of the nodes, available datacenters: West US 2`
+- For C# driver 3: `System.ArgumentException: Datacenter West US does not match any of the nodes, available datacenters: West US 2`
#### Recommendation
-We advise using the CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-cosmos-cassandra-extensions) and [Java driver 4](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1). This policy falls back to the ContactPoint of the primary write region where the specified local data is unavailable.
+Use CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-cosmos-cassandra-extensions) and [Java driver 4](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1). This policy falls back to the contact point of the primary write region where the specified local data is unavailable.
> [!NOTE]
-> Please reach out to Azure Cosmos DB support with details around - exception message, exception stacktrace, datastax driver log, universal time of failure, consistent or intermittent failures, failing keyspace and table, request type that failed, SDK version if none of the above recommendations help resolve your issue.
+> If the preceding recommendations don't help resolve your issue, contact Azure Cosmos DB support. Be sure to provide the following details: exception message, exception stacktrace, datastax driver log, universal time of failure, consistent or intermittent failures, failing keyspace and table, request type that failed, and SDK version.
-## Code Sample
+## Code sample
-#### Java Driver 3 Settings
+#### Java driver 3 settings
``` java
// socket options with default values
// https://docs.datastax.com/en/developer/java-driver/3.6/manual/socket_options/
We advise using the CosmosLoadBalancingPolicy in [Java driver 3](https://github.
    .build();
```
-#### Java Driver 4 Settings
+#### Java driver 4 settings
```java
// driver configurations
// https://docs.datastax.com/en/developer/java-driver/4.6/manual/core/configuration/
We advise using the CosmosLoadBalancingPolicy in [Java driver 3](https://github.
    .build();
```
-#### C# v3 Driver Settings
+#### C# v3 driver settings
```dotnetcli
PoolingOptions poolingOptions = PoolingOptions.Create()
    .SetCoreConnectionsPerHost(HostDistance.Local, 10) // default 2
We advise using the CosmosLoadBalancingPolicy in [Java driver 3](https://github.
```

## Next steps
-* [Server-side diagnostics](error-codes-solution.md) to understand different error codes and their meaning.
-* [Diagnose and troubleshoot](../sql/troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* To understand the various error codes and their meaning, see [Server-side diagnostics](error-codes-solution.md).
+* See [Diagnose and troubleshoot issues with the Azure Cosmos DB .NET SDK](../sql/troubleshoot-dot-net-sdk.md).
* Learn about performance guidelines for [.NET v3](../sql/performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](../sql/performance-tips.md).
-* [Diagnose and troubleshoot](../sql/troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java v4 SDK.
-* Learn about performance guidelines for [Java v4 SDK](../sql/performance-tips-java-sdk-v4-sql.md).
+* See [Troubleshoot issues with the Azure Cosmos DB Java SDK v4 with SQL API accounts](../sql/troubleshoot-java-sdk-v4-sql.md).
+* See [Performance tips for the Azure Cosmos DB Java SDK v4](../sql/performance-tips-java-sdk-v4-sql.md).
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/dedicated-gateway.md
The dedicated gateway is available in the following sizes:
> [!NOTE]
> Once created, you can't modify the size of the dedicated gateway nodes. However, you can add or remove nodes.
+There are many different ways to provision a dedicated gateway:
+
+- [Provision a dedicated gateway using the Azure Portal](how-to-configure-integrated-cache.md#provision-a-dedicated-gateway-cluster)
+- [Use Azure Cosmos DB's REST API](https://docs.microsoft.com/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)
+- [Azure CLI](https://docs.microsoft.com/cli/azure/cosmosdb/service?view=azure-cli-latest#az_cosmosdb_service_create)
+- [ARM template](https://docs.microsoft.com/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep)
+ - Note: You cannot deprovision a dedicated gateway using ARM templates
## Dedicated gateway in multi-region accounts

When you provision a dedicated gateway cluster in multi-region accounts, identical dedicated gateway clusters are provisioned in each region. For example, consider an Azure Cosmos DB account in East US and North Europe. If you provision a dedicated gateway cluster with two D8 nodes in this account, you'd have four D8 nodes in total - two in East US and two in North Europe. You don't need to explicitly configure dedicated gateways in each region and your connection string remains the same. There are also no changes to best practices for performing failovers.
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/partial-document-update-getting-started.md
if (response.isSuccessStatusCode()) {
} ```
-## Node
+## Node.js
+
+Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](sql/sql-api-sdk-node.md) is available from version *3.15.0* onwards. You can download it from the [NPM Registry](https://www.npmjs.com/package/@azure/cosmos/v/3.15.0).
+
+> [!NOTE]
+> A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub.
**Executing a single patch operation**
Partial Document Update operations can also be [executed on the server-side](sto
); }; ```
+> [!NOTE]
+> Definition of validateOptionsAndCallback can be found in the [.js DocDbWrapperScript](https://github.com/Azure/azure-cosmosdb-js-server/blob/1dbe69893d09a5da29328c14ec087ef168038009/utils/DocDbWrapperScript.js#L289) on GitHub.
+**Sample parameter for patch operation**
+
+```javascript
+function () {
+    var doc = {
+        "id": "exampleDoc",
+        "field1": {
+            "field2": 10,
+            "field3": 20
+        }
+    };
+    var isAccepted = __.createDocument(__.getSelfLink(), doc, (err, createdDoc) => {
+        if (err) throw err;
+        var patchSpec = [
+            {"op": "add", "path": "/field1/field2", "value": 20},
+            {"op": "remove", "path": "/field1/field3"}
+        ];
+        var isPatchAccepted = __.patchDocument(createdDoc._self, patchSpec, (err, patchedDoc) => {
+            if (err) throw err;
+            getContext().getResponse().setBody(patchedDoc);
+        });
+        if (!isPatchAccepted) throw new Error("patch wasn't accepted");
+    });
+    if (!isAccepted) throw new Error("create wasn't accepted");
+}
+```
## Troubleshooting
cosmos-db Sql Api Sdk Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-node.md
|Resource |Link |
|---|---|
-|Download SDK | [NPM](https://www.npmjs.com/package/@azure/cosmos)
+|Download SDK | [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos)
|API Documentation | [JavaScript SDK reference documentation](/javascript/api/%40azure/cosmos/)
-|SDK installation instructions | [Installation instructions](https://github.com/Azure/azure-sdk-for-js)
-|Contribute to SDK | [GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main)
+|SDK installation instructions | `npm install @azure/cosmos`
+|Contribute to SDK | [Contributing guide for azure-sdk-for-js repo](https://github.com/Azure/azure-sdk-for-js/blob/main/CONTRIBUTING.md)
| Samples | [Node.js code samples](sql-api-nodejs-samples.md)
| Getting started tutorial | [Get started with the JavaScript SDK](sql-api-nodejs-get-started.md)
| Web app tutorial | [Build a Node.js web application using Azure Cosmos DB](sql-api-nodejs-application.md)
-| Current supported platform | [Node.js v12.x](https://nodejs.org/en/blog/release/v12.7.0/) - SDK Version 3.x.x<br/>[Node.js v10.x](https://nodejs.org/en/blog/release/v10.6.0/) - SDK Version 3.x.x<br/>[Node.js v8.x](https://nodejs.org/en/blog/release/v8.16.0/) - SDK Version 3.x.x<br/>[Node.js v6.x](https://nodejs.org/en/blog/release/v6.10.3/) - SDK Version 2.x.x<br/>[Node.js v4.2.0](https://nodejs.org/en/blog/release/v4.2.0/)- SDK Version 1.x.x<br/> [Node.js v0.12](https://nodejs.org/en/blog/release/v0.12.0/)- SDK Version 1.x.x<br/> [Node.js v0.10](https://nodejs.org/en/blog/release/v0.10.0/)- SDK Version 1.x.x
+| Current supported Node.js platforms | [LTS versions of Node.js](https://nodejs.org/about/releases/)
## Release notes
Not always the most visible changes, but they help our team ship better code, fa
## Release & Retirement Dates
-Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version. New features and functionality and optimizations are only added to the current SDK, as such it is recommended that you always upgrade to the latest SDK version as early as possible.
+Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version. New features and functionality and optimizations are only added to the current SDK, as such it is recommended that you always upgrade to the latest SDK version as early as possible. Read the [Microsoft Support Policy for SDKs](https://github.com/Azure/azure-sdk-for-js/blob/main/SUPPORT.md#microsoft-support-policy) for more details.
| Version | Release Date | Retirement Date |
| --- | --- | --- |
-| 3.4.2 | November 7, 2019 | |
-| 3.4.1 | November 5, 2019 | |
-| 3.4.0 | October 28, 2019 | |
-| 3.3.6 | October 14, 2019 | |
-| 3.3.5 | October 14, 2019 | |
-| 3.3.4 | October 14, 2019 | |
-| 3.3.3 | October 3, 2019 | |
-| 3.3.2 | October 3, 2019 | |
-| 3.3.1 | October 1, 2019 | |
-| 3.3.0 | September 24, 2019 | |
-| 3.2.0 | August 26, 2019 | |
-| 3.1.1 | August 7, 2019 | |
-| 3.1.0 |July 26, 2019 | |
-| 3.0.4 |July 22, 2019 | |
-| 3.0.3 |July 17, 2019 | |
-| 3.0.2 |July 9, 2019 | |
-| 3.0.0 |June 28, 2019 | |
-| 2.1.5 |March 20, 2019 | |
-| 2.1.4 |March 15, 2019 | |
-| 2.1.3 |March 8, 2019 | |
-| 2.1.2 |January 28, 2019 | |
-| 2.1.1 |December 5, 2018 | |
-| 2.1.0 |December 4, 2018 | |
-| 2.0.5 |November 7, 2018 | |
-| 2.0.4 |October 30, 2018 | |
-| 2.0.3 |October 30, 2018 | |
-| 2.0.2 |October 10, 2018 | |
-| 2.0.1 |September 25, 2018 | |
-| 2.0.0 |September 24, 2018 | |
-| 2.0.0-3 (RC) |August 2, 2018 | |
-| 1.14.4 |May 03, 2018 |August 30, 2020 |
-| 1.14.3 |May 03, 2018 |August 30, 2020 |
-| 1.14.2 |December 21, 2017 |August 30, 2020 |
-| 1.14.1 |November 10, 2017 |August 30, 2020 |
-| 1.14.0 |November 9, 2017 |August 30, 2020 |
-| 1.13.0 |October 11, 2017 |August 30, 2020 |
-| 1.12.2 |August 10, 2017 |August 30, 2020 |
-| 1.12.1 |August 10, 2017 |August 30, 2020 |
-| 1.12.0 |May 10, 2017 |August 30, 2020 |
-| 1.11.0 |March 16, 2017 |August 30, 2020 |
-| 1.10.2 |January 27, 2017 |August 30, 2020 |
-| 1.10.1 |December 22, 2016 |August 30, 2020 |
-| 1.10.0 |October 03, 2016 |August 30, 2020 |
-| 1.9.0 |July 07, 2016 |August 30, 2020 |
-| 1.8.0 |June 14, 2016 |August 30, 2020 |
-| 1.7.0 |April 26, 2016 |August 30, 2020 |
-| 1.6.0 |March 29, 2016 |August 30, 2020 |
-| 1.5.6 |March 08, 2016 |August 30, 2020 |
-| 1.5.5 |February 02, 2016 |August 30, 2020 |
-| 1.5.4 |February 01, 2016 |August 30, 2020 |
-| 1.5.2 |January 26, 2016 |August 30, 2020 |
-| 1.5.2 |January 22, 2016 |August 30, 2020 |
-| 1.5.1 |January 4, 2016 |August 30, 2020 |
-| 1.5.0 |December 31, 2015 |August 30, 2020 |
-| 1.4.0 |October 06, 2015 |August 30, 2020 |
-| 1.3.0 |October 06, 2015 |August 30, 2020 |
-| 1.2.2 |September 10, 2015 |August 30, 2020 |
-| 1.2.1 |August 15, 2015 |August 30, 2020 |
-| 1.2.0 |August 05, 2015 |August 30, 2020 |
-| 1.1.0 |July 09, 2015 |August 30, 2020 |
-| 1.0.3 |June 04, 2015 |August 30, 2020 |
-| 1.0.2 |May 23, 2015 |August 30, 2020 |
-| 1.0.1 |May 15, 2015 |August 30, 2020 |
-| 1.0.0 |April 08, 2015 |August 30, 2020 |
+| v3 | June 28, 2019 | |
+| v2 | September 24, 2018 | September 24, 2021 |
+| v1 | April 08, 2015 | August 30, 2020 |
## FAQ

[!INCLUDE [cosmos-db-sdk-faq](../includes/cosmos-db-sdk-faq.md)]
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/mpa-request-ownership.md
tags: billing
Previously updated : 11/17/2021 Last updated : 01/05/2022
Access for existing users, groups, or service principals that was assigned using
The partners should work with the customer to get access to subscriptions. The partners need to get either [Admin on Behalf Of - AOBO](https://channel9.msdn.com/Series/cspdev/Module-11-Admin-On-Behalf-Of-AOBO) or [Azure Lighthouse](../../lighthouse/concepts/cloud-solution-provider.md) access to open support tickets.
+### Power BI connectivity
+
+The Azure Cost Management connector for Power BI doesn't currently support Microsoft Partner Agreements. The connector only supports Enterprise Agreements and direct Microsoft Customer Agreements. For more information about Azure Cost Management connector support, see [Create visuals and reports with the Azure Cost Management connector in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management). After you transfer a subscription from one of the agreements to a Microsoft Partner Agreement, your Power BI reports stop working.
+
+As an alternative, you can always use Exports in Cost Management to save the consumption and usage information and then use it in Power BI. For more information, see [Create and manage exported data](../costs/tutorial-export-acm-data.md).
+ ### Azure support plan

Azure support doesn't transfer with the subscriptions. If the user transfers all Azure subscriptions, ask them to cancel their support plan. After the transfer, the CSP partner is responsible for support. The customer should work with the CSP partner for any support request.
data-share Concepts Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/concepts-pricing.md
Previously updated : 08/11/2020 Last updated : 01/03/2022 # Understand Azure Data Share pricing
data-share Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/concepts-roles-permissions.md
Follow these steps to register the Microsoft.DataShare resource provider into yo
To learn more about resource provider, refer to [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
+## Custom roles for Data Share
+This section describes custom roles, and the permissions required within them, for sharing and receiving data with a storage account. There are also prerequisites that are independent of the custom role or Azure Data Share role.
+
+### Prerequisites for Data Share, in addition to a custom role
+* For storage and data lake snapshot-based sharing, to add a dataset in Azure Data Share, the provider data share resource's managed identity needs to be granted access to the source Azure data store. For example, in the case of a storage account, the data share resource's managed identity is granted the Storage Blob Data Reader role.
+* To receive data into a storage account, the consumer data share resource's managed identity needs to be granted access to the target storage account. The data share resource's managed identity needs to be granted the Storage Blob Data Contributor role.
+* See the [Data Provider](#data-provider) and [Data Consumer](#data-consumer) sections of this article for more specific steps.
+* You may also need to manually register the Microsoft.DataShare resource provider in your Azure subscription for some scenarios. See the [Resource provider registration](#resource-provider-registration) section of this article for details.
+
+### Create custom roles and required permissions
+Custom roles can be created in a subscription or resource group for sharing and receiving data. Users and groups can then be assigned the custom role.
+
+* To create a custom role, actions are required for Storage, Data Share, Resource group, and Authorization. See the [Azure resource provider operations document](../role-based-access-control/resource-provider-operations.md#microsoftdatashare) for Data Share to understand the different levels of permissions and choose the ones relevant for your custom role.
+* Alternatively, you can use the Azure portal: navigate to IAM, select Custom role and Add permissions, and search for Microsoft.DataShare to see the list of available actions.
+* To learn more about custom role assignment, refer to [Azure custom roles](../role-based-access-control/custom-roles.md). Once you have your custom role, test it to verify that it works as expected.
+
+The following example shows how the required actions are listed in the JSON view of a custom role for sharing and receiving data.
+
+```json
+{
+  "Actions": [
+    "Microsoft.Storage/storageAccounts/read",
+    "Microsoft.Storage/storageAccounts/write",
+    "Microsoft.Storage/storageAccounts/blobServices/containers/read",
+    "Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action",
+    "Microsoft.DataShare/accounts/read",
+    "Microsoft.DataShare/accounts/providers/Microsoft.Insights/metricDefinitions/read",
+    "Microsoft.DataShare/accounts/shares/listSynchronizations/action",
+    "Microsoft.DataShare/accounts/shares/synchronizationSettings/read",
+    "Microsoft.DataShare/accounts/shares/synchronizationSettings/write",
+    "Microsoft.DataShare/accounts/shares/synchronizationSettings/delete",
+    "Microsoft.DataShare/accounts/shareSubscriptions/*",
+    "Microsoft.DataShare/listInvitations/read",
+    "Microsoft.DataShare/locations/rejectInvitation/action",
+    "Microsoft.DataShare/locations/consumerInvitations/read",
+    "Microsoft.DataShare/locations/operationResults/read",
+    "Microsoft.Resources/subscriptions/resourceGroups/read",
+    "Microsoft.Resources/subscriptions/resourcegroups/resources/read",
+    "Microsoft.Authorization/roleAssignments/read"
+  ]
+}
+```
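As a quick sanity check before creating the role (for example, with `az role definition create`), you can confirm that the definition parses as valid JSON and contains the actions you expect. Here is a minimal sketch in Python; the embedded definition is abbreviated from the example above:

```python
import json

# Abbreviated copy of the role definition above. Note that there must be no
# trailing comma after the last action, or json.loads will reject it.
ROLE_JSON = """
{
  "Actions": [
    "Microsoft.Storage/storageAccounts/read",
    "Microsoft.Storage/storageAccounts/blobServices/containers/read",
    "Microsoft.DataShare/accounts/read",
    "Microsoft.DataShare/accounts/shareSubscriptions/*",
    "Microsoft.Authorization/roleAssignments/read"
  ]
}
"""

role = json.loads(ROLE_JSON)
actions = role["Actions"]

# Sharing (provider side) needs storage read access; receiving (consumer side)
# needs the shareSubscriptions actions. Confirm this combined role covers both.
assert "Microsoft.Storage/storageAccounts/read" in actions
assert any(a.startswith("Microsoft.DataShare/accounts/shareSubscriptions") for a in actions)
```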
+ ## Next steps

-- Learn more about roles in Azure - [Understand Azure role definitions](../role-based-access-control/role-definitions.md)
+- Learn more about roles in Azure - [Understand Azure role definitions](../role-based-access-control/role-definitions.md)
data-share Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/disaster-recovery.md
Previously updated : 07/30/2020 Last updated : 01/03/2022 # Disaster recovery for Azure Data Share
Data consumers can either have an active share subscription that is idle for DR
## Next steps
-To learn how to start sharing data, continue to the [share your data](share-your-data.md) tutorial.
+To learn how to start sharing data, continue to the [share your data](share-your-data.md) tutorial.
data-share How To Add Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-add-datasets.md
Previously updated : 07/30/2020 Last updated : 01/03/2022 # How to add datasets to an existing share in Azure Data Share
Without snapshot settings configured, the consumer must manually trigger a full
For more information on snapshots, see [Snapshots](terminology.md).

## Next steps
-Learn more about how to [add recipients to an existing data share](how-to-add-recipients.md).
+Learn more about how to [add recipients to an existing data share](how-to-add-recipients.md).
data-share How To Configure Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-configure-mapping.md
Previously updated : 08/14/2020 Last updated : 01/03/2022 # How to configure a dataset mapping for a received share in Azure Data Share
data-share How To Delete Invitation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-delete-invitation.md
Previously updated : 07/30/2020 Last updated : 01/03/2022 # How to delete an invitation to a recipient in Azure Data Share
In Azure Data Share, navigate to your sent share and select the **Invitations**
![Delete Invitation](./media/how-to/how-to-delete-invitation/delete-invitation.png)

## Next steps
-Learn more about how to [revoke a share subscription](how-to-revoke-share-subscription.md).
+Learn more about how to [revoke a share subscription](how-to-revoke-share-subscription.md).
data-share How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-monitor.md
Previously updated : 07/30/2020 Last updated : 01/03/2022 # Monitor Azure Data Share
You can configure diagnostic setting to save log data or events. Navigate to Mon
## Next Steps
-Learn more about [Azure Data Share terminology](terminology.md)
+Learn more about [Azure Data Share terminology](terminology.md)
data-share How To Revoke Share Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-revoke-share-subscription.md
Previously updated : 07/30/2020 Last updated : 01/03/2022 # How to revoke a consumer's share subscription in Azure Data Share
In Azure Data Share, navigate to your sent share and select the **Share Subscrip
Check the boxes next to the recipients whose share subscriptions you would like to delete and then click **Revoke**. The consumer will no longer get updates to their data.

## Next steps
-Learn more about how to [monitor your data shares](how-to-monitor.md).
+Learn more about how to [monitor your data shares](how-to-monitor.md).
data-share Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/samples-powershell.md
Previously updated : 07/06/2019 Last updated : 01/03/2022 # Azure PowerShell samples for Azure Data Share
data-share Accept Share Invitations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/accept-share-invitations-powershell.md
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Add Datasets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/add-datasets-powershell.md
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Create New Share Account Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/create-new-share-account-powershell.md
description: This PowerShell script creates a new Data Share account.
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Create New Share Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/create-new-share-powershell.md
description: This PowerShell script creates a new data share within an existing
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Create Share Invitation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/create-share-invitation-powershell.md
description: This PowerShell script sends a data share invitation.
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Create View Trigger Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/create-view-trigger-powershell.md
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Monitor Usage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/monitor-usage-powershell.md
description: This PowerShell script retrieves usage metrics of a sent data share
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Set View Synchronizations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/set-view-synchronizations-powershell.md
description: This PowerShell script sets and gets share synchronization settings
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share View Sent Invitations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/view-sent-invitations-powershell.md
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share View Share Details Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/view-share-details-powershell.md
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Share Your Data Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/share-your-data-arm.md
Last updated : 01/03/2022 Previously updated : 08/19/2020 # Quickstart: Share data using Azure Data Share and ARM template
data-share Share Your Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/share-your-data-portal.md
Previously updated : 10/30/2020 Last updated : 01/03/2022 # Quickstart: Share data using Azure Data Share in the Azure portal
Create an Azure Data Share resource in an Azure resource group.
1. Select the dataset type that you would like to add. You will see a different list of dataset types depending on the share type (snapshot or in-place) you have selected in the previous step.
- ![AddDatasets](./media/add-datasets.png "Add Datasets")
+ ![AddDatasets](./media/add-datasets-updated.png "Add Datasets")
1. Navigate to the object you would like to share and select 'Add Datasets'.
data-share Terminology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/terminology.md
Previously updated : 07/10/2019 Last updated : 01/03/2022 # Azure Data Share Concepts
databox-online Azure Stack Edge Pro R Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-overview.md
Title: Microsoft Azure Stack Edge Pro R overview | Microsoft Docs
-description: Describes Azure Stack Edge Pro R devices, a storage solution that uses a physical device for network-based transfer into Azure and the solution can deployed in harsh environments.
+ Title: Microsoft Azure Stack Edge Pro R overview
+description: Describes Azure Stack Edge Pro R devices, a storage solution that uses a physical device for network-based transfer into Azure and the solution can be deployed in harsh environments.
Previously updated : 10/05/2021 Last updated : 01/05/2022 #Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro R is and how it works so I can use it to process and transform data before sending to Azure.
Azure Stack Edge Pro R has the following capabilities:
|Disconnected mode| Device and service can be optionally managed via Azure Stack Hub. Deploy, run, manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios.|
|Supported file transfer protocols |Support for standard SMB, NFS, and REST protocols for data ingestion. <br> For more information on supported versions, go to [Azure Stack Edge Pro R system requirements](azure-stack-edge-gpu-system-requirements.md).|
|Data refresh | Ability to refresh local files with the latest from cloud. <br> For more information, see [Refresh a share on your Azure Stack Edge](azure-stack-edge-gpu-manage-shares.md#refresh-shares).|
-|Double encryption | Use of self-encrypting drives provides the first layer of encryption. VPN provides the second layer of encryption. BitLocker support to locally encrypt data and secure data transfer to cloud over *https* . <br> For more information, see [Configure VPN on your Azure Stack Edge Pro R device](azure-stack-edge-mini-r-configure-vpn-powershell.md).|
+|Double encryption | Use of self-encrypting drives provides the first layer of encryption. VPN provides the second layer of encryption. BitLocker support to locally encrypt data and secure data transfer to cloud over *https*. <br> For more information, see [Configure VPN on your Azure Stack Edge Pro R device](azure-stack-edge-mini-r-configure-vpn-powershell.md).|
|Bandwidth throttling| Throttle to limit bandwidth usage during peak hours. <br> For more information, see [Manage bandwidth schedules on your Azure Stack Edge](azure-stack-edge-gpu-manage-bandwidth-schedules.md).|
|Easy ordering| Bulk ordering and tracking of the device via Azure Edge Hardware Center (Preview). <br> For more information, see [Order a device via Azure Edge Hardware Center](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource). |
databox Data Box Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-troubleshoot.md
Previously updated : 08/11/2021 Last updated : 01/04/2022
For help troubleshooting issues with accessing the shares on your device, see [T
The errors in Data Box and Data Box Heavy are summarized as follows:
-| Error category* | Description | Recommended action |
+| Error category | Description | Recommended action |
|-||--|
-| Container or share names | The container or share names do not follow the Azure naming rules. |Download the error lists. <br> Rename the containers or shares. [Learn more](#container-or-share-name-errors). |
-| Container or share size limit | The total data in containers or shares exceeds the Azure limit. |Download the error lists. <br> Reduce the overall data in the container or share. [Learn more](#container-or-share-size-limit-errors).|
-| Object or file size limit | The object or files in containers or shares exceeds the Azure limit.|Download the error lists. <br> Reduce the file size in the container or share. [Learn more](#object-or-file-size-limit-errors). |
-| Data or file type | The data format or the file type is not supported. |Download the error lists. <br> For page blobs or managed disks, ensure the data is 512-bytes aligned and copied to the pre-created folders. [Learn more](#data-or-file-type-errors). |
-| Folder or file internal errors | The file or folder have an internal error. |Download the error lists. <br> Remove the file and copy again. For a folder, modify it by renaming or adding or deleting a file. The error should go away in 30 minutes. [Learn more](#folder-or-file-internal-errors). |
+| Container or share names<sup>*</sup> | The container or share names do not follow the Azure naming rules. |Download the error lists. <br> Rename the containers or shares. [Learn more](#container-or-share-name-errors). |
+| Container or share size limit<sup>*</sup> | The total data in containers or shares exceeds the Azure limit. |Download the error lists. <br> Reduce the overall data in the container or share. [Learn more](#container-or-share-size-limit-errors).|
+| Object or file size limit<sup>*</sup> | The objects or files in containers or shares exceed the Azure limit.|Download the error lists. <br> Reduce the file size in the container or share. [Learn more](#object-or-file-size-limit-errors). |
+| Data or file type<sup>*</sup> | The data format or the file type is not supported. |Download the error lists. <br> For page blobs or managed disks, ensure the data is 512-bytes aligned and copied to the pre-created folders. [Learn more](#data-or-file-type-errors). |
+| Folder or file internal errors<sup>*</sup> | The file or folder has an internal error. |Download the error lists. <br> Remove the file and copy it again. For a folder, modify it by renaming it or adding or deleting a file. The error should go away in 30 minutes. [Learn more](#folder-or-file-internal-errors). |
+| General error<sup>*</sup> | Internal exceptions or error paths in the code caused a critical error. | Reboot the device and rerun the **Prepare to Ship** operation. If the error doesn't go away, contact Microsoft Support. [Learn more](#general-errors). |
| Non-critical blob or file errors | The blob or file names do not follow the Azure naming rules or the file type is not supported. | These blobs or files may not be copied, or their names may be changed. [Learn how to fix these errors](#non-critical-blob-or-file-errors). |
-\* The first five error categories are critical errors and must be fixed before you can proceed to prepare to ship.
+<sup>*</sup> Errors in this category are critical errors that must be fixed before you can proceed to **Prepare to Ship**.
## Container or share name errors
-These are errors related to container and share names.
+These errors are related to container and share names.
-### ERROR_CONTAINER_OR_SHARE_NAME_LENGTH
+### ERROR_CONTAINER_OR_SHARE_NAME_LENGTH
**Error description:** The container or share name must be between 3 and 63 characters.
For more information, see the Azure naming conventions for [directories](/rest/
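Container and share name rules can be checked locally before you copy data. The sketch below validates blob container names (3-63 characters; lowercase letters, digits, and hyphens; starting and ending with a letter or digit; no consecutive hyphens). It's an illustrative check, not an official validator, and file share rules differ slightly:

```python
import re

# Blob container name rules: 3-63 characters; lowercase letters, digits, and
# hyphens; must start and end with a letter or digit; no consecutive hyphens.
_CONTAINER_NAME = re.compile(r"^(?!.*--)[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_container_name(name: str) -> bool:
    return _CONTAINER_NAME.match(name) is not None
```

Running a check like this over your folder names before the copy avoids the rename round-trip after the device reports a naming error.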
## Container or share size limit errors
-These are errors related to data exceeding the size of data allowed in a container or a share.
+These errors are related to data exceeding the size of data allowed in a container or a share.
### ERROR_CONTAINER_OR_SHARE_CAPACITY_EXCEEDED
These are errors related to data exceeding the size of data allowed in a contain
**Suggested resolution:** On the **Connect and copy** page of the local web UI, download and review the error files.
- Identify the folders that have this issue from the error logs and make sure that the files in that folder are under 5 TiB.
-- The 5 TiB limit does not apply to a storage account that allows large file shares. However, you must have large file shares configured when you place your order.
+- The 5-TiB limit does not apply to a storage account that allows large file shares. However, you must have large file shares configured when you place your order.
- Contact [Microsoft Support](data-box-disk-contact-microsoft-support.md) and request a new shipping label.
- [Enable large file shares on the storage account](../storage/files/storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account)
- [Expand the file shares in the storage account](../storage/files/storage-how-to-create-file-share.md#expand-existing-file-shares) and set the quota to 100 TiB.
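The 5-TiB per-file ceiling is simple byte arithmetic, so oversized files can be flagged before the copy starts; here is a minimal sketch with illustrative sizes:

```python
# 5 TiB expressed in bytes: files at or under this size fit a standard Azure
# file share; larger files need a large-file-shares-enabled storage account.
FIVE_TIB = 5 * 1024**4  # 5,497,558,138,880 bytes

def exceeds_share_file_limit(size_in_bytes: int) -> bool:
    return size_in_bytes > FIVE_TIB
```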
These are errors related to data exceeding the size of data allowed in a contain
## Object or file size limit errors
-These are errors related to data exceeding the maximum size of object or the file that is allowed in Azure.
+These errors are related to data exceeding the maximum size of an object or file allowed in Azure.
### ERROR_BLOB_OR_FILE_SIZE_LIMIT
These are errors related to data exceeding the maximum size of object or the fil
## Data or file type errors
-These are errors related to unsupported file type or data type found in the container or share.
+These errors are related to an unsupported file type or data type found in the container or share.
### ERROR_BLOB_OR_FILE_SIZE_ALIGNMENT
For more information, see [Copy to managed disks](data-box-deploy-copy-data-from
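For page blobs and managed disks, you can verify the 512-byte alignment requirement locally before copying; here is a minimal sketch (the `check_file` helper and any path passed to it are hypothetical):

```python
import os

SECTOR_BYTES = 512  # page blobs and VHDs for managed disks are written in 512-byte sectors

def is_512_byte_aligned(size_in_bytes: int) -> bool:
    # A size that is an exact multiple of 512 bytes is sector aligned.
    return size_in_bytes % SECTOR_BYTES == 0

def check_file(path: str) -> bool:
    # Hypothetical helper: report whether a local file's size is aligned.
    return is_512_byte_aligned(os.path.getsize(path))
```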
**Suggested resolution:** If this is a file, remove the file and copy it again. If this is a folder, modify the folder. Either rename the folder or add or delete a file from the folder. The error should clear on its own in 30 minutes. Contact Microsoft Support if the error persists.
+## General errors
+
+General errors are caused by internal exceptions or error paths in the code.
+
+### ERROR_GENERAL
+
+**Error description:** This general error is caused by internal exceptions or error paths in the code.
+
+**Suggested resolution:** Reboot the device and rerun the **Prepare to Ship** operation. If the error doesn't go away, [contact Microsoft Support](data-box-disk-contact-microsoft-support.md).
+ ## Non-critical blob or file errors All the non-critical errors related to names of blobs, files, or containers that are seen during data copy are summarized in the following section. If these errors are present, then the names will be modified to conform to the Azure naming conventions. The corresponding order status for data upload will be **Completed with warnings**.
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/adaptive-network-hardening.md
To add an adaptive network hardening rule:
1. From the top toolbar, select **Add rule**.
- ![add rule.](./media/adaptive-network-hardening/add-hard-rule.png)
+ ![add rule.](./media/adaptive-network-hardening/add-new-hard-rule.png)
1. In the **New rule** window, enter the details and select **Add**.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 12/12/2021 Last updated : 01/05/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change |
|-|-|
-| [Deprecating a preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses](#deprecating-a-preview-alert-armmcas_activityfromanonymousipaddresses) | December 2021 |
-| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) | December 2021 |
-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | December 2021 |
+| [Deprecating a preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses](#deprecating-a-preview-alert-armmcas_activityfromanonymousipaddresses) | January 2022 |
+| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) | January 2022 |
+| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | February 2022 |
+| [Deprecating the recommendation to use service principals to protect your subscriptions](#deprecating-the-recommendation-to-use-service-principals-to-protect-your-subscriptions) | February 2022 |
+| [Deprecating the recommendations to install the network traffic data collection agent](#deprecating-the-recommendations-to-install-the-network-traffic-data-collection-agent) | February 2022 |
| [Enhancements to recommendation to classify sensitive data in SQL databases](#enhancements-to-recommendation-to-classify-sensitive-data-in-sql-databases) | Q1 2022 |
| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | March 2022 |

### Deprecating a preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses
-**Estimated date for change:** December 2021
+**Estimated date for change:** January 2022
We'll be deprecating the following preview alert:
We've created new alerts that provide this information and add to it. In additio
### Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013
-**Estimated date for change:** November 2021
+**Estimated date for change:** January 2022
The legacy implementation of ISO 27001 will be removed from Defender for Cloud's regulatory compliance dashboard. If you're tracking your ISO 27001 compliance with Defender for Cloud, onboard the new ISO 27001:2013 standard for all relevant management groups or subscriptions, and the current legacy ISO 27001 will soon be removed from the dashboard.
The legacy implementation of ISO 27001 will be removed from Defender for Cloud's
### Multiple changes to identity recommendations
-**Estimated date for change:** December 2021
+**Estimated date for change:** February 2022
Defender for Cloud includes multiple recommendations for improving the management of users and accounts. In February 2022, we'll make the changes outlined below.
Defender for Cloud includes multiple recommendations for improving the managemen
|Description |User accounts that have been blocked from signing in, should be removed from your subscriptions.<br>These accounts can be targets for attackers looking to find ways to access your data without being noticed.|User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed.<br>Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md).|
|Related policy |[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474)|Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions|
-
++
+### Deprecating the recommendation to use service principals to protect your subscriptions
+
+**Estimated date for change:** February 2022
+
+As organizations move away from using management certificates to manage their subscriptions, and following [our recent announcement that we're retiring the Cloud Services (classic) deployment model](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/), we'll deprecate the following Defender for Cloud recommendation and its related policy:
+
+|Recommendation |Description |Severity |
+||||
+|[Service principals should be used to protect your subscriptions instead of Management Certificates](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2acd365d-e8b5-4094-bce4-244b7c51d67c) |Management certificates allow anyone who authenticates with them to manage the subscription(s) they are associated with. To manage subscriptions more securely, using service principals with Resource Manager is recommended to limit the blast radius in the case of a certificate compromise. It also automates resource management. <br />(Related policy: [Service principals should be used to protect your subscriptions instead of management certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6646a0bd-e110-40ca-bb97-84fcee63c414)) |Medium |
+|||
+
+Learn more:
+
+- [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)
+- [Overview of Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md)
+- [Workflow of Windows Azure classic VM Architecture - including RDFE workflow basics](../cloud-services/cloud-services-workflow-process.md)
+
+### Deprecating the recommendations to install the network traffic data collection agent
+
+**Estimated date for change:** February 2022
+
+Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. Consequently, we'll be deprecating the following two recommendations and their related policies.
+
+|Recommendation |Description |Severity |
+||||
+|[Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3e93d3-0276-4d06-b20a-9a9f3012742c) |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats.<br />(Related policy: [Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f04c4380f-3fae-46e8-96c9-30193528f602)) |Medium |
+|[Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24d8af06-d441-40b4-a49c-311421aa9f58) |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats.<br />(Related policy: [Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2f2ee1de-44aa-4762-b6bd-0893fc3f306d)) |Medium |
+|||
+
+
### Enhancements to recommendation to classify sensitive data in SQL databases
defender-for-iot Tutorial Configure Micro Agent Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/tutorial-configure-micro-agent-twin.md
description: In this tutorial, you will learn how to configure a micro agent twi
Previously updated : 12/22/2021 Last updated : 01/05/2022
In this tutorial, you learn how to:
- A Defender for IoT subscription.
-- An existing IoT Hub with:
-
- - [A connected device](quickstart-standalone-agent-binary-installation.md).
-
- - [A micro agent module twin](quickstart-create-micro-agent-module-twin.md).
+- An existing IoT Hub with: [A connected device](quickstart-standalone-agent-binary-installation.md), and [A micro agent module twin](quickstart-create-micro-agent-module-twin.md).
## Micro agent configuration
-To view and update the micro agent twin configuration:
+**To view and update the micro agent twin configuration**:
1. Navigate to the [Azure portal](https://ms.portal.azure.com).
devtest Troubleshoot Expired Removed Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest/offer/troubleshoot-expired-removed-subscription.md
If your Visual Studio subscription expires or is removed, all the subscription b
> [!IMPORTANT]
> You must transfer your resources to another Azure subscription before your current Azure subscription is disabled, or you will lose access to your data.
>
-> If you don't take one of these actions, your Azure subscription will be disabled at the time specified in your email notification. If the subscription is disabled, you can reenable it as a pay-as-you-go subscription by following [these steps](/azure/cost-management-billing/manage/switch-azure-offer.md).
+> If you don't take one of these actions, your Azure subscription will be disabled at the time specified in your email notification. If the subscription is disabled, you can reenable it as a pay-as-you-go subscription by following [these steps](/azure/cost-management-billing/manage/switch-azure-offer).
## Maintain a subscription to use monthly credits
There are several ways to continue using a monthly credit for Azure. To save you
- [Visual Studio Test Professional](https://www.microsoft.com/p/visual-studio-test-professional-subscription/dg7gmgf0dst6?activetab=pivot%3aoverviewtab)
-- **If someone in your organization purchases subscriptions for your organization**, [contact your Visual Studio subscription admin](/visualstudio/subscriptions/contact-my-admin.md) and request a subscription that provides the monthly credit that you need.
+- **If someone in your organization purchases subscriptions for your organization**, [contact your Visual Studio subscription admin](/visualstudio/subscriptions/contact-my-admin) and request a subscription that provides the monthly credit that you need.
- **If you have another active Visual Studio subscription** at the same subscription level, you can use it to set up a new Azure credit subscription.

## Convert your Azure subscription to pay-as-you-go

If you no longer need a Visual Studio subscription or credit but you want to continue using your Azure resources, convert your Azure subscription to pay-as-you-go pricing by [removing your spending limit](/azure/cost-management-billing/manage/spending-limit#remove-the-spending-limit-in-azure-portal).
-
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
To complete this tutorial, you need to:
Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 10 can migrate to Azure Database for PostgreSQL 10, or 11, but not to Azure Database for PostgreSQL 9.6.
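This version rule is easy to get backwards, so a minimal illustrative sketch may help (the function name and the tuple encoding of versions are hypothetical, not part of any Azure tooling):

```python
def can_migrate(source: tuple, target: tuple) -> bool:
    """The target Azure Database for PostgreSQL version must be equal to or
    later than the source PostgreSQL version (tuples compare elementwise)."""
    return target >= source

# PostgreSQL 10 can migrate to 10 or 11, but not to 9.6:
print(can_migrate((10, 0), (11, 0)))  # True
print(can_migrate((10, 0), (9, 6)))   # False
```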
-* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/quickstart-create-hyperscale-portal.md) as the target database server to migrate data into.
+* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/hyperscale/quickstart-create-portal.md) as the target database server to migrate data into.
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model. For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
* Ensure that the Network Security Group (NSG) rules for your virtual network don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
To complete all the database objects like table schemas, indexes and stored proc
2. Create an empty database in your target environment, which is Azure Database for PostgreSQL.
- For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/quickstart-create-hyperscale-portal.md).
+ For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/hyperscale/quickstart-create-portal.md).
> [!NOTE]
> An instance of Azure Database for PostgreSQL - Hyperscale (Citus) has only a single database: **citus**.
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
To complete this tutorial, you need to:
Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 9.6 can migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5.
-* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/quickstart-create-hyperscale-portal.md).
+* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/hyperscale/quickstart-create-portal.md).
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.

> [!NOTE]
To complete all the database objects like table schemas, indexes and stored proc
2. Create an empty database in your target environment, which is Azure Database for PostgreSQL.
- For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/quickstart-create-hyperscale-portal.md).
+ For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/hyperscale/quickstart-create-portal.md).
> [!NOTE]
> An instance of Azure Database for PostgreSQL - Hyperscale (Citus) has only a single database: **citus**.
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online.md
To complete this tutorial, you need to:
Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 9.6 can only migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5.
-* [Create an instance in Azure Database for PostgreSQL](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/quickstart-create-hyperscale-portal.md).
+* [Create an instance in Azure Database for PostgreSQL](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/hyperscale/quickstart-create-portal.md).
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.

> [!NOTE]
To complete all the database objects like table schemas, indexes and stored proc
2. Create an empty database in your target environment, which is Azure Database for PostgreSQL.
- For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/quickstart-create-hyperscale-portal.md).
+ For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/hyperscale/quickstart-create-portal.md).
3. Import the schema into the target database you created by restoring the schema dump file.
If you need to cancel or delete any DMS task, project, or service, perform the c
* For information about known issues and limitations when performing online migrations to Azure Database for PostgreSQL, see the article [Known issues and workarounds with Azure Database for PostgreSQL online migrations](known-issues-azure-postgresql-online.md).
* For information about the Azure Database Migration Service, see the article [What is the Azure Database Migration Service?](./dms-overview.md).
-* For information about Azure Database for PostgreSQL, see the article [What is Azure Database for PostgreSQL?](../postgresql/overview.md).
+* For information about Azure Database for PostgreSQL, see the article [What is Azure Database for PostgreSQL?](../postgresql/overview.md).
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
To complete this tutorial, you need to:
Also note that the target Azure Database for PostgreSQL version must be equal to or later than the RDS PostgreSQL version. For example, RDS PostgreSQL 9.6 can only migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5.
-* Create an instance of [Azure Database for PostgreSQL](../postgresql/quickstart-create-server-database-portal.md) or [Azure Database for PostgreSQL - Hyperscale (Citus)](../postgresql/quickstart-create-hyperscale-portal.md). Refer to this [section](../postgresql/quickstart-create-server-database-portal.md#connect-to-the-server-with-psql) of the document for detail on how to connect to the PostgreSQL Server using pgAdmin.
+* Create an instance of [Azure Database for PostgreSQL](../postgresql/quickstart-create-server-database-portal.md) or [Azure Database for PostgreSQL - Hyperscale (Citus)](../postgresql/hyperscale/quickstart-create-portal.md). Refer to this [section](../postgresql/quickstart-create-server-database-portal.md#connect-to-the-server-with-psql) of the document for detail on how to connect to the PostgreSQL Server using pgAdmin.
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
* Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
To complete this tutorial, you need to:
2. Create an empty database in the target service, which is Azure Database for PostgreSQL. To connect and create a database, refer to one of the following articles:
   * [Create an Azure Database for PostgreSQL server by using the Azure portal](../postgresql/quickstart-create-server-database-portal.md)
- * [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server using the Azure portal](../postgresql/quickstart-create-hyperscale-portal.md)
+ * [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server using the Azure portal](../postgresql/hyperscale/quickstart-create-portal.md)
3. Import the schema to target service, which is Azure Database for PostgreSQL. To restore the schema dump file, run the following command:
event-grid Authenticate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/authenticate-with-active-directory.md
Title: Authenticate Event Grid publishing clients using Azure Active Directory (Preview)
+ Title: Authenticate Event Grid publishing clients using Azure Active Directory
description: This article describes how to authenticate Azure Event Grid publishing client using Azure Active Directory. Previously updated : 08/10/2021 Last updated : 01/05/2022
-# Authentication and authorization with Azure Active Directory (Preview)
+# Authentication and authorization with Azure Active Directory
This article describes how to authenticate Azure Event Grid publishing clients using Azure Active Directory (Azure AD).

## Overview
Following are the prerequisites to authenticate to Event Grid.
### Publish events using Azure AD Authentication
-To send events to a topic, domain, or partner namespace, you can build the client in the following way. The API version that first provided support for Azure AD authentication is ``2021-06-01-preview``. Use that API version or a more recent version in your application.
+To send events to a topic, domain, or partner namespace, you can build the client in the following way. The API version that first provided support for Azure AD authentication is ``2018-01-01``. Use that API version or a more recent version in your application.
-```java
- DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();
- EventGridPublisherClient cloudEventClient = new EventGridPublisherClientBuilder()
- .endpoint("<your-event-grid-topic-domain-or-partner-namespace-endpoint>?api-version=2021-06-01-preview")
- .credential(credential)
- .buildCloudEventPublisherClient();
-```
-If you're using a security principal associated with a client publishing application, you have to configure environmental variables as shown in the [Java SDK readme article](/java/api/overview/azure/identity-readme#environment-variables). The `DefaultCredentialBuilder` reads those environment variables to use the right identity. For more information, see [Java API overview](/java/api/overview/azure/identity-readme#defaultazurecredential).
+Sample:
+
+This C# snippet creates an Event Grid publisher client using an application (service principal) with a client secret. To enable the `DefaultAzureCredential` method, you need to add the [Azure.Identity library](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md). If you're using the official SDK, it handles the API version for you.
+```csharp
+Environment.SetEnvironmentVariable("AZURE_CLIENT_ID", "");
+Environment.SetEnvironmentVariable("AZURE_TENANT_ID", "");
+Environment.SetEnvironmentVariable("AZURE_CLIENT_SECRET", "");
+
+EventGridPublisherClient client = new EventGridPublisherClient(new Uri("your-event-grid-topic-domain-or-partner-namespace-endpoint"), new DefaultAzureCredential());
+```
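`DefaultAzureCredential` reads those environment variables to decide which identity to use. As a rough conceptual sketch only (not the actual Azure.Identity implementation; the function name is hypothetical), the environment-based step of that lookup behaves roughly like this:

```python
import os

def pick_credential_source(env=None):
    """Conceptual sketch: a client-secret service principal is usable only
    when tenant ID, client ID, and client secret are all present and non-empty;
    otherwise the credential chain falls through to other sources."""
    env = os.environ if env is None else env
    required = ("AZURE_TENANT_ID", "AZURE_CLIENT_ID", "AZURE_CLIENT_SECRET")
    if all(env.get(name) for name in required):
        return "client-secret-service-principal"
    return "fallback-chain"  # e.g. managed identity, Azure CLI login, ...
```

This is why the snippet above must set all three variables: leaving any of them empty silently pushes authentication to the next source in the chain.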
For more information, see the following articles:
event-grid Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/authentication-overview.md
Title: Authenticate clients publishing events to Event Grid custom topics, domains, and partner namespaces. description: This article describes different ways of authenticating clients publishing events to Event Grid custom topics, domains, and partner namespaces. Previously updated : 08/10/2021 Last updated : 01/05/2022 # Client authentication when publishing events to Event Grid
Authentication for clients publishing events to Event Grid is supported using th
- Azure Active Directory (Azure AD) - Access key or shared access signature (SAS)
-## Authenticate using Azure Active Directory (preview)
+## Authenticate using Azure Active Directory
Azure AD integration for Event Grid resources provides Azure role-based access control (RBAC) for fine-grained control over a clientΓÇÖs access to resources. You can use Azure RBAC to grant permissions to a security principal, which may be a user, a group, or an application service principal. The security principal is authenticated by Azure AD to return an OAuth 2.0 token. The token can be used to authorize a request to access Event Grid resources (topics, domains, or partner namespaces). For detailed information, see [Authenticate and authorize with the Microsoft Identity platform](authenticate-with-active-directory.md).
Azure AD integration for Event Grid resources provides Azure role-based access c
> Authenticating and authorizing users or applications using Azure AD identities provides superior security and ease of use over key-based and shared access signatures (SAS) authentication. With Azure AD, there is no need to store secrets used for authentication in your code and risk potential security vulnerabilities. We strongly recommend that you use Azure AD with your Azure Event Grid event publishing applications.

> [!NOTE]
-> Azure AD authentication support by Azure Event Grid has been released as preview.
> Azure Event Grid on Kubernetes does not support Azure AD authentication yet.

## Authenticate using access keys and shared access signatures
event-grid Enable Diagnostic Logs Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/enable-diagnostic-logs-topic.md
Last updated 11/11/2021
This article provides step-by-step instructions for enabling diagnostic settings for Event Grid resources. These settings allow you to capture and view diagnostic information so that you can troubleshoot any failures. The following table shows the settings available for different types of Event Grid resources - custom topics, system topics, and domains.
-| Diagnostic setting | Event Grid topics | Event Grid system topics | Event Grid domains |
-| - | | -- | -- |
-| [DeliveryFailures](diagnostic-logs.md#schema-for-publishdelivery-failure-logs) | Yes | Yes | Yes |
-| [PublishFailures](diagnostic-logs.md#schema-for-publishdelivery-failure-logs) | Yes | No | Yes |
-| [DataPlaneRequests](diagnostic-logs.md#schema-for-data-plane-requests) | Yes | No | Yes |
+| Diagnostic setting | Event Grid topics | Event Grid system topics | Event domains | Event Grid partner namespaces |
+| - | | -- | -- | -- |
+| [DeliveryFailures](diagnostic-logs.md#schema-for-publishdelivery-failure-logs) | Yes | Yes | Yes | No |
+| [PublishFailures](diagnostic-logs.md#schema-for-publishdelivery-failure-logs) | Yes | No | Yes | Yes |
+| [DataPlaneRequests](diagnostic-logs.md#schema-for-data-plane-requests) | Yes | No | Yes | Yes |
> [!IMPORTANT]
> For schemas of delivery failures, publish failures, and data plane requests, see [Diagnostic logs](diagnostic-logs.md).
Then, it creates a diagnostic setting on the topic to send diagnostic informatio
Event Grid can publish audit traces for data plane operations. To enable the feature, select **audit** in the **Category groups** section or select **DataPlaneRequests** in the **Categories** section.
-The audit trace can be used to ensure that data access is allowed only for authorized purposes. It collects information about security control such as resource name, operation type, network access, level, region and more. For more information about how to enable the diagnostic setting, see [Diagnostic logs in Event Grid topics and Event Grid domains](enable-diagnostic-logs-topic.md#enable-diagnostic-logs-for-event-grid-topics-and-domains).
+The audit trace can be used to ensure that data access is allowed only for authorized purposes. It collects information about security control such as resource name, operation type, network access, level, region and more. For more information about how to enable the diagnostic setting, see [Diagnostic logs in Event Grid topics and Event domains](enable-diagnostic-logs-topic.md#enable-diagnostic-logs-for-event-grid-topics-and-domains).
![Select the audit traces](./media/enable-diagnostic-logs-topic/enable-audit-logs.png)

> [!IMPORTANT]
event-grid Monitor Virtual Machine Changes Event Grid Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/monitor-virtual-machine-changes-event-grid-logic-app.md
Previously updated : 07/01/2021 Last updated : 01/01/2022 # Tutorial: Monitor virtual machine changes by using Azure Event Grid and Logic Apps
For example, here are some events that publishers can send to subscribers throug
* A new message appears in a queue.
-This tutorial creates a logic app resource that runs in [*multi-tenant* Azure Logic Apps](../logic-apps/logic-apps-overview.md) and is based on the [Consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). Using this logic app resource, you create a workflow that monitors changes to a virtual machine, and sends emails about those changes. When you create a workflow that has an event subscription to an Azure resource, events flow from that resource through an event grid to the workflow. For more information about multi-tenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](../logic-apps/single-tenant-overview-compare.md).
+This tutorial creates a Consumption logic app resource that runs in [*multi-tenant* Azure Logic Apps](../logic-apps/logic-apps-overview.md) and is based on the [Consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). Using this logic app resource, you create a workflow that monitors changes to a virtual machine, and sends emails about those changes. When you create a workflow that has an event subscription to an Azure resource, events flow from that resource through an event grid to the workflow.
![Screenshot showing the workflow designer with a workflow that monitors a virtual machine using Azure Event Grid.](./media/monitor-virtual-machine-changes-event-grid-logic-app/monitor-virtual-machine-event-grid-logic-app-overview.png)
In this tutorial, you learn how to:
1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-1. From the main Azure menu, select **Create a resource** > **Integration** > **Logic App**.
+1. From the Azure home page, select **Create a resource** > **Integration** > **Logic App**.
![Screenshot of Azure portal, showing button to create a logic app resource.](./media/monitor-virtual-machine-changes-event-grid-logic-app/azure-portal-create-logic-app.png)
-1. Under **Logic App**, provide information about your logic app resource. When you're done, select **Create**.
+1. Under **Create Logic App**, provide information about your logic app resource:
![Screenshot of logic apps creation menu, showing details like name, subscription, resource group, and location.](./media/monitor-virtual-machine-changes-event-grid-logic-app/create-logic-app-for-event-grid.png)

| Property | Required | Value | Description |
|-|-|-|-|
- | **Name** | Yes | <*logic-app-name*> | Provide a unique name for your logic app. |
| **Subscription** | Yes | <*Azure-subscription-name*> | Select the same Azure subscription for all the services in this tutorial. |
- | **Resource group** | Yes | <*Azure-resource-group*> | The Azure resource group name for your logic app, which you can select for all the services in this tutorial. |
- | **Location** | Yes | <*Azure-region*> | Select the same region for all services in this tutorial. |
- |||
+ | **Resource Group** | Yes | <*Azure-resource-group*> | The Azure resource group name for your logic app, which you can select for all the services in this tutorial. |
+ | **Type** | Yes | Consumption | The resource type for your logic app. For this tutorial, make sure that you select **Consumption**. |
+ | **Logic App name** | Yes | <*logic-app-name*> | Provide a unique name for your logic app. |
+ | **Publish** | Yes | Workflow | Select the deployment destination for your logic app. For this tutorial, make sure that you select **Workflow**, which deploys to Azure. |
+ | **Region** | Yes | <*Azure-region*> | Select the same region for all services in this tutorial. |
+ |||||
+
+ > [!NOTE]
+ > If you later want to use the Event Grid operations with a Standard logic app resource instead, make sure that you create a *stateful* workflow, not a stateless workflow.
+ > To add the Event Grid operations to your workflow in the designer, on the operations picker pane, make sure that you select the **Azure** tab.
+ > For more information about multi-tenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](../logic-apps/single-tenant-overview-compare.md).
+
+1. When you're done, select **Review + create**. On the next pane, confirm the provided information, and select **Create**.
+
+1. After Azure deploys your logic app, select **Go to resource**.
-1. After Azure deploys your logic app, the workflow designer shows a page with an introduction video and commonly used triggers. Scroll past the video and triggers.
+ The workflow designer shows a page with an introduction video and commonly used triggers.
+
+1. Scroll past the video window and commonly used triggers section.
1. Under **Templates**, select **Blank Logic App**.
event-grid Post To Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/post-to-custom-topic.md
For custom topics, the top-level data contains the same fields as standard resou
]
```
-For a description of these properties, see [Azure Event Grid event schema](event-schema.md). When posting events to an event grid topic, the array can have a total size of up to 1 MB. The maximum allowed size for an event is also 1 MB. Events over 64 KB are charged in 64-KB increments.
+For a description of these properties, see [Azure Event Grid event schema](event-schema.md). When posting events to an event grid topic, the array can have a total size of up to 1 MB. The maximum allowed size for an event is also 1 MB. Events over 64 KB are charged in 64-KB increments. When receiving events in a batch, the maximum allowed number of events is 5,000 per batch.
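To make the limits concrete, here is a small illustrative sketch of the 64-KB billing-increment rule and the related size limits described above (the constant and function names are hypothetical, not part of any Event Grid SDK):

```python
import math

KB = 1024
MAX_EVENT_BYTES = 1024 * KB    # each event may be up to 1 MB
MAX_ARRAY_BYTES = 1024 * KB    # a published event array may total up to 1 MB
MAX_BATCH_EVENTS = 5000        # maximum events per received batch

def billed_units(event_bytes: int) -> int:
    """Events over 64 KB are charged in 64-KB increments."""
    if event_bytes > MAX_EVENT_BYTES:
        raise ValueError("event exceeds the 1-MB per-event limit")
    return max(1, math.ceil(event_bytes / (64 * KB)))

print(billed_units(10 * KB))    # 1 (at or under 64 KB counts as one unit)
print(billed_units(100 * KB))   # 2 (charged as two 64-KB increments)
```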
For example, a valid event data schema is:
event-hubs Event Hubs Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-ip-filtering.md
Title: Azure Event Hubs Firewall Rules | Microsoft Docs description: Use Firewall Rules to allow connections from specific IP addresses to Azure Event Hubs. Previously updated : 05/10/2021 Last updated : 10/28/2021 # Allow access to Azure Event Hubs namespaces from specific IP addresses or ranges
This section shows you how to use the Azure portal to create IP firewall rules f
1. Navigate to your **Event Hubs namespace** in the [Azure portal](https://portal.azure.com). 4. Select **Networking** under **Settings** on the left menu.
+1. On the **Networking** page, for **Public network access**, choose one of the following three options. Choose the **Selected networks** option to allow access only from specified IP addresses.
+ - **Disabled**. This option disables any public access to the namespace. The namespace will be accessible only through [private endpoints](private-link-service.md).
+
+ :::image type="content" source="./media/event-hubs-firewall/public-access-disabled.png" alt-text="Networking page - public access tab - public network access is disabled.":::
+ - **Selected networks**. This option enables public access to the namespace from selected networks, using an access key.
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
- > [!WARNING]
- > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed via **public internet** (using the access key).
-
- :::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networks tab - selected networks option" lightbox="./media/event-hubs-firewall/selected-networks.png":::
-
- If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
-
- ![Screenshot that shows the "Firewall and virtual networks" page with the "All networks" option selected.](./media/event-hubs-firewall/firewall-all-networks-selected.png)
-1. To restrict access to specific IP addresses, confirm that the **Selected networks** option is selected. In the **Firewall** section, follow these steps:
- 1. Select **Add your client IP address** option to give your current client IP the access to the namespace.
- 2. For **address range**, enter a specific IPv4 address or a range of IPv4 address in CIDR notation.
-
- >[!WARNING]
- > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key).
-1. Specify whether you want to **allow trusted Microsoft services to bypass this firewall**. See [Trusted Microsoft services](#trusted-microsoft-services) for details.
-
- ![Firewall - All networks option selected](./media/event-hubs-firewall/firewall-selected-networks-trusted-access-disabled.png)
+ :::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/event-hubs-firewall/selected-networks.png":::
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+
+ :::image type="content" source="./media/event-hubs-firewall/firewall-all-networks-selected.png" lightbox="./media/event-hubs-firewall/firewall-all-networks-selected.png" alt-text="Screenshot that shows the Public access page with the All networks option selected.":::
+1. To restrict access to **specific IP addresses**, follow these steps:
+ 1. In the **Firewall** section, select the **Add your client IP address** option to give your current client IP address access to the namespace.
+ 2. For **address range**, enter a specific IPv4 address or a range of IPv4 addresses in CIDR notation.
+
+ To restrict access to **specific virtual networks**, see [Allow access from specific networks](event-hubs-service-endpoints.md).
+ 1. Specify whether you want to **allow trusted Microsoft services to bypass this firewall**. See [Trusted Microsoft services](#trusted-microsoft-services) for details.
+
+ :::image type="content" source="./media/event-hubs-firewall/firewall-selected-networks-trusted-access-disabled.png" lightbox="./media/event-hubs-firewall/firewall-selected-networks-trusted-access-disabled.png" alt-text="Firewall section highlighted in the Public access tab of the Networking page.":::
3. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up on the portal notifications. > [!NOTE]
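The address-range step above accepts a single IPv4 address or a range in CIDR notation. As an illustrative local pre-check (not part of the portal), Python's standard `ipaddress` module can validate that format, and it also shows why the **All networks** option is equivalent to accepting the 0.0.0.0/0 range:

```python
import ipaddress

def is_valid_firewall_entry(entry: str) -> bool:
    """Hypothetical pre-check: accept a single IPv4 address
    or an IPv4 range in CIDR notation."""
    try:
        ipaddress.IPv4Network(entry, strict=False)
    except ValueError:
        return False
    return True

print(is_valid_firewall_entry("10.0.0.0/24"))   # True
print(is_valid_firewall_entry("not-an-ip"))     # False

# 0.0.0.0/0 contains every IPv4 address, hence "All networks".
everything = ipaddress.IPv4Network("0.0.0.0/0")
print(ipaddress.IPv4Address("203.0.113.7") in everything)  # True
```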
event-hubs Event Hubs Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-service-endpoints.md
Title: Virtual Network service endpoints - Azure Event Hubs | Microsoft Docs description: This article provides information on how to add a Microsoft.EventHub service endpoint to a virtual network. Previously updated : 05/10/2021 Last updated : 10/28/2021 # Allow access to Azure Event Hubs namespaces from specific virtual networks
The integration of Event Hubs with [Virtual Network (VNet) Service Endpoints][vn
Once configured to be bound to at least one virtual network subnet service endpoint, the respective Event Hubs namespace no longer accepts traffic from anywhere but authorized subnets in virtual networks. From the virtual network perspective, binding an Event Hubs namespace to a service endpoint configures an isolated networking tunnel from the virtual network subnet to the messaging service.
-The result is a private and isolated relationship between the workloads bound to the subnet and the respective Event Hubs namespace, in spite of the observable network address of the messaging service endpoint being in a public IP range. There's an exception to this behavior. Enabling a service endpoint, by default, enables the `denyall` rule in the [IP firewall](event-hubs-ip-filtering.md) associated with the virtual network. You can add specific IP addresses in the IP firewall to enable access to the Event Hub public endpoint.
+The result is a private and isolated relationship between the workloads bound to the subnet and the respective Event Hubs namespace, even though the observable network address of the messaging service endpoint is in a public IP range. There's an exception to this behavior. Enabling a service endpoint, by default, enables the `denyall` rule in the [IP firewall](event-hubs-ip-filtering.md) associated with the virtual network. You can add specific IP addresses in the IP firewall to enable access to the Event Hubs public endpoint.
## Important points - This feature isn't supported in the **basic** tier.
This section shows you how to use Azure portal to add a virtual network service
1. Navigate to your **Event Hubs namespace** in the [Azure portal](https://portal.azure.com). 4. Select **Networking** under **Settings** on the left menu. -
- > [!WARNING]
- > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed via **public internet** (using the access key).
-
- :::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networks tab - selected networks option" lightbox="./media/event-hubs-firewall/selected-networks.png":::
-
- If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
-
- ![Firewall - All networks option selected](./media/event-hubs-firewall/firewall-all-networks-selected.png)
-1. To restrict access to specific networks, select the **Selected Networks** option at the top of the page if it isn't already selected.
-2. In the **Virtual Network** section of the page, select **+Add existing virtual network***. Select **+ Create new virtual network** if you want to create a new VNet.
-
- ![add existing virtual network](./media/event-hubs-tutorial-vnet-and-firewalls/add-vnet-menu.png)
-
- >[!WARNING]
- > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key).
+1. On the **Networking** page, for **Public network access**, choose one of the following three options. Choose the **Selected networks** option to allow access only from specific virtual networks.
+ - **Disabled**. This option disables any public access to the namespace. The namespace will be accessible only through [private endpoints](private-link-service.md).
+
+ :::image type="content" source="./media/event-hubs-firewall/public-access-disabled.png" alt-text="Networking page - public access tab - public network access is disabled.":::
+ - **Selected networks**. This option enables public access to the namespace from selected networks, using an access key.
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
+
+ :::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/event-hubs-firewall/selected-networks.png":::
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+
+ :::image type="content" source="./media/event-hubs-firewall/firewall-all-networks-selected.png" lightbox="./media/event-hubs-firewall/firewall-all-networks-selected.png" alt-text="Screenshot that shows the Public access page with the All networks option selected.":::
+1. To restrict access to specific networks, choose the **Selected networks** option at the top of the page if it isn't already selected.
+2. In the **Virtual networks** section of the page, select **+ Add existing virtual network**. Select **+ Create new virtual network** if you want to create a new VNet.
+
+ :::image type="content" source="./media/event-hubs-tutorial-vnet-and-firewalls/add-vnet-menu.png" lightbox="./media/event-hubs-tutorial-vnet-and-firewalls/add-vnet-menu.png" alt-text="Selection of Add existing virtual network menu item.":::
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
3. Select the virtual network from the list of virtual networks, and then pick the **subnet**. You have to enable the service endpoint before adding the virtual network to the list. If the service endpoint isn't enabled, the portal will prompt you to enable it.
- ![select subnet](./media/event-hubs-tutorial-vnet-and-firewalls/select-subnet.png)
-
+ :::image type="content" source="./media/event-hubs-tutorial-vnet-and-firewalls/select-subnet.png" lightbox="./media/event-hubs-tutorial-vnet-and-firewalls/select-subnet.png" alt-text="Image showing the selection of a subnet.":::
4. You should see the following successful message after the service endpoint for the subnet is enabled for **Microsoft.EventHub**. Select **Add** at the bottom of the page to add the network.
- ![select subnet and enable endpoint](./media/event-hubs-tutorial-vnet-and-firewalls/subnet-service-endpoint-enabled.png)
+ :::image type="content" source="./media/event-hubs-tutorial-vnet-and-firewalls/subnet-service-endpoint-enabled.png" lightbox="./media/event-hubs-tutorial-vnet-and-firewalls/subnet-service-endpoint-enabled.png" alt-text="Image showing the selection of a subnet and enabling an endpoint.":::
> [!NOTE] > If you are unable to enable the service endpoint, you may ignore the missing virtual network service endpoint using the Resource Manager template. This functionality is not available on the portal. 5. Specify whether you want to **allow trusted Microsoft services to bypass this firewall**. See [Trusted Microsoft services](#trusted-microsoft-services) for details. 6. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up on the portal notifications.
- ![Save network](./media/event-hubs-tutorial-vnet-and-firewalls/save-vnet.png)
+ :::image type="content" source="./media/event-hubs-tutorial-vnet-and-firewalls/save-vnet.png" lightbox="./media/event-hubs-tutorial-vnet-and-firewalls/save-vnet.png" alt-text="Image showing the saving of virtual network.":::
> [!NOTE] > To restrict access to specific IP addresses or ranges, see [Allow access from specific IP addresses or ranges](event-hubs-ip-filtering.md).
event-hubs Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/private-link-service.md
If you already have an Event Hubs namespace, you can create a private link conne
1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the search bar, type in **event hubs**. 3. Select the **namespace** from the list to which you want to add a private endpoint.
-4. Select **Networking** under **Settings** on the left menu.
-
- :::image type="content" source="./media/private-link-service/selected-networks-page.png" alt-text="Networks tab - selected networks option" lightbox="./media/private-link-service/selected-networks-page.png":::
+1. On the **Networking** page, for **Public network access**, choose one of the following three options. Select **Disabled** if you want the namespace to be accessed only via private endpoints.
+ - **Disabled**. This option disables any public access to the namespace. The namespace will be accessible only through [private endpoints](private-link-service.md).
+
+ :::image type="content" source="./media/event-hubs-firewall/public-access-disabled.png" alt-text="Networking page - public access tab - public network access is disabled.":::
+ - **Selected networks**. This option enables public access to the namespace from selected networks, using an access key.
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
+
+ :::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/event-hubs-firewall/selected-networks.png":::
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
- > [!WARNING]
- > By default, the **Selected networks** option is selected. If you don't specify an IP firewall rule or add a virtual network, the namespace can be accessed via public internet (using the access key).
-1. Select the **Private endpoint connections** tab at the top of the page.
+ :::image type="content" source="./media/event-hubs-firewall/firewall-all-networks-selected.png" lightbox="./media/event-hubs-firewall/firewall-all-networks-selected.png" alt-text="Screenshot that shows the Public access page with the All networks option selected.":::
+1. Switch to the **Private endpoint connections** tab.
1. Select the **+ Private Endpoint** button at the top of the page.
- :::image type="content" source="./media/private-link-service/private-link-service-3.png" alt-text="Networking page - Private endpoint connections tab - Add private endpoint link":::
+ :::image type="content" source="./media/private-link-service/private-link-service-3.png" lightbox="./media/private-link-service/private-link-service-3.png" alt-text="Networking page - Private endpoint connections tab - Add private endpoint link.":::
7. On the **Basics** page, follow these steps: 1. Select the **Azure subscription** in which you want to create the private endpoint. 2. Select the **resource group** for the private endpoint resource.
$privateEndpointConnection = New-AzPrivateLinkServiceConnection `
-PrivateLinkServiceId $namespaceResource.ResourceId ` -GroupId "namespace"
-# get subnet object that you will use later
+# get subnet object that you'll use later
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $rgName -Name $vnetName $subnet = $virtualNetwork | Select -ExpandProperty subnets ` | Where-Object {$_.Name -eq $subnetName}
There are four provisioning states:
5. Go to the appropriate section below based on the operation you want to: approve, reject, or remove. ### Approve a private endpoint connection
-1. If there are any connections that are pending, you will see a connection listed with **Pending** in the provisioning state.
+1. If there are any connections that are pending, you'll see a connection listed with **Pending** in the provisioning state.
2. Select the **private endpoint** you wish to approve 3. Select the **Approve** button.
There are four provisioning states:
### Reject a private endpoint connection
-1. If there are any private endpoint connections you want to reject, whether it is a pending request or existing connection, select the connection and click the **Reject** button.
+1. If there are any private endpoint connections you want to reject, whether it's a pending request or an existing connection, select the connection and then select the **Reject** button.
![Reject private endpoint](./media/private-link-service/private-endpoint-reject-button.png) 2. On the **Reject connection** page, enter a comment (optional), and select **Yes**. If you select **No**, nothing happens.
There are four provisioning states:
1. To remove a private endpoint connection, select it in the list, and select **Remove** on the toolbar. 2. On the **Delete connection** page, select **Yes** to confirm the deletion of the private endpoint. If you select **No**, nothing happens.
-3. You should see the status changed to **Disconnected**. Then, you will see the endpoint disappear from the list.
+3. You should see the status changed to **Disconnected**. Then, you'll see the endpoint disappear from the list.
## Validate that the private link connection works
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
If you are remote and don't have fiber connectivity or you want to explore other
| **[Masergy](https://www.masergy.com/solutions/hybrid-networking/cloud-marketplace/microsoft-azure)** | Equinix | Washington DC | | **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | Teraco | Cape Town, Johannesburg | | **[NexGen Networks](https://www.nexgen-net.com/nexgen-networks-direct-connect-microsoft-azure-expressroute.html)** | Interxion | London |
-| **[Nianet](https://nianet.dk/produkter/internet/microsoft-expressroute)** |Equinix | Amsterdam, Frankfurt |
+| **[Nianet](https://www.globalconnect.dk/)** |Equinix | Amsterdam, Frankfurt |
| **[Oncore Cloud Service Inc](https://www.oncore.cloud/services/ue-for-expressroute)**| Equinix | Toronto | | **[POST Telecom Luxembourg](https://www.teralinksolutions.com/cloud-connectivity/cloudbridge-to-azure-expressroute/)**|Equinix | Amsterdam | | **[Proximus](https://www.proximus.be/en/id_b_cl_proximus_external_cloud_connect/companies-and-public-sector/discover/magazines/expert-blog/proximus-external-cloud-connect.html)**|Equinix | Amsterdam, Dublin, London, Paris |
governance Guest Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration-custom.md
Before you begin, it's a good idea to read the overview of
[A video walk-through of this document is available](https://youtu.be/nYd55FiKpgs). Guest configuration uses
-[Desired State Configuration (DSC)](/powershell/dsc/overview/overview)
+[Desired State Configuration (DSC)](/powershell/dsc/overview)
version 3 to audit and configure machines. The DSC configuration defines the state that the machine should be in. There are many notable differences in how DSC is implemented in guest configuration.
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
these tools automatically.
|Operating system|Validation tool|Notes| |-|-|-|
-|Windows|[PowerShell Desired State Configuration](/powershell/dsc/overview/overview) v3| Side-loaded to a folder only used by Azure Policy. Won't conflict with Windows PowerShell DSC. PowerShell Core isn't added to system path.|
-|Linux|[PowerShell Desired State Configuration](/powershell/dsc/overview/overview) v3| Side-loaded to a folder only used by Azure Policy. PowerShell Core isn't added to system path.|
+|Windows|[PowerShell Desired State Configuration](/powershell/dsc/overview) v3| Side-loaded to a folder only used by Azure Policy. Won't conflict with Windows PowerShell DSC. PowerShell Core isn't added to system path.|
+|Linux|[PowerShell Desired State Configuration](/powershell/dsc/overview) v3| Side-loaded to a folder only used by Azure Policy. PowerShell Core isn't added to system path.|
|Linux|[Chef InSpec](https://www.chef.io/inspec/) | Installs Chef InSpec version 2.2.61 in default location and added to system path. Dependencies for the InSpec package including Ruby and Python are installed as well. | ### Validation frequency
iot-develop Quickstart Devkit Nxp Mimxrt1050 Evkb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-nxp-mimxrt1050-evkb.md
[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/NXP/MIMXRT1050-EVKB/)
-In this quickstart, you use Azure RTOS to connect an NXP MIMXRT1050-EVKB Evaluation kit (hereafter, NXP EVK) to Azure IoT.
+In this quickstart, you use Azure RTOS to connect an NXP MIMXRT1050-EVKB Evaluation kit (from now on, NXP EVK) to Azure IoT.
-You will complete the following tasks:
+You'll complete the following tasks:
* Install a set of embedded development tools for programming an NXP EVK in C * Build an image and flash it onto the NXP EVK
iot-hub Iot Hub Devguide Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-jobs.md
Jobs are initiated by the solution back end and maintained by IoT Hub. You can i
> [!NOTE] > When you initiate a job, property names and values can only contain US-ASCII printable alphanumeric, except any in the following set: `$ ( ) < > @ , ; : \ " / [ ] ? = { } SP HT`
+> [!NOTE]
+> The `jobId` field must be 64 characters or less and can only contain US-ASCII letters, numbers, and the dash (`-`) character.
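The `jobId` constraints in the note above can be mirrored by a simple client-side check before submitting a job. This is an assumed helper for illustration, not part of any IoT Hub SDK:

```python
import re

# Assumed client-side validation mirroring the documented rules:
# 64 characters or less; US-ASCII letters, numbers, and dashes only.
_JOB_ID = re.compile(r"[A-Za-z0-9-]{1,64}")

def is_valid_job_id(job_id: str) -> bool:
    """Return True if job_id satisfies the documented jobId constraints."""
    return bool(_JOB_ID.fullmatch(job_id))

print(is_valid_job_id("reboot-devices-2022"))  # True
print(is_valid_job_id("bad_id!"))              # False: underscore and '!'
```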
+ ## Jobs to execute direct methods The following snippet shows the HTTPS 1.1 request details for executing a [direct method](iot-hub-devguide-direct-methods.md) on a set of devices using a job:
Other reference topics in the IoT Hub developer guide include:
To try out some of the concepts described in this article, see the following IoT Hub tutorial:
-* [Schedule and broadcast jobs](iot-hub-node-node-schedule-jobs.md)
+* [Schedule and broadcast jobs](iot-hub-node-node-schedule-jobs.md)
key-vault Rbac Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/rbac-migration.md
Access policy predefined permission templates:
| Exchange Online Customer Key | Keys: get, list, wrap key, unwrap key | Key Vault Crypto Service Encryption User| | Azure Information BYOK | Keys: get, decrypt, sign | N/A<br>Custom role required|
+> [!NOTE]
+> Azure App Service certificate configuration does not support the Key Vault RBAC permission model.
## Assignment scopes mapping
The vault access policy permission model is limited to assigning policies only a
In general, it's best practice to have one key vault per application and manage access at key vault level. There are scenarios when managing access at other scopes can simplify access management. -- **Infrastructure, security administrators and operators: managing group of key vaults at management group, subscription or resource group level with vault access policies requires maintaining policies for each key vault. Azure RBAC allows creating one role assignment at management group, subscription, or resource group. That assignment will apply to any new key vaults created under the same scope. In this scenario, it's recommended to use Privileged Identity Management with just-in time access over providing permanent access.
+- **Infrastructure, security administrators and operators**: managing a group of key vaults at the management group, subscription, or resource group level with vault access policies requires maintaining policies for each key vault. Azure RBAC allows creating one role assignment at the management group, subscription, or resource group. That assignment applies to any new key vaults created under the same scope. In this scenario, it's recommended to use Privileged Identity Management with just-in-time access over providing permanent access.
-- **Applications: there are scenarios when application would need to share secret with other application. Using vault access polices separate key vault had to be created to avoid giving access to all secrets. Azure RBAC allows assign role with scope for individual secret instead using single key vault.
+- **Applications**: there are scenarios when an application needs to share a secret with another application. With vault access policies, a separate key vault had to be created to avoid giving access to all secrets. Azure RBAC lets you assign a role scoped to an individual secret instead of using a separate key vault.
## Vault access policy to Azure RBAC migration steps There are many differences between Azure RBAC and vault access policy permission model. In order, to avoid outages during migration, below steps are recommended.
For more information, see
## Troubleshooting - Role assignment not working after several minutes - there are situations when role assignments can take longer. It's important to write retry logic in code to cover those cases.-- Role assignments disappeared when Key Vault was deleted (soft-delete) and recovered - it's currently a limitation of soft-delete feature across all Azure services. It's required to recreate all role assignments after recovery.
+- Role assignments disappeared when Key Vault was deleted (soft-delete) and recovered - it's currently a limitation of soft-delete feature across all Azure services. It's required to recreate all role assignments after recovery.
## Learn more
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/how-to-configure-key-rotation.md
Key rotation policy can also be configured using ARM templates.
"description": "The name of the key to be created." } },
- "rotateTimeAfterCreation": {
+ "rotationTimeAfterCreate": {
"defaultValue": "P18M", "type": "String", "metadata": {
Key rotation policy can also be configured using ARM templates.
"lifetimeActions": [ { "trigger": {
- "timeAfterCreate": "[parameters('rotateTimeAfterCreation')]",
+ "timeAfterCreate": "[parameters('rotationTimeAfterCreate')]",
"timeBeforeExpiry": "" }, "action": {
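The `timeAfterCreate` value in the template is an ISO 8601 duration, for example `P18M` for 18 months after key creation. A minimal sketch of reading the year/month parts, assuming only `Y` and `M` designators are present:

```python
import re

def months_from_duration(duration: str) -> int:
    """Sketch: convert an ISO 8601 duration with only year/month
    designators (such as "P18M" or "P1Y6M") to a month count."""
    match = re.fullmatch(r"P(?:(\d+)Y)?(?:(\d+)M)?", duration)
    if match is None or match.group(0) == "P":
        raise ValueError(f"unsupported duration: {duration}")
    years = int(match.group(1) or 0)
    months = int(match.group(2) or 0)
    return 12 * years + months

print(months_from_duration("P18M"))   # 18
print(months_from_duration("P1Y6M"))  # 18
```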
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
Now that you have the application deployed and running, you can run your first l
## Configure and create the load test
-In this section, you'll create a load test by using an existing Apache JMeter test script.
+In this section, you'll create a load test by using a sample Apache JMeter test script.
### Configure the Apache JMeter script
-The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls on each test iteration:
+The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls to the web app on each test iteration:
* `add`: Carries out a data insert operation on Azure Cosmos DB for the number of visitors on the web app. * `get`: Carries out a GET operation from Azure Cosmos DB to retrieve the count. * `lasttimestamp`: Updates the time stamp since the last user went to the website.
-In this section, you'll update the Apache JMeter script with the URL of the sample web app that you just deployed.
+> [!NOTE]
+> The sample Apache JMeter script requires two plugins: `Custom Thread Groups` and `Throughput Shaping Timer`. To open the script on your local Apache JMeter instance, you need to install both plugins. You can use the [Apache JMeter Plugins Manager](https://jmeter-plugins.org/install/Install/) to do this.
+
+To load test the sample web app that you deployed previously, you need to update the API URLs in the Apache JMeter script.
1. Open the directory of the cloned sample app in Visual Studio Code:
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-using-sap-connector.md
An ISE provides access to resources that are protected by an Azure virtual netwo
1. If you don't already have an Azure Storage account with a blob container, create a container using either the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md).
-1. [Download and install the latest SAP client library](#sap-client-library-prerequisites) on your local computer. You should have the following assembly files:
+1. [Download and install the latest SAP client library](#sap-client-library-prerequisites) on your local computer. You should have the following assembly (.dll) files:
* libicudecnumber.dll
The following list describes the prerequisites for the SAP client library that y
* You must have the 64-bit version of the SAP client library installed, because the data gateway only runs on 64-bit systems. Installing the unsupported 32-bit version results in a "bad image" error.
-* Copy the assembly files from the default installation folder to another location, based on your scenario as follows.
+* From the client library's default installation folder, copy the assembly (.dll) files to another location, based on your scenario as follows:
* For a logic app workflow that runs in an ISE, follow the [ISE prerequisites](#ise-prerequisites) instead.
- * For a logic app workflow that runs in multi-tenant Azure and uses your on-premises data gateway, copy the assembly files to the data gateway installation folder.
+ * For a logic app workflow that runs in multi-tenant Azure and uses your on-premises data gateway, copy the DLL files to the on-premises data gateway installation folder, for example, "C:\Program Files\On-Premises Data Gateway".
> [!NOTE] > If your SAP connection fails with the error message, **Please check your account info and/or permissions and try again**,
- > make sure you copied the assembly files to the data gateway installation folder.
+ > make sure you copied the assembly (.dll) files to the data gateway installation folder, for example, "C:\Program Files\On-Premises Data Gateway".
> > You can troubleshoot further issues using the [.NET assembly binding log viewer](/dotnet/framework/tools/fuslogvw-exe-assembly-binding-log-viewer). > This tool lets you check that your assembly files are in the correct location.
If you're enabling SNC through an external security product, copy the SNC librar
> The version of your SNC library and its dependencies must be compatible with your SAP environment. > > * You must use `sapgenpse.exe` specifically as the SAPGENPSE utility.
-> * If you use an on-premises data gateway, also copy these same binary files to the installation folder there.
+> * If you use an on-premises data gateway, also copy these same binary files to the installation folder there, for example, "C:\Program Files\On-Premises Data Gateway".
> * If PSE is provided in your connection, you don't need to copy and set up PSE and SECUDIR for your on-premises data gateway. > * You can also use your on-premises data gateway to troubleshoot any library compatibility issues.
To enable sending SAP telemetry to Application insights, follow these steps:
1. Download the NuGet package for **Microsoft.ApplicationInsights.EventSourceListener.dll** from this location: [https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/2.14.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/2.14.0).
-1. Add the downloaded file to your on-premises data gateway installation directory.
+1. Add the downloaded file to your on-premises data gateway installation directory, for example, "C:\Program Files\On-Premises Data Gateway".
1. In your on-premises data gateway installation directory, check that the **Microsoft.ApplicationInsights.dll** file has the same version number as the **Microsoft.ApplicationInsights.EventSourceListener.dll** file that you added. The gateway currently uses version 2.14.0.
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-workspace.md
Previously updated : 10/21/2021 Last updated : 01/04/2022 #Customer intent: As a data scientist, I want to understand the purpose of a workspace for Azure Machine Learning.
When you create a new workspace, it automatically creates several Azure resource
> [!NOTE] > If your subscription setting requires adding tags to resources under it, Azure Container Registry (ACR) created by Azure Machine Learning will fail, since we cannot set tags to ACR.
-+ [Azure Application Insights](https://azure.microsoft.com/services/application-insights/): Stores monitoring information about your models.
++ [Azure Application Insights](https://azure.microsoft.com/services/application-insights/): Stores monitoring and diagnostics information. For more information, see [Monitor and collect data from Machine Learning web service endpoints](../../articles/machine-learning/how-to-enable-app-insights.md).+
+ > [!NOTE]
+ > You can delete the Application Insights instance after cluster creation if you want. Deleting it limits the information gathered from the workspace, and may make it more difficult to troubleshoot problems. __If you delete the Application Insights instance created by the workspace, you cannot re-create it without deleting and recreating the workspace__.
+ [Azure Key Vault](https://azure.microsoft.com/services/key-vault/): Stores secrets that are used by compute targets and other sensitive information that's needed by the workspace.
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-data.md
# Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute to train my machine learning models.
-# Connect to storage services on Azure
+# Connect to storage services on Azure with datastores
In this article, learn how to connect to data storage services on Azure with Azure Machine Learning datastores and the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-private-link.md
Previously updated : 10/21/2021 Last updated : 01/05/2022 # Configure a private endpoint for an Azure Machine Learning workspace
Finally, select __Create__ to create the private endpoint.
## Remove a private endpoint
-Use one of the following methods to remove a private endpoint from a workspace:
+You can remove one or all private endpoints for a workspace. Removing a private endpoint removes the workspace from the VNet that the endpoint was associated with. This may prevent the workspace from accessing resources in that VNet, or resources in the VNet from accessing the workspace (for example, if the VNet doesn't allow access to or from the public internet).
-> [!IMPORTANT]
-> Public access is not enabled when you delete a private endpoint for a workspace. To enable public access, see the [Enable public access section](how-to-configure-private-link.md#enable-public-access).
+> [!WARNING]
+> Removing the private endpoints for a workspace __doesn't make it publicly accessible__. To make the workspace publicly accessible, use the steps in the [Enable public access](#enable-public-access) section.
+
+To remove a private endpoint, use the following information:
# [Python](#tab/python)
-Use [Workspace.delete_private_endpoint_connection](/python/api/azureml-core/azureml.core.workspace(class)#delete-private-endpoint-connection-private-endpoint-connection-name-) to remove a private endpoint.
+To remove a private endpoint, use [Workspace.delete_private_endpoint_connection](/python/api/azureml-core/azureml.core.workspace(class)#delete-private-endpoint-connection-private-endpoint-connection-name-). The following example demonstrates how to remove a private endpoint:
```python from azureml.core import Workspace
The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learn
# [Portal](#tab/azure-portal)
-From the Azure Machine Learning workspace in the portal, select __Private endpoint connections__, and then select the endpoint you want to remove. Finally, select __Remove__.
+1. From the [Azure portal](https://portal.azure.com), select your Azure Machine Learning workspace.
+1. From the left side of the page, select __Networking__ and then select the __Private endpoint connections__ tab.
+1. Select the endpoint to remove and then select __Remove__.
++++
+## Enable public access
+
+In some situations, you may want to allow someone to connect to your secured workspace over a public endpoint, instead of through the VNet. Or you may want to remove the workspace from the VNet and re-enable public access.
+
+> [!IMPORTANT]
+> Enabling public access doesn't remove any private endpoints that exist. All communications between components behind the VNet that the private endpoint(s) connect to are still secured. It enables public access only to the workspace, in addition to the private access through any private endpoints.
+
+> [!WARNING]
+> When connecting over the public endpoint while the workspace uses a private endpoint to communicate with other resources:
+> * __Some features of studio will fail to access your data__. This problem happens when the _data is stored on a service that is secured behind the VNet_. For example, an Azure Storage Account.
+> * Using Jupyter, JupyterLab, and RStudio on a compute instance, including running notebooks, __is not supported__.
+
+To enable public access, use the following steps:
+
+# [Python](#tab/python)
+
+To enable public access, use [Workspace.update](/python/api/azureml-core/azureml.core.workspace(class)#update-friendly-name-none--description-none--tags-none--image-build-compute-none--service-managed-resources-settings-none--primary-user-assigned-identity-none--allow-public-access-when-behind-vnet-none-) and set `allow_public_access_when_behind_vnet=True`.
+
+```python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+ws.update(allow_public_access_when_behind_vnet=True)
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az_ml_workspace_update) command. To enable public access to the workspace, add the parameter `--allow-public-access true`.
+
+# [Portal](#tab/azure-portal)
+
+1. From the [Azure portal](https://portal.azure.com), select your Azure Machine Learning workspace.
+1. From the left side of the page, select __Networking__ and then select the __Public access__ tab.
+1. Select __All networks__, and then select __Save__.
+
If you want to create an isolated Azure Kubernetes Service used by the workspace
:::image type="content" source="./media/how-to-configure-private-link/multiple-private-endpoint-workspace-aks.png" alt-text="Diagram of isolated AKS VNet":::
-## Enable public access
-
-In some situations, you may want to allow someone to connect to your secured workspace over a public endpoint, instead of through the VNet. After configuring a workspace with a private endpoint, you can optionally enable public access to the workspace. Doing so does not remove the private endpoint. All communications between components behind the VNet is still secured. It enables public access only to the workspace, in addition to the private access through the VNet.
-
-> [!WARNING]
-> When connecting over the public endpoint:
-> * __Some features of studio will fail to access your data__. This problem happens when the _data is stored on a service that is secured behind the VNet_. For example, an Azure Storage Account.
-> * Using Jupyter, JupyterLab, and RStudio on a compute instance, including running notebooks, __is not supported__.
-
-To enable public access to a private endpoint-enabled workspace, use the following steps:
-
-# [Python](#tab/python)
-
-Use [Workspace.delete_private_endpoint_connection](/python/api/azureml-core/azureml.core.workspace(class)#delete-private-endpoint-connection-private-endpoint-connection-name-) to remove a private endpoint.
-
-```python
-from azureml.core import Workspace
-
-ws = Workspace.from_config()
-ws.update(allow_public_access_when_behind_vnet=True)
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az_ml_workspace_update) command. To enable public access to the workspace, add the parameter `--allow-public-access true`.
-
-# [Portal](#tab/azure-portal)
-
-Currently there is no way to enable this functionality using the portal.
--- ## Next steps * For more information on securing your Azure Machine Learning workspace, see the [Virtual network isolation and privacy overview](how-to-network-security-overview.md) article.
machine-learning How To Enable App Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-app-insights.md
Previously updated : 10/21/2021 Last updated : 01/04/2022
In this article, you learn how to collect data from models deployed to web servi
The [enable-app-insights-in-production-service.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) notebook demonstrates concepts in this article. [!INCLUDE [aml-clone-in-azure-notebook](../../includes/aml-clone-for-examples.md)]+
+> [!IMPORTANT]
+> The information in this article relies on the Azure Application Insights instance that was created with your workspace. If you deleted this Application Insights instance, there is no way to re-create it other than deleting and recreating the workspace.
## Prerequisites
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-workspace-cli.md
Previously updated : 09/23/2021 Last updated : 01/05/2022
In this article, you learn how to create and manage Azure Machine Learning works
[!INCLUDE [register-namespace](../../includes/machine-learning-register-namespace.md)] + ## Connect the CLI to your Azure subscription > [!IMPORTANT]
machine-learning How To Manage Workspace Terraform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-workspace-terraform.md
Previously updated : 10/21/2021 Last updated : 01/05/2022
A Terraform configuration is a document that defines the resources that are need
* An installed version of the [Azure CLI](/cli/azure/). * Configure Terraform: follow the directions in this article and the [Terraform and configure access to Azure](/azure/developer/terraform/get-started-cloud-shell) article.
+## Limitations
+++ ## Declare the Azure provider Create the Terraform configuration file that declares the Azure provider:
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-workspace.md
As your needs change or requirements for automation increase you can also manage
By default, creating a workspace also creates an Azure Container Registry (ACR). Since ACR does not currently support unicode characters in resource group names, use a resource group that does not contain these characters. + ## Create a workspace # [Python](#tab/python)
marketplace Co Sell Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-configure.md
description: The information you provide on the Co-sell with Microsoft tab for y
--++ Last updated 1/04/2021
marketplace Co Sell Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-overview.md
description: The Microsoft Partner Center Co-sell program for partners can help
--++ Last updated 12/03/2021
marketplace Co Sell Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-requirements.md
description: Learn about the requirements an offer in the Microsoft commercial m
--++ Last updated 12/03/2021
marketplace Co Sell Solution Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-solution-migration.md
description: Migrate Co-sell solutions from OCP GTM to Partner Center (Azure Mar
--++ Last updated 09/27/2021
marketplace Co Sell Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-status.md
description: Learn how to verify the co-sell status of an offer in the Microsoft
--++ Last updated 09/27/2021
marketplace Commercial Marketplace Co Sell Countries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/commercial-marketplace-co-sell-countries.md
description: Use these two-letter country/region codes when providing contact in
--+++ Last updated 04/27/2021
marketplace Commercial Marketplace Co Sell States https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/commercial-marketplace-co-sell-states.md
description: Get the available state and province codes when providing contact i
--+++ Last updated 04/27/2021
orbital Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/contact-profile.md
Configure a contact profile with Azure Orbital to save and reuse contact configu
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Create a contact profile resource
-1. Select **Create a resource** in the upper left-hand corner of the portal.
-2. In the search box, enter **Contact profile**. Select **Contact profile** in the search results.
-3. In the **Contact profile** page, select **Create**.
-4. In **Create contact profile resource**, enter or select this information in the **Basics** tab:
+1. In the Azure portal search box, enter **Contact profile**. Select **Contact profile** in the search results.
+2. In the **Contact profile** page, select **Create**.
+3. In **Create contact profile resource**, enter or select this information in the **Basics** tab:
| **Field** | **Value** | | | | | Subscription | Select your subscription | | Resource group | Select your resource group |
- | Name | Enter contact profile name. Specify the antenna provider and mission information here. *i.e. Microsoft_Aqua_Uplink+Downlink_1* |
+ | Name | Enter the contact profile name. Specify the antenna provider and mission information here. For example, *Microsoft_Aqua_Uplink+Downlink_1* |
| Region | Select **West US 2** |
- | Minimum viable contact duration | Define the minimum duration of the contact as a prerequisite to show you available time slots to communicate with your spacecraft. If an available time window is less than this time, it won't show in the list of available options. Provide minimum contact duration in ISO 8601 format. *i.e. PT1M* |
+ | Minimum viable contact duration | Define the minimum duration of the contact as a prerequisite to show you available time slots to communicate with your spacecraft. If an available time window is less than this time, it won't show in the list of available options. Provide the minimum contact duration in ISO 8601 format. For example, *PT1M* |
| Minimum elevation | Define minimum elevation of the contact, after acquisition of signal (AOS), as a prerequisite to show you available time slots to communicate with your spacecraft. Using higher value can reduce the duration of the contact. Provide minimum viable elevation in decimal degrees. | | Auto track configuration | Select the frequency band to be used for autotracking during the contact. X band, S band, or Disabled. |
- | Event Hubs Namespace | Select an Event Hubs Namespace to which you will send telemetry data of your contacts. Select a Subscription before you can select an Event Hubs Namespace. |
+ | Event Hubs Namespace | Select an Event Hubs Namespace to which you'll send telemetry data of your contacts. Select a Subscription before you can select an Event Hubs Namespace. |
| Event Hubs Instance | Select an Event Hubs Instance that belongs to the previously selected Namespace. *This field will only appear if an Event Hubs Namespace is selected first*. | :::image type="content" source="media/orbital-eos-contact-profile.png" alt-text="Contact Profile Resource Page" lightbox="media/orbital-eos-contact-profile.png":::
-5. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
-6. In the **Links** page, select **Add new Link**
-7. In the **Add Link** page, enter, or select this information per link direction:
+4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
+5. In the **Links** page, select **Add new Link**
+6. In the **Add Link** page, enter, or select this information per link direction:
| **Field** | **Value** | | | |
Sign in to the [Azure portal](https://portal.azure.com).
:::image type="content" source="media/orbital-eos-contact-link.png" alt-text="Contact Profile Links Page" lightbox="media/orbital-eos-contact-link.png":::
-8. Select the **Submit** button
-9. Select the **Review + create** tab or select the **Review + create** button
-10. Select the **Create** button
+7. Select the **Submit** button
+8. Select the **Review + create** tab or select the **Review + create** button
+9. Select the **Create** button
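The **Minimum viable contact duration** field above takes an ISO 8601 duration such as *PT1M*. As an illustrative sketch, you can sanity-check such values before entering them; the simplified pattern below is not the service's own parser:

```python
import re

# Illustrative sketch: validate ISO 8601 duration strings such as the
# "PT1M" minimum-contact-duration value shown above. This simplified
# pattern covers years/months/days plus hours/minutes/seconds.
ISO8601_DURATION = re.compile(
    r"^P(?=.)(\d+Y)?(\d+M)?(\d+D)?(T(?=\d)(\d+H)?(\d+M)?(\d+S)?)?$"
)

def is_valid_duration(value: str) -> bool:
    """Return True when value looks like an ISO 8601 duration."""
    return ISO8601_DURATION.match(value) is not None

print(is_valid_duration("PT1M"))    # the example from the table
print(is_valid_duration("P1DT2H"))  # one day, two hours
print(is_valid_duration("1M"))      # invalid: missing the leading 'P'
```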
## Next steps
orbital Delete Contact https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/delete-contact.md
To cancel a scheduled contact, the contact entry must be deleted on the **Contac
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Delete a scheduled contact entry
orbital Orbital Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/orbital-preview.md
# Onboard to the Azure Orbital Preview
-Azure Orbital is now on preview, to get access an Azure subscription must be onboarded. Without this onboarding process, the Azure Orbital resources won't be available in the Azure portal.
+Azure Orbital is now in preview. To get access, an Azure subscription must be onboarded. Without this onboarding process, the Azure Orbital resources won't be available in the portal.
## Prerequisites
Azure Orbital is now on preview, to get access an Azure subscription must be onb
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Register the Resource Provider
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/register-spacecraft.md
To contact a satellite, it must be registered as a spacecraft resource with the
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Create spacecraft resource
-1. Select **Create a resource** in the upper left-hand corner of the portal.
-2. In the search box, enter **Spacecrafts*. Select **Spacecrafts** in the search results.
-3. In the **Spacecrafts** page, select Create.
-4. In **Create spacecraft resource**, enter or select this information in the Basics tab:
+> [!NOTE]
+> Follow these steps exactly as written, or you won't be able to find the resources. Use the specific link above to sign in directly to the Azure Orbital Preview page.
+
+1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
+2. In the **Spacecrafts** page, select **Create**.
+3. In **Create spacecraft resource**, enter or select this information in the Basics tab:
| **Field** | **Value** | | | |
Sign in to the [Azure portal](https://portal.azure.com).
:::image type="content" source="media/orbital-eos-register-bird.png" alt-text="Register Spacecraft Resource Page" lightbox="media/orbital-eos-register-bird.png":::
-5. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
-6. In the **Links** page, enter or select this information:
+4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
+5. In the **Links** page, enter or select this information:
| **Field** | **Value** | | | |
Sign in to the [Azure portal](https://portal.azure.com).
:::image type="content" source="media/orbital-eos-register-links.png" alt-text="Spacecraft Links Resource Page" lightbox="media/orbital-eos-register-links.png":::
-7. Select the **Review + create** tab, or select the **Review + create** button.
-8. Select **Create**
+6. Select the **Review + create** tab, or select the **Review + create** button.
+7. Select **Create**
## Authorize the new spacecraft resource
orbital Schedule Contact https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/schedule-contact.md
Schedule a contact with the selected satellite for data retrieval/delivery on Az
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Select an available contact
orbital Update Tle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/update-tle.md
Update the TLE of an existing spacecraft resource.
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Update the spacecraft TLE
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concept-reserved-pricing.md
Azure Database for PostgreSQL now helps you save money by prepaying for compute
You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. An already running Azure Database for PostgreSQL (or ones that are newly deployed) will automatically get the benefit of reserved pricing. By purchasing a reservation, you're pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation does not cover software, networking, or storage charges associated with the PostgreSQL Database servers. At the end of the reservation term, the billing benefit expires, and the Azure Database for PostgreSQL servers are billed at the pay-as-you-go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/). </br> > [!IMPORTANT]
-> Reserved capacity pricing is available for the Azure Database for PostgreSQL in [Single server](./overview.md#azure-database-for-postgresqlsingle-server), [Flexible Server](flexible-server/overview.md), and [Hyperscale Citus](./overview.md#azure-database-for-postgresql--hyperscale-citus) deployment options. For information about RI pricing on Hyperscale (Citus), see [this page](concepts-hyperscale-reserved-pricing.md).
+> Reserved capacity pricing is available for the Azure Database for PostgreSQL in [Single server](./overview.md#azure-database-for-postgresqlsingle-server), [Flexible Server](flexible-server/overview.md), and [Hyperscale Citus](./overview.md#azure-database-for-postgresql--hyperscale-citus) deployment options. For information about RI pricing on Hyperscale (Citus), see [this page](hyperscale/concepts-reserved-pricing.md).
You can buy Azure Database for PostgreSQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
To learn more about Azure Reservations, see the following articles:
* [Understand Azure Reservations discount](../cost-management-billing/reservations/understand-reservation-charges.md) * [Understand reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reservation-charges-postgresql.md) * [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
-* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
+* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-extensions.md
Now you can run pg_dump on the original database and then do pg_restore. After t
```SQL SELECT timescaledb_post_restore(); ```
-For more details on restore method wiith Timescae enabled database see [Timescale documentation](https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restore-your-entire-database-from-backup)
+For more details on the restore method with a Timescale-enabled database, see the [Timescale documentation](https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restore-your-entire-database-from-backup).
### Restoring a Timescale database using timescaledb-backup
For more details on restore method wiith Timescae enabled database see [Timescal
4. Grant azure_pg_admin [role](https://www.postgresql.org/docs/11/database-roles.html) to user that will be used by [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) 5. Run [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) to restore database
- More details on hese utilities can be found [here](https://github.com/timescale/timescaledb-backup).
+ More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup).
> [!NOTE] > When using `timescale-backup` utilities to restore to Azure, note that because database user names for non-flexible Azure Database for PostgreSQL must use the `<user@db-name>` format, you need to replace `@` with the `%40` character encoding.
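As a hedged illustration of the encoding rule in the note above (the user and server names are placeholders), Python's standard library can produce the `%40` form:

```python
from urllib.parse import quote

# Placeholder single-server user name in the <user@db-name> format.
user_name = "myadmin@mydemoserver"

# Percent-encode everything, turning '@' into '%40' as required when
# passing the user name to the timescaledb-backup utilities for Azure.
encoded = quote(user_name, safe="")
print(encoded)  # myadmin%40mydemoserver
```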
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-version-policy.md
Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.pos
## Next steps - See Azure Database for PostgreSQL - Single Server [supported versions](./concepts-supported-versions.md) - See Azure Database for PostgreSQL - Flexible Server [supported versions](flexible-server/concepts-supported-versions.md)-- See Azure Database for PostgreSQL - Hyperscale (Citus) [supported versions](concepts-hyperscale-versions.md)
+- See Azure Database for PostgreSQL - Hyperscale (Citus) [supported versions](hyperscale/concepts-versions.md)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| West US 3 | :heavy_check_mark: | :x: | :x: | <!-- We continue to add more regions for flexible server. -->
+> [!NOTE]
+> If your application requires Zone redundant HA and it's not available in your preferred Azure region, consider using other regions within the same geography where Zone redundant HA is available, such as US East for US East 2, Central US for North Central US, and so on.
## Migration
postgresql Concepts App Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-app-type.md
+
+ Title: Determine application type - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Identify your application for effective distributed data modeling
+++++ Last updated : 07/17/2020++
+# Determining Application Type
+
+Running efficient queries on a Hyperscale (Citus) server group requires that
+tables be properly distributed across servers. The recommended distribution
+varies by the type of application and its query patterns.
+
+There are broadly two kinds of applications that work well on Hyperscale
+(Citus). The first step in data modeling is to identify which of them more
+closely resembles your application.
+
+## At a Glance
+
+| Multi-Tenant Applications | Real-Time Applications |
+|--|-|
+| Sometimes dozens or hundreds of tables in schema | Small number of tables |
+| Queries relating to one tenant (company/store) at a time | Relatively simple analytics queries with aggregations |
+| OLTP workloads for serving web clients | High ingest volume of mostly immutable data |
+| OLAP workloads that serve per-tenant analytical queries | Often centering around large table of events |
+
+## Examples and Characteristics
+
+**Multi-Tenant Application**
+
+> These are typically SaaS applications that serve other companies,
+> accounts, or organizations. Most SaaS applications are inherently
+> relational. They have a natural dimension on which to distribute data
+> across nodes: just shard by tenant\_id.
+>
+> Hyperscale (Citus) enables you to scale out your database to millions of
+> tenants without having to re-architect your application. You can keep the
+> relational semantics you need, like joins, foreign key constraints,
+> transactions, ACID, and consistency.
+>
+> - **Examples**: Websites which host store-fronts for other
+> businesses, such as a digital marketing solution, or a sales
+> automation tool.
+> - **Characteristics**: Queries relating to a single tenant rather
+> than joining information across tenants. This includes OLTP
+> workloads for serving web clients, and OLAP workloads that serve
+> per-tenant analytical queries. Having dozens or hundreds of tables
+> in your database schema is also an indicator for the multi-tenant
+> data model.
+>
+> Scaling a multi-tenant app with Hyperscale (Citus) also requires minimal
+> changes to application code. We have support for popular frameworks like Ruby
+> on Rails and Django.
+
+**Real-Time Analytics**
+
+> Applications needing massive parallelism, coordinating hundreds of cores for
+> fast results to numerical, statistical, or counting queries. By sharding and
+> parallelizing SQL queries across multiple nodes, Hyperscale (Citus) makes it
+> possible to perform real-time queries across billions of records in under a
+> second.
+>
+> Tables in real-time analytics data models are typically distributed by
+> columns like user\_id, host\_id, or device\_id.
+>
+> - **Examples**: Customer-facing analytics dashboards requiring
+> sub-second response times.
+> - **Characteristics**: Few tables, often centering around a big
+> table of device-, site- or user-events and requiring high ingest
+> volume of mostly immutable data. Relatively simple (but
+> computationally intensive) analytics queries involving several
+> aggregations and GROUP BYs.
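+>
+> A minimal sketch of this model (table and column names are hypothetical)
+> distributes a large events table by its entity column, then issues an
+> aggregation that runs in parallel across shards:
+>
+> ```sql
+> -- distribute events by device, then aggregate across shards in parallel
+> SELECT create_distributed_table('events', 'device_id');
+>
+> SELECT device_id, count(*)
+> FROM events
+> WHERE event_time > now() - interval '1 day'
+> GROUP BY device_id;
+> ```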
+
+If your situation resembles either case above, then the next step is to decide
+how to shard your data in the server group. The database administrator's
+choice of distribution columns needs to match the access patterns of typical
+queries to ensure performance.
+
+## Next steps
+
+* [Choose a distribution
+ column](concepts-choose-distribution-column.md) for tables in your
+ application to distribute data efficiently
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-audit.md
+
+ Title: Audit logging - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Hyperscale (Citus).
+++++ Last updated : 08/03/2021++
+# Audit logging in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+> [!IMPORTANT]
+> The pgAudit extension in Hyperscale (Citus) is currently in preview. This
+> preview version is provided without a service level agreement, and it's not
+> recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](product-updates.md).
+
+Audit logging of database activities in Azure Database for PostgreSQL - Hyperscale (Citus) is available through the PostgreSQL Audit extension: [pgAudit](https://www.pgaudit.org/). pgAudit provides detailed session or object audit logging.
+
+If you want Azure resource-level logs for operations like compute and storage scaling, see the [Azure Activity Log](../../azure-monitor/essentials/platform-logs-overview.md).
+
+## Usage considerations
+By default, pgAudit log statements are emitted along with your regular log statements by using Postgres's standard logging facility. In Azure Database for PostgreSQL - Hyperscale (Citus), you can configure all logs to be sent to Azure Monitor Log store for later analytics in Log Analytics. If you enable Azure Monitor resource logging, your logs will be automatically sent (in JSON format) to Azure Storage, Event Hubs, or Azure Monitor logs, depending on your choice.
+
+## Enabling pgAudit
+
+The pgAudit extension is pre-installed and enabled on a limited number of
+Hyperscale (Citus) server groups at this time. It may or may not be available
+for preview yet on your server group.
+
+## pgAudit settings
+
+pgAudit allows you to configure session or object audit logging. [Session audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#session-audit-logging) emits detailed logs of executed statements. [Object audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#object-audit-logging) is audit scoped to specific relations. You can choose to set up one or both types of logging.
+
+> [!NOTE]
+> pgAudit settings are specified globally and cannot be specified at a database or role level.
+>
+> Also, pgAudit settings are specified per-node in a server group. To make a change on all nodes, you must apply it to each node individually.
+
+You must configure pgAudit parameters to start logging. The [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#settings) provides the definition of each parameter. Test the parameters first and confirm that you're getting the expected behavior.
+
+> [!NOTE]
+> Setting `pgaudit.log_client` to ON redirects logs to a client process (like
+> psql) instead of writing them to file. This setting should generally be left
+> disabled.
+>
+> `pgaudit.log_level` only takes effect when `pgaudit.log_client` is on.
+
+> [!NOTE]
+> In Azure Database for PostgreSQL - Hyperscale (Citus), `pgaudit.log` cannot be set using a `-` (minus) sign shortcut as described in the pgAudit documentation. All required statement classes (READ, WRITE, etc.) should be individually specified.
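+>
+> For example, instead of the upstream shortcut form `pgaudit.log = 'ALL, -MISC'`,
+> list the classes explicitly (the value below is an illustrative audit policy,
+> not a recommendation):
+>
+> ```
+> pgaudit.log = 'READ, WRITE, FUNCTION, ROLE, DDL'
+> ```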
+
+## Audit log format
+Each audit entry is indicated by `AUDIT:` near the beginning of the log line. The format of the rest of the entry is detailed in the [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#format).
+
+## Getting started
+To quickly get started, set `pgaudit.log` to `WRITE`, and open your server logs to review the output.
+
+## Viewing audit logs
+The way you access the logs depends on which endpoint you choose. For Azure Storage, see the [logs storage account](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) article. For Event Hubs, see the [stream Azure logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs) article.
+
+For Azure Monitor Logs, logs are sent to the workspace you selected. The Postgres logs use the **AzureDiagnostics** collection mode, so they can be queried from the AzureDiagnostics table. The fields in the table are described below. Learn more about querying and alerting in the [Azure Monitor Logs query](../../azure-monitor/logs/log-query-overview.md) overview.
+
+You can use this query to get started. You can configure alerts based on queries.
+
+The following query finds all pgAudit entries in the Postgres logs of a particular server over the last day:
+```kusto
+AzureDiagnostics
+| where LogicalServerName_s == "myservername"
+| where TimeGenerated > ago(1d)
+| where Message contains "AUDIT:"
+```
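+
+As a next step, the entries can be summarized, for example to count audit records per hour as a basis for an alert rule (the server name below is a placeholder):
+
+```kusto
+AzureDiagnostics
+| where LogicalServerName_s == "myservername"
+| where Message contains "AUDIT:"
+| summarize AuditCount = count() by bin(TimeGenerated, 1h)
+```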
+
+## Next steps
+
+- [Learn how to setup logging in Azure Database for PostgreSQL - Hyperscale (Citus) and how to access logs](howto-logging.md)
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-backup.md
+
+ Title: Backup and restore – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Protecting data from accidental corruption or deletion
+++++ Last updated : 04/14/2021++
+# Backup and restore in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+Azure Database for PostgreSQL – Hyperscale (Citus) automatically creates
+backups of each node and stores them in locally redundant storage. Backups can
+be used to restore your Hyperscale (Citus) server group to a specified time.
+Backup and restore are an essential part of any business continuity strategy
+because they protect your data from accidental corruption or deletion.
+
+## Backups
+
+At least once a day, Azure Database for PostgreSQL takes snapshot backups of
+data files and the database transaction log. The backups allow you to restore a
+server to any point in time within the retention period. (The retention period
+is currently 35 days for all server groups.) All backups are encrypted using
+AES 256-bit encryption.
+
+In Azure regions that support availability zones, backup snapshots are stored
+in three availability zones. As long as at least one availability zone is
+online, the Hyperscale (Citus) server group is restorable.
+
+Backup files can't be exported. They may only be used for restore operations
+in Azure Database for PostgreSQL.
+
+### Backup storage cost
+
+For current backup storage pricing, see the Azure Database for PostgreSQL -
+Hyperscale (Citus) [pricing
+page](https://azure.microsoft.com/pricing/details/postgresql/hyperscale-citus/).
+
+## Restore
+
+You can restore a Hyperscale (Citus) server group to any point in time within
+the last 35 days. Point-in-time restore is useful in multiple scenarios. For
+example, when a user accidentally deletes data, drops an important table or
+database, or when an application accidentally overwrites good data with bad data.
+
+> [!IMPORTANT]
+> Deleted Hyperscale (Citus) server groups can't be restored. If you delete the
+> server group, all nodes that belong to the server group are deleted and can't
+> be recovered. To protect server group resources from accidental deletion or
+> unexpected changes after deployment, administrators can use
+> [management locks](../../azure-resource-manager/management/lock-resources.md).
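+>
+> For instance, a `CanNotDelete` lock on the server group's resource group can be
+> created with the Azure CLI (the names below are placeholders):
+>
+> ```azurecli
+> az lock create --name protect-citus --lock-type CanNotDelete \
+>   --resource-group my-resource-group
+> ```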
+
+The restore process creates a new server group in the same Azure region,
+subscription, and resource group as the original. The server group has the
+original's configuration: the same number of nodes, number of vCores, storage
+size, user roles, PostgreSQL version, and version of the Citus extension.
+
+Firewall settings and PostgreSQL server parameters are not preserved from the
+original server group; they are reset to default values. The firewall will
+prevent all connections. You'll need to adjust these settings manually after
+restore. See our list of suggested [post-restore
+tasks](howto-restore-portal.md#post-restore-tasks).
+
+## Next steps
+
+* See the steps to [restore a server group](howto-restore-portal.md)
+ in the Azure portal.
+* Learn about [Azure availability zones](../../availability-zones/az-overview.md).
postgresql Concepts Choose Distribution Column https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-choose-distribution-column.md
+
+ Title: Choose distribution columns – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn how to choose distribution columns in common scenarios in Azure Database for PostgreSQL - Hyperscale (Citus).
+++++ Last updated : 12/06/2021++
+# Choose distribution columns in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+Choosing each table's distribution column is one of the most important modeling decisions you'll make. Azure Database for PostgreSQL – Hyperscale (Citus) stores rows in shards based on the value of the rows' distribution column.
+
+The correct choice groups related data together on the same physical nodes, which makes queries fast and adds support for all SQL features. An incorrect choice makes the system run slowly and won't support all SQL features across nodes.
+
+This article gives distribution column tips for the two most common Hyperscale (Citus) scenarios.
+
+### Multi-tenant apps
+
+The multi-tenant architecture uses a form of hierarchical database modeling to
+distribute queries across nodes in the server group. The top of the data
+hierarchy is known as the *tenant ID* and needs to be stored in a column on
+each table.
+
+Hyperscale (Citus) inspects queries to see which tenant ID they involve and finds the matching table shard. It
+routes the query to a single worker node that contains the shard. Placing all the
+relevant data for a query together on the same node is called colocation.
+
+The following diagram illustrates colocation in the multi-tenant data
+model. It contains two tables, Accounts and Campaigns, each distributed
+by `account_id`. The shaded boxes represent shards. Green shards are stored
+together on one worker node, and blue shards are stored on another worker node. Notice how a join
+query between Accounts and Campaigns has all the necessary data
+together on one node when both tables are restricted to the same
+account\_id.
+
+![Multi-tenant
+colocation](../media/concepts-hyperscale-choosing-distribution-column/multi-tenant-colocation.png)
+
+To apply this design in your own schema, identify
+what constitutes a tenant in your application. Common instances include
+company, account, organization, or customer. The column name will be
+something like `company_id` or `customer_id`. Examine each of your
+queries and ask yourself, would it work if it had additional WHERE
+clauses to restrict all tables involved to rows with the same tenant ID?
+Queries in the multi-tenant model are scoped to a tenant. For
+instance, queries on sales or inventory are scoped within a certain
+store.
+
+#### Best practices
+
+- **Partition distributed tables by a common tenant\_id column.** For
+ instance, in a SaaS application where tenants are companies, the
+ tenant\_id is likely to be the company\_id.
+- **Convert small cross-tenant tables to reference tables.** When
+ multiple tenants share a small table of information, distribute it
+ as a reference table.
+- **Filter all application queries by tenant\_id.** Each
+  query should request information for one tenant at a time.
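+
+These practices can be sketched as follows (table and column names are illustrative, not from a specific application):
+
+```sql
+-- distribute large tables by the tenant column
+SELECT create_distributed_table('orders', 'company_id');
+
+-- keep a small cross-tenant lookup table replicated to every node
+SELECT create_reference_table('countries');
+
+-- scope each application query to a single tenant
+SELECT count(*) FROM orders WHERE company_id = 42 AND status = 'shipped';
+```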
+
+Read the [multi-tenant
+tutorial](./tutorial-design-database-multi-tenant.md) for an example of how to
+build this kind of application.
+
+### Real-time apps
+
+The multi-tenant architecture introduces a hierarchical structure
+and uses data colocation to route queries per tenant. By contrast, real-time
+architectures depend on specific distribution properties of their data
+to achieve highly parallel processing.
+
+We use "entity ID" as a term for distribution columns in the real-time
+model. Typical entities are users, hosts, or devices.
+
+Real-time queries typically ask for numeric aggregates grouped by date or
+category. Hyperscale (Citus) sends these queries to each shard for partial results and
+assembles the final answer on the coordinator node. Queries run fastest when as
+many nodes contribute as possible, and when no single node must do a
+disproportionate amount of work.
+
+#### Best practices
+
+- **Choose a column with high cardinality as the distribution
+ column.** For comparison, a Status field on an order table with
+ values New, Paid, and Shipped is a poor choice of
+ distribution column. It assumes only those few values, which limits the number of shards that can hold
+ the data, and the number of nodes that can process it. Among columns
+ with high cardinality, it's also good to choose those columns that
+ are frequently used in group-by clauses or as join keys.
+- **Choose a column with even distribution.** If you distribute a
+ table on a column skewed to certain common values, data in the
+ table tends to accumulate in certain shards. The nodes that hold
+ those shards end up doing more work than other nodes.
+- **Distribute fact and dimension tables on their common columns.**
+ Your fact table can have only one distribution key. Tables that join
+ on another key won't be colocated with the fact table. Choose
+ one dimension to colocate based on how frequently it's joined and
+ the size of the joining rows.
+- **Change some dimension tables into reference tables.** If a
+ dimension table can't be colocated with the fact table, you can
+ improve query performance by distributing copies of the dimension
+ table to all of the nodes in the form of a reference table.
+
+Read the [real-time dashboard
+tutorial](./tutorial-design-database-realtime.md) for an example of how to build this kind of application.
+
+### Time-series data
+
+In a time-series workload, applications query recent information while they
+archive old information.
+
+The most common mistake in modeling time-series information in Hyperscale (Citus) is to
+use the timestamp itself as a distribution column. A hash distribution based
+on time distributes times seemingly at random into different shards rather
+than keeping ranges of time together in shards. Queries that involve time
+generally reference ranges of time, for example, the most recent data. This type of
+hash distribution leads to network overhead.
+
+#### Best practices
+
+- **Don't choose a timestamp as the distribution column.** Choose a
+ different distribution column. In a multi-tenant app, use the tenant
+ ID, or in a real-time app use the entity ID.
+- **Use PostgreSQL table partitioning for time instead.** Use table
+ partitioning to break a large table of time-ordered data into
+ multiple inherited tables with each table containing different time
+ ranges. Distributing a Postgres-partitioned table in Hyperscale (Citus)
+ creates shards for the inherited tables.
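+
+Combining the two ideas might look like this sketch (the table and column names are hypothetical):
+
+```sql
+-- partition by time range, distribute by entity
+CREATE TABLE events (
+    device_id bigint,
+    event_time timestamptz,
+    payload jsonb
+) PARTITION BY RANGE (event_time);
+
+CREATE TABLE events_2022_01 PARTITION OF events
+    FOR VALUES FROM ('2022-01-01') TO ('2022-02-01');
+
+SELECT create_distributed_table('events', 'device_id');
+```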
+
+## Next steps
+
+- Learn how [colocation](concepts-colocation.md) between distributed data helps queries run fast.
+- Discover the distribution column of a distributed table, and other [useful diagnostic queries](howto-useful-diagnostic-queries.md).
postgresql Concepts Colocation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-colocation.md
+
+ Title: Table colocation - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: How to store related information together for faster queries
+++++ Last updated : 05/06/2019++
+# Table colocation in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+Colocation means storing related information together on the same nodes. Queries run fast when all the necessary data is available without any network traffic. Because unrelated data can live on different nodes, queries can also run efficiently in parallel on each node.
+
+## Data colocation for hash-distributed tables
+
+In Azure Database for PostgreSQL – Hyperscale (Citus), a row is stored in a shard if the hash of the value in the distribution column falls within the shard's hash range. Shards with the same hash range are always placed on the same node. Rows with equal distribution column values are always on the same node across tables.
+## A practical example of colocation
+
+Consider the following tables that might be part of a multi-tenant web
+analytics SaaS:
+
+```sql
+CREATE TABLE event (
+ tenant_id int,
+ event_id bigint,
+ page_id int,
+ payload jsonb,
+ primary key (tenant_id, event_id)
+);
+
+CREATE TABLE page (
+ tenant_id int,
+ page_id int,
+ path text,
+ primary key (tenant_id, page_id)
+);
+```
+
+Now we want to answer queries that might be issued by a customer-facing
+dashboard. An example query is "Return the number of visits in the past week for
+all pages starting with '/blog' in tenant six."
+
+If our data was in the Single-Server deployment option, we could easily express
+our query by using the rich set of relational operations offered by SQL:
+
+```sql
+SELECT page_id, count(event_id)
+FROM
+ page
+LEFT JOIN (
+ SELECT * FROM event
+ WHERE (payload->>'time')::timestamptz >= now() - interval '1 week'
+) recent
+USING (tenant_id, page_id)
+WHERE tenant_id = 6 AND path LIKE '/blog%'
+GROUP BY page_id;
+```
+
+As long as the [working set](https://en.wikipedia.org/wiki/Working_set) for this query fits in memory, a single-server table is an appropriate solution. Let's consider the opportunities of scaling the data model with the Hyperscale (Citus) deployment option.
+
+### Distribute tables by ID
+
+Single-server queries start slowing down as the number of tenants and the data stored for each tenant grows. The working set stops fitting in memory and CPU becomes a bottleneck.
+
+In this case, we can shard the data across many nodes by using Hyperscale (Citus). The
+first and most important choice we need to make when we decide to shard is the
+distribution column. Let's start with a naive choice of using `event_id` for
+the event table and `page_id` for the `page` table:
+
+```sql
+-- naively use event_id and page_id as distribution columns
+
+SELECT create_distributed_table('event', 'event_id');
+SELECT create_distributed_table('page', 'page_id');
+```
+
+When data is dispersed across different workers, we can't perform a join like we would on a single PostgreSQL node. Instead, we need to issue two queries:
+
+```sql
+-- (Q1) get the relevant page_ids
+SELECT page_id FROM page WHERE path LIKE '/blog%' AND tenant_id = 6;
+
+-- (Q2) get the counts
+SELECT page_id, count(*) AS count
+FROM event
+WHERE page_id IN (/*…page IDs from first query…*/)
+ AND tenant_id = 6
+ AND (payload->>'time')::date >= now() - interval '1 week'
+GROUP BY page_id ORDER BY count DESC LIMIT 10;
+```
+
+Afterwards, the results from the two steps need to be combined by the
+application.
+
+Running these queries requires consulting data in shards scattered across nodes.
+In this case, the data distribution creates substantial drawbacks:
+
+- Overhead from querying each shard and running multiple queries.
+- Overhead of Q1 returning many rows to the client.
+- Q2 becomes large.
+- The need to write queries in multiple steps requires changes in the application.
+
+Although the data is dispersed and the queries can be parallelized, parallelism
+is only beneficial when the amount of work a query does is substantially
+greater than the overhead of querying many shards.
+
+### Distribute tables by tenant
+
+In Hyperscale (Citus), rows with the same distribution column value are guaranteed to
+be on the same node. Starting over, we can create our tables with `tenant_id`
+as the distribution column.
+
+```sql
+-- co-locate tables by using a common distribution column
+SELECT create_distributed_table('event', 'tenant_id');
+SELECT create_distributed_table('page', 'tenant_id', colocate_with => 'event');
+```
+
+Now Hyperscale (Citus) can answer the original single-server query without modification:
+
+```sql
+SELECT page_id, count(event_id)
+FROM
+ page
+LEFT JOIN (
+ SELECT * FROM event
+ WHERE (payload->>'time')::timestamptz >= now() - interval '1 week'
+) recent
+USING (tenant_id, page_id)
+WHERE tenant_id = 6 AND path LIKE '/blog%'
+GROUP BY page_id;
+```
+
+Because of the filter and join on tenant_id, Hyperscale (Citus) knows that the entire
+query can be answered by using the set of colocated shards that contain the data
+for that particular tenant. A single PostgreSQL node can answer the query in
+a single step.
+In some cases, queries and table schemas must be changed to include the tenant ID in unique constraints and join conditions. This change is usually straightforward.
+
+## Next steps
+
+- See how tenant data is colocated in the [multi-tenant tutorial](tutorial-design-database-multi-tenant.md).
postgresql Concepts Columnar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-columnar.md
+
+ Title: Columnar table storage - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Compressing data using columnar storage
+++++ Last updated : 08/03/2021++
+# Columnar table storage
+
+Azure Database for PostgreSQL - Hyperscale (Citus) supports append-only
+columnar table storage for analytic and data warehousing workloads. When
+columns (rather than rows) are stored contiguously on disk, data becomes more
+compressible, and queries can request a subset of columns more quickly.
+
+## Usage
+
+To use columnar storage, specify `USING columnar` when creating a table:
+
+```postgresql
+CREATE TABLE contestant (
+ handle TEXT,
+ birthdate DATE,
+ rating INT,
+ percentile FLOAT,
+ country CHAR(3),
+ achievements TEXT[]
+) USING columnar;
+```
+
+Hyperscale (Citus) converts rows to columnar storage in "stripes" during
+insertion. Each stripe holds one transaction's worth of data, or 150,000 rows,
+whichever is less. (The stripe size and other parameters of a columnar table
+can be changed with the
+[alter_columnar_table_set](reference-functions.md#alter_columnar_table_set)
+function.)
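+
+For instance, the stripe size could be adjusted like this (the parameter name follows the Citus columnar documentation and may differ between Citus versions):
+
+```postgresql
+SELECT alter_columnar_table_set('contestant', stripe_row_count => 300000);
+```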
+
+For example, the following statement puts all five rows into the same stripe,
+because all values are inserted in a single transaction:
+
+```postgresql
+-- insert these values into a single columnar stripe
+
+INSERT INTO contestant VALUES
+ ('a','1990-01-10',2090,97.1,'XA','{a}'),
+ ('b','1990-11-01',2203,98.1,'XA','{a,b}'),
+ ('c','1988-11-01',2907,99.4,'XB','{w,y}'),
+ ('d','1985-05-05',2314,98.3,'XB','{}'),
+ ('e','1995-05-05',2236,98.2,'XC','{a}');
+```
+
+It's best to make large stripes when possible, because Hyperscale (Citus)
+compresses columnar data separately per stripe. We can see facts about our
+columnar table like compression rate, number of stripes, and average rows per
+stripe by using `VACUUM VERBOSE`:
+
+```postgresql
+VACUUM VERBOSE contestant;
+```
+```
+INFO: statistics for "contestant":
+storage id: 10000000000
+total file size: 24576, total data size: 248
+compression rate: 1.31x
+total row count: 5, stripe count: 1, average rows per stripe: 5
+chunk count: 6, containing data for dropped columns: 0, zstd compressed: 6
+```
+
+The output shows that Hyperscale (Citus) used the zstd compression algorithm to
+obtain 1.31x data compression. The compression rate compares a) the size of
+inserted data as it was staged in memory against b) the size of that data
+compressed in its eventual stripe.
+
+Because of how it's measured, the compression rate may or may not match the
+size difference between row and columnar storage for a table. The only way
+to truly find that difference is to construct a row and columnar table that
+contain the same data, and compare.
+
+## Measuring compression
+
+Let's create a new example with more data to benchmark the compression savings.
+
+```postgresql
+-- first a wide table using row storage
+CREATE TABLE perf_row(
+ c00 int8, c01 int8, c02 int8, c03 int8, c04 int8, c05 int8, c06 int8, c07 int8, c08 int8, c09 int8,
+ c10 int8, c11 int8, c12 int8, c13 int8, c14 int8, c15 int8, c16 int8, c17 int8, c18 int8, c19 int8,
+ c20 int8, c21 int8, c22 int8, c23 int8, c24 int8, c25 int8, c26 int8, c27 int8, c28 int8, c29 int8,
+ c30 int8, c31 int8, c32 int8, c33 int8, c34 int8, c35 int8, c36 int8, c37 int8, c38 int8, c39 int8,
+ c40 int8, c41 int8, c42 int8, c43 int8, c44 int8, c45 int8, c46 int8, c47 int8, c48 int8, c49 int8,
+ c50 int8, c51 int8, c52 int8, c53 int8, c54 int8, c55 int8, c56 int8, c57 int8, c58 int8, c59 int8,
+ c60 int8, c61 int8, c62 int8, c63 int8, c64 int8, c65 int8, c66 int8, c67 int8, c68 int8, c69 int8,
+ c70 int8, c71 int8, c72 int8, c73 int8, c74 int8, c75 int8, c76 int8, c77 int8, c78 int8, c79 int8,
+ c80 int8, c81 int8, c82 int8, c83 int8, c84 int8, c85 int8, c86 int8, c87 int8, c88 int8, c89 int8,
+ c90 int8, c91 int8, c92 int8, c93 int8, c94 int8, c95 int8, c96 int8, c97 int8, c98 int8, c99 int8
+);
+
+-- next a table with identical columns using columnar storage
+CREATE TABLE perf_columnar(LIKE perf_row) USING COLUMNAR;
+```
+
+Fill both tables with the same large dataset:
+
+```postgresql
+INSERT INTO perf_row
+ SELECT
+ g % 00500, g % 01000, g % 01500, g % 02000, g % 02500, g % 03000, g % 03500, g % 04000, g % 04500, g % 05000,
+ g % 05500, g % 06000, g % 06500, g % 07000, g % 07500, g % 08000, g % 08500, g % 09000, g % 09500, g % 10000,
+ g % 10500, g % 11000, g % 11500, g % 12000, g % 12500, g % 13000, g % 13500, g % 14000, g % 14500, g % 15000,
+ g % 15500, g % 16000, g % 16500, g % 17000, g % 17500, g % 18000, g % 18500, g % 19000, g % 19500, g % 20000,
+ g % 20500, g % 21000, g % 21500, g % 22000, g % 22500, g % 23000, g % 23500, g % 24000, g % 24500, g % 25000,
+ g % 25500, g % 26000, g % 26500, g % 27000, g % 27500, g % 28000, g % 28500, g % 29000, g % 29500, g % 30000,
+ g % 30500, g % 31000, g % 31500, g % 32000, g % 32500, g % 33000, g % 33500, g % 34000, g % 34500, g % 35000,
+ g % 35500, g % 36000, g % 36500, g % 37000, g % 37500, g % 38000, g % 38500, g % 39000, g % 39500, g % 40000,
+ g % 40500, g % 41000, g % 41500, g % 42000, g % 42500, g % 43000, g % 43500, g % 44000, g % 44500, g % 45000,
+ g % 45500, g % 46000, g % 46500, g % 47000, g % 47500, g % 48000, g % 48500, g % 49000, g % 49500, g % 50000
+ FROM generate_series(1,50000000) g;
+
+INSERT INTO perf_columnar
+ SELECT
+ g % 00500, g % 01000, g % 01500, g % 02000, g % 02500, g % 03000, g % 03500, g % 04000, g % 04500, g % 05000,
+ g % 05500, g % 06000, g % 06500, g % 07000, g % 07500, g % 08000, g % 08500, g % 09000, g % 09500, g % 10000,
+ g % 10500, g % 11000, g % 11500, g % 12000, g % 12500, g % 13000, g % 13500, g % 14000, g % 14500, g % 15000,
+ g % 15500, g % 16000, g % 16500, g % 17000, g % 17500, g % 18000, g % 18500, g % 19000, g % 19500, g % 20000,
+ g % 20500, g % 21000, g % 21500, g % 22000, g % 22500, g % 23000, g % 23500, g % 24000, g % 24500, g % 25000,
+ g % 25500, g % 26000, g % 26500, g % 27000, g % 27500, g % 28000, g % 28500, g % 29000, g % 29500, g % 30000,
+ g % 30500, g % 31000, g % 31500, g % 32000, g % 32500, g % 33000, g % 33500, g % 34000, g % 34500, g % 35000,
+ g % 35500, g % 36000, g % 36500, g % 37000, g % 37500, g % 38000, g % 38500, g % 39000, g % 39500, g % 40000,
+ g % 40500, g % 41000, g % 41500, g % 42000, g % 42500, g % 43000, g % 43500, g % 44000, g % 44500, g % 45000,
+ g % 45500, g % 46000, g % 46500, g % 47000, g % 47500, g % 48000, g % 48500, g % 49000, g % 49500, g % 50000
+ FROM generate_series(1,50000000) g;
+
+VACUUM (FREEZE, ANALYZE) perf_row;
+VACUUM (FREEZE, ANALYZE) perf_columnar;
+```
+
+For this data, you can see a compression ratio of better than 8X in the
+columnar table.
+
+```postgresql
+SELECT pg_total_relation_size('perf_row')::numeric/
+ pg_total_relation_size('perf_columnar') AS compression_ratio;
+ compression_ratio
+--------------------
+ 8.0196135873627944
+(1 row)
+```
+
+## Example
+
+Columnar storage works well with table partitioning. For an example, see the
+Citus Engine community documentation, [archiving with columnar
+storage](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archiving-with-columnar-storage).
+
+## Gotchas
+
+* Columnar storage compresses per stripe. Stripes are created per transaction,
+ so inserting one row per transaction will put single rows into their own
+ stripes. Compression and performance of single row stripes will be worse than
+ a row table. Always insert in bulk to a columnar table.
+* If you mess up and columnarize a bunch of tiny stripes, you're stuck.
+ The only fix is to create a new columnar table and copy
+ data from the original in one transaction:
+ ```postgresql
+ BEGIN;
+ CREATE TABLE foo_compacted (LIKE foo) USING columnar;
+ INSERT INTO foo_compacted SELECT * FROM foo;
+ DROP TABLE foo;
+ ALTER TABLE foo_compacted RENAME TO foo;
+ COMMIT;
+ ```
+* Fundamentally non-compressible data can be a problem, although columnar
+ storage is still useful when selecting specific columns. It doesn't need
+ to load the other columns into memory.
+* On a partitioned table with a mix of row and column partitions, updates must
+ be carefully targeted. Filter them to hit only the row partitions.
+ * If the operation is targeted at a specific row partition (for example,
+    `UPDATE p2 SET i = i + 1`), it will succeed; if targeted at a specific columnar
+ partition (for example, `UPDATE p1 SET i = i + 1`), it will fail.
+ * If the operation is targeted at the partitioned table and has a WHERE
+ clause that excludes all columnar partitions (for example
+ `UPDATE parent SET i = i + 1 WHERE timestamp = '2020-03-15'`),
+ it will succeed.
+ * If the operation is targeted at the partitioned table, but does not
+ filter on the partition key columns, it will fail. Even if there are
+ WHERE clauses that match rows in only columnar partitions, it's not
+    enough; the partition key must also be filtered.
+
+## Limitations
+
+This feature still has significant limitations. See [Hyperscale
+(Citus) limits and limitations](concepts-limits.md#columnar-storage).
+
+## Next steps
+
+* See an example of columnar storage in a Citus [time series
+ tutorial](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archiving-with-columnar-storage)
+ (external link).
postgresql Concepts Configuration Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-configuration-options.md
+
+ Title: Configuration options – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Options for a Hyperscale (Citus) server group, including node compute, storage, and regions.
++++++ Last updated : 12/17/2021++
+# Azure Database for PostgreSQL – Hyperscale (Citus) configuration options
+
+## Compute and storage
+
+You can select the compute and storage settings independently for
+worker nodes and the coordinator node in a Hyperscale (Citus) server
+group. Compute resources are provided as vCores, which represent
+the logical CPU of the underlying hardware. The storage size for
+provisioning refers to the capacity available to the coordinator
+and worker nodes in your Hyperscale (Citus) server group. The storage
+includes database files, temporary files, transaction logs, and
+the Postgres server logs.
+
+### Standard tier
+
+| Resource | Worker node | Coordinator node |
+|--|--|--|
+| Compute, vCores | 4, 8, 16, 32, 64 | 4, 8, 16, 32, 64 |
+| Memory per vCore, GiB | 8 | 4 |
+| Storage size, TiB | 0.5, 1, 2 | 0.5, 1, 2 |
+| Storage type | General purpose (SSD) | General purpose (SSD) |
+| IOPS | Up to 3 IOPS/GiB | Up to 3 IOPS/GiB |
+
+The total amount of RAM in a single Hyperscale (Citus) node is based on the
+selected number of vCores.
+
+| vCores | One worker node, GiB RAM | Coordinator node, GiB RAM |
+|--|--|--|
+| 4 | 32 | 16 |
+| 8 | 64 | 32 |
+| 16 | 128 | 64 |
+| 32 | 256 | 128 |
+| 64 | 432 | 256 |
+
+The total amount of storage you provision also defines the I/O capacity
+available to each worker and coordinator node.
+
+| Storage size, TiB | Maximum IOPS |
+|-|--|
+| 0.5 | 1,536 |
+| 1 | 3,072 |
+| 2 | 6,148 |
+
+For the entire Hyperscale (Citus) cluster, the aggregated IOPS work out to the
+following values:
+
+| Worker nodes | 0.5 TiB, total IOPS | 1 TiB, total IOPS | 2 TiB, total IOPS |
+|--|--|--|--|
+| 2 | 3,072 | 6,144 | 12,296 |
+| 3 | 4,608 | 9,216 | 18,444 |
+| 4 | 6,144 | 12,288 | 24,592 |
+| 5 | 7,680 | 15,360 | 30,740 |
+| 6 | 9,216 | 18,432 | 36,888 |
+| 7 | 10,752 | 21,504 | 43,036 |
+| 8 | 12,288 | 24,576 | 49,184 |
+| 9 | 13,824 | 27,648 | 55,332 |
+| 10 | 15,360 | 30,720 | 61,480 |
+| 11 | 16,896 | 33,792 | 67,628 |
+| 12 | 18,432 | 36,864 | 73,776 |
+| 13 | 19,968 | 39,936 | 79,924 |
+| 14 | 21,504 | 43,008 | 86,072 |
+| 15 | 23,040 | 46,080 | 92,220 |
+| 16 | 24,576 | 49,152 | 98,368 |
+| 17 | 26,112 | 52,224 | 104,516 |
+| 18 | 27,648 | 55,296 | 110,664 |
+| 19 | 29,184 | 58,368 | 116,812 |
+| 20 | 30,720 | 61,440 | 122,960 |
+
+### Basic tier
+
+The Hyperscale (Citus) [basic tier](concepts-tiers.md) is a server
+group with just one node. Because there isn't a distinction between
+coordinator and worker nodes, it's less complicated to choose compute and
+storage resources.
+
+| Resource | Available options |
+|--|--|
+| Compute, vCores | 2, 4, 8 |
+| Memory per vCore, GiB | 4 |
+| Storage size, GiB | 128, 256, 512 |
+| Storage type | General purpose (SSD) |
+| IOPS | Up to 3 IOPS/GiB |
+
+The total amount of RAM in a single Hyperscale (Citus) node is based on the
+selected number of vCores.
+
+| vCores | GiB RAM |
+|--|--|
+| 2 | 8 |
+| 4 | 16 |
+| 8 | 32 |
+
+The total amount of storage you provision also defines the I/O capacity
+available to the basic tier node.
+
+| Storage size, GiB | Maximum IOPS |
+|-|--|
+| 128 | 384 |
+| 256 | 768 |
+| 512 | 1,536 |
+
+## Regions
+Hyperscale (Citus) server groups are available in the following Azure regions:
+
+* Americas:
+ * Brazil South
+ * Canada Central
+ * Central US
+ * East US
+ * East US 2
+ * North Central US
+ * West US 2
+* Asia Pacific:
+ * Australia East
+ * Central India
+ * East Asia
+ * Japan East
+ * Japan West
+ * Korea Central
+ * Southeast Asia
+* Europe:
+ * France Central
+ * Germany West Central
+ * North Europe
+ * Switzerland North
+ * UK South
+ * West Europe
+
+Some of these regions may not be initially activated on all Azure
+subscriptions. If you want to use a region from the list above and don't see it
+in your subscription, or if you want to use a region not on this list, open a
+[support
+request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Pricing
+For the most up-to-date pricing information, see the service
+[pricing page](https://azure.microsoft.com/pricing/details/postgresql/).
+To see the cost for the configuration you want, the
+[Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer)
+shows the monthly cost on the **Configure** tab based on the options you
+select. If you don't have an Azure subscription, you can use the Azure pricing
+calculator to get an estimated price. On the
+[Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
+website, select **Add items**, expand the **Databases** category, and choose
+**Azure Database for PostgreSQL – Hyperscale (Citus)** to customize the
+options.
+
+## Next steps
+Learn how to [create a Hyperscale (Citus) server group in the portal](quickstart-create-portal.md).
postgresql Concepts Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-connection-pool.md
+
+ Title: Connection pooling – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Scaling client database connections
+++++ Last updated : 08/03/2021++
+# Azure Database for PostgreSQL – Hyperscale (Citus) connection pooling
+
+Establishing new connections takes time. That works against most applications,
+which request many short-lived connections. We recommend using a connection
+pooler, both to reduce idle transactions and to reuse existing connections. To
+learn more, visit our [blog
+post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
+
+You can run your own connection pooler, or use PgBouncer managed by Azure.
+
+## Managed PgBouncer
+
+Connection poolers such as PgBouncer allow more clients to connect to the
+coordinator node at once. Applications connect to the pooler, and the pooler
+relays commands to the destination database.
+
+When clients connect through PgBouncer, the number of connections that can
+actively run in the database doesn't change. Instead, PgBouncer queues excess
+connections and runs them when the database is ready.
+
+Hyperscale (Citus) is now offering a managed instance of PgBouncer for server
+groups. It supports up to 2,000 simultaneous client connections. To connect
+through PgBouncer, follow these steps:
+
+1. Go to the **Connection strings** page for your server group in the Azure
+ portal.
+2. Enable the checkbox **PgBouncer connection strings**. (The listed connection
+ strings will change.)
+
+ > [!IMPORTANT]
+ >
+ > If the checkbox does not exist, PgBouncer isn't enabled for your server
+ > group yet. Managed PgBouncer is being rolled out to all [supported
+ > regions](concepts-configuration-options.md#regions). Once
+ > enabled in a region, it'll be added to existing server groups in the
+ > region during a [scheduled
+ > maintenance](concepts-maintenance.md) event.
+
+3. Update client applications to connect with the new string.
+
+## Next steps
+
+Discover more about the [limits and limitations](concepts-limits.md)
+of Hyperscale (Citus).
postgresql Concepts Distributed Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-distributed-data.md
+
+ Title: Distributed data – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn about distributed tables, reference tables, local tables, and shards in Azure Database for PostgreSQL.
+++++ Last updated : 05/06/2019++
+# Distributed data in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+This article outlines the three table types in Azure Database for PostgreSQL – Hyperscale (Citus).
+It shows how distributed tables are stored as shards, and the way that shards are placed on nodes.
+
+## Table types
+
+There are three types of tables in a Hyperscale (Citus) server group, each
+used for different purposes.
+
+### Type 1: Distributed tables
+
+The first, and most common, type is the distributed table. To SQL
+statements, distributed tables appear to be normal tables, but they're
+horizontally partitioned across worker nodes: the rows of the table are
+stored on different nodes, in fragment tables called shards.
+
+Hyperscale (Citus) runs not only SQL but DDL statements throughout a cluster.
+Changing the schema of a distributed table cascades to update
+all the table's shards across workers.
+
+#### Distribution column
+
+Hyperscale (Citus) uses algorithmic sharding to assign rows to shards. The assignment is made deterministically based on the value
+of a table column called the distribution column. The cluster
+administrator must designate this column when distributing a table.
+Making the right choice is important for performance and functionality.
+
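+A minimal sketch (assuming a `github_events` table and a hypothetical
+`user_id` distribution column):
+
+```sql
+-- Designate user_id as the distribution column; rows with the same
+-- user_id hash value are assigned to the same shard.
+SELECT create_distributed_table('github_events', 'user_id');
+```
+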
+### Type 2: Reference tables
+
+A reference table is a type of distributed table whose entire
+contents are concentrated into a single shard. The shard is replicated on every worker. Queries on any worker can access the reference information locally, without the network overhead of requesting rows from another node. Reference tables have no distribution column
+because there's no need to distinguish separate shards per row.
+
+Reference tables are typically small and are used to store data that's
+relevant to queries running on any worker node. An example is enumerated
+values like order statuses or product categories.
+
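+A minimal sketch (the table name is hypothetical):
+
+```sql
+-- Replicate a small lookup table into a single shard that is
+-- copied to every worker node.
+SELECT create_reference_table('order_statuses');
+```
+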
+### Type 3: Local tables
+
+When you use Hyperscale (Citus), the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them.
+
+A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication.
+
+## Shards
+
+The previous section described how distributed tables are stored as shards on
+worker nodes. This section discusses more technical details.
+
+The `pg_dist_shard` metadata table on the coordinator contains a
+row for each shard of each distributed table in the system. The row
+matches a shard ID with a range of integers in a hash space
+(shardminvalue, shardmaxvalue).
+
+```sql
+SELECT * from pg_dist_shard;
+ logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue
+---------------+---------+--------------+---------------+---------------
+ github_events | 102026 | t | 268435456 | 402653183
+ github_events | 102027 | t | 402653184 | 536870911
+ github_events | 102028 | t | 536870912 | 671088639
+ github_events | 102029 | t | 671088640 | 805306367
+ (4 rows)
+```
+
+If the coordinator node wants to determine which shard holds a row of
+`github_events`, it hashes the value of the distribution column in the
+row. Then the node checks which shard's range contains the hashed value. The
+ranges are defined so that the image of the hash function is their
+disjoint union.
+
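+Citus also exposes a helper function that applies this hashing logic
+directly. A sketch (the distribution column value 4 is arbitrary):
+
+```sql
+-- Returns the shard ID whose hash range contains
+-- the hash of the given distribution column value.
+SELECT get_shard_id_for_distribution_column('github_events', 4);
+```
+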
+### Shard placements
+
+Suppose that shard 102027 is associated with the row in question. The row
+is read or written in a table called `github_events_102027` in one of
+the workers. Which worker? That's determined entirely by the metadata
+tables. The mapping of shard to worker is known as the shard placement.
+
+The coordinator node
+rewrites queries into fragments that refer to the specific tables
+like `github_events_102027` and runs those fragments on the
+appropriate workers. Here's an example of a query run behind the scenes to find the node holding shard ID 102027.
+
+```sql
+SELECT
+ shardid,
+ node.nodename,
+ node.nodeport
+FROM pg_dist_placement placement
+JOIN pg_dist_node node
+ ON placement.groupid = node.groupid
+ AND node.noderole = 'primary'::noderole
+WHERE shardid = 102027;
+```
+
+```output
+┌─────────┬───────────┬──────────┐
+│ shardid │ nodename  │ nodeport │
+├─────────┼───────────┼──────────┤
+│  102027 │ localhost │     5433 │
+└─────────┴───────────┴──────────┘
+```
+
+## Next steps
+
+- Learn how to [choose a distribution column](concepts-choose-distribution-column.md) for distributed tables.
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-extensions.md
+
+ Title: Extensions – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Describes the ability to extend the functionality of your database by using extensions in Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 10/01/2021+
+# PostgreSQL extensions in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+PostgreSQL provides the ability to extend the functionality of your database by using extensions. Extensions allow for bundling multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions can function like built-in features. For more information on PostgreSQL extensions, see [Package related objects into an extension](https://www.postgresql.org/docs/current/static/extend-extensions.html).
+
+## Use PostgreSQL extensions
+
+PostgreSQL extensions must be installed in your database before you can use them. To install a particular extension, run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command from the psql tool to load the packaged objects into your database.
+
+> [!NOTE]
+> If `CREATE EXTENSION` fails with a permission denied error, try the
+> `create_extension()` function instead. For instance:
+>
+> ```sql
+> SELECT create_extension('postgis');
+> ```
+
+Azure Database for PostgreSQL - Hyperscale (Citus) currently supports a subset of key extensions as listed here. Extensions other than the ones listed aren't supported. You can't create your own extension with Azure Database for PostgreSQL.
+
+## Extensions supported by Azure Database for PostgreSQL
+
+The following tables list the standard PostgreSQL extensions that are currently supported by Azure Database for PostgreSQL. This information is also available by running `SELECT * FROM pg_available_extensions;`.
+
+The versions of each extension installed in a server group sometimes differ based on the version of PostgreSQL (11, 12, 13, or 14). The tables list extension versions per database version.
+
+### Citus extension
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> ||||||
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5 | 10.0.5 | 10.2.1 | 10.2.1 |
+
+### Data types extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> ||||||
+> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 |
+> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 |
+> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.15 | 2.15 | 2.16 | 2.16 |
+> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 |
+> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 |
+> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.0 | 1.0 | 1.2.0 | 1.2.0 |
+> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.3.1 | 2.3.1 | 2.4.0 | 2.4.0 |
+
+### Full-text search extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> ||||||
+> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 |
+
+### Functions extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> ||||||
+> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 |
+> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.5.1 | 4.5.1 | 4.5.1 | 4.5.1 |
+> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 |
+> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | |
+> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 |
+
+### Index types extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> ||||||
+> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 |
+
+### Language extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> ||||||
+> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 |
+
+### Miscellaneous extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> ||||||
+> | [adminpack](https://www.postgresql.org/docs/current/adminpack.html) | Administrative functions for PostgreSQL. | 2.0 | 2.0 | 2.1 | 2.1 |
+> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 |
+> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [file\_fdw](https://www.postgresql.org/docs/current/file-fdw.html) | Foreign-data wrapper for flat file access. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 |
+> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.3 | 1.3 | 1.3 | 1.4 |
+> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 |
+> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 |
+> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 |
+
+### PostGIS extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> ||||||
+> | [PostGIS](https://www.postgis.net/), postgis\_topology, postgis\_tiger\_geocoder, postgis\_sfcgal | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.3 | 3.0.3 | 3.1.4 |
+> | address\_standardizer, address\_standardizer\_data\_us | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.3 | 3.0.3 | 3.1.4 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.3 | 3.0.3 | 3.1.4 |
+> | postgis\_tiger\_geocoder | PostGIS tiger geocoder and reverse geocoder. | 2.5.5 | 3.0.3 | 3.0.3 | 3.1.4 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.3 | 3.0.3 | 3.1.4 |
+
+## pg_stat_statements
+The [pg\_stat\_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Database for PostgreSQL server to provide you with a means of tracking execution statistics of SQL statements.
+
+The setting `pg_stat_statements.track` controls what statements are counted by the extension. It defaults to `top`, which means that all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`. This setting is configurable as a server parameter through the [Azure portal](../howto-configure-server-parameters-using-portal.md) or the [Azure CLI](../howto-configure-server-parameters-using-cli.md).
+
+There's a tradeoff between the query execution information pg_stat_statements provides and the effect on server performance as it logs each SQL statement. If you aren't actively using the pg_stat_statements extension, we recommend that you set `pg_stat_statements.track` to `none`. Some third-party monitoring services might rely on pg_stat_statements to deliver query performance insights, so confirm whether that's the case for you before changing the setting.
+
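+For example, the following query surfaces the most expensive statements. The
+column names assume PostgreSQL 13 or later; on earlier versions, use
+`total_time` instead of `total_exec_time`:
+
+```sql
+SELECT query, calls, total_exec_time
+FROM pg_stat_statements
+ORDER BY total_exec_time DESC
+LIMIT 5;
+```
+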
+## dblink and postgres_fdw
+
+You can use dblink and postgres\_fdw to connect from one PostgreSQL server to
+another, or to another database in the same server. The receiving server needs
+to allow connections from the sending server through its firewall. To use
+these extensions to connect between Azure Database for PostgreSQL servers or
+Hyperscale (Citus) server groups, set **Allow Azure services and resources to
+access this server group (or server)** to ON. You also need to turn this
+setting ON if you want to use the extensions to loop back to the same server.
+The **Allow Azure services and resources to access this server group** setting
+can be found in the Azure portal page for the Hyperscale (Citus) server group
+under **Networking**. Currently, outbound connections from Azure Database for
+PostgreSQL Single server and Hyperscale (Citus) aren't supported, except for
+connections to other Azure Database for PostgreSQL servers and Hyperscale
+(Citus) server groups.
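+
+As a sketch (the host name, credentials, and queried table are placeholders),
+a dblink query against another server looks like this:
+
+```sql
+CREATE EXTENSION IF NOT EXISTS dblink;
+
+-- Run a query on a remote server and read the result locally.
+SELECT *
+FROM dblink(
+       'host=example.postgres.database.azure.com dbname=citus user=citus password=<password>',
+       'SELECT count(*) FROM github_events')
+     AS t(events_count bigint);
+```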
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-firewall-rules.md
+
+ Title: Public access - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: This article describes public access for Azure Database for PostgreSQL - Hyperscale (Citus).
+++++ Last updated : 10/15/2021++
+# Public access in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+This page describes the public access option. For private access, see
+[here](concepts-private-access.md).
+
+## Firewall overview
+
+The Azure Database for PostgreSQL server firewall prevents all access to your Hyperscale (Citus) coordinator node until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request.
+To configure your firewall, you create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
+
+**Firewall rules:** These rules enable clients to access your Hyperscale (Citus) coordinator node, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal. To create server-level firewall rules, you must be the subscription owner or a subscription contributor.
+
+All database access to your coordinator node is blocked by the firewall by default. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself is not impacted by the firewall rules.
+
+Connection attempts from the Internet and Azure must first pass through the firewall before they can reach your PostgreSQL Database, as shown in the following diagram:
+
+## Connecting from the Internet and from Azure
+
+A Hyperscale (Citus) server group firewall controls who can connect to the group's coordinator node. The firewall determines access by consulting a configurable list of rules. Each rule is an IP address, or range of addresses, that's allowed to connect.
+
+When the firewall blocks connections, it can cause application errors. The PostgreSQL JDBC driver, for instance, raises an error like this:
+
+> java.util.concurrent.ExecutionException: java.lang.RuntimeException:
+> org.postgresql.util.PSQLException: FATAL: no pg\_hba.conf entry for host "123.45.67.890", user "citus", database "citus", SSL
+
+See [Create and manage firewall rules](howto-manage-firewall-using-portal.md) to learn how the rules are defined.
+
+## Troubleshooting the database server firewall
+When access to the Microsoft Azure Database for PostgreSQL - Hyperscale (Citus) service doesn't behave as you expect, consider these points:
+
+* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Hyperscale (Citus) firewall configuration to take effect.
+
+* **The user is not authorized or an incorrect password was used:** If a user does not have permissions on the server or the password used is incorrect, the connection to the server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must still provide the necessary security credentials.
+
+For example, using a JDBC client, the following error may appear.
+> java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "yourusername"
+
+* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, try one of the following solutions:
+
+  * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Hyperscale (Citus) coordinator node, and then add the IP address range as a firewall rule.
+
+  * Get static IP addressing instead for your client computers, and then add the static IP addresses as a firewall rule.
+
+## Next steps
+For articles on creating server-level and database-level firewall rules, see:
+* [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](howto-manage-firewall-using-portal.md)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-high-availability.md
+
+ Title: High availability – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: High availability and disaster recovery concepts
+++++ Last updated : 11/15/2021++
+# High availability in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+High availability (HA) avoids database downtime by maintaining standby replicas
+of every node in a server group. If a node goes down, Hyperscale (Citus) switches
+incoming connections from the failed node to its standby. Failover happens
+within a few minutes, and promoted nodes always have fresh data through
+PostgreSQL synchronous streaming replication.
+
+Even without HA enabled, each Hyperscale (Citus) node has its own locally
+redundant storage (LRS) with three synchronous replicas maintained by Azure
+Storage service. If there's a single replica failure, it's detected by Azure
+Storage service and is transparently re-created. For LRS storage durability,
+see metrics [on this
+page](../../storage/common/storage-redundancy.md#summary-of-redundancy-options).
+
+When HA *is* enabled, Hyperscale (Citus) runs one standby node for each primary
+node in the server group. The primary and its standby use synchronous
+PostgreSQL replication. This replication allows customers to have predictable
+downtime if a primary node fails. In a nutshell, our service detects a failure
+on primary nodes, and fails over to standby nodes with zero data loss.
+
+To take advantage of HA on the coordinator node, database applications need to
+detect and retry dropped connections and failed transactions. The newly
+promoted coordinator will be accessible with the same connection string.
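
The retry behavior described above can be sketched as a small helper. This is a minimal illustration, not part of any Hyperscale (Citus) API; the `operation` callable stands in for application code that opens a connection and runs a transaction:

```python
import time

def run_with_retry(operation, retries=5, delay=1.0, backoff=2.0,
                   retryable=(ConnectionError,)):
    """Run a database operation, retrying if the connection drops.

    After a coordinator failover, the same connection string works again
    once the standby is promoted, so retrying with backoff is enough.
    """
    for attempt in range(retries):
        try:
            return operation()
        except retryable:
            if attempt == retries - 1:
                raise  # exhausted retries; surface the error
            time.sleep(delay)
            delay *= backoff
```

In a real application, `retryable` would name the driver's connection-error exceptions (for example, a psycopg2 `OperationalError`) rather than the built-in `ConnectionError` used here for illustration.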
+
+Recovery can be broken into three stages: detection, failover, and full
+recovery. Hyperscale (Citus) runs periodic health checks on every node, and after four
+failed checks it determines that a node is down. Hyperscale (Citus) then promotes a
+standby to primary node status (failover), and provisions a new standby-to-be.
+Streaming replication begins, bringing the new node up-to-date. When all data
+has been replicated, the node has reached full recovery.
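
The detection stage above (a node is considered down after four consecutive failed health checks) can be sketched as a simple counter. The threshold constant reflects the text; everything else is an illustrative assumption, not service code:

```python
FAILED_CHECKS_THRESHOLD = 4  # per the detection behavior described above

def update_node_status(consecutive_failures, check_passed):
    """Track consecutive failed health checks for one node.

    Returns the new failure count and whether the node should now be
    considered down (which would trigger failover to its standby).
    """
    if check_passed:
        return 0, False  # a passing check resets the counter
    consecutive_failures += 1
    return consecutive_failures, consecutive_failures >= FAILED_CHECKS_THRESHOLD
```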
+
+### Next steps
+
+- Learn how to [enable high
+ availability](howto-high-availability.md) in a Hyperscale (Citus) server
+ group.
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-limits.md
+
 Title: Limits and limitations - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Current limits for Hyperscale (Citus) server groups
+ Last updated: 12/10/2021
+# Azure Database for PostgreSQL - Hyperscale (Citus) limits and limitations
+
+The following section describes capacity and functional limits in the
+Hyperscale (Citus) service.
+
+## Networking
+
+### Maximum connections
+
+Every PostgreSQL connection (even idle ones) uses at least 10 MB of memory, so
+it's important to limit simultaneous connections. Here are the limits we chose
+to keep nodes healthy:
+
+* Coordinator node
+ * Maximum connections
+ * 300 for 0-3 vCores
+ * 500 for 4-15 vCores
+ * 1000 for 16+ vCores
+ * Maximum user connections
+ * 297 for 0-3 vCores
+ * 497 for 4-15 vCores
+ * 997 for 16+ vCores
+* Worker node
+ * Maximum connections
+ * 600
+
+Attempts to connect beyond these limits fail with an error. The system
+reserves three connections per node for monitoring, which is why three
+fewer connections are available for user queries than the connection total.
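
The coordinator limits listed above can be captured in a small helper for capacity planning. This only restates the published numbers and is not a service API:

```python
def coordinator_connection_limits(vcores):
    """Return (max_connections, max_user_connections) for a coordinator node.

    Three connections are reserved for monitoring, so user connections
    are always three fewer than the total.
    """
    if vcores < 4:          # 0-3 vCores
        total = 300
    elif vcores < 16:       # 4-15 vCores
        total = 500
    else:                   # 16+ vCores
        total = 1000
    return total, total - 3
```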
+
+#### Connection pooling
+
+You can scale connections further using [connection
+pooling](concepts-connection-pool.md). Hyperscale (Citus) offers a
+managed pgBouncer connection pooler configured for up to 2,000 simultaneous
+client connections.
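
Switching an application to the managed pooler typically means targeting a different port on the same host (assumed here to be 6432; confirm against your server group's connection strings in the Azure portal). A sketch of rewriting an existing connection URL, under those assumptions:

```python
from urllib.parse import urlsplit, urlunsplit

POOLER_PORT = 6432  # assumed managed-pgBouncer port; verify in the Azure portal

def pooled_connection_url(url, port=POOLER_PORT):
    """Rewrite a postgres:// URL so it targets the managed pooler port."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    auth = ""
    if parts.username:
        auth = parts.username
        if parts.password:
            auth += ":" + parts.password
        auth += "@"
    netloc = f"{auth}{host}:{port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query,
                       parts.fragment))
```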
+
+### Private access (preview)
+
+#### Server group name
+
+To be compatible with [private access](concepts-private-access.md),
+a Hyperscale (Citus) server group must have a name that is 40 characters or
+shorter.
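
A quick pre-flight check of this constraint, as an illustrative sketch:

```python
MAX_NAME_LENGTH = 40  # private-access compatibility limit from the text above

def private_access_compatible_name(name):
    """Return True if a server group name is short enough for private access."""
    return len(name) <= MAX_NAME_LENGTH
```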
+
+#### Regions
+
+The private access feature is available in preview in only these regions:
+
+* Americas
+ * East US
+ * East US 2
+ * West US 2
+* Asia Pacific
+ * Japan East
+ * Japan West
+ * Korea Central
+* Europe
+ * Germany West Central
+ * UK South
+ * West Europe
+
+## Storage
+
+### Storage scaling
+
+Storage on coordinator and worker nodes can be scaled up (increased) but can't
+be scaled down (decreased).
+
+### Storage size
+
+Up to 2 TiB of storage is supported on coordinator and worker nodes. See the
+available storage options and IOPS calculation
+[above](concepts-configuration-options.md#compute-and-storage) for
+node and cluster sizes.
+
+## Compute
+
+### Subscription vCore limits
+
+Azure enforces a vCore quota per subscription per region. There are two
+independently adjustable quotas: vCores for coordinator nodes, and vCores for
+worker nodes. The default quota should be more than enough to experiment with
+Hyperscale (Citus). If you do need more vCores for a region in your
+subscription, see how to [adjust compute
+quotas](howto-compute-quota.md).
+
+## PostgreSQL
+
+### Database creation
+
+The Azure portal provides credentials to connect to exactly one database per
+Hyperscale (Citus) server group, the `citus` database. Creating another
+database is currently not allowed, and the CREATE DATABASE command will fail
+with an error.
+
+### Columnar storage
+
+Hyperscale (Citus) currently has these limitations with [columnar
+tables](concepts-columnar.md):
+
+* Compression is on disk, not in memory
+* Append-only (no UPDATE/DELETE support)
+* No space reclamation (for example, rolled-back transactions may still consume
+ disk space)
+* No index support, index scans, or bitmap index scans
+* No TID scans
+* No sample scans
+* No TOAST support (large values supported inline)
+* No support for ON CONFLICT statements (except DO NOTHING actions with no
+ target specified).
+* No support for tuple locks (SELECT ... FOR SHARE, SELECT ... FOR UPDATE)
+* No support for serializable isolation level
+* Support for PostgreSQL server versions 12+ only
+* No support for foreign keys, unique constraints, or exclusion constraints
+* No support for logical decoding
+* No support for intra-node parallel scans
+* No support for AFTER ... FOR EACH ROW triggers
+* No UNLOGGED columnar tables
+* No TEMPORARY columnar tables
+
+## Next steps
+
+* Learn how to [create a Hyperscale (Citus) server group in the
+ portal](quickstart-create-portal.md).
+* Learn to enable [connection pooling](concepts-connection-pool.md).
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-maintenance.md
+
+ Title: Scheduled maintenance - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: This article describes the scheduled maintenance feature in Azure Database for PostgreSQL - Hyperscale (Citus).
+ Last updated: 04/07/2021
+# Scheduled maintenance in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+Azure Database for PostgreSQL - Hyperscale (Citus) does periodic maintenance to
+keep your managed database secure, stable, and up-to-date. During maintenance,
+all nodes in the server group get new features, updates, and patches.
+
+The key features of scheduled maintenance for Hyperscale (Citus) are:
+
+* Updates are applied at the same time on all nodes in the server group
+* Notifications about upcoming maintenance are posted to Azure Service Health
+ five days in advance
+* Usually there are at least 30 days between successful maintenance events for
+ a server group
+* A preferred day of the week, and a time window within that day for
+  maintenance to start, can be set for each server group individually
+
+## Selecting a maintenance window and notification about upcoming maintenance
+
+You can schedule maintenance during a specific day of the week and a time
+window within that day. Or you can let the system pick a day and a time window
+for you automatically. Either way, the system will alert you five days before
+running any maintenance. The system will also let you know when maintenance
+starts, and when it completes successfully.
+
+Notifications about upcoming scheduled maintenance are posted to Azure Service
+Health and can be:
+
+* Emailed to a specific address
+* Emailed to an Azure Resource Manager Role
+* Sent in a text message (SMS) to mobile devices
+* Pushed as a notification to an Azure app
+* Delivered as a voice message
+
+When specifying preferences for the maintenance schedule, you can pick a day of
+the week and a time window. If you don't specify, the system will pick times
+between 11pm and 7am in your server group's region time. You can define
+different schedules for each Hyperscale (Citus) server group in your Azure
+subscription.
+
+> [!IMPORTANT]
+> Normally there are at least 30 days between successful scheduled maintenance
+> events for a server group.
+>
+> However, in case of a critical emergency update such as a severe
+> vulnerability, the notification window could be shorter than five days. The
+> critical update may be applied to your server even if a successful scheduled
+> maintenance was performed in the last 30 days.
+
+You can update scheduling settings at any time. If there's maintenance
+scheduled for your Hyperscale (Citus) server group and you update the schedule,
+existing events will continue as originally scheduled. The settings change will
+take effect after successful completion of existing events.
+
+If maintenance fails or gets canceled, the system will create a notification.
+It will try maintenance again according to current scheduling settings, and
+notify you five days before the next maintenance event.
+
+## Next steps
+
+* Learn how to [change the maintenance schedule](howto-maintenance.md)
+* Learn how to [get notifications about upcoming maintenance](../../service-health/service-notifications.md) using Azure Service Health
+* Learn how to [set up alerts about upcoming scheduled maintenance events](../../service-health/resource-health-alert-monitor-guide.md)
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-monitoring.md
+
+ Title: Monitor and tune - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: This article describes monitoring and tuning features in Azure Database for PostgreSQL - Hyperscale (Citus)
+ Last updated: 12/06/2021
+# Monitor and tune Azure Database for PostgreSQL - Hyperscale (Citus)
+
+Monitoring data about your servers helps you troubleshoot and optimize for your
+workload. Hyperscale (Citus) provides various monitoring options to provide
+insight into the behavior of nodes in a server group.
+
+## Metrics
+
+Hyperscale (Citus) provides metrics for nodes in a server group, and aggregate
+metrics for the group as a whole. The metrics give insight into the behavior of
+supporting resources. Each metric is emitted at a one-minute frequency, and has
+up to 30 days of history.
+
+In addition to viewing graphs of the metrics, you can configure alerts. For
+step-by-step guidance, see [How to set up
+alerts](howto-alert-on-metric.md). Other tasks include setting up
+automated actions, running advanced analytics, and archiving history. For more
+information, see the [Azure Metrics
+Overview](../../azure-monitor/data-platform.md).
+
+### Per node vs aggregate
+
+By default, the Azure portal aggregates Hyperscale (Citus) metrics across nodes
+in a server group. However, some metrics, such as disk usage percentage, are
+more informative on a per-node basis. To see metrics for nodes displayed
+individually, use Azure Monitor [metric
+splitting](howto-monitoring.md#view-metrics-per-node) by server
+name.
+
+> [!NOTE]
+>
+> Some Hyperscale (Citus) server groups do not support metric splitting. On
+> these server groups, you can view metrics for individual nodes by clicking
+> the node name in the server group **Overview** page. Then open the
+> **Metrics** page for the node.
+
+### List of metrics
+
+These metrics are available for Hyperscale (Citus) nodes:
+
+|Metric|Metric Display Name|Unit|Description|
+|||||
+|active_connections|Active Connections|Count|The number of active connections to the server.|
+|cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
+|iops|IOPS|Count|See the [IOPS definition](../../virtual-machines/premium-storage-performance.md#iops) and [Hyperscale (Citus) throughput](concepts-configuration-options.md)|
+|memory_percent|Memory percent|Percent|The percentage of memory in use.|
+|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
+|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
+|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
+|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
+
+Azure supplies no aggregate metrics for the cluster as a whole, but metrics for
+multiple nodes can be placed on the same graph.
+
+## Next steps
+
+- Learn how to [view metrics](howto-monitoring.md) for a
+ Hyperscale (Citus) server group.
+- See [how to set up alerts](howto-alert-on-metric.md) for guidance
+ on creating an alert on a metric.
+- Learn how to do [metric
+ splitting](../../azure-monitor/essentials/metrics-charts.md#metric-splitting) to
+ inspect metrics per node in a server group.
+- See other measures of database health with [useful diagnostic queries](howto-useful-diagnostic-queries.md).
postgresql Concepts Nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-nodes.md
+
 Title: Nodes - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn about the types of nodes and tables in a server group in Azure Database for PostgreSQL.
+ Last updated: 07/28/2019
+# Nodes and tables in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+## Nodes
+
+The Hyperscale (Citus) hosting type allows Azure Database for PostgreSQL
+servers (called nodes) to coordinate with one another in a "shared nothing"
+architecture. The nodes in a server group collectively hold more data and use
+more CPU cores than would be possible on a single server. The architecture also
+allows the database to scale by adding more nodes to the server group.
+
+### Coordinator and workers
+
+Every server group has a coordinator node and multiple workers. Applications
+send their queries to the coordinator node, which relays them to the relevant
+workers and accumulates their results. Applications are not able to connect
+directly to workers.
+
+Hyperscale (Citus) allows the database