Updates from: 01/06/2022 02:07:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Claimsschema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/claimsschema.md
Title: ClaimsSchema - Azure Active Directory B2C
+ Title: "ClaimsSchema: Azure Active Directory B2C"
description: Specify the ClaimsSchema element of a custom policy in Azure Active Directory B2C.
Last updated 03/05/2020
+ # ClaimsSchema
The **DataType** element supports the following values:
| - | -- |
|boolean|Represents a Boolean (`true` or `false`) value.|
|date|Represents an instant in time, typically expressed as a date of a day. The value of the date follows ISO 8601 convention.|
-|dateTime|Represents an instant in time, typically expressed as a date and time of day. The value of the date follows ISO 8601 convention.|
+|dateTime|Represents an instant in time, typically expressed as a date and time of day. The value of the date follows ISO 8601 convention during runtime and is converted to UNIX epoch time when issued as a claim into the token.|
|duration|Represents a time interval in years, months, days, hours, minutes, and seconds. The format is `PnYnMnDTnHnMnS`, where `P` indicates positive, or `N` for negative value. `nY` is the number of years followed by a literal `Y`. `nMo` is the number of months followed by a literal `Mo`. `nD` is the number of days followed by a literal `D`. Examples: `P21Y` represents 21 years. `P1Y2Mo` represents one year, and two months. `P1Y2Mo5D` represents one year, two months, and five days. `P1Y2M5DT8H5M20S` represents one year, two months, five days, eight hours, five minutes, and twenty seconds.|
|phoneNumber|Represents a phone number.|
|int|Represents a number between -2,147,483,648 and 2,147,483,647.|
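As a quick illustration of the `dateTime` behavior described above (ISO 8601 at runtime, UNIX epoch time in the issued token), here is a minimal C# sketch; the sample value and console program are assumptions for demonstration only and aren't part of the policy schema.

```csharp
using System;
using System.Globalization;

class DateTimeClaimExample
{
    static void Main()
    {
        // A dateTime claim value as it's handled at runtime (ISO 8601).
        string iso8601 = "2022-01-06T02:07:04Z";

        // Parse the ISO 8601 string, treating it as UTC.
        DateTimeOffset parsed = DateTimeOffset.Parse(
            iso8601, CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal);

        // When issued as a claim into the token, the value is represented as UNIX epoch time (seconds).
        long unixEpochSeconds = parsed.ToUnixTimeSeconds();

        Console.WriteLine($"ISO 8601 value: {iso8601}");
        Console.WriteLine($"UNIX epoch value: {unixEpochSeconds}");
    }
}
```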
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-user-input.md
The application claims are values that are returned to the application. Update y
1. Select **Page layouts**.
1. Select **Local account sign-up page**.
1. Under **User attributes**, select **City**.
- 1. In the **User input type** drop-down, select **DropdownSingleSelect**. Optional: Use the "Move up/down" buttons to arrange the text order on the sign-up page.
1. In the **Optional** drop-down, select **No**.
+ 1. In the **User input type**, select the current user input type, such as **TextBox**, to open a **User input type editor** window pane.
+ 1. In the **User input type** drop-down, select **DropdownSingleSelect**.
+ 1. In the **Text** and **Values** fields, enter the text and value pairs that make up your set of responses for the attribute. The **Text** is displayed in the web interface for your flow, and the **Values** is stored in Azure AD B2C for the selected **Text**. Optional: Use the "Move up/down" buttons to reorder drop-down items.
+1. Select **Ok**. Optional: Use the "Move up/down" buttons to reorder user attributes in the sign-up page.
1. Select **Save**.
+ :::image type="content" source="./media/configure-user-input/configure-user-attributes-input-type.png" alt-text="Screenshot of the user attributes input type configuration.":::
+ ### Provide a list of values by using localized collections

To provide a set list of values for the city attribute:
After you add the localization element, [edit the content definition with the lo
- Learn how to [use custom attributes in Azure AD B2C](user-flow-custom-attributes.md). ::: zone-end+
+## Next steps
+- [Customize user interface in Azure Active Directory B2C](customize-ui.md).
+- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md).
+- [Enable JavaScript](javascript-and-page-layout.md).
+
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
Follow these steps to add a custom domain to your Azure AD B2C tenant:
> You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use [Azure DNS zone](../dns/dns-getstarted-portal.md), or [App Service domains](../app-service/manage-custom-dns-buy-domain.md).

1. [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name). Verify each subdomain, or hostname you plan to use. For example, to be able to sign in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not the top-level domain *contoso.com*.
- After the domain is verified, **delete** the DNS TXT record you created.
+
+ > [!IMPORTANT]
+ > After the domain is verified, **delete** the DNS TXT record you created.
## Step 2. Create a new Azure Front Door instance
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
To enable users to sign in using an Azure AD account, you need to define Azure A
You can define Azure AD as a claims provider by adding Azure AD to the **ClaimsProvider** element in the extension file of your policy.
-1. Open the *SocialAndLocalAccounts/**TrustFrameworkExtensions.xml*** file.
+1. Open the *SocialAndLocalAccounts/**TrustFrameworkExtensions.xml*** file (see the files you've used in the prerequisites).
1. Find the **ClaimsProviders** element. If it does not exist, add it under the root element.
1. Add a new **ClaimsProvider** as follows:
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/how-to-mfa-additional-context.md
Your organization will need to enable Microsoft Authenticator push notifications
When a user receives a Passwordless phone sign-in or MFA push notification in the Microsoft Authenticator app, they'll see the name of the application that requests the approval and the app location based on its IP address.
-![Screenshot of additional context in the MFA push notification.](media/howto-authentication-passwordless-phone/location.png)
The additional context can be combined with [number matching](how-to-mfa-number-match.md) to further improve sign-in security.
-![Screenshot of additional context with number matching in the MFA push notification.](media/howto-authentication-passwordless-phone/location-with-number-match.png)
### Policy schema changes
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 11/12/2021 Last updated : 01/05/2022
The user is then presented with a number. The app prompts the user to authentica
After the user has used passwordless phone sign-in, the app continues to guide the user through this method. However, the user will see the option to choose another method.

## Known Issues
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
This section gives an overview of the code required to sign in users and call th
The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process initializes:

```csharp
- // Get the scopes from the configuration (appsettings.json)
- var initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');
public void ConfigureServices(IServiceCollection services)
- {
+ {
+ // Get the scopes from the configuration (appsettings.json)
+ var initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');
+
        // Add sign-in with Microsoft
        services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
            .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
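            // What follows is a hedged sketch (not part of the diff above) of how this call chain
            // typically continues in the Microsoft.Identity.Web quickstart: enable token acquisition
            // for the configured downstream scopes, register the Microsoft Graph client, and cache tokens.
            .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
                .AddMicrosoftGraph(Configuration.GetSection("DownstreamApi"))
                .AddInMemoryTokenCaches();

        // Add the Razor Pages UI together with the Microsoft Identity sign-in/sign-out pages.
        services.AddRazorPages()
            .AddMicrosoftIdentityUI();
    }
```

This continuation is a sketch based on the Microsoft.Identity.Web quickstart pattern; the `DownstreamApi` configuration section name is taken from the scope lookup shown above, and the Razor Pages UI registration is an assumption for completeness.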
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-breaking-changes.md
# What's new for authentication?
-> Get notified of updates to this page by pasting this URL into your RSS feed reader:<br/>`https://docs.microsoft.com/api/search/rss?search=%22whats%20new%20for%20authentication%22&locale=en-us`
+> Get notified of updates to this page by pasting this URL into your RSS feed reader:<br/>`https://docs.microsoft.com/api/search/rss?search=%22Azure+Active+Directory+breaking+changes+reference%22&locale=en-us`
The authentication system alters and adds features on an ongoing basis to improve security and standards compliance. To stay up to date with the most recent developments, this article provides you with information about the following details:
active-directory Directory Overview User Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/directory-overview-user-model.md
In Azure AD, when users join a licensed group, they're automatically assigned th
If there are not enough available licenses, or an issue occurs like service plans that can't be assigned at the same time, you can see status of any licensing issue for the group in the Azure portal.
->[!NOTE]
->The group-based licensing feature currently is in public preview. During the preview, the feature is available with any paid Azure Active Directory (Azure AD) license plan or trial.
-
## Delegate administrator roles

Many large organizations want options for their users to obtain sufficient permissions for their work tasks without assigning the powerful Global Administrator role to, for example, users who must register applications. Here's an example of new Azure AD administrator roles to help you distribute the work of application management with more granularity:
Azure AD also gives you granular control of the data that flows between the app
If you're a beginning Azure AD administrator, get the basics down in [Azure Active Directory Fundamentals](../fundamentals/index.yml).
-Or you can start [creating groups](../fundamentals/active-directory-groups-create-azure-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context), [assigning licenses](../fundamentals/license-users-groups.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context), [assigning app access](../manage-apps/assign-user-or-group-access-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) or [assigning administrator roles](../roles/permissions-reference.md).
+Or you can start [creating groups](../fundamentals/active-directory-groups-create-azure-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context), [assigning licenses](../fundamentals/license-users-groups.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context), [assigning app access](../manage-apps/assign-user-or-group-access-portal.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) or [assigning administrator roles](../roles/permissions-reference.md).
active-directory Choose Ad Authn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/choose-ad-authn.md
Title: Authentication for Azure AD hybrid identity solutions
description: This guide helps CEOs, CIOs, CISOs, Chief Identity Architects, Enterprise Architects, and Security and IT decision makers responsible for choosing an authentication method for their Azure AD hybrid identity solution in medium to large organizations. keywords:-- Previously updated : 10/30/2019++ Last updated : 01/05/2022
In today's world, threats are present 24 hours a day and come from everywhere.
[Get started](../fundamentals/active-directory-whatis.md) with Azure AD and deploy the right authentication solution for your organization.
-If you're thinking about migrating from federated to cloud authentication, learn more about [changing the sign-in method](../../active-directory/hybrid/plan-connect-user-signin.md). To help you plan and implement the migration, use [these project deployment plans](../fundamentals/active-directory-deployment-plans.md) or consider using the new [Staged Rollout](../../active-directory/hybrid/how-to-connect-staged-rollout.md) feature to migrate federated users to using cloud authentication in a staged approach.
+If you're thinking about migrating from federated to cloud authentication, learn more about [changing the sign-in method](../../active-directory/hybrid/plan-connect-user-signin.md). To help you plan and implement the migration, use [these project deployment plans](../fundamentals/active-directory-deployment-plans.md) or consider using the new [Staged Rollout](../../active-directory/hybrid/how-to-connect-staged-rollout.md) feature to migrate federated users to using cloud authentication in a staged approach.
active-directory Cloud Governed Management For On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/cloud-governed-management-for-on-premises.md
na Previously updated : 05/29/2020 Last updated : 01/05/2022
In hybrid environments, Microsoft's strategy is to enable deployments where the
## Next steps
-For more information on how to get started on this journey, see the Azure AD deployment plans, located at <https://aka.ms/deploymentplans>. They provide end-to-end guidance about how to deploy Azure Active Directory (Azure AD) capabilities. Each plan explains the business value, planning considerations, design, and operational procedures needed to successfully roll out common Azure AD capabilities. Microsoft continually updates the deployment plans with best practices learned from customer deployments and other feedback when we add new capabilities to managing from the cloud with Azure AD.
+For more information on how to get started on this journey, see the Azure AD deployment plans, located at <https://aka.ms/deploymentplans>. They provide end-to-end guidance about how to deploy Azure Active Directory (Azure AD) capabilities. Each plan explains the business value, planning considerations, design, and operational procedures needed to successfully roll out common Azure AD capabilities. Microsoft continually updates the deployment plans with best practices learned from customer deployments and other feedback when we add new capabilities to managing from the cloud with Azure AD.
active-directory Concept Adsync Service Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-adsync-service-account.md
na Previously updated : 03/17/2021 Last updated : 01/05/2022
The sync service can run under different accounts. It can run under a Virtual Se
|Type of account|Installation option|Description|
|--|--|--|
|Virtual Service Account|Express and custom, 2017 April and later|A Virtual Service Account is used for all express installations, except for installations on a Domain Controller. When using custom installation, it is the default option unless another option is used.|
-|Managed Service Account|Custom, 2017 April and later|If you use a remote SQL Server, then we recommend using a group Managed Service Account. |
+|Managed Service Account|Custom, 2017 April and later|If you use a remote SQL Server, then we recommend using a group managed service account. |
|Managed Service Account|Express and custom, 2021 March and later|A standalone Managed Service Account prefixed with ADSyncMSA_ is created during installation for express installations when installed on a Domain Controller. When using custom installation, it is the default option unless another option is used.|
|User Account|Express and custom, 2017 April to 2021 March|A User Account prefixed with AAD_ is created during installation for express installations when installed on a Domain Controller. When using custom installation, it is the default option unless another option is used.|
|User Account|Express and custom, 2017 March and earlier|A User Account prefixed with AAD_ is created during installation for express installations. When using custom installation, another account can be specified.|
A Virtual Service Account is a special type of managed local account that does n
![Virtual service account](media/concept-adsync-service-account/account-1.png)
-The Virtual Service Account is intended to be used with scenarios where the sync engine and SQL are on the same server. If you use remote SQL, then we recommend using a group Managed Service Account instead.
+The Virtual Service Account is intended to be used with scenarios where the sync engine and SQL are on the same server. If you use remote SQL, then we recommend using a group managed service account instead.
The Virtual Service Account cannot be used on a Domain Controller due to [Windows Data Protection API (DPAPI)](/previous-versions/ms995355(v=msdn.10)) issues.

## Managed Service Account
-If you use a remote SQL Server, then we recommend to using a group Managed Service Account. For more information on how to prepare your Active Directory for group Managed Service account, see [Group Managed Service Accounts Overview](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)).
+If you use a remote SQL Server, then we recommend using a group managed service account. For more information on how to prepare your Active Directory for a group managed service account, see [Group Managed Service Accounts Overview](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831782(v=ws.11)).
To use this option, on the [Install required components](how-to-connect-install-custom.md#install-required-components) page, select **Use an existing service account**, and select **Managed Service Account**.
This account is intended to be used with scenarios where the sync engine and SQL
## User Account

A local service account is created by the installation wizard (unless you specify the account to use in custom settings). The account is prefixed AAD_ and used for the actual sync service to run as. If you install Azure AD Connect on a Domain Controller, the account is created in the domain. The AAD_ service account must be located in the domain if:
-- you use a remote server running SQL Server
-- you use a proxy that requires authentication
+- You use a remote server running SQL Server
+- You use a proxy that requires authentication
![user account](media/concept-adsync-service-account/account-3.png)
The account is also granted permission to files, registry keys, and other object
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Concept Azure Ad Connect Sync Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-azure-ad-connect-sync-architecture.md
na Previously updated : 07/13/2017 Last updated : 01/05/2022
active-directory Concept Azure Ad Connect Sync Declarative Provisioning Expressions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-azure-ad-connect-sync-declarative-provisioning-expressions.md
na Previously updated : 07/18/2017 Last updated : 01/05/2022
For example:
**Reference topics**
-* [Azure AD Connect sync: Functions Reference](reference-connect-sync-functions-reference.md)
+* [Azure AD Connect sync: Functions Reference](reference-connect-sync-functions-reference.md)
active-directory Concept Azure Ad Connect Sync Declarative Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-azure-ad-connect-sync-declarative-provisioning.md
na Previously updated : 07/13/2017 Last updated : 01/05/2022
active-directory Concept Azure Ad Connect Sync Default Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-azure-ad-connect-sync-default-configuration.md
na Previously updated : 07/13/2017 Last updated : 01/05/2022
active-directory Concept Azure Ad Connect Sync User And Contacts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/concept-azure-ad-connect-sync-user-and-contacts.md
na Previously updated : 01/15/2018 Last updated : 01/05/2022
active-directory How To Connect Adconnectivitytools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-adconnectivitytools.md
Previously updated : 4/25/2019 Last updated : 01/05/2022
active-directory How To Connect Azure Ad Trust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-azure-ad-trust.md
na Previously updated : 07/28/2018 Last updated : 01/05/2022
active-directory How To Connect Azureadaccount https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-azureadaccount.md
na Previously updated : 04/25/2019 Last updated : 01/05/2022
active-directory How To Connect Configure Ad Ds Connector Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md
Previously updated : 08/20/2021 Last updated : 01/05/2022
active-directory How To Connect Create Custom Sync Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-create-custom-sync-rule.md
na Previously updated : 01/31/2019 Last updated : 01/05/2022
You can use the synchronization rule editor to edit or create a new synchronizat
## Next Steps

- [Azure AD Connect sync](how-to-connect-sync-whatis.md).
-- [What is hybrid identity?](whatis-hybrid-identity.md).
+- [What is hybrid identity?](whatis-hybrid-identity.md).
active-directory How To Connect Device Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-device-options.md
na Previously updated : 09/13/2018 Last updated : 01/05/2022
The following documentation provides information about the various device option
## Next steps

* [Configure Hybrid Azure AD join](../devices/hybrid-azuread-join-plan.md)
-* [Configure / Disable device writeback](how-to-connect-device-writeback.md)
+* [Configure / Disable device writeback](how-to-connect-device-writeback.md)
active-directory How To Connect Device Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-device-writeback.md
na Previously updated : 05/08/2018 Last updated : 01/05/2022
Verify configuration in Active Directory:
* [Setting up On-premises Conditional Access using Azure Active Directory Device Registration](../devices/overview.md)

## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-emergency-ad-fs-certificate-rotation.md
Previously updated : 03/22/2021 Last updated : 01/05/2022
active-directory How To Connect Fed Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-compatibility.md
na Previously updated : 08/23/2018 Last updated : 01/05/2022
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Previously updated : 12/13/2021 Last updated : 01/05/2022
active-directory How To Connect Fed Hybrid Azure Ad Join Post Config Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-hybrid-azure-ad-join-post-config-tasks.md
na Previously updated : 08/10/2018 Last updated : 01/05/2022
active-directory How To Connect Fed Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-management.md
na Previously updated : 07/18/2017 Last updated : 01/05/2022
In this rule, you're simply checking the temporary flag **idflag**. You decide w
You can add more than one domain to be federated by using Azure AD Connect, as described in [Add a new federated domain](how-to-connect-fed-management.md#addfeddomain). Azure AD Connect version 1.1.553.0 and later creates the correct claim rule for issuerID automatically. If you cannot use Azure AD Connect version 1.1.553.0 or later, we recommend that you use the [Azure AD RPT Claim Rules](https://aka.ms/aadrptclaimrules) tool to generate and set the correct claim rules for the Azure AD relying party trust.

## Next steps
-Learn more about [user sign-in options](plan-connect-user-signin.md).
+Learn more about [user sign-in options](plan-connect-user-signin.md).
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-fed-o365-certs.md
na Previously updated : 10/20/2017 Last updated : 01/05/2022
By default, AD FS is configured to generate token signing and token decryption c
Azure AD tries to retrieve a new certificate from your federation service metadata 30 days before the expiry of the current certificate. If a new certificate isn't available at that time, Azure AD continues to monitor the metadata at regular daily intervals. As soon as the new certificate is available in the metadata, the federation settings for the domain are updated with the new certificate information. You can use `Get-MsolDomainFederationSettings` to verify whether you see the new certificate in the NextSigningCertificate / SigningCertificate.
-For more information on Token Signing certificates in AD FS see [Obtain and Configure Token Signing and Token Decryption Certificates for AD FS](/windows-server/identity/ad-fs/operations/configure-ts-td-certs-ad-fs)
+For more information on Token Signing certificates in AD FS see [Obtain and Configure Token Signing and Token Decryption Certificates for AD FS](/windows-server/identity/ad-fs/operations/configure-ts-td-certs-ad-fs)
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
# Azure AD Connect: Version release history
-The Azure Active Directory (Azure AD) team regularly updates Azure AD Connect with new features and functionality. Not all additions are applicable to all audiences.
+The Azure Active Directory (Azure AD) team regularly updates Azure AD Connect with new features and functionality. Not all additions apply to all audiences.
-This article is designed to help you keep track of the versions that have been released, and to understand what the changes are in the latest version.
+This article helps you keep track of the versions that have been released and understand what the changes are in the latest version.
## Looking for the latest versions?

You can upgrade your Azure AD Connect server from all supported versions with the latest versions:
+ - If you're using *Windows Server 2016 or newer*, use *Azure AD Connect V2.0*. You can download the latest version of Azure AD Connect 2.0 from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=47594). See the [release notes for the latest V2.0 release](reference-connect-version-history.md#20280).
+ - If you're still using an *older version of Windows Server*, use *Azure AD Connect V1.6*. You can download the latest version of Azure AD Connect V1 from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=103336). See the [release notes for the latest V1.6 release](reference-connect-version-history.md#16160).
+ - We're only applying critical changes to the V1.x versions going forward. You might not find some of the features and fixes for V2.0 in the V1.x releases. For this reason, upgrade to the V2.0 version as soon as possible. Most notably, there's an issue with the 1.16.4.2 build. When you upgrade to this V1.6 build or any newer builds, the group limit resets to 50,000. When you upgrade a server to this build, or any newer 1.6 builds, reapply the rule changes you applied when you initially increased the group membership limit to 250,000 before you enable sync for the server.
-This table is a list of related topics:
+The following table lists related topics:
Topic | Details
--------- | ---------
Steps to upgrade from Azure AD Connect | Different methods to [upgrade from a previous version to the latest](how-to-upgrade-previous-version.md) Azure AD Connect release.
-Required permissions | For permissions required to apply an update, see [accounts and permissions](reference-connect-accounts-permissions.md#upgrade).
+Required permissions | For permissions required to apply an update, see [Azure AD Connect: Accounts and permissions](reference-connect-accounts-permissions.md#upgrade).
> [!IMPORTANT]
-> **On 31 August 2022, all 1.x versions of Azure Active Directory (Azure AD) Connect will be retired because they include SQL Server 2012 components that will no longer be supported.** Either upgrade to the most recent version of Azure AD Connect (2.x version) by that date, or [evaluate and switch to Azure AD cloud sync](../cloud-sync/what-is-cloud-sync.md).
->
-> You need to make sure you are running a recent version of Azure AD Connect to receive an optimal support experience.
->
-> If you run a retired version of Azure AD Connect it may unexpectedly stop working and you may not have the latest security fixes, performance improvements, troubleshooting and diagnostic tools and service enhancements. Moreover, if you require support we may not be able to provide you with the level of service your organization needs.
->
-> Go to this article to learn more about [Azure Active Directory Connect V2.0](whatis-azure-ad-connect-v2.md), what has changed in V2.0 and how this change impacts you.
->
-> Please refer to [this article](./how-to-upgrade-previous-version.md) to learn more about how to upgrade Azure AD Connect to the latest version.
->
-> For version history information on retired versions, see [Azure AD Connect version release history archive](reference-connect-version-history-archive.md)
+> *On August 31, 2022, all 1.x versions of Azure AD Connect will be retired because they include SQL Server 2012 components that will no longer be supported.* Upgrade to the most recent version of Azure AD Connect (2.x version) by that date or [evaluate and switch to Azure AD cloud sync](../cloud-sync/what-is-cloud-sync.md).
+
+Make sure you're running a recent version of Azure AD Connect to receive an optimal support experience.
+
+If you run a retired version of Azure AD Connect, it might unexpectedly stop working. You also might not have the latest security fixes, performance improvements, troubleshooting and diagnostic tools, and service enhancements. If you require support, we might not be able to provide you with the level of service your organization needs.
+
+To learn more about what has changed in V2.0 and how this change affects you, see [Azure AD Connect V2.0](whatis-azure-ad-connect-v2.md).
+
+To learn more about how to upgrade Azure AD Connect to the latest version, see [Azure AD Connect: Upgrade from a previous version to the latest](./how-to-upgrade-previous-version.md).
+
+For version history information on retired versions, see [Azure AD Connect: Version release history archive](reference-connect-version-history-archive.md).
> [!NOTE]
-> Releasing a new version of Azure AD Connect is a process that requires several quality control step to ensure the operation functionality of the service, and while we go through this process the version number of a new release as well as the release status will be updated to reflect the most recent state.
->
-> Not all releases of Azure AD Connect will be made available for auto upgrade. The release status will indicate whether a release is made available for auto upgrade or for download only. If auto upgrade was enabled on your Azure AD Connect server then that server will automatically upgrade to the latest version of Azure AD Connect that is released for auto upgrade. Note that not all Azure AD Connect configurations are eligible for auto upgrade.
->
-> To clarify the use of Auto Upgrade, it is meant to push all important updates and critical fixes to you. This is not necessarily the latest version because not all versions will require/include a fix to a critical security issue (just one example of many). Critical issues would usually be addressed with a new version provided via Auto Upgrade. If there are no such issues, there are no updates pushed out using Auto Upgrade, and in general if you are using the latest auto upgrade version you should be good.
->
-> However, if you'd like all the latest features and updates, the best way to see if there are any is to check this page and install them as you see fit.
->
-> Please follow this link to read more about [auto upgrade](how-to-connect-install-automatic-upgrade.md).
+> Releasing a new version of Azure AD Connect requires several quality-control steps to ensure the operation functionality of the service. While we go through this process, the version number of a new release and the release status are updated to reflect the most recent state.
+
+Not all releases of Azure AD Connect are made available for auto-upgrade. The release status indicates whether a release is made available for auto-upgrade or for download only. If auto-upgrade was enabled on your Azure AD Connect server, that server automatically upgrades to the latest version of Azure AD Connect that's released for auto-upgrade. Not all Azure AD Connect configurations are eligible for auto-upgrade.
+
+Auto-upgrade is meant to push all important updates and critical fixes to you. It isn't necessarily the latest version because not all versions will require or include a fix to a critical security issue. (This example is just one of many.) Critical issues are usually addressed with a new version provided via auto-upgrade. If there are no such issues, there are no updates pushed out by using auto-upgrade. In general, if you're using the latest auto-upgrade version, you should be good.
+
+If you want all the latest features and updates, check this page and install what you need.
+
+To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
## 2.0.89.0

### Release status
-12/22/2021: Released for download only, not available for auto upgrade.
+
+12/22/2021: Released for download only, not available for auto upgrade
### Bug fixes
-- We fixed a bug in version 2.0.88.0 where, under certain conditions, linked mailboxes of disabled users and mailboxes of certain resource objects, were getting deleted.
+
+We fixed a bug in version 2.0.88.0 where, under certain conditions, linked mailboxes of disabled users and mailboxes of certain resource objects, were getting deleted.
## 2.0.88.0
-> [!NOTE]
-> This release requires Windows Server 2016 or newer. It fixes a vulnerability that is present in version 2.0 of Azure AD Connect, as well as some other bug fixes and minor feature updates.
+
+> [!NOTE]
+> This release requires Windows Server 2016 or newer. It fixes a vulnerability that's present in version 2.0 of Azure AD Connect and other bug fixes and minor feature updates.
### Release status
-12/15/2021: Released for download only, not available for auto upgrade.
+
+12/15/2021: Released for download only, not available for auto-upgrade
### Bug fixes
+- We upgraded the version of Microsoft.Data.OData from 5.8.1 to 5.8.4 to fix a vulnerability.
+- Accessibility fixes:
+ - We made the Azure AD Connect wizard resizable to account for different zoom levels and screen resolutions.
+ - We named elements to satisfy accessibility requirements.
+- We fixed a bug where miisserver failed because of a null reference.
+- We fixed a bug to ensure the desktop SSO value persists after upgrading Azure AD Connect to a newer version.
+- We modified the inetorgperson sync rules to fix an issue with account/resource forests.
+- We fixed a radio button test to display a **Link More** link.
### Functional changes
+
+- We made a change so that group writeback DN is now configurable with the display name of the synced group.
+- We removed the hard requirement for exchange schema when you enable group writeback.
+- Azure AD Kerberos changes:
+ - We extended the PowerShell command to support custom top-level names for trusted object creation.
+ - We made a change to set an official brand name for the Azure AD Kerberos feature.
## 1.6.16.0

> [!NOTE]
-> This is an update release of Azure AD Connect. This version is intended to be used by customers who are running an older version of Windows Server and cannot upgrade their server to Windows Server 2016 or newer at this time. You cannot use this version to update an Azure AD Connect V2.0 server.
+> This release is an update release of Azure AD Connect. This version is intended to be used by customers who are running an older version of Windows Server and can't upgrade their server to Windows Server 2016 or newer at this time. You can't use this version to update an Azure AD Connect V2.0 server.
>
-> This release should not be installed on Windows Server 2016 or newer. This release includes SQL Server 2012 components and will be retired on August 31st 2022. You will need to upgrade your Server OS and Azure AD Connect version before that date.
+> Don't install this release on Windows Server 2016 or newer. This release includes SQL Server 2012 components and will be retired on August 31, 2022. Upgrade your Server OS and Azure AD Connect version before that date.
>
-> There is an issue where upgrading to this v1.6 build or any newer builds resets the group membership limit to 50k. When a server is upgraded to this build, or any newer 1.6 builds, then the customer should reapply the rules changes they applied when initially increasing the group membership limit to 250k before they enable sync for the server.
+> When you upgrade to this V1.6 build or any newer builds, the group membership limit resets to 50,000. When a server is upgraded to this build, or any newer 1.6 builds, reapply the rule changes you applied when you initially increased the group membership limit to 250,000 before you enable sync for the server.
### Release status
-10/13/2021: Released for download and auto upgrade.
+10/13/2021: Released for download and auto-upgrade
### Bug fixes

-- We fixed a bug where the Autoupgrade process attempted to upgrade Azure AD Connect servers that are running older Windows OS version 2008 or 2008 R2 and failed. These versions of Windows Server are no longer supported. In this release we only attempt autoupgrade on machines that run Windows Server 2012 or newer.
-- We fixed an issue where, under certain conditions, miisserver would be crashing due to access violation exception.
+- We fixed a bug where the auto-upgrade process attempted to upgrade Azure AD Connect servers that are running older Windows OS version 2008 or 2008 R2 and failed. These versions of Windows Server are no longer supported. In this release, we only attempt auto-upgrade on machines that run Windows Server 2012 or newer.
+- We fixed an issue where, under certain conditions, miisserver failed because of an access violation exception.
+
+### Known issues
-### Known Issues
+When you upgrade to this V1.6 build or any newer builds, the group membership limit resets to 50,000. When a server is upgraded to this build, or any newer 1.6 builds, reapply the rule changes you applied when you initially increased the group membership limit to 250,000 before you enable sync for the server.
## 2.0.28.0

> [!NOTE]
-> This is a maintenance update release of Azure AD Connect. This release requires Windows Server 2016 or newer.
+> This release is a maintenance update release of Azure AD Connect. It requires Windows Server 2016 or newer.
### Release status
-9/30/2021: Released for download only, not available for auto upgrade.
+9/30/2021: Released for download only, not available for auto-upgrade
### Bug fixes
+- We removed a download button for a PowerShell script on the **Group Writeback Permissions** page in the wizard. We also changed the text on the wizard page to include a **Learn More** link that links to an online article where the PowerShell script can be found.
+- We fixed a bug where the wizard was incorrectly blocking the installation when the .NET version on the server was greater than 4.6 because of missing registry keys. Those registry keys aren't required and should only block installation if they're intentionally set to false.
+- We fixed a bug where an error was thrown if phantom objects were found during the initialization of a sync step. This bug blocked the sync step or removed transient objects. The phantom objects are now ignored.
-
-Note: A phantom object is a placeholder for an object which is not there or has not been seen yet, for example if a source object has a reference for a target object which is not there then we create the target object as a phantom.
+ A phantom object is a placeholder for an object that isn't there or hasn't been seen yet. For example, if a source object has a reference for a target object that isn't there, we create the target object as a phantom.
### Functional changes
+A change was made that allows a user to deselect objects and attributes from the inclusion list, even if they're in use. Instead of blocking this action, we now provide a warning.
## 1.6.14.2

> [!NOTE]
-> This is an update release of Azure AD Connect. This version is intended to be used by customers who are running an older version of Windows Server and cannot upgrade their server to Windows Server 2016 or newer at this time. You cannot use this version to update an Azure AD Connect V2.0 server.
-> We will begin auto upgrading eligible tenants when this version is available for download, autoupgrade will take a few weeks to complete.
-> There is an issue where upgrading to this v1.6 build or any newer builds resets the group membership limit to 50k. When a server is upgraded to this build, or any newer 1.6 builds, then the customer should reapply the rules changes they applied when initially increasing the group membership limit to 250k before they enable sync for the server.
+> This release is an update release of Azure AD Connect. This version is intended to be used by customers who are running an older version of Windows Server and can't upgrade their server to Windows Server 2016 or newer at this time. You can't use this version to update an Azure AD Connect V2.0 server.
+
+We'll begin auto-upgrading eligible tenants when this version is available for download. Auto-upgrade will take a few weeks to complete.
+
+When you upgrade to this V1.6 build or any newer builds, the group membership limit resets to 50,000. When a server is upgraded to this build, or any newer 1.6 builds, reapply the rule changes you applied when you initially increased the group membership limit to 250,000 before you enable sync for the server.
### Release status
-9/21/2021: Released for download and auto upgrade.
+9/21/2021: Released for download and auto-upgrade
### Functional changes
+- We added the latest versions of Microsoft Identity Manager (MIM) Connectors (1.1.1610.0). For more information, see the [release history page of the MIM Connectors](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-version-history#1116100-september-2021).
+- We added a configuration option to disable the Soft Matching feature in Azure AD Connect. We recommend that you disable Soft Matching unless you need it to take over cloud-only accounts. To disable Soft Matching, see [this reference article](/powershell/module/msonline/set-msoldirsyncfeature#example-2--block-soft-matching-for-the-tenant).
### Bug fixes
+- We fixed a bug where the desktop single sign-on settings weren't persisted after upgrade from a previous version.
+- We fixed a bug that caused the Set-ADSync\*Permission cmdlets to fail.
## 2.0.25.1

> [!NOTE]
-> This is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer and fixes a security issue that is present in version 2.0 of Azure AD Connect, as well as some other bug fixes.
+> This release is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer. It fixes a security issue that's present in version 2.0 of Azure AD Connect and includes other bug fixes.
### Release status
-9/14/2021: Released for download only, not available for auto upgrade.
+9/14/2021: Released for download only, not available for auto-upgrade
### Bug fixes
+- We fixed a security issue where an unquoted path was used to point to the Azure AD Connect service. This path is now a quoted path.
+- We fixed an import configuration issue with writeback enabled when you use the existing Azure AD Connector account.
+- We fixed an issue in Set-ADSyncExchangeHybridPermissions and other related cmdlets, which were broken from V1.6 because of an invalid inheritance type.
+- We fixed an issue with the cmdlet we published in a previous release to set the TLS version. The cmdlet overwrote the keys, which destroyed any values that were in them. Now a new key is created only if one doesn't already exist. We added a warning to let users know the TLS registry changes aren't exclusive to Azure AD Connect and might affect other applications on the same server.
+- We added a check to enforce auto-upgrade for V2.0 to require Windows Server 2016 or newer.
+- We added the Replicating Directory Changes permission in the Set-ADSyncBasicReadPermissions cmdlet.
+- We made a change to prevent UseExistingDatabase and import configuration from being used together because they could contain conflicting configuration settings.
+- We made a change to allow a user with the Application Admin role to change the App Proxy service configuration.
+- We removed the (Preview) label from the labels of **Import/Export** settings. This functionality is generally available.
+- We changed some labels that still referred to Company Administrator. We now use the role name Global Administrator.
+- We created new Azure AD Kerberos PowerShell cmdlets (\*-AADKerberosServer) to add a Claims Transform rule to the Azure AD Service Principal.
### Functional changes

-- We added the latest versions of MIM Connectors (1.1.1610.0). More information can be found at [the release history page of the MiM connectors](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-version-history#1116100-september-2021)
+- We added the latest versions of MIM Connectors (1.1.1610.0). For more information, see the [release history page of the MIM Connectors](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-version-history#1116100-september-2021).
+- We added a configuration option to disable the Soft Matching feature in Azure AD Connect. We recommend that you disable Soft Matching unless you need it to take over cloud-only accounts. To disable Soft Matching, see [this reference article](/powershell/module/msonline/set-msoldirsyncfeature#example-2--block-soft-matching-for-the-tenant).
## 2.0.10.0

### Release status
-8/19/2021: Released for download only, not available for auto upgrade.
+
+8/19/2021: Released for download only, not available for auto-upgrade
> [!NOTE]
-> This is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer. This hotfix addresses an issue that is present in version 2.0 as well as in Azure AD Connect version 1.6. If you are running Azure AD Connect on an older Windows Server you should install the [1.6.13.0](#16130) build instead.
+> This is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer. This hotfix addresses an issue that's present in version 2.0 and in Azure AD Connect version 1.6. If you're running Azure AD Connect on an older Windows server, install the [1.6.13.0](#16130) build instead.
### Release status
-8/19/2021: Released for download only, not available for auto upgrade.
+8/19/2021: Released for download only, not available for auto-upgrade
### Known issues
+Under certain circumstances, the installer for this version displays an error that states TLS 1.2 isn't enabled and stops the installation. This issue occurs because of an error in the code that verifies the registry setting for TLS 1.2. We'll correct this issue in a future release. If you see this issue, follow the instructions to enable TLS 1.2 in [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
### Bug fixes
+We fixed a bug that occurred when a domain was renamed and Password Hash Sync failed with an error that indicated "a specified cast is not valid" in the Event log. This regression is from earlier builds.
## 1.6.13.0

> [!NOTE]
-> This is a hotfix update release of Azure AD Connect. This release is intended for customers who are running Azure AD Connect on a server with Windows Server 2012 or 2012 R2.
+> This release is a hotfix update release of Azure AD Connect. It's intended to be used by customers who are running Azure AD Connect on a server with Windows Server 2012 or 2012 R2.
-8/19/2021: Released for download only, not available for auto upgrade.
+8/19/2021: Released for download only, not available for auto-upgrade
### Bug fixes
+We fixed a bug that occurred when a domain was renamed and Password Hash Sync failed with an error that indicated "a specified cast is not valid" in the Event log. This regression is from earlier builds.
### Functional changes
-There are no functional changes in this release
+There are no functional changes in this release.
## 2.0.9.0

### Release status
-8/17/2021: Released for download only, not available for auto upgrade.
+8/17/2021: Released for download only, not available for auto-upgrade
### Bug fixes

> [!NOTE]
-> This is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer. This release addresses an issue that is present in version 2.0.8.0, this issue is not present in Azure AD Connect version 1.6.
+> This release is a hotfix update release of Azure AD Connect. This release requires Windows Server 2016 or newer. It addresses an issue that's present in version 2.0.8.0. This issue isn't present in Azure AD Connect version 1.6.
+We fixed a bug that occurred when you synced a large number of Password Hash Sync transactions and the Event log entry length exceeded the maximum-allowed length for a Password Hash Sync event entry. We now split the lengthy log entry into multiple entries.
## 2.0.8.0

> [!NOTE]
-> This is a security update release of Azure AD Connect. This release requires Windows Server 2016 or newer. If you are using an older version of Windows Server, please use [version 1.6.11.3](#16113).
-> This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability please refer to the CVE.
-> You can download the latest version of Azure AD Connect 2.0 using [this link](https://www.microsoft.com/download/details.aspx?id=47594).
+> This release is a security update release of Azure AD Connect. This release requires Windows Server 2016 or newer. If you're using an older version of Windows Server, use [version 1.6.11.3](#16113).
+
+This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability, see the CVE.
+
+To download the latest version of Azure AD Connect 2.0, see the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=47594).
### Release status
-8/10/2021: Released for download only, not available for auto upgrade.
+8/10/2021: Released for download only, not available for auto-upgrade
### Functional changes
-There are no functional changes in this release
+There are no functional changes in this release.
## 1.6.11.3

> [!NOTE]
-> This is security update release of Azure AD Connect. This version is intended to be used by customers are running an older version of Windows Server and cannot upgrade their server to Windows Server 2016 or newer as this time. You cannot use this version to update an Azure AD Connect V2.0 server.
-> This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability please refer to the CVE.
-> You can download the latest version of Azure AD Connect 1.6 using [this link](https://www.microsoft.com/download/details.aspx?id=103336).
+> This release is a security update release of Azure AD Connect. It's intended to be used by customers who are running an older version of Windows Server and can't upgrade their server to Windows Server 2016 or newer at this time. You can't use this version to update an Azure AD Connect V2.0 server.
+
+This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability, see the CVE.
+
+To download the latest version of Azure AD Connect 1.6, see the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=103336).
### Release status
-8/10/2021: Released for download only, not available for auto upgrade.
+8/10/2021: Released for download only, not available for auto-upgrade
### Functional changes
-There are no functional changes in this release
+There are no functional changes in this release.
## 2.0.3.0

> [!NOTE]
-> This is a major release of Azure AD Connect. Please refer to the [Azure Active Directory V2.0 article](whatis-azure-ad-connect-v2.md) for more details.
+> This release is a major release of Azure AD Connect. For more information, see [Introduction to Azure AD Connect V2.0](whatis-azure-ad-connect-v2.md).
### Release status
-7/20/2021: Released for download only, not available for auto upgrade
+7/20/2021: Released for download only, not available for auto-upgrade
### Functional changes
-To sync an expired password from Active Directory to Azure Active Directory please use the [Synchronizing temporary passwords](how-to-connect-password-hash-synchronization.md#synchronizing-temporary-passwords-and-force-password-change-on-next-logon) feature in Azure AD Connect. Note that you will need to enable password writeback to use this feature, so the password the user updates is written back to Active Directory too.
- - Get-ADSyncToolsTls12
- - Set-ADSyncToolsTls12
-
-You can use these cmdlets to retrieve the TLS 1.2 enablement status, or set it as needed. Note that TLS 1.2 must be enabled on the server for the installation or Azure AD Connect to succeed.
-
- The following cmdlets have been added or updated
- - Clear-ADSyncToolsMsDsConsistencyGuid
- - ConvertFrom-ADSyncToolsAadDistinguishedName
- - ConvertFrom-ADSyncToolsImmutableID
- - ConvertTo-ADSyncToolsAadDistinguishedName
- - ConvertTo-ADSyncToolsCloudAnchor
- - ConvertTo-ADSyncToolsImmutableID
- - Export-ADSyncToolsAadDisconnectors
- - Export-ADSyncToolsObjects
- - Export-ADSyncToolsRunHistory
- - Get-ADSyncToolsAadObject
- - Get-ADSyncToolsMsDsConsistencyGuid
- - Import-ADSyncToolsObjects
- - Import-ADSyncToolsRunHistory
- - Remove-ADSyncToolsAadObject
- - Search-ADSyncToolsADobject
- - Set-ADSyncToolsMsDsConsistencyGuid
- - Trace-ADSyncToolsADImport
- - Trace-ADSyncToolsLdapQuery
-- We now use the V2 endpoint for import and export and we fixed issue in the Get-ADSyncAADConnectorExportApiVersion cmdlet. You can read more about the V2 endpoint in the [Azure AD Connect sync V2 endpoint article](how-to-connect-sync-endpoint-api-v2.md).
-- We have added the following new user properties to sync from on-prem AD to Azure AD
- - employeeType
- - employeeHireDate
-- This release requires PowerShell version 5.0 or newer to be installed on the Windows Server. Note that this version is part of Windows Server 2016 and newer.
-- We increased the Group sync membership limits to 250k with the new V2 endpoint.
-- We have updated the Generic LDAP connector and the Generic SQL Connector to the latest versions. Read more about these connectors here:
- - [Generic LDAP Connector reference documentation](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap)
- - [Generic SQL Connector reference documentation](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericsql)
-- In the Microsoft 365 Admin Center, we now report the Azure AD Connect client version whenever there is export activity to Azure AD. This ensures that the Microsoft 365 Admin Center always has the most up to date Azure AD Connect client version, and that it can detect when you're using an outdated version
+- We upgraded the LocalDB components of SQL Server to SQL 2019.
+- This release requires Windows Server 2016 or newer because of the requirements of SQL Server 2019. An in-place upgrade of Windows Server on an Azure AD Connect server isn't supported. For this reason, you might need to use a [swing migration](how-to-upgrade-previous-version.md#swing-migration).
+- We enforce the use of TLS 1.2 in this release. If you enabled your Windows Server for TLS 1.2, Azure AD Connect uses this protocol. If TLS 1.2 isn't enabled on the server, you'll see an error message when you attempt to install Azure AD Connect, and the installation won't continue until you've enabled TLS 1.2. You can use the new Set-ADSyncToolsTls12 cmdlet to enable TLS 1.2 on your server.
+- We made a change so that with this release, you can use the Hybrid Identity Administrator role to authenticate when you install Azure AD Connect. You no longer need to use the Global Administrator role.
+- We upgraded the Visual C++ runtime library to version 14 as a prerequisite for SQL Server 2019.
+- We updated this release to use the Microsoft Authentication Library for authentication. We removed the older Azure AD Authentication Library, which will be retired in 2022.
+- Following Windows security guidance, we no longer apply permissions on AdminSDHolder objects. We changed the parameter SkipAdminSdHolders to IncludeAdminSdHolders in the ADSyncConfig.psm1 module.
+- We made a change so that passwords are now reevaluated when an expired password is "unexpired," regardless of whether the password itself is changed. If the password for a user is set to "Must change password at next logon" and this flag is later cleared (which "unexpires" the password), the unexpired status and the password hash are synced to Azure AD. In Azure AD, when the user attempts to sign in, they can use the unexpired password.
+To sync an expired password from Active Directory to Azure AD, use the feature in Azure AD Connect to [synchronize temporary passwords](how-to-connect-password-hash-synchronization.md#synchronizing-temporary-passwords-and-force-password-change-on-next-logon). Enable password writeback to use this feature so that the password the user updates is written back to Active Directory.
+- We added two new cmdlets to the ADSyncTools module to enable or retrieve TLS 1.2 settings from the Windows Server:
+ - Get-ADSyncToolsTls12
+ - Set-ADSyncToolsTls12
+
+You can use these cmdlets to retrieve the TLS 1.2 enablement status or set it as needed. TLS 1.2 must be enabled on the server for the installation or Azure AD Connect to succeed.
+
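For illustration, here's a minimal sketch of how these cmdlets might be used on the Azure AD Connect server before installation. It assumes the ADSyncTools module sits at its default install path and that `Set-ADSyncToolsTls12` accepts an `-Enabled` Boolean, as described in the ADSyncTools reference; adjust the path and parameters for your environment.

```powershell
# Sketch only: the module path and the -Enabled parameter are assumptions based on the ADSyncTools reference.
Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\Tools\AdSyncTools"

# Report the current TLS 1.2 enablement status for .NET and SCHANNEL on this server.
Get-ADSyncToolsTls12

# Enable TLS 1.2 so that the Azure AD Connect installation can proceed; a reboot may be required afterward.
Set-ADSyncToolsTls12 -Enabled $true
```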
+- We revamped ADSyncTools with several new and improved cmdlets. The [ADSyncTools article](reference-connect-adsynctools.md) has more details about these cmdlets.
+ The following cmdlets have been added or updated:
+ - Clear-ADSyncToolsMsDsConsistencyGuid
+ - ConvertFrom-ADSyncToolsAadDistinguishedName
+ - ConvertFrom-ADSyncToolsImmutableID
+ - ConvertTo-ADSyncToolsAadDistinguishedName
+ - ConvertTo-ADSyncToolsCloudAnchor
+ - ConvertTo-ADSyncToolsImmutableID
+ - Export-ADSyncToolsAadDisconnectors
+ - Export-ADSyncToolsObjects
+ - Export-ADSyncToolsRunHistory
+ - Get-ADSyncToolsAadObject
+ - Get-ADSyncToolsMsDsConsistencyGuid
+ - Import-ADSyncToolsObjects
+ - Import-ADSyncToolsRunHistory
+ - Remove-ADSyncToolsAadObject
+ - Search-ADSyncToolsADobject
+ - Set-ADSyncToolsMsDsConsistencyGuid
+ - Trace-ADSyncToolsADImport
+ - Trace-ADSyncToolsLdapQuery
+- We now use the V2 endpoint for import and export. We fixed an issue in the Get-ADSyncAADConnectorExportApiVersion cmdlet. To learn more about the V2 endpoint, see [Azure AD Connect sync V2 endpoint](how-to-connect-sync-endpoint-api-v2.md). A short sketch for checking the endpoint API version appears after this list.
+- We added the following new user properties to sync from on-premises Active Directory to Azure AD:
+ - employeeType
+ - employeeHireDate
+- This release requires PowerShell version 5.0 or newer to be installed on the Windows server. This version is part of Windows Server 2016 and newer.
+- We increased the group sync membership limits to 250,000 with the new V2 endpoint.
+- We updated the Generic LDAP Connector and the Generic SQL Connector to the latest versions. To learn more about these connectors, see the reference documentation for:
+ - [Generic LDAP Connector](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericldap)
+ - [Generic SQL Connector](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-connector-genericsql)
+- In the Microsoft 365 admin center, we now report the Azure AD Connect client version whenever there's export activity to Azure AD. This reporting ensures that the Microsoft 365 admin center always has the most up-to-date Azure AD Connect client version, and that it can detect when you're using an outdated version.
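As a rough sketch of how you might confirm which endpoint version a server uses, the following assumes the ADSync module that ships with Azure AD Connect and the `Get-ADSyncAADConnectorImportApiVersion` and `Get-ADSyncAADConnectorExportApiVersion` cmdlets mentioned in these notes; see the V2 endpoint article for the authoritative guidance.

```powershell
# Sketch only: run on the Azure AD Connect server; assumes the ADSync module is installed with the product.
Import-Module ADSync

# Check which API version the Azure AD connector currently uses for import and export (2 = V2 endpoint).
Get-ADSyncAADConnectorImportApiVersion
Get-ADSyncAADConnectorExportApiVersion
```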
### Bug fixes
-- We fixed an accessibility bug where the screen reader is announcing an incorrect role of the 'Learn More' link.
-- We fixed a bug where sync rules with large precedence values (i.e. 387163089) cause an upgrade to fail. We updated the sproc 'mms_UpdateSyncRulePrecedence' to cast the precedence number as an integer prior to incrementing the value.
-- We fixed a bug where group writeback permissions are not set on the sync account if a group writeback configuration is imported. We now set the group writeback permissions if group writeback is enabled on the imported configuration.
-- We updated the Azure AD Connect Health agent version to 3.1.110.0 to fix an installation failure.
-- We are seeing an issue with non-default attributes from exported configurations where directory extension attributes are configured. When importing these configurations to a new server/installation, the attribute inclusion list is overridden by the directory extension configuration step, so after import only default and directory extension attributes are selected in the sync service manager (non-default attributes are not included in the installation, so the user must manually reenable them from the sync service manager if they want their imported sync rules to work). We now refresh the AAD Connector before configuring directory extension to keep existing attributes from the attribute inclusion list.
-- We fixed an accessibility issues where the page header's font weight is set as "Light". Font weight is now set to "Bold" for the page title, which applies to the header of all pages.
-- The function Get-AdObject in ADSyncSingleObjectSync.ps1 has been renamed to Get-AdDirectoryObject to prevent ambiguity with the AD cmdlet.
-- The SQL function 'mms_CheckSynchronizationRuleHasUniquePrecedence' allow duplicates precedence on outbound sync rules on different connectors. We removed the condition that allows duplicate rule precedence.
-- We fixed a bug where the Single Object Sync cmdlet fails if the attribute flow data is null i.e. on exporting delete operation.
-- We fixed a bug where the installation fails because the ADSync bootstrap service cannot be started. We now add Sync Service Account to the Local Builtin User Group before starting the bootstrap service.
-- We fixed an accessibility issue where the active tab on Azure AD Connect wizard is not showing correct color on High Contrast theme. The selected color code was being overwritten due to missing condition in normal color code configuration.
-- We addressed an issue where users were allowed to deselect objects and attributes used in sync rules using the UI and PowerShell. We now show friendly error message if you try to deselect any attribute or object that is used in any sync rules.
-- We made some updates to the "migrate settings code" to check and fix backward compatibility issue when the script is ran on an older version of Azure AD Connect.
-- Fixed a bug where, when PHS tries to look up an incomplete object, it does not use the same algorithm to resolve the DC as it used originally to fetch the passwords. In particular, it is ignoring affinitized DC information. The Incomplete object lookup should use the same logic to locate the DC in both instances.
-- We fixed a bug where Azure AD Connect cannot read Application Proxy items using Microsoft Graph due to a permissions issue with calling Microsoft Graph directly based on Azure AD Connect client identifier. To fix this, we removed the dependency on Microsoft Graph and instead use Azure AD PowerShell to work with the App Proxy Application objects.
-- We removed the writeback member limit from 'Out to AD - Group SOAInAAD Exchange' sync rule
-- We fixed a bug where, when changing connector account permissions, if an object comes in scope that has not changed since the last delta import, a delta import will not import it. We now display warning alerting user of the issue.
-- We fixed an accessibility issue where the screen reader is not reading radio button position. We added added positional text to the radio button accessibility text field.
-- We updated the Pass-Thru Authentication Agent bundle. The older bundle did not have correct reply URL for HIP's first party application in US Gov.
-- We fixed a bug where there is a 'stopped-extension-dll-exception' on AAD connector export after clean installing Azure AD Connect version 1.6.X.X, which defaults to using DirSyncWebServices API V2, using an existing database. Previously the setting export version to v2 was only being done for upgrade, we changed so that it is set on clean install as well.
-- The "ADSyncPrep.psm1" module is no longer used and is removed from the installation.
+- We fixed an accessibility bug where the screen reader announced an incorrect role of the **Learn More** link.
+- We fixed a bug where sync rules with large precedence values (for example, 387163089) caused an upgrade to fail. We updated the sproc mms_UpdateSyncRulePrecedence to cast the precedence number as an integer prior to incrementing the value.
+- We fixed a bug where group writeback permissions weren't set on the sync account if a group writeback configuration was imported. We now set the group writeback permissions if group writeback is enabled on the imported configuration.
+- We updated the Azure AD Connect Health agent version to 3.1.110.0 to fix an installation failure.
+- We're seeing an issue with nondefault attributes from exported configurations where directory extension attributes are configured. In the process of importing these configurations to a new server or installation, the attribute inclusion list is overridden by the directory extension configuration step. As a result, after import, only default and directory extension attributes are selected in the sync service manager. Nondefault attributes aren't included in the installation, so the user must manually reenable them from the sync service manager if they want their imported sync rules to work. We now refresh the Azure AD Connector before configuring the directory extension to keep existing attributes from the attribute inclusion list.
+- We fixed an accessibility issue where the page header's font weight was set as Light. Font weight is now set to Bold for the page title, which applies to the header of all pages.
+- We renamed the function Get-AdObject in ADSyncSingleObjectSync.ps1 to Get-AdDirectoryObject to prevent ambiguity with the Active Directory cmdlet.
+- We removed the condition that allowed duplicate rule precedence. The SQL function mms_CheckSynchronizationRuleHasUniquePrecedence had allowed duplicates precedence on outbound sync rules on different connectors.
+- We fixed a bug where the Single Object Sync cmdlet fails if the attribute flow data is null. An example is on exporting a delete operation.
+- We fixed a bug where the installation fails because the ADSync bootstrap service can't be started. We now add Sync Service Account to the Local Builtin User Group before starting the bootstrap service.
+- We fixed an accessibility issue where the active tab on Azure AD Connect wizard wasn't showing the correct color on High Contrast theme. The selected color code was being overwritten because of a missing condition in the normal color code configuration.
+- We addressed an issue where you were allowed to deselect objects and attributes used in sync rules by using the UI and PowerShell. We now show friendly error messages if you try to deselect any attribute or object that's used in any sync rules.
+- We made some updates to the "migrate settings code" to check and fix backward compatibility issues when the script runs on an older version of Azure AD Connect.
+- We fixed a bug that occurred when PHS tried to look up an incomplete object. It didn't use the same algorithm to resolve the DC as it used originally to fetch the passwords. In particular, it ignored affinitized DC information. The Incomplete object lookup should use the same logic to locate the DC in both instances.
+- We fixed a bug where Azure AD Connect can't read Application Proxy items by using Microsoft Graph because of a permissions issue with calling Microsoft Graph directly based on the Azure AD Connect client identifier. To fix this issue, we removed the dependency on Microsoft Graph and instead use Azure AD PowerShell to work with the App Proxy Application objects.
+- We removed the writeback member limit from the Out to AD - Group SOAInAAD Exchange sync rule.
+- We fixed a bug that occurred when you changed connector account permissions. If an object came in scope that hadn't changed since the last delta import, a delta import wouldn't import it. We now display a warning to alert you of the issue.
+- We fixed an accessibility issue where the screen reader wasn't reading the radio button position. We added positional text to the radio button accessibility text field.
+- We updated the Pass-Thru Authentication Agent bundle. The older bundle didn't have the correct reply URL for the HIP's first-party application in US Government.
+- We fixed a bug where a stopped-extension-dll-exception error occurred on Azure AD Connector export after a clean install of Azure AD Connect version 1.6.X.X (which defaults to using the DirSyncWebServices API V2) with an existing database. Previously, the export version was set to V2 only during upgrades. We changed it so that it's also set on a clean install.
+- We removed the ADSyncPrep.psm1 module from the installation because it's no longer used.
### Known issues
-- The Azure AD Connect wizard shows the "Import Synchronization Settings" option as "Preview", while this feature is generally Available.
-- Some Active Directory connectors may be installed in a different order when using the output of the migrate settings script to install the product.
-- The User Sign In options page in the Azure AD Connect wizard mentions "Company Administrator". This term is no longer used and needs to be replace by "Global Administrator".
-- The "Export settings" option is broken when the Sign In option has been configured to use PingFederate.
-- While Azure AD Connect can now be deployed using the Hybrid Identity Administrator role, configuring Self Service Password Reset, Passthru Authentication or Single Sign On will still require user with the Global Administrator role.
-- When importing the Azure AD Connect configuration while deploying to connect with a different tenant than the original Azure AD Connect configuration, directory extension attributes are not configured correctly.
+- The Azure AD Connect wizard shows the **Import Synchronization Settings** option as **Preview**, although this feature is generally available.
+- Some Active Directory connectors might be installed in a different order when you use the output of the migrate settings script to install the product.
+- The **User Sign In** options page in the Azure AD Connect wizard mentions Company Administrator. This term is no longer used and needs to be replaced by Global Administrator.
+- The **Export settings** option is broken when the **Sign In** option has been configured to use PingFederate.
+- While Azure AD Connect can now be deployed by using the Hybrid Identity Administrator role, configuring Self-Service Password Reset, Passthru Authentication, or single sign-on still requires a user with the Global Administrator role.
+- When you import the Azure AD Connect configuration while you deploy to connect with a different tenant than the original Azure AD Connect configuration, directory extension attributes aren't configured correctly.
## 1.6.4.0
> [!NOTE]
> The Azure AD Connect sync V2 endpoint API is now available in these Azure environments:
+>
> - Azure Commercial
> - Azure China cloud
> - Azure US Government cloud
-> - This release will not be made available in the Azure German cloud
+>
+> This release won't be made available in the Azure German cloud.
### Release status
-3/31/2021: Released for download only, not available for auto upgrade
+3/31/2021: Released for download only, not available for auto-upgrade
### Bug fixes
-- This release fixes a bug in version 1.6.2.4 where, after upgrade to that release, the Azure AD Connect Health feature was not registered correctly and did not work. Customers who have deployed build 1.6.2.4 are requested to update their Azure AD Connect server with this build, which will correctly register the Health feature.
+This release fixes a bug that occurred in version 1.6.2.4. After upgrade to that release, the Azure AD Connect Health feature wasn't registered correctly and didn't work. If you deployed build 1.6.2.4, update your Azure AD Connect server with this build to register the Health feature correctly.
## 1.6.2.4
> [!IMPORTANT]
-> Update per March 30, 2021: we have discovered an issue in this build. After installation of this build, the Health services are not registered. We recommend not installing this build. We will release a hotfix shortly.
-> If you already installed this build, you can manually register the Health services by using the cmdlet as shown in [this article](./how-to-connect-health-agent-install.md#manually-register-azure-ad-connect-health-for-sync).
+> Update per March 30, 2021: We've discovered an issue in this build. After installation of this build, the Health services aren't registered. We recommend that you not install this build. We'll release a hotfix shortly.
+> If you already installed this build, you can manually register the Health services by using the cmdlet, as shown in [Azure AD Connect Health agent installation](./how-to-connect-health-agent-install.md#manually-register-azure-ad-connect-health-for-sync).
-> [!NOTE]
-> - This release will be made available for download only.
-> - The upgrade to this release will require a full synchronization due to sync rule changes.
-> - This release defaults the Azure AD Connect server to the new V2 end point.
+- This release will be made available for download only.
+- The upgrade to this release will require a full synchronization because of sync rule changes.
+- This release defaults the Azure AD Connect server to the new V2 endpoint.
### Release status
-3/19/2021: Released for download, not available for auto upgrade
+3/19/2021: Released for download, not available for auto-upgrade
### Functional changes
- - Added new default sync rules for limiting membership count in group writeback (Out to AD - Group Writeback Member Limit) and group sync to Azure Active Directory (Out to AAD - Group Writeup Member Limit) groups.
- - Added member attribute to the 'Out to AD - Group SOAInAAD - Exchange' rule to limit members in written back groups to 50k.
- -If the "In from AAD - Group SOAInAAD" rule is cloned and Azure AD Connect is upgraded.
- - The updated rule will be disabled by default and so the targetWritebackType will be null.
- - Azure AD Connect will writeback all Cloud Groups (including Azure Active Directory Security Groups enabled for writeback) as Distribution Groups.
- -If the "Out to AD - Group SOAInAAD" rule is cloned and Azure AD Connect is upgraded.
- - The updated rule will be disabled by default. However, a new sync rule "Out to AD - Group SOAInAAD - Exchange" which is added will be enabled.
- - Depending on the Cloned Custom Sync Rule's precedence, Azure AD Connect will flow the Mail and Exchange attributes.
- - If the Cloned Custom Sync Rule does not flow some Mail and Exchange attributes, then new Exchange Sync Rule will add those attributes.
- - Clear-ADSyncToolsMsDsConsistencyGuid
- - ConvertFrom-ADSyncToolsAadDistinguishedName
- - ConvertFrom-ADSyncToolsImmutableID
- - ConvertTo-ADSyncToolsAadDistinguishedName
- - ConvertTo-ADSyncToolsCloudAnchor
- - ConvertTo-ADSyncToolsImmutableID
- - Export-ADSyncToolsAadDisconnectors
- - Export-ADSyncToolsObjects
- - Export-ADSyncToolsRunHistory
- - Get-ADSyncToolsAadObject
- - Get-ADSyncToolsMsDsConsistencyGuid
- - Import-ADSyncToolsObjects
- - Import-ADSyncToolsRunHistory
- - Remove-ADSyncToolsAadObject
- - Search-ADSyncToolsADobject
- - Set-ADSyncToolsMsDsConsistencyGuid
- - Trace-ADSyncToolsADImport
- - Trace-ADSyncToolsLdapQuery
-
- - Set-ADSyncAADCompanyFeature
- - Get-ADSyncAADCompanyFeature
- - Get-ADSyncAADConnectorImportApiVersion - to get import AWS API version
- - Get-ADSyncAADConnectorExportApiVersion - to get export AWS API version
-
+- We updated default sync rules to limit membership in writeback groups to 50,000 members.
+ - We added new default sync rules for limiting the membership count in group writeback (Out to AD - Group Writeback Member Limit) and group sync to Azure AD (Out to AAD - Group Writeup Member Limit) groups.
+ - We added a member attribute to the Out to AD - Group SOAInAAD - Exchange rule to limit members in writeback groups to 50,000.
+- We updated sync rules to support group writeback V2:
+ - If the In from AAD - Group SOAInAAD rule is cloned and Azure AD Connect is upgraded:
+ - The updated rule will be disabled by default, so targetWritebackType will be null.
+ - Azure AD Connect will write back all Cloud Groups (including Azure AD Security Groups enabled for writeback) as Distribution Groups.
+ - If the Out to AD - Group SOAInAAD rule is cloned and Azure AD Connect is upgraded:
+    - The updated rule will be disabled by default. However, the newly added sync rule, Out to AD - Group SOAInAAD - Exchange, will be enabled.
+ - Depending on the Cloned Custom Sync Rule's precedence, Azure AD Connect will flow the Mail and Exchange attributes.
+ - If the Cloned Custom Sync Rule doesn't flow some Mail and Exchange attributes, the new Exchange Sync Rule will add those attributes.
+- We added support for [Selective Password Hash Synchronization](./how-to-connect-selective-password-hash-synchronization.md).
+- We added the new [Single Object Sync cmdlet](./how-to-connect-single-object-sync.md). Use this cmdlet to troubleshoot your Azure AD Connect sync configuration. A short usage sketch appears after this list.
+- Azure AD Connect now supports the Hybrid Identity Administrator role for configuring the service.
+- We updated the Azure AD Connect Health agent to version 3.1.83.0.
+- We introduced a new version of the [ADSyncTools PowerShell module](./reference-connect-adsynctools.md), which has several new or improved cmdlets:
+ - Clear-ADSyncToolsMsDsConsistencyGuid
+ - ConvertFrom-ADSyncToolsAadDistinguishedName
+ - ConvertFrom-ADSyncToolsImmutableID
+ - ConvertTo-ADSyncToolsAadDistinguishedName
+ - ConvertTo-ADSyncToolsCloudAnchor
+ - ConvertTo-ADSyncToolsImmutableID
+ - Export-ADSyncToolsAadDisconnectors
+ - Export-ADSyncToolsObjects
+ - Export-ADSyncToolsRunHistory
+ - Get-ADSyncToolsAadObject
+ - Get-ADSyncToolsMsDsConsistencyGuid
+ - Import-ADSyncToolsObjects
+ - Import-ADSyncToolsRunHistory
+ - Remove-ADSyncToolsAadObject
+ - Search-ADSyncToolsADobject
+ - Set-ADSyncToolsMsDsConsistencyGuid
+ - Trace-ADSyncToolsADImport
+ - Trace-ADSyncToolsLdapQuery
+
+- We updated error logging for token acquisition failures.
+- We updated **Learn More** links on the configuration page to give more detail on the linked information.
+- We removed the **Explicit** column from the **CS Search** page in the old sync UI.
+- We added a prompt to the UI for the group writeback flow that asks users for credentials, or to configure their own permissions by using the ADSyncConfig module, if credentials weren't already provided in an earlier step.
+- We added the ability to autocreate a managed service account for an ADSync service account on a DC.
+- We added the ability to set and get the Azure AD DirSync feature group writeback V2 in the existing cmdlets (see the sketch after this list):
+
+ - Set-ADSyncAADCompanyFeature
+ - Get-ADSyncAADCompanyFeature
+- We added two cmdlets to read the AWS API version:
+
+ - Get-ADSyncAADConnectorImportApiVersion: To get the import AWS API version
+ - Get-ADSyncAADConnectorExportApiVersion: To get the export AWS API version
+
+- We updated change tracking so that changes made to synchronization rules are now tracked to assist troubleshooting changes in the service. The cmdlet Get-ADSyncRuleAudit retrieves tracked changes.
+- We updated the Add-ADSyncADDSConnectorAccount cmdlet in the [ADSyncConfig PowerShell module](./how-to-connect-configure-ad-ds-connector-account.md#using-the-adsyncconfig-powershell-module) to allow a user in the ADSyncAdmin group to change the Active Directory Domain Services Connector account.
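The following sketch shows how a few of these additions might be exercised from PowerShell on the Azure AD Connect server. The cmdlet names come from these notes; the `-DistinguishedName` and `-StagingMode` parameters and the `GroupWritebackV2` feature flag are assumptions based on the linked articles, so confirm the exact syntax there before relying on it.

```powershell
# Sketch only: parameters marked below are assumptions based on the linked articles.
Import-Module ADSync

# Troubleshoot sync for one on-premises object and capture the report
# (-DistinguishedName and -StagingMode are assumed parameters; see the Single Object Sync article).
Invoke-ADSyncSingleObjectSync -DistinguishedName "CN=Jane Doe,OU=Users,DC=contoso,DC=com" -StagingMode |
    Out-File -FilePath .\singleObjectSync.json

# Inspect which Azure AD DirSync company features are currently enabled for the tenant.
Get-ADSyncAADCompanyFeature

# To turn on group writeback V2 (assumed flag name; see the group writeback documentation):
# Set-ADSyncAADCompanyFeature -GroupWritebackV2 $true
```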
### Bug fixes
+- We updated disabled foreground color to satisfy luminosity requirements on a white background. We added more conditions for the navigation tree to set the foreground text color to white when a disabled page is selected to satisfy luminosity requirements.
+- We increased granularity for the Set-ADSyncPasswordHashSyncPermissions cmdlet.
+- We updated the PHS permissions script (Set-ADSyncPasswordHashSyncPermissions) to include an optional ADobjectDN parameter (see the sketch after this list).
+- We made an accessibility bug fix. The screen reader now describes the UX element that holds the list of forests as **Forests list** instead of **Forest List list**.
+- We updated screen reader output for some items in the Azure AD Connect wizard. We updated the button hover color to satisfy contrast requirements. We updated Synchronization Service Manager title color to satisfy contrast requirements.
+- We fixed an issue with installing Azure AD Connect from an exported configuration that has custom extension attributes.
+- We added a condition to skip checking for extension attributes in the target schema while applying the sync rule.
+- We added appropriate permissions on installation if the group writeback feature is enabled.
+- We fixed duplicate default sync rule precedence on import.
+- We fixed an issue that caused a staging error during V2 API delta import for a conflicting object that was repaired via the Health portal.
+- We fixed an issue in the sync engine that caused Connector Spaces objects to have an inconsistent link state.
+- We added import counters to Get-ADSyncConnectorStatistics output.
+- We fixed an issue in some corner cases where, in the pass2 wizard, a previously selected domain that had become unreachable couldn't be deselected.
+- We modified policy import and export to fail if a custom rule has duplicate precedence.
+- We fixed a bug in the domain selection logic.
+- We fixed an issue with build 1.5.18.0 if you use mS-DS-ConsistencyGuid as the source anchor and have cloned the In from AD - Group Join rule.
+- Fresh Azure AD Connect installations will use the Export Deletion Threshold stored in the cloud if one is available and a different threshold isn't passed in.
+- We fixed an issue where Azure AD Connect wouldn't read Active Directory displayName changes of hybrid-joined devices.
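As an example of the finer-grained PHS permissions script, the sketch below assumes the ADSyncConfig module's default install path and a `-ADConnectorAccountDN` parameter alongside the new optional `ADobjectDN` parameter; treat both parameter names as assumptions and confirm them against the ADSyncConfig reference before use.

```powershell
# Sketch only: the module path and parameter names are assumptions based on the ADSyncConfig documentation.
Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AdSyncConfig\AdSyncConfig.psm1"

# Grant password hash sync permissions for the AD connector account at the domain level ...
Set-ADSyncPasswordHashSyncPermissions -ADConnectorAccountDN "CN=ADConnectorAccount,OU=Service Accounts,DC=contoso,DC=com"

# ... or scope them to a specific object or OU with the new optional ADobjectDN parameter.
Set-ADSyncPasswordHashSyncPermissions -ADConnectorAccountDN "CN=ADConnectorAccount,OU=Service Accounts,DC=contoso,DC=com" -ADobjectDN "OU=Sales,DC=contoso,DC=com"
```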
## 1.5.45.0
This is a bug fix release. There are no functional changes in this release.
### Fixed issues
-- Fixed an issue where admin can't enable "Seamless Single Sign On" if AZUREADSSOACC computer account is already present in the "Active Directory".
-- Fixed an issue that caused a staging error during V2 API delta import for a conflicting object that was repaired via the health portal.
-- Fixed an issue in the import/export configuration where disabled custom rule was imported as enabled.
+- We fixed an issue where an admin can't enable seamless single sign-on if the AZUREADSSOACC computer account is already present in Active Directory.
+- We fixed an issue that caused a staging error during V2 API delta import for a conflicting object that was repaired via the Health portal.
+- We fixed an issue in the import/export configuration where a disabled custom rule was imported as enabled.
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
Title: Secure hybrid access with F5 description: F5 BIG-IP Access Policy Manager and Azure Active Directory integration for Secure Hybrid Access-+ Last updated 11/12/2020-+ - # Integrate F5 BIG-IP with Azure Active Directory
Integrating F5 BIG-IP with Azure AD for SHA has the following prerequisites:
No previous F5 BIG-IP experience or knowledge is necessary to implement SHA, but we do recommend familiarizing yourself with F5 BIG-IP terminology. F5's rich [knowledge base](https://www.f5.com/services/resources/glossary) is also a good place to start building BIG-IP knowledge.
-## Deployment scenarios
+You can configure a BIG-IP for SHA by using any of several available methods, including template-based options or a manual configuration.
+The following tutorials provide detailed guidance on implementing some of the more common patterns for BIG-IP and Azure AD SHA, using these methods.
-The following tutorials provide detailed guidance on implementing some of the more common patterns for BIG-IP and Azure AD SHA:
+**Advanced configuration**
+
+The advanced approach provides a more elaborate, yet flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would use this approach for scenarios not covered by the guided configuration templates.
- [F5 BIG-IP in Azure deployment walk-through](f5-bigip-deployment-guide.md)
+- [Securing F5 BIG-IP SSL-VPN with Azure AD SHA](f5-aad-password-less-vpn.md)
+
+- [Extend Azure AD B2C to protect applications using F5 BIG-IP](../../active-directory-b2c/partner-f5.md)
- [F5 BIG-IP APM and Azure AD SSO to Kerberos applications](f5-big-ip-kerberos-advanced.md)
- [F5 BIG-IP APM and Azure AD SSO to Header-based applications](f5-big-ip-header-advanced.md)
-- [Securing F5 BIG-IP SSL-VPN with Azure AD SHA](f5-aad-password-less-vpn.md)
+- [F5 BIG-IP APM and Azure AD SSO to forms-based applications](f5-big-ip-forms-advanced.md)
-- [Configure Azure AD B2C with F5 BIG-IP](../../active-directory-b2c/partner-f5.md)
+**Guided Configuration and Easy Button templates**
-- [F5 BIG-IP APM and Azure AD SSO to forms-based applications](f5-big-ip-forms-advanced.md)
+The Guided Configuration wizard, available from BIG-IP version 13.1, aims to minimize the time and effort of implementing common BIG-IP publishing scenarios. Its workflow-based framework provides an intuitive deployment experience tailored to specific access topologies.
+
+The latest version of the Guided Configuration, 16.1, now offers an Easy Button feature. With **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, without the management overhead of doing so on a per-app basis.
+
+- [F5 BIG-IP Easy Button for SSO to Kerberos applications](f5-big-ip-kerberos-easy-button.md)
- [F5 BIG-IP Easy Button for SSO to header-based and LDAP applications](f5-big-ip-ldap-header-easybutton.md)
active-directory F5 Big Ip Kerberos Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md
Title: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication
-description: Learn how to implement Secure Hybrid Access (SHA) with Single Sign-on (SSO) to Kerberos applications using F5ΓÇÖs BIG-IP advanced configuration.
+description: Learn how to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications by using F5's BIG-IP advanced configuration.
# Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication
-In this tutorial, youΓÇÖll learn how to implement Secure Hybrid Access (SHA) with Single Sign-on (SSO) to Kerberos applications using F5ΓÇÖs BIG-IP advanced configuration.
+In this article, you'll learn how to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to Kerberos applications by using F5's BIG-IP advanced configuration.
-Integrating a BIG-IP with Azure AD provides many benefits, including:
+Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
-* Improved zero-trust governance through Azure AD pre-authentication and authorization
+* Improved Zero Trust governance through Azure AD pre-authentication and authorization.
-* Full Single Sign-on (SSO) between Azure AD and BIG-IP published services
+* Full SSO between Azure AD and BIG-IP published services.
-* Manage Identities and access from a single control plane - [The Azure portal](https://portal.azure.com/)
+* Management of identities and access from a single control plane, the [Azure portal](https://portal.azure.com/).
-To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+To learn about all of the benefits, see [Integrate F5 BIG-IP with Azure Active Directory](./f5-aad-integration.md) and [What is single sign-on in Azure Active Directory?](/azure/active-directory/active-directory-appssoaccess-whatis).
## Scenario description
-For this scenario, you will configure a critical line of business (LOB) application for **Kerberos authentication**, also known as **Integrated Windows Authentication (IWA)**.
+For this scenario, you'll configure a critical line-of-business application for *Kerberos authentication*, also known as *Integrated Windows Authentication*.
-To integrate the application directly with Azure AD, itΓÇÖd need to support some form of federation-based protocol such as Security Assertion Markup Language (SAML), or better. But as modernizing the application introduces risk of potential downtime, there are other options. While using Kerberos Constrained Delegation (KCD) for SSO, you can use [Azure AD Application Proxy](../app-proxy/application-proxy.md) to access the application remotely.
+For you to integrate the application directly with Azure AD, it would need to support some form of federation-based protocol, such as Security Assertion Markup Language (SAML). But because modernizing the application introduces the risk of potential downtime, there are other options.
-In this arrangement, you can achieve the protocol transitioning required to bridge the legacy application to the modern identity control plane. Another approach is to use an F5 BIG-IP Application Delivery Controller (ADC). This enables overlay of the application with Azure AD pre-authentication and KCD SSO, and significantly improves the overall Zero Trust posture of the application.
+While you're using Kerberos Constrained Delegation (KCD) for SSO, you can use [Azure AD Application Proxy](../app-proxy/application-proxy.md) to access the application remotely. In this arrangement, you can achieve the protocol transitioning that's required to bridge the legacy application to the modern identity control plane.
+
+Another approach is to use an F5 BIG-IP Application Delivery Controller. This approach enables overlay of the application with Azure AD pre-authentication and KCD SSO. It significantly improves the overall Zero Trust posture of the application.
## Scenario architecture
-The secure hybrid access solution for this scenario is made up of the following:
+The SHA solution for this scenario consists of the following elements:
-**Application:** The backend Kerberos-based service that gets externally published by the BIG-IP and is protected by SHA.
+- **Application**: Back-end Kerberos-based service that's externally published by BIG-IP and protected by SHA.
-**BIG-IP:** Reverse proxy functionality enables publishing backend applications. The APM then overlays published applications with SAML Service Provider (SP) and SSO functionality.
+- **BIG-IP**: Reverse proxy functionality that enables publishing back-end applications. The Access Policy Manager (APM) then overlays published applications with SAML service provider (SP) and SSO functionality.
-**Azure AD:** Identity Provider (IdP) responsible for verifying user credentials, Conditional Access (CA), and SSO to the BIG-IP APM through SAML.
+- **Azure AD**: Identity provider (IdP) responsible for verifying user credentials, Azure AD Conditional Access, and SSO to the BIG-IP APM through SAML.
-**KDC:** Key Distribution Center role on a Domain Controller (DC), issuing Kerberos tickets.
+- **KDC**: Key Distribution Center role on a domain controller (DC). It issues Kerberos tickets.
-The following image illustrates the SAML SP initiated flow for this scenario, but IdP initiated flow is also supported.
+The following image illustrates the SAML SP-initiated flow for this scenario, but IdP-initiated flow is also supported.
-![Scenario architecture](./media/f5-big-ip-kerberos-advanced/scenario-architecture.png)
+![Diagram of the scenario architecture.](./media/f5-big-ip-kerberos-advanced/scenario-architecture.png)
-| Steps| Description |
+| Step| Description |
| -- |-|
-| 1| User connects to application endpoint (BIG-IP) |
-| 2| BIG-IP access policy redirects user to Azure AD (SAML IdP) |
-| 3| Azure AD pre-authenticates user and applies any enforced CA policies |
-| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
-| 5| BIG-IP authenticates user and requests Kerberos ticket from KDC |
-| 6| BIG-IP sends request to backend application, along with Kerberos ticket for SSO |
-| 7| Application authorizes request and returns payload |
+| 1| User connects to the application endpoint (BIG-IP). |
+| 2| BIG-IP access policy redirects the user to Azure AD (SAML IdP). |
+| 3| Azure AD pre-authenticates the user and applies any enforced Conditional Access policies. |
+| 4| User is redirected to BIG-IP (SAML SP), and SSO is performed via the issued SAML token. |
+| 5| BIG-IP authenticates the user and requests a Kerberos ticket from KDC. |
+| 6| BIG-IP sends the request to the back-end application, along with the Kerberos ticket for SSO. |
+| 7| Application authorizes the request and returns the payload. |
## Prerequisites
-Prior BIG-IP experience isnΓÇÖt necessary, but you will need:
+Prior BIG-IP experience isn't necessary, but you will need:
-* An Azure AD free subscription or above
+* An Azure AD free subscription or higher-tier subscription.
-* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](../manage-apps/f5-bigip-deployment-guide.md)
+* An existing BIG-IP, or [deploy BIG-IP Virtual Edition in Azure](../manage-apps/f5-bigip-deployment-guide.md).
-* Any of the following F5 BIG-IP license offers
+* Any of the following F5 BIG-IP license offers:
- * F5 BIG-IP® Best bundle
+ * F5 BIG-IP Best bundle
* F5 BIG-IP APM standalone license
- * F5 BIG-IP APM add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+ * F5 BIG-IP APM add-on license on an existing BIG-IP Local Traffic Manager
- * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php)
-* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD or created directly within Azure AD and flowed back to your on-premises directory
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD, or created directly within Azure AD and flowed back to your on-premises directory.
-* An account with Azure AD Application admin [permissions](../users-groups-roles/directory-assign-admin-roles.md)
+* An account with Azure AD Application Administrator [permissions](../users-groups-roles/directory-assign-admin-roles.md).
-* Web server [certificate](../manage-apps/f5-bigip-deployment-guide.md) for publishing services over HTTPS or use default BIG-IP certs while testing
+* A web server [certificate](../manage-apps/f5-bigip-deployment-guide.md) for publishing services over HTTPS, or use default BIG-IP certificates while testing.
-* An existing Kerberos application or [setup an IIS (Internet Information Services) app](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html) for KCD SSO
+* An existing Kerberos application, or [set up an Internet Information Services (IIS) app](https://active-directory-wp.com/docs/Networking/Single_Sign_On/SSO_with_IIS_on_Windows.html) for KCD SSO.
## Configuration methods
-There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the advanced approach that provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for scenarios not covered by the guided configuration templates.
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This article covers the advanced approach, which provides a more flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would also use this approach for scenarios that the guided configuration templates don't cover.
>[!NOTE]
-> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+> All example strings or values in this article should be replaced with those for your actual environment.
## Register F5 BIG-IP in Azure AD
-Before a BIG-IP can hand off pre-authentication to Azure AD, it must be registered in your tenant. This is the first step in establishing SSO between both entities and is no different to making any IDP aware of a SAML Relying Party (RP). In this case, the app you create from the F5 BIG-IP gallery template is the RP representing the SAML SP for the BIG-IP published application.
+Before BIG-IP can hand off pre-authentication to Azure AD, it must be registered in your tenant. This is the first step in establishing SSO between both entities. It's no different from making any IdP aware of a SAML relying party. In this case, the app that you create from the F5 BIG-IP gallery template is the relying party that represents the SAML SP for the BIG-IP published application.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com) by using an account with Application Administrator permissions.
-1. Sign-in to the [Azure AD portal](https://portal.azure.com) using an account with Application Admin rights.
+2. From the left pane, select the **Azure Active Directory** service.
-2. From the left navigation pane, select the **Azure Active Directory** service
+3. On the left menu, select **Enterprise applications**. The **All applications** pane opens and displays a list of the applications in your Azure AD tenant.
-3. In the left menu, select **Enterprise applications.** The **All applications** pane opens and displays a list of the applications in your Azure AD tenant.
+4. On the **Enterprise applications** pane, select **New application**.
-4. In the **Enterprise applications** pane, select **New application**.
+5. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons that indicate whether they support federated SSO and provisioning.
-5. The **Browse Azure AD Gallery** pane opens and displays tiles for cloud platforms, on-premises applications, and featured applications. Applications listed in the **Featured applications** section have icons indicating whether they support federated single sign-on (SSO) and provisioning. Search for **F5** in the Azure gallery and select **F5 BIG-IP APM Azure AD integration**
+ Search for **F5** in the Azure gallery, and select **F5 BIG-IP APM Azure AD integration**.
6. Provide a name for the new application to recognize the instance of the application. Select **Add/Create** to add it to your tenant.
-## Enable SSO to the F5 BIG-IP
+## Enable SSO to F5 BIG-IP
-Next, configure the BIG-IP registration to fulfill SAML tokens requested by the BIG-IP APM.
+Next, configure the BIG-IP registration to fulfill SAML tokens that the BIG-IP APM requests:
1. In the **Manage** section of the left menu, select **Single sign-on** to open the **Single sign-on** pane for editing.
-2. On the **Select a single sign-on method** page, select **SAML** followed by **No, IΓÇÖll save later** to skip the prompt.
+2. On the **Select a single sign-on method** page, select **SAML** followed by **No, I'll save later** to skip the prompt.
3. On the **Set up single sign-on with SAML** pane, select the pen icon to edit **Basic SAML Configuration**. Make these edits:
- 1. Replace the pre-defined **Identifier** with the full URL for the BIG-IP published application
+ 1. Replace the predefined **Identifier** value with the full URL for the BIG-IP published application.
- 2. Replace the **Reply URL** but retain the path for the applicationΓÇÖs SAML SP endpoint.
+ 2. Replace the **Reply URL** value but retain the path for the application's SAML SP endpoint.
- In this configuration, the SAML flow would operate in IdP initiated mode, where Azure AD issues a SAML assertion before the user is redirected to the BIG-IP endpoint for the application.
-
+ In this configuration, the SAML flow would operate in IdP-initiated mode. In that mode, Azure AD issues a SAML assertion before the user is redirected to the BIG-IP endpoint for the application.
- 3. To use SP initiated mode, populate the **Sign on URL** with the application URL.
+ 3. To use SP-initiated mode, populate **Sign on URL** with the application URL.
- 4. For the **Logout URI**, enter the BIG-IP APM single logout (SLO) endpoint pre-pended by the host header of the service being published. It ensures the userΓÇÖs BIG-IP APM session is also terminated after being signed out of Azure AD.
+ 4. For **Logout Url**, enter the BIG-IP APM single logout (SLO) endpoint prepended by the host header of the service that's being published. This step ensures that the user's BIG-IP APM session ends after the user is signed out of Azure AD.
- ![Screenshot for editing basic SAML configuration](./media/f5-big-ip-kerberos-advanced/edit-basic-saml-configuration.png)
+ ![Screenshot for editing basic SAML configuration.](./media/f5-big-ip-kerberos-advanced/edit-basic-saml-configuration.png)
> [!NOTE]
- > From TMOS v16 the SAML SLO endpoint has changed to **/saml/sp/profile/redirect/slo**
+ > From TMOS v16, the SAML SLO endpoint has changed to **/saml/sp/profile/redirect/slo**.
-4. Select **Save** before exiting the SAML configuration pane and skip the SSO test prompt.
+4. Select **Save** before closing the SAML configuration pane and skip the SSO test prompt.
-5. Note the properties of the **User Attributes & Claims** section, as these are what Azure AD will issue users for BIG-IP APM authentication and SSO to the backend application.
+5. Note the properties of the **User Attributes & Claims** section. Azure AD will issue these properties to users for BIG-IP APM authentication and for SSO to the back-end application.
-6. In the **SAML Signing Certificate** pane, select the **Download** button to save the **Federation Metadata XML** file to your computer.
+6. On the **SAML Signing Certificate** pane, select **Download** to save the **Federation Metadata XML** file to your computer.
- ![Edit SAML signing certificate](./media/f5-big-ip-kerberos-advanced/edit-saml-signing-certificate.png)
+ ![Screenshot that shows selections for editing a SAML signing certificate.](./media/f5-big-ip-kerberos-advanced/edit-saml-signing-certificate.png)
-SAML signing certificates created by Azure AD have a lifespan of 3 years. For more information, see [Managed certificates for federated single sign-on](./manage-certificates-for-federated-single-sign-on.md).
+SAML signing certificates that Azure AD creates have a lifespan of three years. For more information, see [Managed certificates for federated single sign-on](./manage-certificates-for-federated-single-sign-on.md).
## Assign users and groups
-By default, Azure AD will issue tokens only for users that have been granted access to an application. To provide specific users and groups access to the application:
+By default, Azure AD will issue tokens only for users who have been granted access to an application. To grant specific users and groups access to the application:
-1. In the **F5 BIG-IP applicationΓÇÖs overview** blade, select **Assign Users and groups**
+1. On the **F5 BIG-IP application's overview** pane, select **Assign Users and groups**.
- ![Screenshot for assigning users and groups](./media/f5-big-ip-kerberos-advanced/authorize-users-groups.png)
+2. Select **+ Add user/group**.
-2. Select **+ Add user/group** to add the groups authorized to access the internal application followed by **Select > Assign** to assign the users/ groups to your application
+ ![Screenshot that shows the button for assigning users and groups.](./media/f5-big-ip-kerberos-advanced/authorize-users-groups.png)
-## Active Directory KCD configurations
+3. Select users and groups, and then select **Assign** to assign them to your application.
-For the BIG-IP APM to perform SSO to the backend application on behalf of users, KCD must be configured in the target AD domain. Delegating authentication also requires that the BIG-IP APM be provisioned with a domain service account.
+## Configure Active Directory KCD
-For our scenario, the application is hosted on server **APP-VM-01** and is running in the context of a service account named **web_svc_account**, not the computerΓÇÖs identity. The delegating service account assigned to the APM will be called **F5-BIG-IP**.
+For the BIG-IP APM to perform SSO to the back-end application on behalf of users, KCD must be configured in the target Active Directory domain. Delegating authentication also requires that the BIG-IP APM is provisioned with a domain service account.
+
+For the scenario in this article, the application is hosted on server **APP-VM-01** and is running in the context of a service account named **web_svc_account**, not the computer's identity. The delegating service account assigned to the APM is **F5-BIG-IP**.
### Create a BIG-IP APM delegation account
-As the BIG-IP doesnΓÇÖt support group managed service accounts (gMSA), create a standard user account to use as the APM service account:
+Because BIG-IP doesn't support group managed service accounts, create a standard user account to use as the APM service account:
+1. Enter the following PowerShell command. Replace the `UserPrincipalName` and `SamAccountName` values with those for your environment.
-1. Replace the **UserPrincipalName** and **SamAccountName** values with those for your environment in these PowerShell commands:

   ```New-ADUser -Name "F5 BIG-IP Delegation Account" -UserPrincipalName "host/f5-big-ip.contoso.com@contoso.com" -SamAccountName "f5-big-ip" -PasswordNeverExpires $true -Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password") ```
+2. Create a service principal name (SPN) for the APM service account to use when you're performing delegation to the web application's service account:

-2. Create a **Service Principal Name (SPN)** for the APM service account to use when performing delegation to the web applicationΓÇÖs service account.

   ```Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @{Add="host/f5-big-ip.contoso.com"} ```
+3. Ensure that the SPN now shows against the APM service account:
-3. Ensure the SPN now shows against the APM service account.
+ ```Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
- ```Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
+ 4. Before you specify the target SPN that the APM service account should delegate to for the web application, view its existing SPN configuration:
+
+ 1. Check whether your web application is running in the computer context or a dedicated service account.
+ 2. Use the following command to query the account object in Active Directory to see its defined SPNs. Replace `<name_of_account>` with the account for your environment.
- 4. Before specifying the target SPN that the APM service account should delegate to for the web application, you need to view its existing SPN config. Check whether your web application is running in the computer context or a dedicated service account. Next, query that account object in AD to see its defined SPNs. Replace <name_of_account> with the account for your environment.
+ ```Get-ADUser -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
- ```Get-ADUser -identity <name_of _account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ```
+5. You can use any SPN that you see defined against a web application's service account. But in the interest of security, it's best to use a dedicated SPN that matches the host header of the application.
-5. You can use any SPN you see defined against a web applicationΓÇÖs service account, but in the interest of security itΓÇÖs best to use a dedicated SPN matching the host header of the application. For example, as our web application host header is myexpenses.contoso.com we would add HTTP/myexpenses.contoso.com to the application's service account object in AD.
+ For example, because the web application host header in this example is **myexpenses.contoso.com**, you would add `HTTP/myexpenses.contoso.com` to the application's service account object in Active Directory:
- ```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
+ ```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
- Or if the app ran in the machine context, we would add the SPN to the object of the computer account in AD.
+ Or if the app ran in the machine context, you would add the SPN to the object of the computer account in Active Directory:
```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ```
-With the SPNs defined, the APM service account now needs trusting to delegate to that service. The configuration will vary depending on the topology of your BIG-IP and application server.
+With the SPNs defined, you now need to establish trust for the APM service account delegate to that service. The configuration will vary depending on the topology of your BIG-IP instance and application server.
-### Configure BIG-IP and target application in same domain
+### Configure BIG-IP and the target application in the same domain
-1. Set trust for the APM service account to delegate authentication
+1. Set trust for the APM service account to delegate authentication:
- ```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true ```
+ ```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true ```
-2. The APM service account then needs to know which target SPN it's trusted to delegate to, Or in other words which service is it allowed to request a Kerberos ticket for. Set target SPN to the service account running your web application.
+2. The APM service account then needs to know which target SPN it's trusted to delegate to. In other words, the APM service account needs to know which service it's allowed to request a Kerberos ticket for. Set the target SPN to the service account that's running your web application:
- ```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')} ```
+ ```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')} ```
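 Optionally, you can read the delegation attributes back to confirm that both commands took effect. The following is a sketch that assumes the ActiveDirectory PowerShell module and the example account names used in this article:

 ```powershell
 # Sketch: confirm the delegation settings on the APM service account.
 # TrustedToAuthForDelegation should be True, and msDS-AllowedToDelegateTo
 # should list the target SPN (HTTP/myexpenses.contoso.com in this example).
 Get-ADUser -Identity f5-big-ip -Properties TrustedToAuthForDelegation, 'msDS-AllowedToDelegateTo' |
     Select-Object Name, TrustedToAuthForDelegation, 'msDS-AllowedToDelegateTo'
 ```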
-If preferred, you can also complete these tasks through the Active Directory Users and Computers Microsoft Management Console (MMC) on a domain controller.
+If you prefer, you can complete these tasks through the **Active Directory Users and Computers** Microsoft Management Console (MMC) snap-in on a domain controller.
-### BIG-IP and application in different domains
+### Configure BIG-IP and the target application in different domains
-Starting with Windows Server 2012, cross domain KCD uses Resource-based constrained delegation (RCD). The constraints for a service have been transferred from the domain administrator to the service administrator. This allows the back-end service administrator to allow or deny SSO. This also introduces a different approach at configuration delegation, which is only possible using either PowerShell or ADSIEdit.
+Starting with Windows Server 2012, cross-domain KCD uses resource-based constrained delegation. The constraints for a service have been transferred from the domain administrator to the service administrator. This delegation allows the back-end service administrator to allow or deny SSO. It also introduces a different approach to configuring delegation, which is possible only when you use either PowerShell or ADSI Edit.
-The PrincipalsAllowedToDelegateToAccount property of the applications service account (computer or dedicated service account) can be used to grant delegation from the BIG-IP. For this scenario, use the following PowerShell command on a Domain Controller DC (2012 R2+) within the same domain as the application.
+You can use the `PrincipalsAllowedToDelegateToAccount` property of the application's service account (computer or dedicated service account) to grant delegation from BIG-IP. For this scenario, use the following PowerShell command on a domain controller (Windows Server 2012 R2 or later) within the same domain as the application.
-If the **web_svc_account** service runs in context of a user account:
+If the **web_svc_account** service runs in context of a user account, use these commands:
```${big-ip}= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com``` ```Set-ADUser -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ${big-ip}``` ```Get-ADUser web_svc_account -Properties PrincipalsAllowedToDelegateToAccount```
-If the **web_svc_account** service runs in context of a computer account:
+If the **web_svc_account** service runs in context of a computer account, use these commands:
```${big-ip}= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com``` ```Set-ADComputer -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ${big-ip}``` ```Get-ADComputer web_svc_account -Properties PrincipalsAllowedToDelegateToAccount``` For more information, see [Kerberos Constrained Delegation across domains](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831477(v=ws.11)).
-## BIG-IP advanced configuration
-Now we can proceed with setting up the BIG-IP configurations.
+## Make BIG-IP advanced configurations
+
+Now you can proceed with setting up the BIG-IP configurations.
+
+### Configure SAML service provider settings
-### Configure SAML Service Provider settings
+SAML service provider settings define the SAML SP properties that the APM will use for overlaying the legacy application with SAML pre-authentication. To configure them:
-These settings define the SAML SP properties that the APM will use for overlaying the legacy application with SAML pre-authentication.
+1. From a browser, sign in to the F5 BIG-IP management console.
-1. From a browser, sign-in to the F5 BIG-IP management console
+2. Select **Access** > **Federation** > **SAML Service Provider** > **Local SP Services** > **Create**.
-2. Select **Access > Federation > SAML Service Provider > Local SP Services > Create**
+ ![Screenshot that shows the button for creating a local SAML service provider service.](./media/f5-big-ip-kerberos-advanced/create-local-services-saml-service-provider.png)
- ![Create local service SAML service provider](./media/f5-big-ip-kerberos-advanced/create-local-services-saml-service-provider.png)
+3. Provide the **Name** and **Entity ID** values that you saved when you configured SSO for Azure AD earlier.
-3. Provide a **Name** and the **Entity ID** saved when you configured SSO for Azure AD earlier.
+ ![Screenshot that shows name and entity I D values entered for a new SAML service provider service.](./media/f5-big-ip-kerberos-advanced/create-new-saml-sp-service.png)
- ![Create a new SAML SP service](./media/f5-big-ip-kerberos-advanced/create-new-saml-sp-service.png)
+4. You don't need to specify **SP Name Settings** information if the SAML entity ID is an exact match with the URL for the published application.
-4. You need not specify **SP Name Settings** if the SAML entity ID is an exact match with the URL for the published application. For example, if the entity ID were urn:myexpenses:contosoonline then you would need to provide the **Scheme** and **Host** as https myexpenses.contoso.com. Whereas if the entity ID was `https://myexpenses.contoso.com` then not.
+ For example, if the entity ID is **urn:myexpenses:contosoonline**, you need to provide the **Scheme** and **Host** values as **https** and **myexpenses.contoso.com**. But if the entity ID is `https://myexpenses.contoso.com`, you don't need to provide this information.
-### Configure external IdP connector
+### Configure an external IdP connector
-A SAML IdP connector defines the settings required for the BIG-IP APM to trust Azure AD as its SAML IdP. These settings will map the SAML SP to a SAML IdP, establishing the federation trust between the APM and Azure AD.
+A SAML IdP connector defines the settings that are required for the BIG-IP APM to trust Azure AD as its SAML IdP. These settings will map the SAML SP to a SAML IdP, establishing the federation trust between the APM and Azure AD. To configure the connector:
-1. Scroll down to select the new SAML SP object and select **Bind/Unbind IdP Connectors**
+1. Scroll down to select the new SAML SP object, and then select **Bind/Unbind IdP Connectors**.
- ![Screenshot for select new SAML object](./media/f5-big-ip-kerberos-advanced/bind-unbind-idp-connectors.png)
+ ![Screenshot that shows the button for binding or unbinding identity provider connectors.](./media/f5-big-ip-kerberos-advanced/bind-unbind-idp-connectors.png)
-2. Select **Create New IdP Connector**, choose **From Metadata**
+2. Select **Create New IdP Connector** > **From Metadata**.
- ![Screenshot for creating new IdP connector from metadata](./media/f5-big-ip-kerberos-advanced/create-new-idp-connector-from-metadata.png)
+ ![Screenshot that shows selections for creating new identity provider connector from metadata.](./media/f5-big-ip-kerberos-advanced/create-new-idp-connector-from-metadata.png)
-3. Browse to the federation metadata XML file you downloaded earlier and provide an **Identity Provider Name** for the APM object that'll represent the external SAML IdP. For example, MyExpenses_AzureAD
+3. Browse to the federation metadata XML file that you downloaded earlier, and provide an **Identity Provider Name** value for the APM object that will represent the external SAML IdP. The following example shows **MyExpenses_AzureAD**.
- ![Screenshot for browse to federation metadata XML](./media/f5-big-ip-kerberos-advanced/browse-federation-metadata-xml.png)
+ ![Screenshot that shows example values for the federation metadata X M L file and the identity provider name.](./media/f5-big-ip-kerberos-advanced/browse-federation-metadata-xml.png)
-4. Select **Add New Row** to choose the new **SAML IdP Connector**, and then select **Update**
+4. Select **Add New Row** to choose the new **SAML IdP Connector** value, and then select **Update**.
- ![Screenshot to choose new IdP connector](./media/f5-big-ip-kerberos-advanced/choose-new-saml-idp-connector.png)
+ ![Screenshot that shows selections for choosing a new identity provider connector.](./media/f5-big-ip-kerberos-advanced/choose-new-saml-idp-connector.png)
-5. Select **OK** to save the settings
+5. Select **OK** to save the settings.
### Configure Kerberos SSO
-In this section, you create an APM SSO object for performing KCD SSO to backend applications. You will need the APM delegation account created earlier to complete this step.
+In this section, you create an APM SSO object for performing KCD SSO to back-end applications. To complete this step, you need the APM delegation account that you created earlier.
-Select **Access > Single Sign-on > Kerberos > Create** and provide the following:
+Select **Access** > **Single Sign-on** > **Kerberos** > **Create** and provide the following information:
-* **Name:** You can use a descriptive name. Once created, the Kerberos SSO APM object can be used by other published applications as well. For example, *Contoso_KCD_sso* can be used for multiple published applications for the entire Contoso domain, whereas *MyExpenses_KCD_sso* can be used for a single application only.
+* **Name**: You can use a descriptive name. After you create it, other published applications can also use the Kerberos SSO APM object. For example, **Contoso_KCD_sso** can be used for multiple published applications for the entire Contoso domain. But **MyExpenses_KCD_sso** can be used for a single application only.
-* **Username Source:** Specifies the preferred source of user ID. You can specify any APM session variable as the source, but *session.saml.last.identity* is typically best as it contains the logged in user ID derived from the Azure AD claim.
+* **Username Source**: Specify the preferred source for user ID. You can specify any APM session variable as the source, but **session.saml.last.identity** is typically best because it contains the logged-in user's ID derived from the Azure AD claim.
-* **User Realm Source:** Required in scenarios where the user domain is different to the Kerberos realm that will be used for KCD. If users were in a separate trusted domain, then you make the APM aware by specifying the APM session variable containing the logged-in user domain. For example, session.saml.last.attr.name.domain. You would also do this in scenarios where UPN of users is based on an alternative suffix.
+* **User Realm Source**: This source is required in scenarios where the user domain is different from the Kerberos realm that will be used for KCD. If users are in a separate trusted domain, you make the APM aware by specifying the APM session variable that contains the logged-in user's domain. An example is **session.saml.last.attr.name.domain**. You also do this in scenarios where the UPN of users is based on an alternative suffix.
-* **Kerberos Realm:** Enter users domain suffix in uppercase
+* **Kerberos Realm**: Enter the user's domain suffix in uppercase.
-* **KDC:** IP of a Domain Controller (Or FQDN if DNS is configured and efficient)
+* **KDC**: Enter the IP address of a domain controller. (Or enter a fully qualified domain name if DNS is configured and efficient.)
-* **UPN Support:** Enable if specified source of username is in UPN format, such as if using session.saml.last.identity variable
+* **UPN Support**: Select this checkbox if the specified source for username is in UPN format, such as if you're using the **session.saml.last.identity** variable.
-* **Account Name and Account Password:** APM service account credentials to perform KCD
+* **Account Name** and **Account Password**: Provide APM service account credentials to perform KCD.
-* **SPN Pattern:** If you use HTTP/%h, APM then uses the host header of the client request to build the SPN that it's requesting a Kerberos token for
+* **SPN Pattern**: If you use **HTTP/%h**, APM then uses the host header of the client request to build the SPN that it's requesting a Kerberos token for.
-* **Send Authorization:** Disable for applications that prefer negotiating authentication, instead of receiving the Kerberos token in the first request. For example, *Tomcat*.
+* **Send Authorization**: Disable this option for applications that prefer negotiating authentication, instead of receiving the Kerberos token in the first request (for example, Tomcat).
- ![Screenshot to configure kerberos S S O](./media/f5-big-ip-kerberos-advanced/configure-kerberos-sso.png)
+![Screenshot that shows selections for configuring Kerberos single sign-on.](./media/f5-big-ip-kerberos-advanced/configure-kerberos-sso.png)
-You can leave *KDC* undefined if the user realm is different to the backend server realm. This applies for multi-domain realm scenarios as well. When left blank, BIG-IP will attempt to discover a Kerberos realm through a DNS lookup of SRV records for the backend server's domain, so it expects the domain name to be the same as the realm name. If the domain name is different from the realm name, it must be specified in the [/etc/krb5.conf](https://support.f5.com/csp/article/K17976428) file.
+You can leave KDC undefined if the user realm is different from the back-end server realm. This rule also applies for multiple-domain realm scenarios. If you leave KDC undefined, BIG-IP will try to discover a Kerberos realm through a DNS lookup of SRV records for the back-end server's domain. So it expects the domain name to be the same as the realm name. If the domain name is different from the realm name, it must be specified in the [/etc/krb5.conf](https://support.f5.com/csp/article/K17976428) file.
-Kerberos SSO processing is fastest when a KDC is specified by IP, slower when specified by host name, and due to additional DNS queries, even slower when left undefined. For this reason, you should ensure your DNS is performing optimally before moving a proofs of concept (POC) into production. Note that if backend servers are in multiple realms, you must create a separate SSO configuration object for each realm.
+Kerberos SSO processing is fastest when a KDC is specified by IP address. Kerberos SSO processing is slower when a KDC is specified by host name. Because of additional DNS queries, processing is even slower when a KDC is left undefined. For this reason, you should ensure that your DNS is performing optimally before moving a proof of concept into production.
-You can inject headers as part of the SSO request to the backend application. Simply change **General Properties** setting from **Basic** to **Advanced**.
+> [!NOTE]
+> If back-end servers are in multiple realms, you must create a separate SSO configuration object for each realm.
+
+You can inject headers as part of the SSO request to the back-end application. Simply change the **General Properties** setting from **Basic** to **Advanced**.
-For more information on configuring an APM for KCD SSO, refer to the F5 article on [Overview of Kerberos constrained delegation](https://support.f5.com/csp/article/K17976428).
+For more information on configuring an APM for KCD SSO, see the F5 article [Overview of Kerberos constrained delegation](https://support.f5.com/csp/article/K17976428).
-### Configure Access Profile
+### Configure an access profile
-An *Access Profile* binds many APM elements managing access to BIG-IP virtual servers, including access policies, SSO configuration, and UI settings.
+An *access profile* binds many APM elements that manage access to BIG-IP virtual servers. These elements include access policies, SSO configuration, and UI settings.
-1. Select **Access > Profiles / Policies > Access Profiles (Per-Session Policies) > Create** and provide these general properties:
+1. Select **Access** > **Profiles / Policies** > **Access Profiles (Per-Session Policies)** > **Create** and provide these general properties:
- * **Name:** For example, MyExpenses
+ * **Name**: For example, enter **MyExpenses**.
- * **Profile Type:** All
+ * **Profile Type:** Select **All**.
- * **SSO Configuration:** The KCD SSO configuration object you just created
+ * **SSO Configuration:** Select the KCD SSO configuration object that you just created.
- * **Accepted Language:** Add at least one language
+ * **Accepted Language:** Add at least one language.
- ![Screenshot to create new access profile](./media/f5-big-ip-kerberos-advanced/create-new-access-profile.png)
+ ![Screenshot that shows selections for creating an access profile.](./media/f5-big-ip-kerberos-advanced/create-new-access-profile.png)
-2. Select **Edit** for the per-session profile you just created
+2. Select **Edit** for the per-session profile that you just created.
- ![Screenshot to edit per session profile](./media/f5-big-ip-kerberos-advanced/edit-per-session-profile.png)
+ ![Screenshot that shows the button for editing a per-session profile.](./media/f5-big-ip-kerberos-advanced/edit-per-session-profile.png)
-3. Once the Visual Policy Editor (VPE) has launched, select the **+** sign next to the fallback
+3. When the visual policy editor opens, select the plus sign (**+**) next to the fallback.
- ![Select plus sign next to fallback](./media/f5-big-ip-kerberos-advanced/select-plus-fallback.png)
+ ![Screenshot that shows the plus sign next to fallback.](./media/f5-big-ip-kerberos-advanced/select-plus-fallback.png)
-4. In the pop-up select **Authentication > SAML Auth > Add Item**
+4. In the pop-up dialog, select **Authentication** > **SAML Auth** > **Add Item**.
- ![Screenshot popup to add Saml authentication item](./media/f5-big-ip-kerberos-advanced/add-item-saml-auth.png)
+ ![Screenshot that shows selections for adding a SAML authentication item.](./media/f5-big-ip-kerberos-advanced/add-item-saml-auth.png)
-5. In the **SAML authentication SP** configuration, set the **AAA Server** option to use the SAML SP object you created earlier
+5. In the **SAML authentication SP** configuration, set the **AAA Server** option to use the SAML SP object that you created earlier.
- ![Screenshot to configure A A A server](./media/f5-big-ip-kerberos-advanced/configure-aaa-server.png)
+ ![Screenshot that shows the list box for configuring an A A A server.](./media/f5-big-ip-kerberos-advanced/configure-aaa-server.png)
-6. Select the link in the upper **Deny** box to change the **Successful** branch to **Allow**
+6. Select the link in the upper **Deny** box to change the **Successful** branch to **Allow**.
- ![Change successful branch to allow](./media/f5-big-ip-kerberos-advanced/select-allow-successful-branch.png)
+ ![Screenshot that shows changing the successful branch to Allow.](./media/f5-big-ip-kerberos-advanced/select-allow-successful-branch.png)
-### Configure Attribute Mappings
+### Configure attribute mappings
-Although optional, adding a *LogonID_Mapping configuration* enables the BIG-IP active sessions list to display the UPN of the logged-in user instead of a session number. This is useful when you analyze logs, or while troubleshooting.
+Although it's optional, adding a **LogonID_Mapping** configuration enables the BIG-IP active sessions list to display the UPN of the logged-in user instead of a session number. This information is useful when you're analyzing logs or troubleshooting.
-1. Click the **+** symbol for the SAML Auth Successful branch
+1. Select the **+** symbol for the **SAML Auth Successful** branch.
-2. In the pop-up select **Assignment > Variable Assign > Add Item**
+2. In the pop-up dialog, select **Assignment** > **Variable Assign** > **Add Item**.
- ![Screenshot to configure variable assign](./media/f5-big-ip-kerberos-advanced/configure-variable-assign.png)
+ ![Screenshot that shows the option for assigning custom variables.](./media/f5-big-ip-kerberos-advanced/configure-variable-assign.png)
3. Enter **Name**.
-4. In the **Variable Assign** pane, select **Add new entry > change.** For example, *LogonID_Mapping*
+4. On the **Variable Assign** pane, select **Add new entry** > **change**. The following example shows **LogonID_Mapping** in the **Name** box.
- ![Screenshot to add new entry for variable assign](./media/f5-big-ip-kerberos-advanced/add-new-entry-variable-assign.png)
+ ![Screenshot that shows selections for adding an entry for variable assignment.](./media/f5-big-ip-kerberos-advanced/add-new-entry-variable-assign.png)
-5. Set both variables.
+5. Set both variables:
- * **Custom Variable:** session.logon.last.username
- * **Session Variable:** session.saml.last.identity
+ * **Custom Variable**: Enter **session.logon.last.username**.
+ * **Session Variable**: Enter **session.saml.last.identity**.
-6. Select **Finished > Save:**
+6. Select **Finished** > **Save**.
-7. Select the **Deny** terminal of the Access Policy's **Successful** branch and change it to **Allow,** followed by **Save**
+7. Select the **Deny** terminal of the access policy's **Successful** branch and change it to **Allow**. Then select **Save**.
-8. Commit those settings by selecting **Apply Access Policy** and close the visual policy editor
+8. Commit those settings by selecting **Apply Access Policy**, and close the visual policy editor.
- ![Screenshot to commit apply access policy](./media/f5-big-ip-kerberos-advanced/apply-access-policy.png)
+ ![Screenshot of the button for applying an access policy.](./media/f5-big-ip-kerberos-advanced/apply-access-policy.png)
-### Configure Backend Pool
+### Configure the back-end pool
-For the BIG-IP to know where to forward client traffic, you need to create a BIG-IP node object representing the backend server hosting your application, and place that node in a BIG-IP server pool.
+For BIG-IP to know where to forward client traffic, you need to create a BIG-IP node object that represents the back-end server that hosts your application. Then, place that node in a BIG-IP server pool.
-1. Select **Local Traffic > Pools > Pool List > Create** and provide a name for a server pool object. For example *MyApps_VMs*
+1. Select **Local Traffic** > **Pools** > **Pool List** > **Create** and provide a name for a server pool object. For example, enter **MyApps_VMs**.
- ![Screenshot to create new advanced backend pool](./media/f5-big-ip-kerberos-advanced/create-new-backend-pool.png)
+ ![Screenshot that shows selections for creating an advanced back-end pool.](./media/f5-big-ip-kerberos-advanced/create-new-backend-pool.png)
2. Add a pool member object with the following resource details:
- * **Node Name:** Optional display name for the server hosting the backend web application
- * **Address:** IP address of the server hosting the application
- * **Service Port:** The HTTP/S port the application is listening on
+ * **Node Name**: Optional display name for the server that hosts the back-end web application.
+ * **Address**: IP address of the server that hosts the application.
+ * **Service Port**: HTTP/S port that the application is listening on.
- ![Screenshot to add a pool member object](./media/f5-big-ip-kerberos-advanced/add-pool-member-object.png)
+ ![Screenshot that shows entries for adding a pool member object.](./media/f5-big-ip-kerberos-advanced/add-pool-member-object.png)
> [!NOTE]
-> The Health Monitors require [additional configuration](https://support.f5.com/csp/article/K13397) that is not covered in this tutorial.
+> The health monitors require [additional configuration](https://support.f5.com/csp/article/K13397) that this article doesn't cover.
-### Configure Virtual Server
-A *Virtual Server* is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM access profile associated with the virtual server, before being directed according to the policy results and settings. To configure a Virtual Server:
+### Configure the virtual server
-1. Select **Local Traffic > Virtual Servers > Virtual Server List > Create**
+A *virtual server* is a BIG-IP data plane object that's represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM access profile that's associated with the virtual server, before being directed according to the policy results and settings.
-2. Provide the virtual server with a **Name** and IP IPv4/IPv6 that isn't already allocated to an existing BIG-IP object or device on the connected network. The IP will be dedicated to receiving client traffic for the published backend application. Then set the **Service Port** to **443**
+To configure a virtual server:
- ![Screenshot to configure new virtual server](./media/f5-big-ip-kerberos-advanced/configure-new-virtual-server.png)
+1. Select **Local Traffic** > **Virtual Servers** > **Virtual Server List** > **Create**.
-3. Set the HTTP Profile: to **http**
+2. Provide the virtual server with a **Name** value and an IPv4/IPv6 address that isn't already allocated to an existing BIG-IP object or device on the connected network. The IP address will be dedicated to receiving client traffic for the published back-end application. Then set **Service Port** to **443**.
-4. Enable a virtual server for Transport Layer Security (TLS), allowing services to be published over HTTPS. Select the **client SSL profile** you created as part of the prerequisites or leave the default if testing
+ ![Screenshot that shows selections and entries for configuring a virtual server.](./media/f5-big-ip-kerberos-advanced/configure-new-virtual-server.png)
- ![Screenshot to update http profile client](./media/f5-big-ip-kerberos-advanced/update-http-profile-client.png)
+3. Set **HTTP Profile (Client)** to **http**.
-5. Change the **Source Address Translation** to **Auto Map**
+4. Enable a virtual server for Transport Layer Security to allow services to be published over HTTPS. For **SSL Profile (Client)**, select the profile that you created as part of the prerequisites. (Or leave the default if you're testing.)
- ![Screenshot to change source address translation](./media/f5-big-ip-kerberos-advanced/change-auto-map.png)
+ ![Screenshot that shows selections for H T T P profile and S S L profile for the client.](./media/f5-big-ip-kerberos-advanced/update-http-profile-client.png)
-6. Under **Access Policy**, set the **Access Profile** created earlier. This binds the Azure AD SAML pre-authentication profile & KCD SSO policy to the virtual server.
+5. Change **Source Address Translation** to **Auto Map**.
+
+ ![Screenshot to change source address translation](./media/f5-big-ip-kerberos-advanced/change-auto-map.png)
+6. Under **Access Policy**, set **Access Profile** based on the profile that you created earlier. This step binds the Azure AD SAML pre-authentication profile and KCD SSO policy to the virtual server.
- ![Screenshot to set access profile for access policy](./media/f5-big-ip-kerberos-advanced/set-access-profile-for-access-policy.png)
+ ![Screenshot that shows the box for setting an access profile for an access policy.](./media/f5-big-ip-kerberos-advanced/set-access-profile-for-access-policy.png)
-7. Finally, set the **Default Pool** to use the backend pool objects created in the previous section, then select **Finished**.
+7. Set **Default Pool** to use the back-end pool objects that you created in the previous section. Then select **Finished**.
- ![Screenshot to set default pool](./media/f5-big-ip-kerberos-advanced/set-default-pool-use-backend-object.png)
+ ![Screenshot that shows selecting a default pool.](./media/f5-big-ip-kerberos-advanced/set-default-pool-use-backend-object.png)
-### Configure Session Management settings
+### Configure session management settings
-BIG-IP's session management settings define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. You can create your own policy here. Navigate to **Access Policy > Access Profiles > Access Profile** and select your application from the list.
+BIG-IP's session management settings define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. You can create your own policy here. Go to **Access Policy** > **Access Profiles** > **Access Profile** and select your application from the list.
-If you have defined a **Single Log-out URI** in Azure AD, it'll ensure an IdP initiated sign-out from the MyApps portal also terminates the session between the client and the BIG-IP APM. The imported application's federation metadata.xml provides the APM with the Azure AD SAML log-out endpoint for SP initiated sign-outs. But for this to be truly effective, the APM needs to know exactly when a user signs-out.
+If you've defined a **Single Logout URI** value in Azure AD, it will ensure that an IdP-initiated sign-out from the MyApps portal also ends the session between the client and the BIG-IP APM. The imported application's federation metadata XML file provides the APM with the Azure AD SAML logout endpoint for SP-initiated sign-outs. But for this to be truly effective, the APM needs to know exactly when a user signs out.
-Consider a scenario where a BIG-IP web portal is not used. The user has no way of instructing the APM to sign-out. Even if the user signs-out of the application itself, the BIG-IP is technically oblivious to this, so the application session could easily be re-instated through SSO. For this reason, SP initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required.
+Consider a scenario where a BIG-IP web portal is not used. The user has no way of instructing the APM to sign out. Even if the user signs out of the application itself, BIG-IP is technically oblivious to this, so the application session could easily be reinstated through SSO. For this reason, SP-initiated sign-out needs careful consideration to ensure that sessions are securely terminated when they're no longer required.
-One way to achieve this will be by adding an SLO function to your applications sign-out button. It can redirect your client to the Azure AD SAML sign-out endpoint. You can find this SAML sign-out endpoint at **App Registrations > Endpoints.**
+One way to achieve this is by adding an SLO function to your application's sign-out button. This function can redirect your client to the Azure AD SAML sign-out endpoint. You can find this SAML sign-out endpoint at **App Registrations** > **Endpoints**.
-If unable to change the app, consider having the BIG-IP listen for the app's sign-out call, and upon detecting the request, it should trigger SLO.
+If you can't change the app, consider having BIG-IP listen for the app's sign-out call. When it detects the request, it should trigger SLO.
-For more details, see this F5 article on [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+For more information, see the F5 articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
## Summary
-Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals. The application should also be visible as a target resource in [Azure AD Conditional Access](../conditional-access/concept-conditional-access-policies.md).
+Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals. The application should also be visible as a target resource in [Azure AD Conditional Access](../conditional-access/concept-conditional-access-policies.md).
-For increased security, organizations using this pattern could also consider blocking all direct access to the application, forcing a strict path through the BIG-IP.
+For increased security, organizations that use this pattern can also consider blocking all direct access to the application. Blocking all direct access forces a strict path through BIG-IP.
## Next steps
-As a user, launch a browser and connect to the application's external URL. You can also select the application's icon from the [Microsoft MyApps portal](https://myapps.microsoft.com/). Once you authenticate against your Azure AD tenant, you will be redirected to the BIG-IP endpoint for the application and automatically signed in via SSO.
+As a user, open a browser and connect to the application's external URL. You can also select the application's icon from the [Microsoft MyApps portal](https://myapps.microsoft.com/). After you authenticate against your Azure AD tenant, you'll be redirected to the BIG-IP endpoint for the application and automatically signed in via SSO.
- ![Screenshot of app view](./media/f5-big-ip-kerberos-advanced/app-view.png)
+![Screenshot of an example application's website.](./media/f5-big-ip-kerberos-advanced/app-view.png)
### Azure AD B2B guest access
-SHA also supports [Azure AD B2B guest access](../external-identities/hybrid-cloud-to-on-premises.md). Guest identities are synchronized from your Azure AD tenant to your target Kerberos domain. It is necessary to have a local representation of guest objects for BIG-IP to perform KCD SSO to the backend application.
+SHA also supports [Azure AD B2B guest access](../external-identities/hybrid-cloud-to-on-premises.md). Guest identities are synchronized from your Azure AD tenant to your target Kerberos domain. It's necessary to have a local representation of guest objects for BIG-IP to perform KCD SSO to the back-end application.
-## Troubleshooting
+## Troubleshoot
-There can be many reasons for failure to access a SHA protected application, including a misconfiguration. Consider the following points while troubleshooting any issue.
+There can be many reasons for failure to access a SHA-protected application, including a misconfiguration. Consider the following points while troubleshooting any problem:
-* Kerberos is time sensitive, so requires that servers and clients be set to the correct time and where possible synchronized to a reliable time source
+* Kerberos is time sensitive. It requires that servers and clients are set to the correct time and, where possible, synchronized to a reliable time source.
-* Ensure the hostnames for the domain controller and web application are resolvable in DNS
+* Ensure that the host names for the domain controller and web application are resolvable in DNS.
-* Ensure there are no duplicate SPNs in your environment by executing the following query at the command line: setspn -q HTTP/my_target_SPN
+* Ensure that there are no duplicate SPNs in your environment by running the following query at the command line: `setspn -q HTTP/my_target_SPN`.
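One way to run these checks from a domain-joined Windows machine is sketched below. The host names and the SPN are the placeholder values used earlier in this article, so substitute your own:

```powershell
# Sketch: quick checks for the troubleshooting points above.

# 1. Confirm that the local clock is synchronized with a reliable time source.
w32tm /query /status

# 2. Confirm that the domain controller and the application host resolve in DNS.
Resolve-DnsName dc.contoso.com
Resolve-DnsName myexpenses.contoso.com

# 3. List every account that has the target SPN registered; more than one
#    result indicates a duplicate SPN.
setspn -Q HTTP/myexpenses.contoso.com
```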
> [!NOTE]
-> You can refer to our [App Proxy guidance to validate an IIS application ](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md)is configured appropriately for KCD. F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
+> To validate that an IIS application is configured appropriately for KCD, see [Troubleshoot Kerberos constrained delegation configurations for Application Proxy](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md). F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
-### Authentication and SSO issues
+### Authentication and SSO problems
BIG-IP logs are a reliable source of information. To increase the log verbosity level:
-1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+1. Go to **Access Policy** > **Overview** > **Event Logs** > **Settings**.
-2. Select the row for your published application, then **Edit > Access System Logs**
+2. Select the row for your published application. Then, select **Edit** > **Access System Logs**.
-3. Select **Debug** from the SSO list, and then select OK. Reproduce your issue before looking at the logs but remember to switch this back when finished.
+3. Select **Debug** from the SSO list, and then select **OK**. Reproduce your problem before you look at the logs, but remember to switch this back when finished.
-If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible that the problem relates to SSO from Azure AD to BIG-IP. To find out:
-1. Navigate to **Access > Overview > Access reports**
+1. Go to **Access** > **Overview** > **Access reports**.
-2. Run the report for the last hour to see logs provide any clues. The **View session variables** link for your session will also help understand if the APM is receiving the expected claims from Azure AD.
+2. Run the report for the last hour to see if logs provide any clues. The **View session variables** link for your session will also help you understand if the APM is receiving the expected claims from Azure AD.
-If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+If you don't see a BIG-IP error page, the problem is probably more related to the back-end request or related to SSO from BIG-IP to the application. To find out:
-1. Navigate to **Access Policy > Overview > Active Sessions**
+1. Go to **Access Policy** > **Overview** > **Active Sessions**.
-2. Select the link for your active session. The **View Variables** link in this location may also help determine root cause KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers.
+2. Select the link for your active session. The **View Variables** link in this location might also help you determine root-cause KCD problems, particularly if the BIG-IP APM fails to get the right user and domain identifiers.
-F5 provides a great BIG-IP specific paper to help diagnose KCD related issues, see the deployment guide on [Configuring Kerberos Constrained Delegation](https://www.f5.com/pdf/deployment-guides/kerberos-constrained-delegation-dg.pdf).
+For help with diagnosing KCD-related problems, see the F5 BIG-IP deployment guide [Configuring Kerberos Constrained Delegation](https://www.f5.com/pdf/deployment-guides/kerberos-constrained-delegation-dg.pdf).
## Additional resources
-* [BIG-IP Advanced configuration](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/2.html)
+* [Active Directory Authentication](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/2.html) (F5 article about BIG-IP advanced configuration)
-* [The end of passwords, go password-less](https://www.microsoft.com/security/business/identity/passwordless)
+* [Forget passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless)
* [What is Conditional Access?](../conditional-access/overview.md)
-* [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+* [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Managed identity type | All Generally Available<br>Global Azure Regions | Azure
For more information, see [Use managed identities with Azure Machine Learning](../../machine-learning/how-to-use-managed-identities.md).
+### Azure Maps
+
+Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
+| | :-: | :-: | :-: | :-: |
+| System assigned | Preview | Preview | Not available | Not available |
+| User assigned | Preview | Preview | Not available | Not available |
+
+For more information, see [Authentication on Azure Maps](../../azure-maps/azure-maps-authentication.md).
++ ### Azure Media Services | Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
In Azure Active Directory (Azure AD), if another administrator or non-administrator needs to manage Azure AD resources, you assign them an Azure AD role that provides the permissions they need. For example, you can assign roles to allow adding or changing users, resetting user passwords, managing user licenses, or managing domain names.
-This article lists the Azure AD built-in roles you can assign to allow management of Azure AD resources. For information about how to assign roles, see [Assign Azure AD roles to users](manage-roles-portal.md).
+This article lists the Azure AD built-in roles you can assign to allow management of Azure AD resources. For information about how to assign roles, see [Assign Azure AD roles to users](manage-roles-portal.md). If you are looking for roles to manage Azure resources, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
## All roles
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-portal.md
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
6. On the **Authentication** page, configure the following options: - Create a new cluster identity by either:
- * Leaving the **Authentication** field with **System-assinged managed identity**, or
+ * Leaving the **Authentication** field with **System-assigned managed identity**, or
* Choosing **Service Principal** to use a service principal. * Select *(new) default service principal* to create a default service principal, or * Select *Configure service principal* to use an existing one. You will need to provide the existing principal's SPN client ID and secret.
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/quotas-skus-regions.md
The list of supported VM sizes in AKS is evolving with the release of new VM SKU
VM sizes with less than 2 CPUs may not be used with AKS.
-Each node in an AKS cluster contains a fixed amount of compute resources such as vCPU and memory. If an AKS node contains insufficient compute resources, pods might fail to run correctly. To ensure the required *kube-system* pods and your applications can be reliably scheduled, AKS requires nodes use VM sizes with > 2 CPUs.
+Each node in an AKS cluster contains a fixed amount of compute resources such as vCPU and memory. If an AKS node contains insufficient compute resources, pods might fail to run correctly. To ensure the required *kube-system* pods and your applications can be reliably scheduled, AKS requires nodes use VM sizes with at least 2 CPUs.
For more information on VM types and their compute resources, see [Sizes for virtual machines in Azure][vm-skus].
aks Servicemesh About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/servicemesh-about.md
Title: About service meshes
description: Obtain an overview of service meshes, supported scenarios, selection criteria, and next steps to explore. Previously updated : 07/29/2021 Last updated : 01/04/2022
A service mesh provides capabilities like traffic management, resiliency, policy
These are some of the scenarios that can be enabled for your workloads when you use a service mesh: -- **Encrypt all traffic in cluster** - Enable mutual TLS between specified services in the cluster. This can be extended to ingress and egress at the network perimeter. Provides a secure by default option with no changes needed for application code and infrastructure.
+- **Encrypt all traffic in cluster** - Enable mutual TLS between specified services in the cluster. This can be extended to ingress and egress at the network perimeter, and provides a secure by default option with no changes needed for application code and infrastructure.
- **Canary and phased rollouts** - Specify conditions for a subset of traffic to be routed to a set of new services in the cluster. On successful test of canary release, remove conditional routing and phase gradually increasing % of all traffic to new service. Eventually all traffic will be directed to new service. -- **Traffic management and manipulation** - Create a policy on a service that will rate limit all traffic to a version of a service from a specific origin. Or a policy that applies a retry strategy to classes of failures between specified services. Mirror live traffic to new versions of services during a migration or to debug issues. Inject faults between services in a test environment to test resiliency.
+- **Traffic management and manipulation** - Create a policy on a service that will rate limit all traffic to a version of a service from a specific origin, or a policy that applies a retry strategy to classes of failures between specified services. Mirror live traffic to new versions of services during a migration or to debug issues. Inject faults between services in a test environment to test resiliency.
-- **Observability** - Gain insight into how your services are connected the traffic that flows between them. Obtain metrics, logs, and traces for all traffic in cluster, and ingress/egress. Add distributed tracing abilities to your applications.
+- **Observability** - Gain insight into how your services are connected and the traffic that flows between them. Obtain metrics, logs, and traces for all traffic in cluster, including ingress/egress. Add distributed tracing abilities to your applications.
## Selection criteria
-Before you select a service mesh, ensure that you understand your requirements and the reasons for installing a service mesh. Ask the following questions.
+Before you select a service mesh, ensure that you understand your requirements and the reasons for installing a service mesh. Ask the following questions:
-- **Is an Ingress Controller sufficient for my needs?** - Sometimes having a capability like a/b testing or traffic splitting at the ingress is sufficient to support the required scenario. Don't add complexity to your environment with no upside.
+- **Is an Ingress Controller sufficient for my needs?** - Sometimes having a capability like A/B testing or traffic splitting at the ingress is sufficient to support the required scenario. Don't add complexity to your environment with no upside.
-- **Can my workloads and environment tolerate the additional overheads?** - All the additional components required to support the service mesh require additional resources like cpu and memory. In addition, all the proxies and their associated policy checks add latency to your traffic. If you have workloads that are very sensitive to latency or cannot provide the additional resources to cover the service mesh components, then re-consider.
+- **Can my workloads and environment tolerate the additional overheads?** - All the additional components required to support the service mesh require additional resources like CPU and memory. In addition, all the proxies and their associated policy checks add latency to your traffic. If you have workloads that are very sensitive to latency or cannot provide the additional resources to cover the service mesh components, then re-consider.
- **Is this adding additional complexity unnecessarily?** - If the reason for installing a service mesh is to gain a capability that is not necessarily critical to the business or operational teams, then consider whether the additional complexity of installation, maintenance, and configuration is worth it.
app-service Deploy Container Github Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-container-github-action.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
-* Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
-* Set a value for `CREDENTIAL-NAME` to reference later.
-* Set the `subject`. The value of this is defined by GitHub depending on your workflow:
- * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
- * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
- * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
-
-```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
-```
-
-To learn how to create a Create an active directory application, service principal, and federated credentials in Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
-
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
+
+ To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
+
## Configure the GitHub secret for authentication
app-service Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-github-actions.md
OpenID Connect is an authentication method that uses short-lived tokens. Setting
1. Run the following command to [create a new federated identity credential](/graph/api/application-post-federatedidentitycredentials?view=graph-rest-beta&preserve-view=true) for your active directory application.
-* Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
-* Set a value for `CREDENTIAL-NAME` to reference later.
-* Set the `subject`. The value of this is defined by GitHub depending on your workflow:
- * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
- * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
- * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
-
-```azurecli
-az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
-```
-
+ * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application.
+ * Set a value for `CREDENTIAL-NAME` to reference later.
+ * Set the `subject`. The value of this is defined by GitHub depending on your workflow:
+ * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >`
+ * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
+ * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
+
+ ```azurecli
+ az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/<APPLICATION-OBJECT-ID>/federatedIdentityCredentials' --body '{"name":"<CREDENTIAL-NAME>","issuer":"https://token.actions.githubusercontent.com","subject":"repo:organization/repository:ref:refs/heads/main","description":"Testing","audiences":["api://AzureADTokenExchange"]}'
+ ```
+
To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
app-service Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/creation.md
# Create an App Service Environment
-> [!NOTE]
-> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans
->
-The [App Service Environment (ASE)][Intro] is a single tenant deployment of the App Service that injects into your Azure Virtual Network (VNet). A deployment of an ASE will require use of one subnet. This subnet can't be used for anything else other than the ASE.
+[App Service Environment][Intro] is a single-tenant deployment of Azure App Service. You use it with an Azure virtual network. You need one subnet for a deployment of App Service Environment, and this subnet can't be used for anything else.
+
+> [!NOTE]
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
-## Before you create your ASE
+## Before you create your App Service Environment
-After your ASE is created, you can't change:
+Be aware that after you create your App Service Environment, you can't change any of the following:
- Location - Subscription - Resource group-- Azure Virtual Network (VNet) used-- Subnets used
+- Azure virtual network
+- Subnets
- Subnet size
-- Name of your ASE
+- Name of your App Service Environment
-The subnet needs to be large enough to hold the maximum size that you'll scale your ASE. Pick a large enough subnet to support your maximum scale needs since it can't be changed after creation. The recommended size is a /24 with 256 addresses.
+Make your subnet large enough to hold the maximum size that you'll scale your App Service Environment. The recommended size is a /24 with 256 addresses.
## Deployment considerations
-There are two important things that need to be thought out before you deploy your ASE.
--- VIP type-- deployment type-
-There are two different VIP types, internal and external. With an internal VIP, your apps will be reached on the ASE at an address in your ASE subnet and your apps are not on public DNS. During creation in the portal, there is an option to create an Azure private DNS zone for your ASE. With an external VIP, your apps will be on a public internet facing address and your apps are in public DNS.
+Before you deploy your App Service Environment, think about the virtual IP (VIP) type and the deployment type.
-There are three different deployment types;
+With an *internal VIP*, an address in your App Service Environment subnet reaches your apps. Your apps aren't on a public DNS. When you create your App Service Environment in the Azure portal, you have an option to create an Azure private DNS zone for your App Service Environment. With an *external VIP*, your apps are on an address facing the public internet, and they're in a public DNS.
-- single zone
-- zone redundant
-- host group
+For the deployment type, you can choose *single zone*, *zone redundant*, or *host group*. The single zone is available in all regions where App Service Environment v3 is available. With the single zone deployment type, you have a minimum charge in your App Service plan of one instance of Windows Isolated v2. As soon as you have one or more instances, then that charge goes away. It isn't an additive charge.
-The single zone ASE is available in all regions where ASEv3 is available. When you have a single zone ASE, you have a minimum App Service plan instance charge of one instance of Windows Isolated v2. As soon as you have one or more instances, then that charge goes away. It is not an additive charge.
+In a zone redundant App Service Environment, your apps are spread across three zones in the same region. Zone redundancy is available in regions that support availability zones. With this deployment type, the smallest size for your App Service plan is three instances, which ensures that there's an instance in each availability zone. App Service plans can be scaled out one or more instances at a time. Scaling doesn't need to be in units of three, but the app is only balanced across all availability zones when the total number of instances is a multiple of three.
-In a zone redundant ASE, your apps spread across three zones in the same region. The zone redundant ASE is available in a subset of ASE capable regions primarily limited by the regions that support availability zones. When you have zone redundant ASE, the smallest size for your App Service plan is three instances. That ensures that there is an instance in each availability zone. App Service plans can be scaled up one or more instances at a time. Scaling does not need to be in units of three, but the app is only balanced across all availability zones when the total instances are multiples of three. A zone redundant ASE has triple the infrastructure and is made with zone redundant components so that if even two of the three zones go down for whatever reason, your workloads remain available. Due to the increased system need, the minimum charge for a zone redundant ASE is nine instances. If you have less than nine total App Service plan instances in your ASEv3, the difference will be charged as Windows I1v2. If you have nine or more instances, there is no added charge to have a zone redundant ASE. To learn more about zone redundancy, read [Regions and Availability zones](./overview-zone-redundancy.md).
+A zone redundant deployment has triple the infrastructure, and ensures that even if two of the three zones go down, your workloads remain available. Due to the increased system need, the minimum charge for a zone redundant App Service Environment is nine instances. If you have fewer than this number of instances, the difference is charged as Windows I1v2. If you have nine or more instances, there is no added charge to have a zone redundant App Service Environment. To learn more about zone redundancy, see [Regions and availability zones](./overview-zone-redundancy.md).
-In a host group deployment, your apps are deployed onto a dedicated host group. The dedicated host group is not zone redundant. Dedicated host group deployment enables your ASE to be deployed on dedicated hardware. There is no minimum instance charge for use of an ASE on a dedicated host group, but you do have to pay for the host group when provisioning the ASE. On top of that you pay a discounted App Service plan rate as you create your plans and scale out. There are a finite number of cores available with a dedicated host deployment that are used by both the App Service plans and the infrastructure roles. Dedicated host deployments of the ASE can't reach the 200 total instance count normally available in an ASE. The number of total instances possible is related to the total number of App Service plan instances plus the load based number of infrastructure roles.
+In a host group deployment, your apps are deployed onto a dedicated host group. The dedicated host group isn't zone redundant. With this type of deployment, you can install and use your App Service Environment on dedicated hardware. There is no minimum instance charge for using App Service Environment on a dedicated host group, but you do have to pay for the host group when you're provisioning the App Service Environment. You also pay a discounted App Service plan rate as you create your plans and scale out.
-## Creating an ASE in the portal
+With a dedicated host group deployment, there are a finite number of cores available that are used by both the App Service plans and the infrastructure roles. This type of deployment can't reach the 200 total instance count normally available in App Service Environment. The number of total instances possible is related to the total number of App Service plan instances, plus the load-based number of infrastructure roles.
-1. To create an ASE, search the marketplace for **App Service Environment v3**.
+## Create an App Service Environment in the portal
-2. Basics: Select the Subscription, select or create the Resource Group, and enter the name of your ASE. Select the type of Virtual IP type. If you select Internal, your inbound ASE address will be an address in your ASE subnet. If you select External, your inbound ASE address will be a public internet facing address. The ASE name will be also used for the domain suffix of your ASE. If your ASE name is *contoso* and you have an Internal VIP ASE, then the domain suffix will be *contoso.appserviceenvironment.net*. If your ASE name is *contoso* and you have an external VIP, the domain suffix will be *contoso.p.azurewebsites.net*.
+Here's how:
- ![App Service Environment create basics tab](./media/creation/creation-basics.png)
+1. Search Azure Marketplace for *App Service Environment v3*.
-3. Hosting: Select *Enabled* or *Disabled* for Host Group deployment. Host Group deployment is used to select dedicated hardware deployment. If you select Enabled, your ASE will be deployed onto dedicated hardware. When you deploy onto dedicated hardware, you are charged for the entire dedicated host during ASE creation and then a reduced price for your App Service plan instances.
+1. From the **Basics** tab, for **Subscription**, select the subscription. For **Resource Group**, select or create the resource group. For **App Service Environment Name**, enter a name; the name you choose is also used for the domain suffix. For example, if the name is *contoso* and you have an internal VIP, the domain suffix will be `contoso.appserviceenvironment.net`. If the name is *contoso* and you have an external VIP, the domain suffix will be `contoso.p.azurewebsites.net`. For **Virtual IP**, select **Internal** if you want your inbound address to be an address in your subnet, or **External** if you want your inbound address to face the public internet.
- ![App Service Environment hosting selections](./media/creation/creation-hosting.png)
+ ![Screenshot that shows the App Service Environment basics tab.](./media/creation/creation-basics.png)
-4. Networking: Select or create your Virtual Network, select or create your subnet. If you are creating an internal VIP ASE, you can configure Azure DNS private zones to point your domain suffix to your ASE. Details on how to manually configure DNS are in the DNS section under [Using an App Service Environment][UsingASE].
+1. From the **Hosting** tab, for **Host group deployment**, select **Enabled** or **Disabled**. If you enable this option, you can deploy onto dedicated hardware. If you do so, you're charged for the entire dedicated host during the creation of the App Service Environment, and then you're charged a reduced price for your App Service plan instances.
- ![App Service Environment networking selections](./media/creation/creation-networking.png)
+ ![Screenshot that shows the App Service Environment hosting selections.](./media/creation/creation-hosting.png)
-5. Review and Create: Check that your configuration is correct and select create. Your ASE can take up to nearly two hours to create.
+1. From the **Networking** tab, for **Virtual Network**, select or create your virtual network. For **Subnet**, select or create your subnet. If you're creating an App Service Environment with an internal VIP, you can configure Azure DNS private zones to point your domain suffix to your App Service Environment. For more details, see the DNS section in [Use an App Service Environment][UsingASE].
-After your ASE creation completes, you can select it as a location when creating your apps. To learn more about creating apps in your new ASE or managing your ASE, read [Using an App Service Environment][UsingASE]
+ ![Screenshot that shows App Service Environment networking selections.](./media/creation/creation-networking.png)
-## Dedicated hosts
+1. From the **Review + create** tab, check that your configuration is correct, and select **Create**. Your App Service Environment can take up to two hours to create.
-The ASE is normally deployed on VMs that are provisioned on a multi-tenant hypervisor. If you need to deploy on dedicated systems, including the hardware, you can provision your ASE onto dedicated hosts. Dedicated hosts come in a pair to ensure redundancy. Dedicated host-based ASE deployments are priced differently than normal. There is a charge for the dedicated host and then another charge for each App Service plan instance. Deployments on host groups are not zone redundant. To deploy onto dedicated hosts, select **enable** for host group deployment on the Hosting tab.
+When your App Service Environment has been successfully created, you can select it as a location when you're creating your apps.
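If you prefer scripting to the portal, the Azure CLI can create an App Service Environment v3 as well. The following is a minimal sketch with placeholder resource names; it assumes the resource group, virtual network, and an empty delegated subnet already exist, and the exact flags can vary by CLI version (check `az appservice ase create --help`):

```azurecli
# Create an App Service Environment v3 with an internal VIP in an existing subnet.
az appservice ase create \
  --name my-asev3 \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --subnet my-ase-subnet \
  --kind ASEv3 \
  --virtual-ip-type Internal
```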
<!--Links--> [Intro]: ./overview.md
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/network-info.md
Title: Networking considerations
-description: Learn about the ASE network traffic and how to set network security groups and user defined routes with your ASE.
+description: Learn about App Service Environment network traffic, and how to set network security groups and user-defined routes.
Last updated 11/15/2021
-# Networking considerations for an App Service Environment v2
+# Networking considerations for App Service Environment
-> [!NOTE]
-> This article is about the App Service Environment v2 which is used with Isolated App Service plans
->
+[App Service Environment][Intro] is a deployment of Azure App Service into a subnet in your Azure virtual network. There are two deployment types for an App Service Environment:
-## Overview
+- **External:** This type of deployment exposes the hosted apps by using an IP address that is accessible on the internet. For more information, see [Create an external App Service Environment][MakeExternalASE].
+- **Internal load balancer:** This type of deployment exposes the hosted apps on an IP address inside your virtual network. The internal endpoint is an internal load balancer. For more information, see [Create and use an internal load balancer App Service Environment][MakeILBASE].
- Azure [App Service Environment][Intro] is a deployment of Azure App Service into a subnet in your Azure virtual network. There are two deployment types for an App Service environment (ASE):
+> [!NOTE]
+> This article is about App Service Environment v2, which is used with isolated App Service plans.
+>
-- **External ASE**: Exposes the ASE-hosted apps on an internet-accessible IP address. For more information, see [Create an External ASE][MakeExternalASE].-- **ILB ASE**: Exposes the ASE-hosted apps on an IP address inside your virtual network. The internal endpoint is an internal load balancer (ILB), which is why it's called an ILB ASE. For more information, see [Create and use an ILB ASE][MakeILBASE].
+Regardless of the deployment type, all App Service Environments have a public virtual IP (VIP). This VIP is used for inbound management traffic, and as the address when you're making calls from the App Service Environment to the internet. Such calls leave the virtual network through the VIP assigned for the App Service Environment.
-All ASEs, External, and ILB, have a public VIP that is used for inbound management traffic and as the from address when making calls from the ASE to the internet. The calls from an ASE that go to the internet leave the virtual network through the VIP assigned for the ASE. The public IP of this VIP is the source IP for all calls from the ASE that go to the internet. If the apps in your ASE make calls to resources in your virtual network or across a VPN, the source IP is one of the IPs in the subnet used by your ASE. Because the ASE is within the virtual network, it can also access resources within the virtual network without any additional configuration. If the virtual network is connected to your on-premises network, apps in your ASE also have access to resources there without additional configuration.
+If the apps make calls to resources in your virtual network or across a VPN, the source IP is one of the IPs in the subnet. Because the App Service Environment is within the virtual network, it can also access resources within the virtual network without any additional configuration. If the virtual network is connected to your on-premises network, apps also have access to resources there without additional configuration.
-![External ASE][1] 
+![Diagram that shows the elements of an external deployment.][1] 
-If you have an External ASE, the public VIP is also the endpoint that your ASE apps resolve to for:
+If you have an App Service Environment with an external deployment, the public VIP is also the endpoint to which your apps resolve for the following:
* HTTP/S * FTP/S * Web deployment * Remote debugging
-![ILB ASE][2]
+![Diagram that shows the elements of an internal load balancer deployment.][2]
+
+If you have an App Service Environment with an internal load balancer deployment, the address of the internal address is the endpoint for HTTP/S, FTP/S, web deployment, and remote debugging.
-If you have an ILB ASE, the address of the ILB address is the endpoint for HTTP/S, FTP/S, web deployment, and remote debugging.
+## Subnet size
-## ASE subnet size
+After the App Service Environment is deployed, you can't alter the size of the subnet used to host it. App Service Environment uses an address for each infrastructure role, as well as for each isolated App Service plan instance. Additionally, Azure networking uses five addresses for every subnet that is created.
-The size of the subnet used to host an ASE cannot be altered after the ASE is deployed. The ASE uses an address for each infrastructure role as well as for each Isolated App Service plan instance. Additionally, there are five addresses used by Azure Networking for every subnet that is created. An ASE with no App Service plans at all will use 12 addresses before you create an app. If it is an ILB ASE, then it will use 13 addresses before you create an app in that ASE. As you scale out your ASE, infrastructure roles are added every multiple of 15 and 20 of your App Service plan instances.
+An App Service Environment with no App Service plans at all will use 12 addresses before you create an app. If you use the internal load balancer deployment, then it will use 13 addresses before you create an app. As you scale out, be aware that infrastructure roles are added at every multiple of 15 and 20 of your App Service plan instances.
- > [!NOTE]
- > Nothing else can be in the subnet but the ASE. Be sure to choose an address space that allows for future growth. You can't change this setting later. We recommend a size of `/24` with 256 addresses.
+> [!IMPORTANT]
+> Nothing else can be in the subnet but the App Service Environment. Be sure to choose an address space that allows for future growth. You can't change this setting later. We recommend a size of `/24` with 256 addresses.
-When you scale up or down, new roles of the appropriate size are added and then your workloads are migrated from the current size to the target size. The original VMs are removed only after the workloads have been migrated. If you had an ASE with 100 ASP instances, there would be a period where you need double the number of VMs. It is for this reason that we recommend the use of a '/24' to accommodate any changes you might require.
+When you scale up or down, new roles of the appropriate size are added, and then your workloads are migrated from the current size to the target size. The original VMs are removed only after the workloads have been migrated. For example, if you had an App Service Environment with 100 App Service plan instances, there's a period of time in which you need double the number of VMs.
-## ASE dependencies
+## Inbound and outbound dependencies
-### ASE inbound dependencies
+The following sections cover dependencies to be aware of for your App Service Environment. Another section discusses DNS settings.
-Just for the ASE to operate, the ASE requires the following ports to be open:
+### Inbound dependencies
+
+Just for the App Service Environment to operate, the following ports must be open:
| Use | From | To |
| --- | --- | --- |
-| Management | App Service management addresses | ASE subnet: 454, 455 |
-| ASE internal communication | ASE subnet: All ports | ASE subnet: All ports
-| Allow Azure load balancer inbound | Azure load balancer | ASE subnet: 16001
-
-There are 2 other ports that can show as open on a port scan, 7654 and 1221. They reply with an IP address and nothing more. They can be blocked if desired.
+| Management | App Service management addresses | App Service Environment subnet: 454, 455 |
+| App Service Environment internal communication | App Service Environment subnet: All ports | App Service Environment subnet: All ports
+| Allow Azure load balancer inbound | Azure load balancer | App Service Environment subnet: 16001
-The inbound management traffic provides command and control of the ASE in addition to system monitoring. The source addresses for this traffic are listed in the [ASE Management addresses][ASEManagement] document. The network security configuration needs to allow access from the ASE management addresses on ports 454 and 455. If you block access from those addresses, your ASE will become unhealthy and then become suspended. The TCP traffic that comes in on ports 454 and 455 must go back out from the same VIP or you will have an asymmetric routing problem.
+Ports 7654 and 1221 can show as open on a port scan. They reply with an IP address, and nothing more. You can block them if you want to.
-Within the ASE subnet, there are many ports used for internal component communication and they can change. This requires all of the ports in the ASE subnet to be accessible from the ASE subnet.
+The inbound management traffic provides command and control of the App Service Environment, in addition to system monitoring. The source addresses for this traffic are listed in [App Service Environment management addresses][ASEManagement]. The network security configuration needs to allow access from the App Service Environment management addresses on ports 454 and 455. If you block access from those addresses, your App Service Environment will become unhealthy and then become suspended. The TCP traffic that comes in on ports 454 and 455 must go back out from the same VIP, or you will have an asymmetric routing problem.
-For the communication between the Azure load balancer and the ASE subnet the minimum ports that need to be open are 454, 455 and 16001. The 16001 port is used for keep alive traffic between the load balancer and the ASE. If you are using an ILB ASE, then you can lock traffic down to just the 454, 455, 16001 ports. If you are using an External ASE, then you need to take into account the normal app access ports.
+Within the subnet, there are many ports used for internal component communication, and they can change. This requires all of the ports in the subnet to be accessible from the subnet.
-The other ports you need to concern yourself with are the application ports:
+For communication between the Azure load balancer and the App Service Environment subnet, the minimum ports that need to be open are 454, 455, and 16001. If you're using an internal load balancer deployment, then you can lock traffic down to just the 454, 455, 16001 ports. If you're using an external deployment, then you need to take into account the normal app access ports. Specifically, these are:
| Use | Ports |
| --- | --- |
| HTTP/HTTPS | 80, 443 |
| FTP/FTPS | 21, 990, 10001-10020 |
| Visual Studio remote debugging | 4020, 4022, 4024 |
-| Web Deploy service | 8172 |
+| Web deploy service | 8172 |
-If you block the application ports, your ASE can still function but your app might not. If you are using app assigned IP addresses with an External ASE, you will need to allow traffic from the IPs assigned to your apps to the ASE subnet on the ports shown in the ASE portal > IP Addresses page.
+If you block the application ports, your App Service Environment can still function, but your app might not. If you're using app-assigned IP addresses with an external deployment, you need to allow traffic from the IPs assigned to your apps to the subnet. From the App Service Environment portal, go to **IP addresses**, and see the ports from which you need to allow traffic.
-### ASE outbound dependencies
+### Outbound dependencies
-For outbound access, an ASE depends on multiple external systems. Many of those system dependencies are defined with DNS names and don't map to a fixed set of IP addresses. Thus, the ASE requires outbound access from the ASE subnet to all external IPs across a variety of ports.
+For outbound access, an App Service Environment depends on multiple external systems. Many of those system dependencies are defined with DNS names, and don't map to a fixed set of IP addresses. Thus, the App Service Environment requires outbound access from the subnet to all external IPs, across a variety of ports.
-The ASE communicates out to internet accessible addresses on the following ports:
+App Service Environment communicates out to internet accessible addresses on the following ports:
| Uses | Ports |
| --- | --- |
The ASE communicates out to internet accessible addresses on the following ports
| Azure SQL | 1433 |
| Monitoring | 12000 |
-The outbound dependencies are listed in the document that describes [Locking down App Service Environment outbound traffic](./firewall-integration.md). If the ASE loses access to its dependencies, it stops working. When that happens long enough, the ASE is suspended.
+The outbound dependencies are listed in [Locking down an App Service Environment](./firewall-integration.md). If the App Service Environment loses access to its dependencies, it stops working. When that happens for a long enough period of time, it's suspended.
-### Customer DNS ###
+### Customer DNS
-If the virtual network is configured with a customer-defined DNS server, the tenant workloads use it. The ASE uses Azure DNS for management purposes. If the virtual network is configured with a customer-selected DNS server, the DNS server must be reachable from the subnet that contains the ASE.
+If the virtual network is configured with a customer-defined DNS server, the tenant workloads use it. The App Service Environment uses Azure DNS for management purposes. If the virtual network is configured with a customer-selected DNS server, the DNS server must be reachable from the subnet.
- > [!NOTE]
- > Storage mounts or container images pulls in ASEv2 will not be able to use customer DNS defined in the virtual network or through the `WEBSITE_DNS_SERVER` app setting.
+> [!NOTE]
+> Storage mounts or container image pulls in App Service Environment v2 aren't able to use customer-defined DNS in the virtual network, or through the `WEBSITE_DNS_SERVER` app setting.
-To test DNS resolution from your web app, you can use the console command *nameresolver*. Go to the debug window in your scm site for your app or go to the app in the portal and select console. From the shell prompt you can issue the command *nameresolver* along with the DNS name you wish to look up. The result you get back is the same as what your app would get while making the same lookup. If you use nslookup, you will do a lookup using Azure DNS instead.
+To test DNS resolution from your web app, you can use the console command `nameresolver`. Go to the debug window in your `scm` site for your app, or go to the app in the portal and select console. From the shell prompt, you can issue the command `nameresolver`, along with the DNS name you wish to look up. The result you get back is the same as what your app would get while making the same lookup. If you use `nslookup`, you do a lookup by using Azure DNS instead.
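For example, from the Kudu console you might check how a database host name resolves (the host name here is only an example):

```console
nameresolver myserver.database.windows.net
```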
-If you change the DNS setting of the virtual network that your ASE is in, you will need to reboot your ASE. To avoid rebooting your ASE, it is highly recommended that you configure your DNS settings for your virtual network before you create your ASE.
+If you change the DNS setting of the virtual network that your App Service Environment is in, you will need to reboot. To avoid rebooting, it's a good idea to configure your DNS settings for your virtual network before you create your App Service Environment.
<a name="portaldep"></a> ## Portal dependencies
-In addition to the ASE functional dependencies, there are a few extra items related to the portal experience. Some of the capabilities in the Azure portal depend on direct access to _SCM site_. For every app in Azure App Service, there are two URLs. The first URL is to access your app. The second URL is to access the SCM site, which is also called the _Kudu console_. Features that use the SCM site include:
+In addition to the dependencies described in the previous sections, there are a few extra considerations you should be aware of that are related to the portal experience. Some of the capabilities in the Azure portal depend on direct access to the source control manager (SCM) site. For every app in Azure App Service, there are two URLs. The first URL is to access your app. The second URL is to access the SCM site, which is also called the _Kudu console_. Features that use the SCM site include:
-- Web jobs
-- Functions
-- Log streaming
-- Kudu
-- Extensions
-- Process Explorer
-- Console
+- Web jobs
+- Functions
+- Log streaming
+- Kudu
+- Extensions
+- Process Explorer
+- Console
-When you use an ILB ASE, the SCM site isn't accessible from outside the virtual network. Some capabilities will not work from the app portal because they require access to the SCM site of an app. You can connect to the SCM site directly instead of using the portal.
+When you use an internal load balancer, the SCM site isn't accessible from outside the virtual network. Some capabilities don't work from the app portal because they require access to the SCM site of an app. You can connect to the SCM site directly, instead of by using the portal.
-If your ILB ASE is the domain name *contoso.appserviceenvironment.net* and your app name is *testapp*, the app is reached at *testapp.contoso.appserviceenvironment.net*. The SCM site that goes with it is reached at *testapp.scm.contoso.appserviceenvironment.net*.
+If your internal load balancer is the domain name `contoso.appserviceenvironment.net`, and your app name is *testapp*, the app is reached at `testapp.contoso.appserviceenvironment.net`. The SCM site that goes with it is reached at `testapp.scm.contoso.appserviceenvironment.net`.
-## ASE IP addresses ##
+## IP addresses
-An ASE has a few IP addresses to be aware of. They are:
+An App Service Environment has a few IP addresses to be aware of. They are:
-- **Public inbound IP address**: Used for app traffic in an External ASE, and management traffic in both an External ASE and an ILB ASE.
-- **Outbound public IP**: Used as the "from" IP for outbound connections from the ASE that leave the virtual network, which aren't routed down a VPN.
-- **ILB IP address**: The ILB IP address only exists in an ILB ASE.
-- **App-assigned IP-based TLS/SSL addresses**: Only possible with an External ASE and when IP-based TLS/SSL binding is configured.
+- **Public inbound IP address:** Used for app traffic in an external deployment, and management traffic in both internal and external deployments.
+- **Outbound public IP:** Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN.
+- **Internal load balancer IP address:** This address only exists in an internal deployment.
+- **App-assigned IP-based TLS/SSL addresses:** These addresses are only possible with an external deployment, and when IP-based TLS/SSL binding is configured.
-All these IP addresses are visible in the Azure portal from the ASE UI. If you have an ILB ASE, the IP for the ILB is listed.
+All these IP addresses are visible in the Azure portal from the App Service Environment UI. If you have an internal deployment, the IP for the internal load balancer is listed.
- > [!NOTE]
- > These IP addresses will not change so long as your ASE stays up and running. If your ASE becomes suspended and restored, the addresses used by your ASE will change. The normal cause for an ASE to become suspended is if you block inbound management access or block access to an ASE dependency.
+> [!NOTE]
+> These IP addresses don't change, as long as your App Service Environment is running. If your App Service Environment becomes suspended and is then restored, the addresses used will change. The normal cause for a suspension is if you block inbound management access, or you block access to a dependency.
-![IP addresses][3]
+![Screenshot that shows IP addresses.][3]
-### App-assigned IP addresses ###
+### App-assigned IP addresses
-With an External ASE, you can assign IP addresses to individual apps. You can't do that with an ILB ASE. For more information on how to configure your app to have its own IP address, see [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../configure-ssl-bindings.md).
+With an external deployment, you can assign IP addresses to individual apps. You can't do that with an internal deployment. For more information on how to configure your app to have its own IP address, see [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../configure-ssl-bindings.md).
-When an app has its own IP-based SSL address, the ASE reserves two ports to map to that IP address. One port is for HTTP traffic, and the other port is for HTTPS. Those ports are listed in the ASE UI in the IP addresses section. Traffic must be able to reach those ports from the VIP or the apps are inaccessible. This requirement is important to remember when you configure Network Security Groups (NSGs).
+When an app has its own IP-based SSL address, the App Service Environment reserves two ports to map to that IP address. One port is for HTTP traffic, and the other port is for HTTPS. Those ports are listed in the **IP addresses** section of your App Service Environment portal. Traffic must be able to reach those ports from the VIP. Otherwise, the apps are inaccessible. This requirement is important to remember when you configure network security groups (NSGs).
-## Network Security Groups ##
+## Network security groups
-[Network Security Groups][NSGs] provide the ability to control network access within a virtual network. When you use the portal, there's an implicit deny rule at the lowest priority to deny everything. What you build are your allow rules.
+[NSGs][NSGs] provide the ability to control network access within a virtual network. When you use the portal, there's an implicit *deny rule* at the lowest priority to deny everything. What you build are your *allow rules*.
-In an ASE, you don't have access to the VMs used to host the ASE itself. They're in a Microsoft-managed subscription. If you want to restrict access to the apps on the ASE, set NSGs on the ASE subnet. In doing so, pay careful attention to the ASE dependencies. If you block any dependencies, the ASE stops working.
+You don't have access to the VMs used to host the App Service Environment itself. They're in a subscription that Microsoft manages. If you want to restrict access to the apps, set NSGs on the subnet. In doing so, pay careful attention to the dependencies. If you block any dependencies, the App Service Environment stops working.
-NSGs can be configured through the Azure portal or via PowerShell. The information here shows the Azure portal. You create and manage NSGs in the portal as a top-level resource under **Networking**.
+You can configure NSGs through the Azure portal or via PowerShell. The information here shows the Azure portal. You create and manage NSGs in the portal as a top-level resource under **Networking**.
-The required entries in an NSG, for an ASE to function, are to allow traffic:
+The required entries in an NSG are to allow traffic:
**Inbound**
-* TCP from the IP service tag AppServiceManagement on ports 454,455
+
+* TCP from the IP service tag `AppServiceManagement` on ports 454, 455
* TCP from the load balancer on port 16001
-* from the ASE subnet to the ASE subnet on all ports
+* From the App Service Environment subnet to the App Service Environment subnet on all ports
**Outbound**
+
* UDP to all IPs on port 53
* UDP to all IPs on port 123
* TCP to all IPs on ports 80, 443
-* TCP to the IP service tag `Sql` on ports 1433
+* TCP to the IP service tag `Sql` on port 1433
* TCP to all IPs on port 12000
-* to the ASE subnet on all ports
+* To the App Service Environment subnet on all ports
-These ports do not include the ports that your apps require for successful use. As an example, your app may need to call a MySQL server on port 3306. Network Time Protocol (NTP) on port 123 is the time synchronization protocol used by the operating system. The NTP endpoints are not specific to App Services, can vary with the operating system, and are not in a well defined list of addresses. To prevent time synchronization issues, you then need to allow UDP traffic to all addresses on port 123. The outbound TCP to port 12000 traffic is for system support and analysis. The endpoints are dynamic and are not in a well defined set of addresses.
+These ports don't include the ports that your apps require for successful use. For example, suppose your app needs to call a MySQL server on port 3306. Network Time Protocol (NTP) on port 123 is the time synchronization protocol used by the operating system. The NTP endpoints aren't specific to App Service, can vary with the operating system, and aren't in a well-defined list of addresses. To prevent time synchronization issues, you then need to allow UDP traffic to all addresses on port 123. The outbound TCP to port 12000 traffic is for system support and analysis. The endpoints are dynamic, and aren't in a well-defined set of addresses.
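If you script your NSGs, the inbound management rule from the preceding list might look like the following sketch. The resource names are placeholders, and the example assumes the NSG already exists and is associated with the App Service Environment subnet:

```azurecli
# Allow inbound management traffic from the App Service management addresses
# to the App Service Environment subnet on ports 454 and 455.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-ase-nsg \
  --name Allow-ASE-Management \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AppServiceManagement \
  --destination-port-ranges 454 455
```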
The normal app access ports are:
The normal app access ports are:
| Visual Studio remote debugging | 4020, 4022, 4024 |
| Web Deploy service | 8172 |
-When the inbound and outbound requirements are taken into account, the NSGs should look similar to the NSGs shown in this example.
+When the inbound and outbound requirements are taken into account, the NSGs should look similar to the NSGs shown in the following screenshot:
+
+![Screenshot that shows inbound security rules.][4]
-![Inbound security rules][4]
+A default rule enables the IPs in the virtual network to talk to the subnet. Another default rule enables the load balancer, also known as the public VIP, to communicate with the App Service Environment. To see the default rules, select **Default rules** (next to the **Add** icon).
-A default rule enables the IPs in the virtual network to talk to the ASE subnet. Another default rule enables the load balancer, also known as the public VIP, to communicate with the ASE. To see the default rules, select **Default rules** next to the **Add** icon. If you put a deny everything else rule before the default rules, you prevent traffic between the VIP and the ASE. To prevent traffic coming from inside the virtual network, add your own rule to allow inbound. Use a source equal to AzureLoadBalancer with a destination of **Any** and a port range of **\***. Because the NSG rule is applied to the ASE subnet, you don't need to be specific in the destination.
+If you put a *deny everything else* rule before the default rules, you prevent traffic between the VIP and the App Service Environment. To prevent traffic coming from inside the virtual network, add your own rule to allow inbound. Use a source equal to `AzureLoadBalancer`, with a destination of **Any** and a port range of **\***. Because the NSG rule is applied to the subnet, you don't need to be specific in the destination.
If you assigned an IP address to your app, make sure you keep the ports open. To see the ports, select **App Service Environment** > **IP addresses**.  
-All the items shown in the following outbound rules are needed, except for the last item. They enable network access to the ASE dependencies that were noted earlier in this article. If you block any of them, your ASE stops working. The last item in the list enables your ASE to communicate with other resources in your virtual network.
+All the items shown in the following outbound rules are needed, except for the last item. They enable network access to the App Service Environment dependencies that were noted earlier in this article. If you block any of them, your App Service Environment stops working. The last item in the list enables your App Service Environment to communicate with other resources in your virtual network.
+
+![Screenshot that shows outbound security rules.][5]
-![Outbound security rules][5]
+After your NSGs are defined, assign them to the subnet. If you don't remember the virtual network or subnet, you can see it from the App Service Environment portal. To assign the NSG to your subnet, go to the subnet UI and select the NSG.
-After your NSGs are defined, assign them to the subnet that your ASE is on. If you don't remember the ASE virtual network or subnet, you can see it from the ASE portal page. To assign the NSG to your subnet, go to the subnet UI and select the NSG.
+## Routes
-## Routes ##
+*Forced tunneling* is when you set routes in your virtual network so the outbound traffic doesn't go directly to the internet. Instead, the traffic goes somewhere else, like an Azure ExpressRoute gateway or a virtual appliance. If you need to configure your App Service Environment in such a manner, see [Configuring your App Service Environment with forced tunneling][forcedtunnel].
-Forced tunneling is when you set routes in your virtual network so the outbound traffic doesn't go directly to the internet but somewhere else like an ExpressRoute gateway or a virtual appliance. If you need to configure your ASE in such a manner, then read the document on [Configuring your App Service Environment with Forced Tunneling][forcedtunnel]. This document will tell you the options available to work with ExpressRoute and forced tunneling.
+When you create an App Service Environment in the portal, you automatically create a set of route tables on the subnet. Those routes simply say to send outbound traffic directly to the internet.
-When you create an ASE in the portal we also create a set of route tables on the subnet that is created with the ASE. Those routes simply say to send outbound traffic directly to the internet.
To create the same routes manually, follow these steps:
-1. Go to the Azure portal. Select **Networking** > **Route Tables**.
+1. Go to the Azure portal, and select **Networking** > **Route Tables**.
2. Create a new route table in the same region as your virtual network. 3. From within your route table UI, select **Routes** > **Add**.
-4. Set the **Next hop type** to **Internet** and the **Address prefix** to **0.0.0.0/0**. Select **Save**.
+4. Set the **Next hop type** to **Internet**, and the **Address prefix** to **0.0.0.0/0**. Select **Save**.
You then see something like the following:
- ![Functional routes][6]
+ ![Screenshot that shows functional routes.][6]
-5. After you create the new route table, go to the subnet that contains your ASE. Select your route table from the list in the portal. After you save the change, you should then see the NSGs and routes noted with your subnet.
+5. After you create the new route table, go to the subnet. Select your route table from the list in the portal. After you save the change, you should then see the NSGs and routes noted with your subnet.
- ![NSGs and routes][7]
+ ![Screenshot that shows NSGs and routes.][7]
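The same routes can also be scripted. The following sketch uses placeholder names for the route table, virtual network, and subnet, and assumes they're all in one resource group:

```azurecli
# Create a route table with a default route that sends outbound traffic directly to the internet.
az network route-table create --resource-group my-rg --name my-ase-routes

az network route-table route create \
  --resource-group my-rg \
  --route-table-name my-ase-routes \
  --name internet-default \
  --next-hop-type Internet \
  --address-prefix 0.0.0.0/0

# Associate the route table with the App Service Environment subnet.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name my-ase-subnet \
  --route-table my-ase-routes
```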
-## Service Endpoints ##
+## Service endpoints
-Service Endpoints enable you to restrict access to multi-tenant services to a set of Azure virtual networks and subnets. You can read more about Service Endpoints in the [Virtual Network Service Endpoints][serviceendpoints] documentation.
+Service endpoints enable you to restrict access to multi-tenant services to a set of Azure virtual networks and subnets. For more information, see [Virtual Network service endpoints][serviceendpoints].
-When you enable Service Endpoints on a resource, there are routes created with higher priority than all other routes. If you use Service Endpoints on any Azure service, with a forced tunneled ASE, the traffic to those services will not be forced tunneled.
+When you enable service endpoints on a resource, there are routes created with higher priority than all other routes. If you use service endpoints on any Azure service, with a force-tunneled App Service Environment, the traffic to those services isn't force-tunneled.
-When Service Endpoints is enabled on a subnet with an Azure SQL instance, all Azure SQL instances connected to from that subnet must have Service Endpoints enabled. if you want to access multiple Azure SQL instances from the same subnet, you can't enable Service Endpoints on one Azure SQL instance and not on another. No other Azure service behaves like Azure SQL with respect to Service Endpoints. When you enable Service Endpoints with Azure Storage, you lock access to that resource from your subnet but can still access other Azure Storage accounts even if they do not have Service Endpoints enabled.
+When service endpoints are enabled on a subnet with an instance of Azure SQL, all Azure SQL instances connected to from that subnet must have service endpoints enabled. If you want to access multiple Azure SQL instances from the same subnet, you can't enable service endpoints on one Azure SQL instance and not on another. No other Azure service behaves like Azure SQL with respect to service endpoints. When you enable service endpoints with Azure Storage, you lock access to that resource from your subnet. You can still access other Azure Storage accounts, however, even if they don't have service endpoints enabled.
-![Service Endpoints][8]
+![Diagram that shows service endpoints.][8]
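As a sketch, enabling a service endpoint on the App Service Environment subnet with the Azure CLI looks like the following. The names are placeholders, and `Microsoft.Sql` is only an example of a service you might enable:

```azurecli
# Enable the Azure SQL service endpoint on the subnet that hosts the App Service Environment.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name my-ase-subnet \
  --service-endpoints Microsoft.Sql
```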
<!--Image references--> [1]: ./media/network_considerations_with_an_app_service_environment/networkase-overflow.png
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/networking.md
Title: App Service Environment Networking
+ Title: App Service Environment networking
description: App Service Environment networking details
# App Service Environment networking
-> [!NOTE]
-> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans
->
+App Service Environment is a single-tenant deployment of Azure App Service that hosts Windows and Linux containers, web apps, API apps, logic apps, and function apps. When you install an App Service Environment, you pick the Azure virtual network that you want it to be deployed in. All of the inbound and outbound application traffic is inside the virtual network you specify. You deploy into a single subnet in your virtual network, and nothing else can be deployed into that subnet.
-The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that hosts Windows and Linux containers, web apps, api apps, logic apps, and function apps. When you install an ASE, you pick the Azure Virtual Network that you want it to be deployed in. All of the inbound and outbound application traffic will be inside the virtual network you specify. The ASE is deployed into a single subnet in your virtual network. Nothing else can be deployed into that same subnet.
+> [!NOTE]
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
## Subnet requirements
-The subnet must be delegated to Microsoft.Web/hostingEnvironments and must be empty.
+You must delegate the subnet to `Microsoft.Web/hostingEnvironments`, and the subnet must be empty.
-The size of the subnet can affect the scaling limits of the App Service plan instances within the ASE. We recommend using a `/24` address space (256 addresses) for your subnet to ensure enough addresses to support production scale.
+The size of the subnet can affect the scaling limits of the App Service plan instances within the App Service Environment. It's a good idea to use a `/24` address space (256 addresses) for your subnet, to ensure enough addresses to support production scale.
-To use a smaller subnet, you should be aware of the following details of the ASE and network setup.
+If you use a smaller subnet, be aware of the following:
-Any given subnet has five addresses reserved for management purposes. On top of the management addresses, ASE will dynamically scale the supporting infrastructure and will use between 4 and 27 addresses depending on configuration and load. The remaining addresses can be used for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses).
+- Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure, and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses).
-If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the ASE or you can experience increased latency during intensive traffic load if we are not able scale the supporting infrastructure.
+- If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the App Service Environment. Another possibility is that you can experience increased latency during intensive traffic load, if Microsoft isn't able to scale the supporting infrastructure.
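For reference, a subnet that meets these requirements can be created with the Azure CLI. This is a sketch with placeholder names and an example address range; the delegation value is the one noted earlier:

```azurecli
# Create an empty /24 subnet delegated to App Service Environment v3.
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name my-ase-subnet \
  --address-prefixes 10.0.1.0/24 \
  --delegations Microsoft.Web/hostingEnvironments
```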
## Addresses
-The ASE has the following network information at creation:
+App Service Environment has the following network information at creation:
-| Address type | description |
+| Address type | Description |
|--|-|
-| ASE virtual network | The virtual network the ASE is deployed into |
-| ASE subnet | The subnet that the ASE is deployed into |
-| Domain suffix | The domain suffix that is used by the apps made in this ASE |
-| Virtual IP | This setting is the VIP type used by the ASE. The two possible values are internal and external |
-| Inbound address | The inbound address is the address your apps on this ASE are reached at. If you have an internal VIP, it is an address in your ASE subnet. If the address is external, it will be a public facing address |
-| Default outbound addresses | The apps in this ASE will use this address, by default, when making outbound calls to the internet. |
+| App Service Environment virtual network | The virtual network deployed into. |
+| App Service Environment subnet | The subnet deployed into. |
+| Domain suffix | The domain suffix that is used by the apps made. |
+| Virtual IP (VIP) | The VIP type used. The two possible values are internal and external. |
+| Inbound address | The inbound address is the address at which your apps are reached. If you have an internal VIP, it's an address in your App Service Environment subnet. If the address is external, it's a public-facing address. |
+| Default outbound addresses | The apps use this address, by default, when making outbound calls to the internet. |
-The ASEv3 has details on the addresses used by the ASE in the **IP Addresses** portion of the ASE portal.
+You can find details in the **IP Addresses** portion of the portal, as shown in the following screenshot:
-![ASE addresses UI](./media/networking/networking-ip-addresses.png)
+![Screenshot that shows details about IP addresses.](./media/networking/networking-ip-addresses.png)
-As you scale your App Service plans in your ASE, you'll use more addresses out of your ASE subnet. The number of addresses used will vary based on the number of App Service plan instances you have, and how much traffic your ASE is receiving. Apps in the ASE don't have dedicated addresses in the ASE subnet. The specific addresses used by an app in the ASE subnet by an app will change over time.
+As you scale your App Service plans in your App Service Environment, you'll use more addresses out of your subnet. The number of addresses you use varies, based on the number of App Service plan instances you have, and how much traffic there is. Apps in the App Service Environment don't have dedicated addresses in the subnet. The specific addresses used by an app in the subnet will change over time.
## Ports and network restrictions
-For your app to receive traffic, you need to ensure that inbound Network Security Groups (NSGs) rules allow the ASE subnet to receive traffic from the required ports. In addition to any ports you'd like to receive traffic on, you should ensure the AzureLoadBalancer is able to connect to the ASE subnet on port 80. This is used for internal VM health checks. You can still control port 80 traffic from the virtual network to you ASE subnet.
+For your app to receive traffic, ensure that inbound network security group (NSG) rules allow the App Service Environment subnet to receive traffic on the required ports. In addition to any ports you'd like to receive traffic on, ensure that Azure Load Balancer is able to connect to the subnet on port 80. This port is used for health checks of the internal virtual machines. You can still control port 80 traffic from the virtual network to your subnet.
-The general recommendation is to configure the following inbound NSG rule:
+It's a good idea to configure the following inbound NSG rule:
|Port|Source|Destination|
|-|-|-|
-|80,443|VirtualNetwork|ASE subnet range|
+|80,443|Virtual network|App Service Environment subnet range|
-The minimal requirement for ASE to be operational is:
+The minimal requirement for App Service Environment to be operational is:
|Port|Source|Destination|
|-|-|-|
-|80|AzureLoadBalancer|ASE subnet range|
+|80|Azure Load Balancer|App Service Environment subnet range|
-If you use the minimum required rule you may need one or more rules for your application traffic, and if you are using any of the deployment or debugging options, you will also have to allow this traffic to the ASE subnet. The source of these rules can be VirtualNetwork or one or more specific client IPs or IP ranges. The destination will always be the ASE subnet range.
+If you use the minimum required rule, you might need one or more rules for your application traffic. If you're using any of the deployment or debugging options, you must also allow this traffic to the App Service Environment subnet. The source of these rules can be the virtual network, or one or more specific client IPs or IP ranges. The destination is always the App Service Environment subnet range.
-The normal app access ports are:
+The normal app access ports are as follows:
|Use|Ports|
|-|-|
The normal app access ports are:
## Network routing
-You can set Route Tables (UDRs) without restriction. You can force tunnel all of the outbound application traffic from your ASE to an egress firewall device, such as the Azure Firewall, and not have to worry about anything other than your application dependencies. You can put WAF devices, such as the Application Gateway, in front of inbound traffic to your ASE to expose specific apps on that ASE. If you'd like to customize the outbound address of your applications on an ASE, you can add a NAT Gateway to your ASE subnet.
+You can set route tables without restriction. You can tunnel all of the outbound application traffic from your App Service Environment to an egress firewall device, such as Azure Firewall. In this scenario, the only thing you have to worry about is your application dependencies.
+
+You can put your web application firewall devices, such as Azure Application Gateway, in front of inbound traffic. Doing so exposes specific apps on that App Service Environment. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet.
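As a sketch, associating a NAT gateway with the subnet through the Azure CLI might look like the following. The resource names are placeholders, and the public IP and NAT gateway are created first:

```azurecli
# Create a public IP and a NAT gateway, then attach the NAT gateway to the App Service Environment subnet.
az network public-ip create --resource-group my-rg --name my-nat-ip --sku Standard

az network nat gateway create \
  --resource-group my-rg \
  --name my-nat-gateway \
  --public-ip-addresses my-nat-ip

az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name my-ase-subnet \
  --nat-gateway my-nat-gateway
```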
## DNS
-The following sections describe the DNS considerations and configuration inbound to your ASE and outbound from your ASE.
+The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment.
-### DNS configuration to your ASE
+### DNS configuration to your App Service Environment
-If your ASE is made with an external VIP, your apps are automatically put into public DNS. If your ASE is made with an internal VIP, you may need to configure DNS for it. If you selected having Azure DNS private zones configured automatically during ASE creation, then DNS is configured in your ASE virtual network. If you selected Manually configuring DNS, you need to either use your own DNS server or configure Azure DNS private zones. To find the inbound address of your ASE, go to the **ASE portal > IP Addresses** UI.
+If your App Service Environment is made with an external VIP, your apps are automatically put into public DNS. If your App Service Environment is made with an internal VIP, you might need to configure DNS for it. When you created your App Service Environment, if you selected having Azure DNS private zones configured automatically, then DNS is configured in your virtual network. If you chose to configure DNS manually, you need to either use your own DNS server or configure Azure DNS private zones. To find the inbound address, go to the App Service Environment portal, and select **IP Addresses**.
-If you want to use your own DNS server, you need to add the following records:
+If you want to use your own DNS server, add the following records:
-1. create a zone for `<ASE-name>.appserviceenvironment.net`
-1. create an A record in that zone that points * to the inbound IP address used by your ASE
-1. create an A record in that zone that points @ to the inbound IP address used by your ASE
-1. create a zone in `<ASE-name>.appserviceenvironment.net` named scm
-1. create an A record in the scm zone that points * to the IP address used by your ASE private endpoint
+1. Create a zone for `<App Service Environment-name>.appserviceenvironment.net`.
+1. Create an A record in that zone that points * to the inbound IP address used by your App Service Environment.
+1. Create an A record in that zone that points @ to the inbound IP address used by your App Service Environment.
+1. Create a zone in `<App Service Environment-name>.appserviceenvironment.net` named `scm`.
+1. Create an A record in the `scm` zone that points * to the IP address used by the private endpoint of your App Service Environment.
-To configure DNS in Azure DNS Private zones:
+To configure DNS in Azure DNS private zones:
-1. create an Azure DNS private zone named `<ASE-name>.appserviceenvironment.net`
-1. create an A record in that zone that points * to the inbound IP address
-1. create an A record in that zone that points @ to the inbound IP address
-1. create an A record in that zone that points *.scm to the inbound IP address
+1. Create an Azure DNS private zone named `<App Service Environment-name>.appserviceenvironment.net`.
+1. Create an A record in that zone that points * to the inbound IP address.
+1. Create an A record in that zone that points @ to the inbound IP address.
+1. Create an A record in that zone that points *.scm to the inbound IP address.
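To illustrate the records listed above, here's a minimal Azure CLI sketch that creates the private zone, links it to the virtual network, and adds the wildcard, apex, and `*.scm` A records. The resource group, virtual network, zone name, and inbound IP (`my-rg`, `my-vnet`, `my-ase.appserviceenvironment.net`, `10.0.1.11`) are placeholders for your own values.

```azurecli
# Create the Azure DNS private zone for the App Service Environment default domain suffix.
az network private-dns zone create \
  --resource-group my-rg \
  --name my-ase.appserviceenvironment.net

# Link the zone to the virtual network so resources in it can resolve the records.
az network private-dns link vnet create \
  --resource-group my-rg \
  --zone-name my-ase.appserviceenvironment.net \
  --name ase-dns-link \
  --virtual-network my-vnet \
  --registration-enabled false

# A records: wildcard, zone apex, and *.scm, all pointing to the inbound IP address.
az network private-dns record-set a add-record -g my-rg -z my-ase.appserviceenvironment.net -n "*" -a 10.0.1.11
az network private-dns record-set a add-record -g my-rg -z my-ase.appserviceenvironment.net -n "@" -a 10.0.1.11
az network private-dns record-set a add-record -g my-rg -z my-ase.appserviceenvironment.net -n "*.scm" -a 10.0.1.11
```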
-In addition to the default domain provided when an app is created, you can also add a custom domain to your app. You can set a custom domain name without any validation on your apps in an ILB ASE. If you are using custom domains, you will need to ensure they have DNS records configured. You can follow the guidance above to configure DNS zones and records for a custom domain name by replacing the default domain name with the custom domain name. The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
+In addition to the default domain provided when an app is created, you can also add a custom domain to your app. You can set a custom domain name without any validation on your apps. If you're using custom domains, you need to ensure they have DNS records configured. You can follow the preceding guidance to configure DNS zones and records for a custom domain name (simply replace the default domain name with the custom domain name). The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
-### DNS configuration from your ASE
+### DNS configuration from your App Service Environment
-The apps in your ASE will use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server than what your virtual network is configured with, you can manually set it on a per app basis with the app settings WEBSITE_DNS_SERVER and WEBSITE_DNS_ALT_SERVER. The app setting WEBSITE_DNS_ALT_SERVER configures the secondary DNS server. The secondary DNS server is only used when there is no response from the primary DNS server.
+The apps in your App Service Environment will use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there is no response from the primary DNS server.
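For example, here's a sketch of setting these values on a single app with the Azure CLI. The app name, resource group, and DNS server IPs shown are placeholders.

```azurecli
# Point one app at a custom primary and secondary DNS server.
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-app \
  --settings WEBSITE_DNS_SERVER=10.0.0.10 WEBSITE_DNS_ALT_SERVER=10.0.0.11
```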
## Limitations
-While the ASE does deploy into a customer virtual network, there are a few networking features that aren't available with ASE:
+While App Service Environment does deploy into your virtual network, there are a few networking features that aren't available:
-* Send SMTP traffic. You can still have email triggered alerts but your app can't send outbound traffic on port 25
-* Use of Network Watcher or NSG Flow to monitor outbound traffic
+* Sending SMTP traffic. Although you can still have email-triggered alerts, your app can't send outbound traffic on port 25.
+* Using Azure Network Watcher or NSG flow to monitor outbound traffic.
## More resources
app-service Overview Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/overview-zone-redundancy.md
Title: Zone-redundancy in App Service Environment
+ Title: Zone redundancy in App Service Environment
description: Overview of zone redundancy in an App Service Environment.
Last updated 11/15/2021
-# Availability Zone support for App Service Environments
+# Availability zone support for App Service Environment
+
+You can deploy App Service Environment across [availability zones](../../availability-zones/az-overview.md). This architecture is also known as zone redundancy. When you configure your App Service Environment to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across all three zones in the selected region. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones.
> [!NOTE]
-> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans
->
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
-App Service Environment (ASE) can be deployed across [Availability Zones (AZ)](../../availability-zones/az-overview.md). This architecture is also known as zone redundancy. When an ASE is configured to be zone redundant, the platform automatically spreads the App Service plan instances in the ASE across all three zones in the selected region. If a capacity larger than three is specified and the number of instances is divisible by three, the instances will be spread evenly. Otherwise, instance counts beyond 3*N will get spread across the remaining one or two zones.
+You configure zone redundancy when you create your App Service Environment, and all App Service plans created in that App Service Environment will be zone redundant. You can only specify zone redundancy when you're creating a new App Service Environment. Zone redundancy is only supported in a [subset of regions](./overview.md#regions).
-You configure zone redundancy when you create your ASE and all App Service plans created in that ASE will be zone redundant. Zone redundancy can only be specified when creating a *new* App Service Environment. Zone redundancy is only supported in a [subset of regions](./overview.md#regions).
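As an illustration only, recent Azure CLI versions let you request zone redundancy at creation time. The sketch below assumes that `az appservice ase create` supports a `--zone-redundant` flag in your CLI version (verify with `az appservice ase create --help`), and uses placeholder names for the resource group, App Service Environment, virtual network, and subnet.

```azurecli
# Create an App Service Environment v3 with zone redundancy (placeholder names).
# The --zone-redundant flag is assumed to be available in your Azure CLI version.
az appservice ase create \
  --resource-group my-rg \
  --name my-ase \
  --kind ASEv3 \
  --vnet-name my-vnet \
  --subnet ase-subnet \
  --zone-redundant
```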
+When a zone goes down, the App Service platform detects lost instances and automatically attempts to find new, replacement instances. If you also have autoscale configured, and if it determines that more instances are needed, autoscale also issues a request to App Service to add more instances. Autoscale behavior is independent of App Service platform behavior.
-In the case when a zone goes down, the App Service platform will detect lost instances and automatically attempt to find new replacement instances. If you also have autoscale configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances (autoscale behavior is independent of App Service platform behavior). It's important to note there's no guarantee that requests for instances in a zone-down scenario will succeed since back filling lost instances occur on a best-effort basis. The recommended solution is to scale your App Service plans to account for losing a zone.
+There's no guarantee that requests for instances in a zone-down scenario will succeed, because back-filling lost instances occurs on a best effort basis. It's a good idea to scale your App Service plans to account for losing a zone.
-Applications deployed in a zone redundant ASE will continue to run and serve traffic even if other zones in the same region suffer an outage. However it's possible that non-runtime behaviors including App Service plan scaling, application creation, application configuration, and application publishing may still be impacted from an outage in other Availability Zones. Zone redundancy for App Service Environment only ensures continued uptime for deployed applications.
+Applications deployed in a zone redundant App Service Environment continue to run and serve traffic, even if other zones in the same region suffer an outage. It's possible, however, that non-runtime behaviors might still be affected by an outage in other availability zones. These behaviors might include the following: App Service plan scaling, application creation, application configuration, and application publishing. Zone redundancy for App Service Environment only ensures continued uptime for deployed applications.
-When the App Service platform allocates instances to a zone redundant App Service plan in an ASE, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan will be "balanced" if each zone has either the same number of instances, or +/- 1 instance in all of the other zones used by the App Service plan.
+When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan is considered balanced if each zone has either the same number of instances, or +/- 1 instance in all of the other zones used by the App Service plan.
## Pricing
- There is a minimum charge of nine App Service plan instances in a zone redundant ASE. There is no added charge for availability zone support if you have nine or more App Service plan instances. If you have less than nine instances (of any size) across App Service plans in the zone redundant ASE, the difference between nine and the running instance count is charged as additional Windows I1v2 instances.
+ There is a minimum charge of nine App Service plan instances in a zone redundant App Service Environment. There is no added charge for availability zone support if you have nine or more instances. If you have fewer than nine instances (of any size) across App Service plans in the zone redundant App Service Environment, you're charged for the difference between nine and the running instance count. This charge is for additional Windows I1v2 instances.
## Next steps
-* Read more about [Availability Zones](../../availability-zones/az-overview.md)
+* Read more about [availability zones](../../availability-zones/az-overview.md).
app-service Using An Ase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using-an-ase.md
Last updated 8/5/2021
-# Use an App Service Environment
+# Manage an App Service Environment
> [!NOTE]
> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
>
app-service Using https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using.md
Last updated 07/06/2021
-# Using an App Service Environment
+
+# Use an App Service Environment
+
+App Service Environment is a single-tenant deployment of Azure App Service. You use it with an Azure virtual network, and you're the only user of this system. Apps deployed are subject to the networking features that are applied to the subnet. There aren't any additional features that need to be enabled on your apps to be subject to those networking features.
> [!NOTE]
-> This article is about the App Service Environment v3 which is used with Isolated v2 App Service plans
->
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
-The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that injects directly into an Azure Virtual Network (VNet) of your choosing. It's a system that is only used by one customer. Apps deployed into the ASE are subject to the networking features that are applied to the ASE subnet. There aren't any additional features that need to be enabled on your apps to be subject to those networking features.
+## Create an app
-## Create an app in an ASE
+To create an app in your App Service Environment, you use the same process as when you normally create an app, but with a few small differences. When you create a new App Service plan:
-To create an app in an ASE, you use the same process as when you normally create an app, but with a few small differences. When you create a new App Service plan:
+- Instead of choosing a geographic location in which to deploy your app, you choose an App Service Environment as your location.
+- All App Service plans created in an App Service Environment can only be in an isolated v2 pricing tier.
-- Instead of choosing a geographic location in which to deploy your app, you choose an ASE as your location.
-- All App Service plans created in an ASE can only be in an Isolated v2 pricing tier.
+If you don't yet have one, [create an App Service Environment][MakeASE].
-If you don't have an ASE, you can create one by following the instructions in [Create an App Service Environment][MakeASE].
-To create an app in an ASE:
+To create an app in an App Service Environment:
1. Select **Create a resource** > **Web + Mobile** > **Web App**.
1. Select a subscription.
-1. Enter a name for a new resource group, or select **Use existing** and select one from the drop-down list.
-1. Enter a name for the app. If you already selected an App Service plan in an ASE, the domain name for the app reflects the domain name of the ASE
-1. Select your Publish type, Stack, and Operating System.
-1. Select region. Here you need to select a pre-existing App Service Environment v3. You can't make an ASEv3 during app creation:
-![create an app in an ASE][1]
-1. Select an existing App Service plan in your ASE, or create a new one. If creating a new app, select the size that you want for your App Service plan. The only SKU you can select for your app is an Isolated v2 pricing SKU. Making a new App Service plan will normally take less than 20 minutes.
-![Isolated v2 pricing tiers][2]
-1. Select **Next: Monitoring** If you want to enable App Insights with your app, you can do it here during the creation flow.
-1. Select **Next: Tags** Add any tags you want to the app
-1. Select **Review + create**, make sure the information is correct, and then select **Create**.
-
-Windows and Linux apps can be in the same ASE but cannot be in the same App Service plan.
+1. Enter a name for a new resource group, or select **Use existing** and select one from the dropdown list.
+1. Enter a name for the app. If you already selected an App Service plan in an App Service Environment, the domain name for the app reflects the domain name of the App Service Environment.
+1. For **Publish**, **Runtime stack**, and **Operating System**, make your selections as appropriate.
+1. For **Region**, select a pre-existing App Service Environment v3. You can't make a new one when you're creating your app.
+ ![Screenshot that shows how to create an app in an App Service Environment.][1]
+1. Select an existing App Service plan, or create a new one. If you're creating a new app, select the size that you want for your App Service plan. The only SKU you can select for your app is an isolated v2 pricing SKU. Making a new App Service plan will normally take less than 20 minutes.
+ ![Screenshot that shows pricing tiers and their features and hardware.][2]
+1. Select **Next: Monitoring**. If you want to enable Application Insights with your app, you can do it here during the creation flow.
+1. Select **Next: Tags**, and add any tags you want to the app.
+1. Select **Review + create**. Make sure that the information is correct, and then select **Create**.
+
+Windows and Linux apps can be in the same App Service Environment, but can't be in the same App Service plan.
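If you prefer the Azure CLI over the portal, a rough equivalent is sketched below. The resource group, App Service Environment, plan, and app names are placeholders, and the Isolated v2 SKU name (`I1v2`) is an assumption you may need to adjust for your scenario.

```azurecli
# Create an Isolated v2 App Service plan inside an existing App Service Environment v3.
az appservice plan create \
  --resource-group my-rg \
  --name my-ase-plan \
  --app-service-environment my-ase \
  --sku I1v2

# Create the web app in that plan.
az webapp create \
  --resource-group my-rg \
  --name contoso \
  --plan my-ase-plan
```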
## How scale works

Every App Service app runs in an App Service plan. App Service Environments hold App Service plans, and App Service plans hold apps. When you scale an app, you also scale the App Service plan and all the apps in that same plan.
-When you scale an App Service plan, the needed infrastructure is added automatically. There's a time delay to scale operations while the infrastructure is being added. When you scale an App Service plan, and you have another scale operation of the same OS and size running, there might be a slight delay of a few minutes until the requested scale starts. A scale operation on one size and OS won't affect scaling of the other combinations of size and OS. For example, if you are scaling a Windows I2v2 App Service plan then, any other requests to scale Windows I2v2 might be slightly delayed, but a scale operation to a Windows I3v2 App Service plan will start immediately. Scaling will normally take less than 20 minutes.
+When you scale an App Service plan, the needed infrastructure is added automatically. Be aware that there's a time delay to scale operations while the infrastructure is being added. For example, when you scale an App Service plan, and you have another scale operation of the same operating system and size running, there might be a delay of a few minutes until the requested scale starts.
-In the multitenant App Service, scaling is immediate because a pool of *shared* resources is readily available to support it. ASE is a single-tenant service, so there's no shared buffer, and resources are allocated based on need.
+A scale operation on one size and operating system won't affect scaling of the other combinations of size and operating system. For example, if you are scaling a Windows I2v2 App Service plan, a scale operation to a Windows I3v2 App Service plan starts immediately. Scaling normally takes less than 20 minutes.
-## App access
+In a multi-tenant App Service, scaling is immediate, because a pool of shared resources is readily available to support it. App Service Environment is a single-tenant service, so there's no shared buffer, and resources are allocated based on need.
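For example, a minimal sketch of scaling out an existing plan with the Azure CLI (placeholder resource group and plan names):

```azurecli
# Scale the App Service plan to five instances; the operation can take up to about 20 minutes.
az appservice plan update \
  --resource-group my-rg \
  --name my-ase-plan \
  --number-of-workers 5
```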
-In an ASE with an internal VIP, the domain suffix used for app creation is *.&lt;asename&gt;.appserviceenvironment.net*. If your ASE is named _my-ase_ and you host an app called _contoso_ in that ASE, you reach it at these URLs:
+## App access
-- contoso.my-ase.appserviceenvironment.net
-- contoso.scm.my-ase.appserviceenvironment.net
+In an App Service Environment with an internal virtual IP (VIP), the domain suffix used for app creation is *.&lt;asename&gt;.appserviceenvironment.net*. If your App Service Environment is named _my-ase_, and you host an app called _contoso_, you reach it at these URLs:
-The apps that are hosted on an ASE that uses an internal VIP will only be accessible if you are in the same virtual network as the ASE or are connected somehow to that virtual network. Publishing is also restricted to being only possible if you are in the same virtual network or are connected somehow to that virtual network.
+- `contoso.my-ase.appserviceenvironment.net`
+- `contoso.scm.my-ase.appserviceenvironment.net`
-In an ASE with an external VIP, the domain suffix used for app creation is *.&lt;asename&gt;.p.azurewebsites.net*. If your ASE is named _my-ase_ and you host an app called _contoso_ in that ASE, you reach it at these URLs:
+Apps hosted on an App Service Environment that uses an internal VIP are only accessible if you're in the same virtual network, or are connected to that virtual network. Similarly, publishing is only possible if you're in the same virtual network or are connected to that virtual network.
-- contoso.my-ase.p.azurewebsites.net
-- contoso.scm.my-ase.p.azurewebsites.net
+In an App Service Environment with an external VIP, the domain suffix used for app creation is *.&lt;asename&gt;.p.azurewebsites.net*. If your App Service Environment is named _my-ase_, and you host an app called _contoso_, you reach it at these URLs:
-For information about how to create an ASE, see [Create an App Service Environment][MakeASE].
+- `contoso.my-ase.p.azurewebsites.net`
+- `contoso.scm.my-ase.p.azurewebsites.net`
-The SCM URL is used to access the Kudu console or for publishing your app by using Web Deploy. For information on the Kudu console, see [Kudu console for Azure App Service][Kudu]. The Kudu console gives you a web UI for debugging, uploading files, editing files, and much more.
+You use the `scm` URL to access the Kudu console, or for publishing your app by using web deploy. For more information, see [Kudu console for Azure App Service][Kudu]. The Kudu console gives you a web UI for debugging, uploading files, and editing files.
### DNS configuration
-If your ASE is made with an external VIP, your apps are automatically put into public DNS. If your ASE is made with an internal VIP, you may need to configure DNS for it. If you selected having Azure DNS private zones configured automatically during ASE creation then DNS is configured in your ASE VNet. If you selected Manually configuring DNS, you need to either use your own DNS server or configure Azure DNS private zones. To find the inbound address of your ASE, go to the **ASE portal > IP Addresses** UI.
+If your App Service Environment is made with an external VIP, your apps are automatically put into public DNS. If your App Service Environment is made with an internal VIP, you might need to configure DNS for it.
-![IP addresses UI][6]
+If you selected having Azure DNS private zones configured automatically, then DNS is configured in the virtual network of your App Service Environment. If you selected to configure DNS manually, you need to use your own DNS server or configure Azure DNS private zones.
-If you want to use your own DNS server, you need to add the following records:
+To find the inbound address, in the App Service Environment portal, select **IP addresses**.
-1. create a zone for &lt;ASE name&gt;.appserviceenvironment.net
-1. create an A record in that zone that points * to the inbound IP address used by your ASE
-1. create an A record in that zone that points @ to the inbound IP address used by your ASE
-1. create a zone in &lt;ASE name&gt;.appserviceenvironment.net named scm
-1. create an A record in the scm zone that points * to the inbound address used by your ASE
+![Screenshot that shows how to find the inbound address.][6]
-To configure DNS in Azure DNS Private zones:
+If you want to use your own DNS server, add the following records:
-1. create an Azure DNS private zone named &lt;ASE name&gt;.appserviceenvironment.net
-1. create an A record in that zone that points * to the inbound IP address
-1. create an A record in that zone that points @ to the inbound IP address
-1. create an A record in that zone that points *.scm to the inbound IP address
+1. Create a zone for `<App Service Environment-name>.appserviceenvironment.net`.
+1. Create an A record in that zone that points * to the inbound IP address used by your App Service Environment.
+1. Create an A record in that zone that points @ to the inbound IP address used by your App Service Environment.
+1. Create a zone in `<App Service Environment-name>.appserviceenvironment.net` named `scm`.
+1. Create an A record in the `scm` zone that points * to the inbound address used by your App Service Environment.
-The DNS settings for your ASE default domain suffix don't restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an ASE. If you then want to create a zone named *contoso.net*, you could do so and point it to the inbound IP address. The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
+To configure DNS in Azure DNS private zones:
+
+1. Create an Azure DNS private zone named `<App Service Environment-name>.appserviceenvironment.net`.
+1. Create an A record in that zone that points * to the inbound IP address.
+1. Create an A record in that zone that points @ to the inbound IP address.
+1. Create an A record in that zone that points *.scm to the inbound IP address.
+
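After the zone and records exist, you can sanity-check name resolution from a machine in (or connected to) the virtual network. This is only an illustrative check; the app and App Service Environment names are placeholders.

```bash
# Resolve the app and scm host names against the DNS server used by the virtual network.
nslookup contoso.my-ase.appserviceenvironment.net
nslookup contoso.scm.my-ase.appserviceenvironment.net
```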
+The DNS settings for the default domain suffix of your App Service Environment don't restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an App Service Environment. If you then want to create a zone named `contoso.net`, you can do so and point it to the inbound IP address. The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
## Publishing
-In an ASE, as with the multitenant App Service, you can publish by these methods:
+You can publish by any of the following methods:
- Web deployment
- Continuous integration (CI)
-- Drag and drop in the Kudu console
-- An IDE, such as Visual Studio, Eclipse, or IntelliJ IDEA
+- Drag-and-drop in the Kudu console
+- An integrated development environment (IDE), such as Visual Studio, Eclipse, or IntelliJ IDEA
-With an internal VIP ASE, the publishing endpoints are only available through the inbound address. If you don't have network access to the inbound address, you can't publish any apps on that ASE. Your IDEs must also have network access to the inbound address on the ASE to publish directly to it.
+With an internal VIP App Service Environment, the publishing endpoints are only available through the inbound address. If you don't have network access to the inbound address, you can't publish any apps on that App Service Environment. Your IDEs must also have network access to the inbound address on the App Service Environment to publish directly to it.
-Without additional changes, internet-based CI systems like GitHub and Azure DevOps don't work with an internal VIP ASE because the publishing endpoint isn't internet accessible. You can enable publishing to an internal VIP ASE from Azure DevOps by installing a self-hosted release agent in the virtual network that contains the ASE.
+Without additional changes, internet-based CI systems like GitHub and Azure DevOps don't work with an internal VIP App Service Environment. The publishing endpoint isn't internet accessible. You can enable publishing to an internal VIP App Service Environment from Azure DevOps, by installing a self-hosted release agent in the virtual network.
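For instance, one way to publish from a machine that does have network access to the inbound address (such as a jump box or self-hosted agent in the virtual network) is a zip deployment with the Azure CLI. This is a sketch with placeholder names, not the only supported method.

```azurecli
# Deploy a zipped app package from a machine that can reach the App Service Environment inbound address.
az webapp deployment source config-zip \
  --resource-group my-rg \
  --name contoso \
  --src ./app.zip
```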
## Storage
-An ASE has 1 TB of storage for all the apps in the ASE. An App Service plan in the Isolated pricing SKU has a limit of 250 GB. In an ASE, 250 GB of storage is added per App Service plan up to the 1 TB limit. You can have more App Service plans than just four, but there is no more storage added beyond the 1 TB limit.
+You have 1 TB of storage for all the apps in your App Service Environment. An App Service plan in the isolated pricing SKU has a limit of 250 GB. In an App Service Environment, 250 GB of storage is added per App Service plan, up to the 1 TB limit. You can have more App Service plans than just four, but there is no additional storage beyond the 1 TB limit.
## Logging
-You can integrate your ASE with Azure Monitor to send logs about the ASE to Azure Storage, Azure Event Hubs, or Log Analytics. These items are logged today:
+You can integrate with Azure Monitor to send logs to Azure Storage, Azure Event Hubs, or Azure Monitor Logs. The following table shows the situations and messages you can log:
|Situation |Message |
|-|--|
-|ASE subnet is almost out of space | The specified ASE is in a subnet that is almost out of space. There are {0} remaining addresses. Once these addresses are exhausted, the ASE will not be able to scale. |
-|ASE is approaching total instance limit | The specified ASE is approaching the total instance limit of the ASE. It currently contains {0} App Service Plan instances of a maximum 200 instances. |
-|ASE is suspended | The specified ASE is suspended. The ASE suspension may be due to an account shortfall or an invalid virtual network configuration. Resolve the root cause and resume the ASE to continue serving traffic. |
-|ASE upgrade has started | A platform upgrade to the specified ASE has begun. Expect delays in scaling operations. |
-|ASE upgrade has completed | A platform upgrade to the specified ASE has finished. |
-|App Service plan creation has started | An App Service plan ({0}) creation has started. Desired state: {1} I{2}v2 workers.
-|Scale operations have completed | An App Service plan ({0}) creation has finished. Current state: {1} I{2}v2 workers. |
-|Scale operations have failed | An App Service plan ({0}) creation has failed. This may be due to the ASE operating at peak number of instances, or run out of subnet addresses. |
-|Scale operations have started | An App Service plan ({0}) has begun scaling. Current state: {1} I(2)v2. Desired state: {3} I{4}v2 workers.|
-|Scale operations have completed | An App Service plan ({0}) has finished scaling. Current state: {1} I{2}v2 workers. |
-|Scale operations were interrupted | An App Service plan ({0}) was interrupted while scaling. Previous desired state: {1} I{2}v2 workers. New desired state: {3} I{4}v2 workers. |
-|Scale operations have failed | An App Service plan ({0}) has failed to scale. Current state: {1} I{2}v2 workers. |
-
-To enable logging on your ASE:
-
-1. In the portal, go to **Diagnostics settings**.
+|App Service Environment subnet is almost out of space. | The specified App Service Environment is in a subnet that is almost out of space. There are {0} remaining addresses. Once these addresses are exhausted, the App Service Environment will not be able to scale. |
+|App Service Environment is approaching total instance limit. | The specified App Service Environment is approaching the total instance limit of the App Service Environment. It currently contains {0} App Service Plan instances of a maximum 200 instances. |
+|App Service Environment is suspended. | The specified App Service Environment is suspended. The App Service Environment suspension may be due to an account shortfall or an invalid virtual network configuration. Resolve the root cause and resume the App Service Environment to continue serving traffic. |
+|App Service Environment upgrade has started. | A platform upgrade to the specified App Service Environment has begun. Expect delays in scaling operations. |
+|App Service Environment upgrade has completed. | A platform upgrade to the specified App Service Environment has finished. |
+|App Service plan creation has started. | An App Service plan ({0}) creation has started. Desired state: {1} I{2}v2 workers.
+|Scale operations have completed. | An App Service plan ({0}) creation has finished. Current state: {1} I{2}v2 workers. |
+|Scale operations have failed. | An App Service plan ({0}) creation has failed. This may be due to the App Service Environment operating at peak number of instances, or run out of subnet addresses. |
+|Scale operations have started. | An App Service plan ({0}) has begun scaling. Current state: {1} I(2)v2. Desired state: {3} I{4}v2 workers.|
+|Scale operations have completed. | An App Service plan ({0}) has finished scaling. Current state: {1} I{2}v2 workers. |
+|Scale operations were interrupted. | An App Service plan ({0}) was interrupted while scaling. Previous desired state: {1} I{2}v2 workers. New desired state: {3} I{4}v2 workers. |
+|Scale operations have failed. | An App Service plan ({0}) has failed to scale. Current state: {1} I{2}v2 workers. |
+
+To enable logging, follow these steps:
+
+1. In the portal, go to **Diagnostic settings**.
1. Select **Add diagnostic setting**.
1. Provide a name for the log integration.
1. Select and configure the log destinations that you want.
1. Select **AppServiceEnvironmentPlatformLogs**.
-![ASE diagnostic log settings][4]
+![Screenshot that shows how to enable logging.][4]
+
+If you integrate with Azure Monitor Logs, you can see the logs by selecting **Logs** from the App Service Environment portal, and creating a query against **AppServiceEnvironmentPlatformLogs**. Logs are only emitted when your App Service Environment has an event that triggers the logs. If your App Service Environment doesn't have such an event, there won't be any logs. To quickly see an example of logs, perform a scale operation with an App Service plan. You can then run a query against **AppServiceEnvironmentPlatformLogs** to see those logs.
-If you integrate with Log Analytics, you can see the logs by selecting **Logs** from the ASE portal and creating a query against **AppServiceEnvironmentPlatformLogs**. Logs are only emitted when your ASE has an event that will trigger it. If your ASE doesn't have such an event, there won't be any logs. To quickly see an example of logs in your Log Analytics workspace, perform a scale operation with an App Service plan in your ASE. You can then run a query against **AppServiceEnvironmentPlatformLogs** to see those logs.
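You can also run such a query from the Azure CLI instead of the portal. The sketch below assumes the `az monitor log-analytics query` command is available in your CLI version (it might require the `log-analytics` extension), and uses a placeholder workspace GUID.

```azurecli
# Query the last day of App Service Environment platform logs for scale events.
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "AppServiceEnvironmentPlatformLogs | where ResultDescription contains 'has begun scaling'" \
  --timespan P1D
```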
+### Create an alert
-### Creating an alert
+To create an alert against your logs, follow the instructions in [Create, view, and manage log alerts by using Azure Monitor](../../azure-monitor/alerts/alerts-log.md). In brief:
-To create an alert against your logs, follow the instructions in [Create, view, and manage log alerts using Azure Monitor](../../azure-monitor/alerts/alerts-log.md). In brief:
+1. Open the **Alerts** page in your App Service Environment portal.
+1. Select **New alert rule**.
+1. For **Resource**, select your Azure Monitor Logs workspace.
+1. Set your condition with a custom log search to use a query. For example, you might set the following: **AppServiceEnvironmentPlatformLogs | where ResultDescription contains *has begun scaling***. Set the threshold as appropriate.
+1. Add or create an action group (optional). The action group is where you define the response to the alert, such as sending an email or an SMS message.
+1. Name your alert and save it.
-* Open the Alerts page in your ASE portal
-* Select **New alert rule**
-* Select your Resource to be your Log Analytics workspace
-* Set your condition with a custom log search to use a query like, "AppServiceEnvironmentPlatformLogs | where ResultDescription contains "has begun scaling" or whatever you want. Set the threshold as appropriate.
-* Add or create an action group as desired. The action group is where you define the response to the alert such as sending an email or an SMS message
-* Name your alert and save it.
+## Internal encryption
-## Internal Encryption
+You can't see the internal components or the communication within the App Service Environment system. To enable higher throughput, encryption isn't enabled by default between internal components. The system is secure because the traffic can't be monitored or accessed. If you have a compliance requirement for complete encryption of the data path, you can enable this. Select **Configuration**, as shown in the following screenshot.
-The App Service Environment operates as a black box system where you cannot see the internal components or the communication within the system. To enable higher throughput, encryption is not enabled by default between internal components. The system is secure as the traffic is inaccessible to being monitored or accessed. If you have a compliance requirement though that requires complete encryption of the data path from end to end encryption, you can enable this in the ASE **Configuration** UI.
+![Screenshot that shows how to enable internal encryption.][5]
-![Enable internal encryption][5]
+This option encrypts internal network traffic, and also encrypts the pagefile and the worker disks. Be aware that this option can affect your system performance. Your App Service Environment will be in an unstable state until the change is fully propagated. Complete propagation of the change can take a few hours to complete, depending on how many instances you have.
-This will encrypt internal network traffic in your ASE between the front ends and workers, encrypt the pagefile and also encrypt the worker disks. After the InternalEncryption clusterSetting is enabled, there can be an impact to your system performance. When you make the change to enable InternalEncryption, your ASE will be in an unstable state until the change is fully propagated. Complete propagation of the change can take a few hours to complete, depending on how many instances you have in your ASE. We highly recommend that you do not enable this on an ASE while it is in use. If you need to enable this on an actively used ASE, we highly recommend that you divert traffic to a backup environment until the operation completes.
+Avoid enabling this option while you're using App Service Environment. If you must do so, it's a good idea to divert traffic to a backup until the operation finishes.
## Upgrade preference
-If you have multiple ASEs, you might want some ASEs to be upgraded before others. This behavior can be enabled through your ASE portal. Under **Configuration** you have the option to set **Upgrade preference**. The three possible values are:
+If you have multiple App Service Environments, you might want some of them to be upgraded before others. You can enable this behavior through your App Service Environment portal. Under **Configuration**, you have the option to set **Upgrade preference**. The possible values are:
-- **None**: Azure will upgrade your ASE in no particular batch. This value is the default.
-- **Early**: Your ASE will be upgraded in the first half of the App Service upgrades.
-- **Late**: Your ASE will be upgraded in the second half of the App Service upgrades.
+- **None**: Azure upgrades in no particular batch. This value is the default.
+- **Early**: Upgrade in the first half of the App Service upgrades.
+- **Late**: Upgrade in the second half of the App Service upgrades.
-Select the value desired and select **Save**. The default for any ASE is **None**.
+Select the value you want, and then select **Save**.
-![ASE configuration portal][5]
+![Screenshot that shows the App Service Environment configuration portal.][5]
-The **upgradePreferences** feature makes the most sense when you have multiple ASEs because your "Early" ASEs will be upgraded before your "Late" ASEs. When you have multiple ASEs, you should set your development and test ASEs to be "Early" and your production ASEs to be "Late".
+This feature makes the most sense when you have multiple App Service Environments, and you might benefit from sequencing the upgrades. For example, you might set your development and test App Service Environments to be early, and your production App Service Environments to be late.
-## Delete an ASE
+## Delete an App Service Environment
-To delete an ASE:
+To delete:
-1. Select **Delete** at the top of the **App Service Environment** pane.
-1. Enter the name of your ASE to confirm that you want to delete it. When you delete an ASE, you also delete all the content within it.
-![ASE deletion][3]
+1. At the top of the **App Service Environment** pane, select **Delete**.
+1. Enter the name of your App Service Environment to confirm that you want to delete it. When you delete an App Service Environment, you also delete all the content within it.
+ ![Screenshot that shows how to delete.][3]
1. Select **OK**.

<!--Image references-->
To delete an ASE:
[AppDeploy]: ../deploy-local-git.md
[ASEWAF]: ./integrate-with-application-gateway.md
[AppGW]: ../../web-application-firewall/ag/ag-overview.md
-[logalerts]: ../../azure-monitor/alerts/alerts-log.md
+[logalerts]: ../../azure-monitor/alerts/alerts-log.md
azure-arc Concepts Distributed Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/concepts-distributed-postgres-hyperscale.md
It is important to know about a following concepts to benefit the most from Azur
- Types of tables: distributed tables, reference tables and local tables
- Shards
-See more information at [Nodes and tables in Azure Database for PostgreSQL ΓÇô Hyperscale (Citus)](../../postgresql/concepts-hyperscale-nodes.md).
+See more information at [Nodes and tables in Azure Database for PostgreSQL ΓÇô Hyperscale (Citus)](../../postgresql/hyperscale/concepts-nodes.md).
## Determine the application type

Clearly identifying the type of application you are building is important. Why?
The recommended distribution varies by the type of application and its query pat
The first step in data modeling is to identify which of them more closely resembles your application.
-See details at [Determining application type](../../postgresql/concepts-hyperscale-app-type.md).
+See details at [Determining application type](../../postgresql/hyperscale/concepts-app-type.md).
## Choose a distribution column
Why choose a distributed column?
This is one of the most important modeling decisions you'll make. Azure Arc-enabled PostgreSQL Hyperscale stores rows in shards based on the value of the rows' distribution column. The correct choice groups related data together on the same physical nodes, which makes queries fast and adds support for all SQL features. An incorrect choice makes the system run slowly and won't support all SQL features across nodes. This article gives distribution column tips for the two most common hyperscale scenarios.
-See details at [Choose distribution columns](../../postgresql/concepts-hyperscale-choose-distribution-column.md).
+See details at [Choose distribution columns](../../postgresql/hyperscale/concepts-choose-distribution-column.md).
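As a small illustration of what choosing a distribution column looks like in practice, the following `psql` call distributes a hypothetical `events` table on its `tenant_id` column by using the Citus `create_distributed_table` function. The connection string, table, and column are placeholders for your own values.

```bash
# Distribute a (hypothetical) events table on tenant_id across the worker nodes.
psql "postgresql://postgres:<EnterYourPassword>@<coordinator-endpoint>:<port>" \
  -c "SELECT create_distributed_table('events', 'tenant_id');"
```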
## Table colocation
See details at [Choose distribution columns](../../postgresql/concepts-hyperscal
Colocation is about storing related information together on the same nodes. Queries can go fast when all the necessary data is available without any network traffic. Colocating related data on different nodes allows queries to run efficiently in parallel on each node.
-See details at [Table colocation](../../postgresql/concepts-hyperscale-colocation.md).
+See details at [Table colocation](../../postgresql/hyperscale/concepts-colocation.md).
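To sketch how colocation is expressed, a related (hypothetical) `orders` table can be distributed on the same column and explicitly colocated with the `events` table from the previous example by using the `colocate_with` argument. The names are placeholders.

```bash
# Distribute orders on tenant_id and colocate its shards with the events table.
psql "postgresql://postgres:<EnterYourPassword>@<coordinator-endpoint>:<port>" \
  -c "SELECT create_distributed_table('orders', 'tenant_id', colocate_with => 'events');"
```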
## Next steps
azure-arc Create Postgresql Hyperscale Server Group Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-data-studio.md
While indicating 1 worker works, we do not recommend you use it. This deployment
- [Manage your server group using Azure Data Studio](manage-postgresql-hyperscale-server-group-with-azure-data-studio.md)
- [Monitor your server group](monitor-grafana-kibana.md)
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for Postgres Hyperscale:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
While indicating 1 worker works, we do not recommend you use it. This deployment
- Connect to your Azure Arc-enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to potentially benefit from better performance:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
psql postgresql://postgres:<EnterYourPassword>@10.0.0.4:30655
- Connect to your Azure Arc-enabled PostgreSQL Hyperscale: read [Get Connection Endpoints And Connection Strings](get-connection-endpoints-and-connection-strings-postgres-hyperscale.md)
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to potentially benefit from better performance:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc Migrate Postgresql Data Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/migrate-postgresql-data-into-postgresql-hyperscale-server-group.md
Within your Arc setup you can use `psql` to connect to your Postgres instance, s
## Next steps

- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for PostgreSQL Hyperscale:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> *In these documents, skip the sections **Sign in to the Azure portal**, and **Create an Azure Database for Postgres - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc Restore Adventureworks Sample Db Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group.md
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --use
## Suggested next steps

- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for PostgreSQL Hyperscale:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc Scale Out In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/scale-out-in-postgresql-hyperscale-server-group.md
You scale in when you remove Postgres instances (Postgres Hyperscale worker node
## Get started

If you are already familiar with the scaling model of Azure Arc-enabled PostgreSQL Hyperscale or Azure Database for PostgreSQL Hyperscale (Citus), you may skip this paragraph. If you are not, it is recommended you start by reading about this scaling model in the documentation page of Azure Database for PostgreSQL Hyperscale (Citus). Azure Database for PostgreSQL Hyperscale (Citus) is the same technology that is hosted as a service in Azure (Platform as a Service, also known as PaaS) instead of being offered as part of Azure Arc-enabled Data Services.

-- [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
-- [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
-- [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
-- [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
-- [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
-- [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
-- [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+- [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+- [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+- [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+- [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+- [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+- [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+- [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
The scale-in operation is an online operation. Your applications continue to acc
- Read about how to [scale up and down (memory, vCores) your Azure Arc-enabled PostgreSQL Hyperscale server group](scale-up-down-postgresql-hyperscale-server-group-using-cli.md)
- Read about how to set server parameters in your Azure Arc-enabled PostgreSQL Hyperscale server group
- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for Postgres Hyperscale:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
> \* In the documents above, skip the sections **Sign in to the Azure portal** and **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud, but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
azure-arc What Is Azure Arc Enabled Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/what-is-azure-arc-enabled-postgres-hyperscale.md
With the Direct connectivity mode offered by Azure Arc-enabled data services you
- **Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to potentially benefit from better performances**:
- * [Nodes and tables](../../postgresql/concepts-hyperscale-nodes.md)
- * [Determine application type](../../postgresql/concepts-hyperscale-app-type.md)
- * [Choose a distribution column](../../postgresql/concepts-hyperscale-choose-distribution-column.md)
- * [Table colocation](../../postgresql/concepts-hyperscale-colocation.md)
- * [Distribute and modify tables](../../postgresql/howto-hyperscale-modify-distributed-tables.md)
- * [Design a multi-tenant database](../../postgresql/tutorial-design-database-hyperscale-multi-tenant.md)*
- * [Design a real-time analytics dashboard](../../postgresql/tutorial-design-database-hyperscale-realtime.md)*
+ * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
+ * [Determine application type](../../postgresql/hyperscale/concepts-app-type.md)
+ * [Choose a distribution column](../../postgresql/hyperscale/concepts-choose-distribution-column.md)
+ * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md)
+ * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md)
+ * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
+ * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-release-notes-archive.md
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
-## June 2021
+## Version 1.9 - July 2021
-Version 1.7
+### New features
+
+Added support for the Indonesian language
+
+### Fixed
+
+Fixed a bug that prevented extension management in the West US 3 region
+
+## Version 1.8 - July 2021
+
+### New features
+
+- Improved reliability when installing the Azure Monitor Agent extension on Red Hat and CentOS systems
+- Added agent-side enforcement of max resource name length (54 characters)
+- Guest Configuration policy improvements:
+ - Added support for PowerShell-based Guest Configuration policies on Linux operating systems
+ - Added support for multiple assignments of the same Guest Configuration policy on the same server
+ - Upgraded PowerShell Core to version 7.1 on Windows operating systems
+
+### Fixed
+
+- The agent will continue running if it is unable to write service start/stop events to the Windows application event log
+
+## Version 1.7 - June 2021
### New features
Version 1.7
- Onboarding continues instead of aborting if OS information cannot be obtained - Improved reliability when installing the Log Analytics agent for Linux extension on Red Hat and CentOS systems
-## May 2021
-
-Version 1.6
+## Version 1.6 - May 2021
### New features
Version 1.6
- Added V2 signature support for extension validation. - Minor update to data logging.
-## April 2021
-
-Version 1.5
+## Version 1.5 - April 2021
### New features
Version 1.5
- New `-json` parameter to direct output results in JSON format (when used with -useStderr). - Collect other instance metadata - Manufacturer, model, and cluster resource ID (for Azure Stack HCI nodes).
-## March 2021
-
-Version 1.4
+## Version 1.4 - March 2021
### New features
Version 1.4
Network endpoint checks are now faster.
-## December 2020
-
-Version: 1.3
+## Version 1.3 - December 2020
### New features
Added support for Windows Server 2008 R2 SP1.
Resolved issue preventing the Custom Script Extension on Linux from installing successfully.
-## November 2020
-
-Version: 1.2
+## Version 1.2 - November 2020
### Fixed Resolved issue where proxy configuration could be lost after upgrade on RPM-based distributions.
-## October 2020
-
-Version: 1.1
+## Version 1.1 - October 2020
### Fixed
Version: 1.1
- GuestConfig agent support for US Gov Virginia region. - GuestConfig agent extension report messages to be more verbose if there is a failure.
-## September 2020
+## Version 1.0 - September 2020
-Version: 1.0 (General Availability)
+This version is the first generally available release of the Azure Connected Machine Agent.
### Plan for change
Version: 1.0 (General Availability)
- Resolved issues when attempting to install agent on server running Windows Server 2012 R2. - Improvements to extension installation reliability
-## August 2020
-
-Version: 0.11
--- This release previously announced support for Ubuntu 20.04. Because some Azure VM extensions don't support Ubuntu 20.04, support for this version of Ubuntu is being removed.--- Reliability improvements for extension deployments.-
-### Known issues
-
-If you are using an older version of the Linux agent and it's configured to use a proxy server, you need to reconfigure the proxy server setting after the upgrade. To do this, run `sudo azcmagent_proxy add http://proxyserver.local:83`.
- ## Next steps - Before evaluating or enabling Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-release-notes.md
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
-## November 2021
+## Version 1.14 - January 2022
-Version 1.13
+### Fixed
+
+- A state corruption issue in the extension manager that could cause extension operations to get stuck in transient states has been fixed. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Version 1.13 - November 2021
+
+### Known issues
+
+- Extensions may get stuck in transient states (creating, deleting, updating) on Windows machines running the 1.13 agent in certain conditions. Microsoft recommends upgrading to agent version 1.14 as soon as possible to resolve this issue.
### Fixed
Version 1.13
- Extension operations will execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](agent-overview.md#networking-configuration)). The extension manager will fall back to the existing behavior of checking every 5 minutes when the notification service cannot be reached. - Detection of the AWS account ID, instance ID, and region information for servers running in Amazon Web Services.
-## October 2021
-
-Version 1.12
+## Version 1.12 - October 2021
### Fixed
Version 1.12
- `azcmagent_proxy remove` command on Linux now correctly removes environment variables on Red Hat Enterprise Linux and related distributions. - `azcmagent logs` now includes the computer name and timestamp to help disambiguate log files.
-## September 2021
-
-Version 1.11
+## Version 1.11 - September 2021
### Fixed
Version 1.11
- The guest configuration policy agent will now automatically retry if an error is encountered during service start or restart events. - Fixed an issue that prevented guest configuration audit policies from successfully executing on Linux machines.
-## August 2021
-
-Version 1.10
+## Version 1.10 - August 2021
### Fixed - The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/policy/concepts/guest-configuration-policy-effects.md). - The guest configuration policy agent now restarts every 48 hours instead of every 6 hours.
-## July 2021
-
-Version 1.9
-
-## New features
-
-Added support for the Indonesian language
-
-### Fixed
-
-Fixed a bug that prevented extension management in the West US 3 region
-
-Version 1.8
-
-### New features
--- Improved reliability when installing the Azure Monitor Agent extension on Red Hat and CentOS systems-- Added agent-side enforcement of max resource name length (54 characters)-- Guest Configuration policy improvements:
- - Added support for PowerShell-based Guest Configuration policies on Linux operating systems
- - Added support for multiple assignments of the same Guest Configuration policy on the same server
- - Upgraded PowerShell Core to version 7.1 on Windows operating systems
-
-### Fixed
--- The agent will continue running if it is unable to write service start/stop events to the Windows application event log- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
Active geo-replication groups up to five Enterprise Azure Cache for Redis instan
1. In the **Advanced** tab of **New Redis Cache** creation UI, select **Enterprise** for **Clustering Policy**.
- ![Configure active geo-replication](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-not-configured.png)
+ For more information on choosing **Clustering policy**, see [Clustering Policy](quickstart-create-redis-enterprise.md#clustering-policy).
+
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-not-configured.png" alt-text="Configure active geo-replication":::
1. Select **Configure** to set up **Active geo-replication**. 1. Create a new replication group, for a first cache instance, or select an existing one from the list.
- ![Link caches](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-new-group.png)
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-new-group.png" alt-text="Link caches":::
1. Select **Configure** to finish.
- ![Active geo-replication configured](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-configured.png)
+ :::image type="content" source="media/cache-how-to-active-geo-replication/cache-active-geo-replication-configured.png" alt-text="Active geo-replication configured":::
1. Wait for the first cache to be created successfully. Repeat the above steps for each additional cache instance in the geo-replication group.
azure-cache-for-redis Quickstart Create Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
Last updated 02/08/2021
# Quickstart: Create a Redis Enterprise cache
-Azure Cache for Redis' Enterprise tiers provide fully integrated and managed [Redis Enterprise](https://redislabs.com/redis-enterprise/) on Azure. These new tiers are:
+The Azure Cache for Redis Enterprise tiers provide fully integrated and managed [Redis Enterprise](https://redislabs.com/redis-enterprise/) on Azure. These new tiers are:
* Enterprise, which uses volatile memory (DRAM) on a virtual machine to store data * Enterprise Flash, which uses both volatile and non-volatile memory (NVMe or SSD) to store data.
You'll need an Azure subscription before you begin. If you don't have one, creat
1. Select **Next: Networking** and skip.
-1. Select **Next: Advanced** and set **Clustering policy** to **Enterprise** for a non-clustered cache. Enable **Non-TLS access only** if you plan to connect to the new cache without using TLS. Disabling TLS is **not** recommended, however.
+1. Select **Next: Advanced**.
+
+ Enable **Non-TLS access only** if you plan to connect to the new cache without using TLS. Disabling TLS is **not** recommended, however.
+
+ Set **Clustering policy** to **Enterprise** for a non-clustered cache. For more information on choosing **Clustering policy**, see [Clustering Policy](#clustering-policy).
:::image type="content" source="media/cache-create/enterprise-tier-advanced.png" alt-text="Screenshot that shows the Enterprise tier Advanced tab."::: > [!NOTE]
- > Redis Enterprise supports two clustering policies. Use the **Enterprise** policy to access
- > your cache using the regular Redis API, and **OSS** the OSS Cluster API.
+ > Redis Enterprise supports two clustering policies. Use the **Enterprise** policy to access your cache using the regular Redis API, and the **OSS** policy to access it using the OSS Cluster API.
> > [!NOTE]
You'll need an Azure subscription before you begin. If you don't have one, creat
It takes some time for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
+## Clustering Policy
+
+The OSS Cluster mode allows clients to communicate with Redis using the same Redis Cluster API as open-source Redis. This mode provides optimal latency and near-linear scalability improvements when scaling the cluster. Your client library must support clustering to use the OSS Cluster mode.
+
+The Enterprise Cluster mode is a simpler configuration that exposes a single endpoint for client connections. This mode allows an application designed to use a standalone, or non-clustered, Redis server to seamlessly operate with a scalable, multi-node, Redis implementation. Enterprise Cluster mode abstracts the Redis Cluster implementation from the client by internally routing requests to the correct node in the cluster. Clients are not required to support OSS Cluster mode.
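To make the difference concrete, here is a minimal Python sketch using the open-source `redis-py` package. The host name and access key are placeholders, and the port (10000 is common for the Enterprise tiers) should be confirmed on your cache's overview page; only the choice of client class changes between the two policies.

```python
# A minimal sketch, assuming `pip install redis` (redis-py 4.1+), a placeholder host name
# and access key, and port 10000 (common for the Enterprise tiers; confirm for your cache).
import redis

CACHE_HOST = "<cache-name>.<region>.redisenterprise.cache.azure.net"  # placeholder
ACCESS_KEY = "<access-key>"  # placeholder

# Enterprise clustering policy: a single endpoint, so a standalone client works unchanged.
standalone = redis.Redis(host=CACHE_HOST, port=10000, password=ACCESS_KEY, ssl=True)
standalone.set("greeting", "hello")
print(standalone.get("greeting"))

# OSS clustering policy: the client must speak the Redis Cluster protocol.
cluster = redis.RedisCluster(host=CACHE_HOST, port=10000, password=ACCESS_KEY, ssl=True)
cluster.set("greeting", "hello")
print(cluster.get("greeting"))
```

With the Enterprise policy, code written for a standalone Redis server keeps working unchanged; with the OSS policy, a cluster-aware client gives the latency and scaling benefits described above, provided your library supports the Redis Cluster protocol.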
+ ## Next steps In this quickstart, you learned how to create an Enterprise tier instance of Azure Cache for Redis.
azure-functions Durable Functions Http Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-http-features.md
The "call HTTP" API can automatically implement the client side of the polling c
Durable Functions natively supports calls to APIs that accept Azure Active Directory (Azure AD) tokens for authorization. This support uses [Azure managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to acquire these tokens.
-The following code is an example of a .NET orchestrator function. The function makes authenticated calls to restart a virtual machine by using the Azure Resource Manager [virtual machines REST API](/rest/api/compute/virtualmachines).
+The following code is an example of an orchestrator function. The function makes authenticated calls to restart a virtual machine by using the Azure Resource Manager [virtual machines REST API](/rest/api/compute/virtualmachines).
# [C#](#tab/csharp)
azure-functions Functions Compare Logic Apps Ms Flow Webjobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md
Power Automate and Logic Apps are both *designer-first* integration services tha
Power Automate is built on top of Logic Apps. They share the same workflow designer and the same [connectors](../connectors/apis-list.md).
-Power Automate empowers any office worker to perform simple integrations (for example, an approval process on a SharePoint Document Library) without going through developers or IT. Logic Apps can also enable advanced integrations (for example, B2B processes) where enterprise-level Azure DevOps and security practices are required. It's typical for a business workflow to grow in complexity over time. Accordingly, you can start with a flow at first, and then convert it to a logic app as needed.
+Power Automate empowers any office worker to perform simple integrations (for example, an approval process on a SharePoint Document Library) without going through developers or IT. Logic Apps can also enable advanced integrations (for example, B2B processes) where enterprise-level Azure DevOps and security practices are required. It's typical for a business workflow to grow in complexity over time.
The following table helps you determine whether Power Automate or Logic Apps is best for a particular integration:
The following table helps you determine whether Power Automate or Logic Apps is
| | | | | **Users** |Office workers, business users, SharePoint administrators |Pro integrators and developers, IT pros | | **Scenarios** |Self-service |Advanced integrations |
-| **Design tool** |In-browser and mobile app, UI only |In-browser and [Visual Studio](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md), [Code view](../logic-apps/logic-apps-author-definitions.md) available |
+| **Design tool** |In-browser and mobile app, UI only |In-browser, [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md), and [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) with code view available |
| **Application lifecycle management (ALM)** |Design and test in non-production environments, promote to production when ready |Azure DevOps: source control, testing, support, automation, and manageability in [Azure Resource Manager](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md) | | **Admin experience** |Manage Power Automate environments and data loss prevention (DLP) policies, track licensing: [Admin center](https://admin.flow.microsoft.com) |Manage resource groups, connections, access management, and logging: [Azure portal](https://portal.azure.com) | | **Security** |Microsoft 365 security audit logs, DLP, [encryption at rest](https://wikipedia.org/wiki/Data_at_rest#Encryption) for sensitive data |Security assurance of Azure: [Azure security](https://www.microsoft.com/en-us/trustcenter/Security/AzureSecurity), [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/), [audit logs](https://azure.microsoft.com/blog/azure-audit-logs-ux-refresh/) |
You can mix and match services when you build an orchestration, calling function
| | Durable Functions | Logic Apps | | | | | | **Development** | Code-first (imperative) | Designer-first (declarative) |
-| **Connectivity** | [About a dozen built-in binding types](functions-triggers-bindings.md#supported-bindings), write code for custom bindings | [Large collection of connectors](../connectors/apis-list.md), [Enterprise Integration Pack for B2B scenarios](../logic-apps/logic-apps-enterprise-integration-overview.md), [build custom connectors](../logic-apps/custom-connector-overview.md) |
-| **Actions** | Each activity is an Azure function; write code for activity functions |[Large collection of ready-made actions](../logic-apps/logic-apps-workflow-actions-triggers.md)|
-| **Monitoring** | [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [Azure Monitor logs](../logic-apps/monitor-logic-apps.md)|
+| **Connectivity** | [About a dozen built-in binding types](functions-triggers-bindings.md#supported-bindings), write code for custom bindings | [Large collection of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors), [Enterprise Integration Pack for B2B scenarios](../logic-apps/logic-apps-enterprise-integration-overview.md), [build custom connectors](/connectors/custom-connectors/) |
+| **Actions** | Each activity is an Azure function; write code for activity functions |[Large collection of ready-made actions](/connectors/connector-reference/connector-reference-logicapps-connectors)|
+| **Monitoring** | [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [Azure Monitor logs](../logic-apps/monitor-logic-apps-log-analytics.md), [Microsoft Defender for Cloud](../logic-apps/healthy-unhealthy-resource.md) |
| **Management** | [REST API](durable/durable-functions-http-api.md), [Visual Studio](/visualstudio/azure/vs-azure-tools-resources-managing-with-cloud-explorer) | [Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md), [REST API](/rest/api/logic/), [PowerShell](/powershell/module/az.logicapp), [Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md) | | **Execution context** | Can run [locally](./functions-kubernetes-keda.md) or in the cloud | Runs only in the cloud|
azure-functions Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/ip-addresses.md
You can control the IP address of outbound traffic from your functions by using
### App Service Environments
-For full control over the IP addresses, both inbound and outbound, we recommend [App Service Environments](../app-service/environment/intro.md) (the [Isolated tier](https://azure.microsoft.com/pricing/details/app-service/) of App Service plans). For more information, see [App Service Environment IP addresses](../app-service/environment/network-info.md#ase-ip-addresses) and [How to control inbound traffic to an App Service Environment](../app-service/environment/app-service-app-service-environment-control-inbound-traffic.md).
+For full control over the IP addresses, both inbound and outbound, we recommend [App Service Environments](../app-service/environment/intro.md) (the [Isolated tier](https://azure.microsoft.com/pricing/details/app-service/) of App Service plans). For more information, see [App Service Environment IP addresses](../app-service/environment/network-info.md#ip-addresses) and [How to control inbound traffic to an App Service Environment](../app-service/environment/app-service-app-service-environment-control-inbound-traffic.md).
To find out if your function app runs in an App Service Environment:
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/azure-maps-authentication.md
# Authentication with Azure Maps
-Azure Maps supports two ways to authenticate requests: Shared Key authentication and [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) authentication. This article explains both authentication methods to help guide your implementation of Azure Maps services.
+Azure Maps supports three ways to authenticate requests: Shared Key authentication, [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) authentication, and Shared Access Signature (SAS) token authentication. This article explains these authentication methods to help guide your implementation of Azure Maps services. The article also describes additional account controls, such as disabling local authentication with Azure Policy and configuring Cross-Origin Resource Sharing (CORS).
> [!NOTE] > To improve secure communication with Azure Maps, we now support Transport Layer Security (TLS) 1.2, and we're retiring support for TLS 1.0 and 1.1. If you currently use TLS 1.x, evaluate your TLS 1.2 readiness and develop a migration plan with the testing described in [Solving the TLS 1.0 Problem](/security/solving-tls1-problem). ## Shared Key authentication
- Primary and secondary keys are generated after the Azure Maps account is created. You're encouraged to use the primary key as the subscription key when calling Azure Maps with shared key authentication. Shared Key authentication passes a key generated by an Azure Maps account to an Azure Maps service. For each request to Azure Maps services, add the *subscription key* as a parameter to the URL. The secondary key can be used in scenarios like rolling key changes.
+For information about viewing your keys in the Azure portal, see [Manage authentication](./how-to-manage-authentication.md#view-authentication-details).
+
+Primary and secondary keys are generated after the Azure Maps account is created. You're encouraged to use the primary key as the subscription key when calling Azure Maps with shared key authentication. Shared Key authentication passes a key generated by an Azure Maps account to an Azure Maps service. For each request to Azure Maps services, add the _subscription key_ as a parameter to the URL. The secondary key can be used in scenarios like rolling key changes.
-Example using the *subscription key* as a parameter in your URL:
+Example using the _subscription key_ as a parameter in your URL:
```http
https://atlas.microsoft.com/mapData/upload?api-version=1.0&dataFormat=zip&subscription-key={Azure-Maps-Primary-Subscription-key}
-```
-
-For information about viewing your keys in the Azure portal, see [Manage authentication](./how-to-manage-authentication.md#view-authentication-details).
+```
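As an illustration only, the following Python sketch sends a request with the subscription key as a query parameter using the `requests` package; the Search Address endpoint and query shown here are assumptions for the example, so substitute the API you actually call.

```python
# A minimal sketch, assuming `pip install requests` and a valid primary key; the Search
# Address endpoint and query are illustrative, so substitute the API you actually call.
import requests

SUBSCRIPTION_KEY = "<Azure-Maps-Primary-Subscription-key>"  # placeholder

response = requests.get(
    "https://atlas.microsoft.com/search/address/json",
    params={
        "api-version": "1.0",
        "query": "400 Broad St, Seattle, WA",
        "subscription-key": SUBSCRIPTION_KEY,  # shared key passed as a query parameter
    },
)
response.raise_for_status()
print(response.json())
```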
-> [!NOTE]
-> Primary and Secondary keys should be treated as sensitive data. The shared key is used to authenticate all Azure Maps REST APIs. Users who use a shared key should abstract the API key away, either through environment variables or secure secret storage, where it can be managed centrally.
+> [!IMPORTANT]
+> Primary and Secondary keys should be treated as sensitive data. The shared key is used to authenticate all Azure Maps REST APIs. Users who use a shared key should abstract the API key away, either through environment variables or secure secret storage, where it can be managed centrally.
## Azure AD authentication
Azure Subscriptions are provided with an Azure AD tenant to enable fine grained
Azure Maps accepts **OAuth 2.0** access tokens for Azure AD tenants associated with an Azure subscription that contains an Azure Maps account. Azure Maps also accepts tokens for:
-* Azure AD users
-* Partner applications that use permissions delegated by users
-* Managed identities for Azure resources
+- Azure AD users
+- Partner applications that use permissions delegated by users
+- Managed identities for Azure resources
-Azure Maps generates a *unique identifier (client ID)* for each Azure Maps account. You can request tokens from Azure AD when you combine this client ID with additional parameters.
+Azure Maps generates a _unique identifier_ (client ID) for each Azure Maps account. You can request tokens from Azure AD when you combine this client ID with additional parameters.
For more information about how to configure Azure AD and request tokens for Azure Maps, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md).
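For illustration, here is a hedged Python sketch that acquires a token with the `azure-identity` package and sends it to an Azure Maps REST API; the token scope, the endpoint, and the client ID value are assumptions to confirm for your own account.

```python
# A hedged sketch, not an official sample. Assumes `pip install azure-identity requests`,
# that the Azure Maps token scope is "https://atlas.microsoft.com/.default", and that
# MAPS_CLIENT_ID is your Azure Maps account's client ID (all assumptions to verify).
import requests
from azure.identity import DefaultAzureCredential

MAPS_CLIENT_ID = "<azure-maps-account-client-id>"  # placeholder GUID from the portal

credential = DefaultAzureCredential()
access_token = credential.get_token("https://atlas.microsoft.com/.default")

response = requests.get(
    "https://atlas.microsoft.com/search/address/json",  # illustrative endpoint
    params={"api-version": "1.0", "query": "15127 NE 24th Street, Redmond, WA"},
    headers={
        "x-ms-client-id": MAPS_CLIENT_ID,
        "Authorization": f"Bearer {access_token.token}",
    },
)
print(response.status_code)
```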
-For general information about authenticating with Azure AD, see [What is authentication?](../active-directory/develop/authentication-vs-authorization.md).
+For general information about authenticating with Azure AD, see [Authentication vs. authorization](../active-directory/develop/authentication-vs-authorization.md).
-### Managed identities for Azure resources and Azure Maps
+## Managed identities for Azure resources and Azure Maps
-[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) provide Azure services with an automatically managed application based security principal that can authenticate with Azure AD. With Azure role-based access control (Azure RBAC), the managed identity security principal can be authorized to access Azure Maps services. Some examples of managed identities include: Azure App Service, Azure Functions, and Azure Virtual Machines. For a list of managed identities, see [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
+[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) provide Azure services with an automatically managed, application-based security principal that can authenticate with Azure AD. With Azure role-based access control (Azure RBAC), the managed identity security principal can be authorized to access Azure Maps services. Some examples of managed identities include: Azure App Service, Azure Functions, and Azure Virtual Machines. For a list of managed identities, see [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md). To learn how to add and remove managed identities, see [Manage authentication in Azure Maps](./how-to-manage-authentication.md).
-### Configuring application Azure AD authentication
+### Configure application Azure AD authentication
Applications will authenticate with the Azure AD tenant using one or more supported scenarios provided by Azure AD. Each Azure AD application scenario represents different requirements based on business needs. Some applications may require user sign-in experiences and other applications may require an application sign-in experience. For more information, see [Authentication flows and application scenarios](../active-directory/develop/authentication-flows-app-scenarios.md).
After the application receives an access token, the SDK and/or application sends
| x-ms-client-id | 30d7cc….9f55 |
| Authorization | Bearer eyJ0e….HNIVN |
-> [!NOTE]
+> [!NOTE]
> `x-ms-client-id` is the Azure Maps account-based GUID that appears on the Azure Maps authentication page. Here's an example of an Azure Maps route request that uses an Azure AD OAuth Bearer token:
For information about viewing your client ID, see [View authentication details](
## Authorization with role-based access control
-Azure Maps supports access to all principal types for [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) including: individual Azure AD users, groups, applications, Azure resources, and Azure Managed identities. Principal types are granted a set of permissions, also known as a role definition. A role definition provides permissions to REST API actions. Applying access to one or more Azure Maps accounts is known as a scope. When applying a principal, role definition, and scope then a role assignment is created.
+### Prerequisites
+
+If you are new to Azure RBAC, the [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) overview provides an introduction. Principal types are granted a set of permissions, also known as a role definition. A role definition provides permissions to REST API actions. Azure Maps supports access for all principal types of [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) including: individual Azure AD users, groups, applications, Azure resources, and Azure managed identities. Applying access to one or more Azure Maps accounts is known as a scope. When a principal, role definition, and scope are applied, a role assignment is created.
+
+### Overview
The next sections discuss concepts and components of Azure Maps integration with Azure RBAC. As part of the process to set up your Azure Maps account, an Azure AD directory is associated to the Azure subscription, which the Azure Maps account resides.
When you configure Azure RBAC, you choose a security principal and apply it to a
The following role definition types exist to support application scenarios.
-| Azure Role Definition | Description |
-| :-- | :- |
-| Azure Maps Data Reader | Provides access to immutable Azure Maps REST APIs. |
-| Azure Maps Data Contributor | Provides access to mutable Azure Maps REST APIs. Mutability is defined by the actions: write and delete. |
-| Custom Role Definition | Create a crafted role to enable flexible restricted access to Azure Maps REST APIs. |
+| Azure Role Definition | Description |
+| :---------- | :---------- |
+| Azure Maps Search and Render Data Reader | Provides access to only search and render Azure Maps REST APIs to limit access to basic web browser use cases. |
+| Azure Maps Data Reader | Provides access to immutable Azure Maps REST APIs. |
+| Azure Maps Data Contributor | Provides access to mutable Azure Maps REST APIs. Mutability is defined by the actions: write and delete. |
+| Custom Role Definition | Create a crafted role to enable flexible restricted access to Azure Maps REST APIs. |
Some Azure Maps services may require elevated privileges to perform write or delete actions on Azure Maps REST APIs. The Azure Maps Data Contributor role is required for services that provide write or delete actions. The following table describes which services the Azure Maps Data Contributor role applies to when using write or delete actions. When only read actions are required, the Azure Maps Data Reader role can be used in place of the Azure Maps Data Contributor role.
-| Azure Maps Service | Azure Maps Role Definition |
-| :-- | :-- |
-| Data | Azure Maps Data Contributor |
-| Creator | Azure Maps Data Contributor |
-| Spatial | Azure Maps Data Contributor |
+| Azure Maps Service | Azure Maps Role Definition |
+| :---------- | :---------- |
+| [Data](/rest/api/maps/data) | Azure Maps Data Contributor |
+| [Creator](/rest/api/maps-creator/) | Azure Maps Data Contributor |
+| [Spatial](/rest/api/maps/spatial) | Azure Maps Data Contributor |
+| Batch [Search](/rest/api/maps/search) and [Route](/rest/api/maps/route) | Azure Maps Data Contributor |
For information about viewing your Azure RBAC settings, see [How to configure Azure RBAC for Azure Maps](./how-to-manage-authentication.md).
The custom role definition can then be used in a role assignment for any securit
Here are some example scenarios where custom roles can improve application security.
-| Scenario | Custom Role Data Action(s) |
-| :-- | : |
-| A public facing or interactive sign-in web page with base map tiles and no other REST APIs. | `Microsoft.Maps/accounts/services/render/read` |
-| An application, which only requires reverse geocoding and no other REST APIs. | `Microsoft.Maps/accounts/services/search/read` |
-| A role for a security principal, which requests reading of Azure Maps Creator based map data and base map tile REST APIs. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/render/read` |
+| Scenario | Custom Role Data Action(s) |
+| :---------- | :---------- |
+| A public facing or interactive sign-in web page with base map tiles and no other REST APIs. | `Microsoft.Maps/accounts/services/render/read` |
+| An application, which only requires reverse geocoding and no other REST APIs. | `Microsoft.Maps/accounts/services/search/read` |
+| A role for a security principal, which requests reading of Azure Maps Creator based map data and base map tile REST APIs. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/render/read` |
| A role for a security principal, which requires reading, writing, and deleting of Creator based map data. This can be defined as a map data editor role, but does not allow access to other REST APIs like base map tiles. | `Microsoft.Maps/accounts/services/data/read`, `Microsoft.Maps/accounts/services/data/write`, `Microsoft.Maps/accounts/services/data/delete` |
-### Understanding scope
+### Understand scope
When creating a role assignment, it is defined within the Azure resource hierarchy. At the top of the hierarchy is a [management group](../governance/management-groups/overview.md) and the lowest is an Azure resource, like an Azure Maps account. Assigning a role assignment to a resource group can enable access to multiple Azure Maps accounts or resources in the group.
Assigning a role assignment to a resource group can enable access to multiple Az
> [!TIP] > Microsoft's general recommendation is to assign access to the Azure Maps account scope because it prevents **unintended access to other Azure Maps accounts** existing in the same Azure subscription.
-## Next steps
+## Disable local authentication
+
+Azure Maps accounts support the standard Azure property in the [Azure Maps Management REST API](/rest/api/maps-management/) for `Microsoft.Maps/accounts` called `disableLocalAuth`. When `true`, all authentication to the Azure Maps data-plane REST API is disabled, except [Azure AD authentication](./azure-maps-authentication.md#azure-ad-authentication). This is configured using Azure Policy to control distribution and management of shared keys and SAS tokens. For more information, see [What is Azure Policy?](../governance/policy/overview.md).
+
+Disabling local authentication doesn't take effect immediately. Allow a few minutes for the service to block future authentication requests. To re-enable local authentication, set the property to `false` and after a few minutes local authentication will resume.
+
+```json
+{
+ // omitted other properties for brevity.
+ "properties": {
+ "disableLocalAuth": true
+ }
+}
+```
+
+## Shared access signature token authentication
++
+Shared Access Signature token authentication is in preview.
+
+Shared access signature (SAS) tokens are authentication tokens created using the JSON Web token (JWT) format and are cryptographically signed to prove authentication for an application to the Azure Maps REST API. A SAS token is created by first integrating a [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview) with an Azure Maps account in your Azure subscription. The user-assigned managed identity is given authorization to the Azure Maps account through Azure RBAC using one of the built-in or custom role definitions.
+
+Key functional differences between SAS tokens and Azure AD access tokens:
+
+- Token lifetime with a maximum expiration of 1 year (365 days).
+- Azure location and geography access control per token.
+- Rate limits per token of approximately 1 to 500 requests per second.
+- Private keys of the token are the primary and secondary keys of an Azure Maps account resource.
+- The service principal object for authorization is supplied by a user-assigned managed identity.
+
+SAS tokens are immutable. This means that once a token is created, the SAS token is valid until the expiry has been met and the configuration of the allowed regions, rate limits, and user-assigned managed identity cannot be changed. Read more below on [understanding access control](./azure-maps-authentication.md#understand-sas-token-access-control) for SAS token revocation and changes to access control.
+
+### Understand SAS token rate limits
+
+#### SAS token maximum rate limit can control billing for an Azure Maps resource
+
+By specifying a maximum rate limit on the token (`maxRatePerSecond`), the excess rate is not billed to the account, allowing you to set an upper limit on billable transactions when the token is used. However, the application will receive client error responses with `429 (TooManyRequests)` for all transactions once that limit is reached. It is the responsibility of the application to manage retry and distribution of SAS tokens. There is no limit on how many SAS tokens can be created for an account. To increase or decrease an existing token's limit, a new SAS token must be created; the old SAS token remains valid until its expiration.
+
+Estimated Example:
+
+| Approximate Maximum Rate Per Second | Actual Rate Per Second | Duration of sustained rate in seconds | Total billable transactions |
+| :---------- | :---------- | :---------- | :---------- |
+| 10 | 20 | 600 | 6000 |
+
+This is an estimate; actual rate limits vary slightly based on the ability of Azure Maps to enforce consistency within a span of time. However, this allows for preventive control of billing cost.
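The arithmetic behind the table can be sketched in a few lines of Python; this assumes the enforced limit exactly equals `maxRatePerSecond`, which in practice is only approximate.

```python
# A sketch of the billing arithmetic, assuming the enforced limit exactly equals
# maxRatePerSecond (actual enforcement is approximate, as noted above).
def billable_transactions(max_rate_per_second: int, actual_rate_per_second: int,
                          duration_seconds: int) -> int:
    allowed_per_second = min(max_rate_per_second, actual_rate_per_second)
    return allowed_per_second * duration_seconds

# 10 allowed per second, 20 attempted per second, sustained for 600 seconds:
print(billable_transactions(10, 20, 600))  # 6000 billable; the excess returns 429
```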
+
+#### Rate limits are enforced per Azure location, not globally or geographically
+
+For example, a single SAS token with a `maxRatePerSecond` of 10 can be used to limit the throughput in the `East US` location. If that same token is used in `West US 2`, a new counter is created to limit the throughput to 10 in that location, independent of the `East US` location. To control costs and improve predictability, we recommend:
+
+1. Create SAS tokens with designated allowed Azure locations for targeted geography. Continue reading to understand creating SAS tokens.
+1. Use geographic data-plane REST API endpoints, `https://us.atlas.microsoft.com` or `https://eu.atlas.microsoft.com`.
+
+Consider an application topology where the endpoint `https://us.atlas.microsoft.com` routes to the same US locations where Azure Maps services are hosted, such as `East US`, `West Central US`, or `West US 2`. The same idea applies to other geographical endpoints, such as `https://eu.atlas.microsoft.com` for `West Europe` and `North Europe`. To prevent unexpected authorization denials, use a SAS token that allows the same Azure locations that the application consumes. The endpoint location is defined using the Azure Maps Management REST API.
+
+#### Default rate limits take precedence over SAS token rate limits
+
+As described in [Azure Maps rate limits](./azure-maps-qps-rate-limits.md), individual service offerings have varying rate limits which are enforced as an aggregate of the account.
+
+Consider the case of **Search Service - Non-Batch Reverse**, which has a limit of 250 queries per second (QPS), in the following tables. Each table represents the estimated total successful transactions from example usage.
+
+The first table shows one token with a maximum of 500 requests per second, where the application's actual usage was also 500 requests per second for a duration of 60 seconds. Because **Search Service - Non-Batch Reverse** has a rate limit of 250, only 15,000 of the total 30,000 requests made in those 60 seconds are billable transactions. The remaining requests result in status code `429 (TooManyRequests)`.
+
+| Name | Approximate Maximum Rate Per Second | Actual Rate Per Second | Duration of sustained rate in seconds | Approximate total successful transactions |
+| :---------- | :---------- | :---------- | :---------- | :---------- |
+| token | 500 | 500 | 60 | ~15000 |
+
+For example, if two SAS tokens are created in the same location as an Azure Maps account, each token shares the default rate limit of 250 QPS. If both tokens are used at the same time with the same throughput, token 1 and token 2 each grant approximately 7,500 successful transactions.
+
+| Name | Approximate Maximum Rate Per Second | Actual Rate Per Second | Duration of sustained rate in seconds | Approximate total successful transactions |
+| :---------- | :---------- | :---------- | :---------- | :---------- |
+| token 1 | 250 | 250 | 60 | ~7500 |
+| token 2 | 250 | 250 | 60 | ~7500 |
+
+### Understand SAS token access control
+
+SAS tokens use RBAC to control access to the REST API. When you create a SAS token, the prerequisite managed identity on the Map Account is assigned an Azure RBAC role which grants access to specific REST API actions. See [Picking a role definition](./azure-maps-authentication.md#picking-a-role-definition) to determine which API should be allowed by the application.
+
+If you want to assign temporary access and then remove access before the SAS token expires, you will need to revoke the token. You might also revoke access if the token was unintentionally distributed with the `Azure Maps Data Contributor` role assignment, because anyone with the SAS token could read and write data to Azure Maps REST APIs, potentially exposing sensitive data or incurring unexpected financial cost from usage.
+
+There are two options to revoke access for SAS tokens:
+
+1. Regenerate the key that was used to sign the SAS token (the `primaryKey` or `secondaryKey` of the Azure Maps account).
+1. Remove the role assignment for the managed identity on the associated Azure Maps account.
+
+> [!WARNING]
+> Deleting a managed identity used by a SAS token, or revoking access control of the managed identity, will cause instances of your application that use the SAS token and managed identity to receive `401 Unauthorized` or `403 Forbidden` responses from Azure Maps REST APIs, which will disrupt the application.
+>
+> To avoid disruption:
+>
+> 1. Add a second managed identity to the Map Account and grant the new managed identity the correct role assignment.
+> 1. Create a SAS token using `secondaryKey` as the `signingKey` and distribute the new SAS token to the application.
+> 1. Regenerate the primary key, remove the managed identity from the account, and remove the role assignment for the managed identity.
++
+### Create SAS tokens
+
+To create SAS tokens, you must have `Contributor` role access to manage both Azure Maps accounts and user-assigned identities in the Azure subscription.
+
+> [!IMPORTANT]
+> Existing Azure Maps accounts created in the Azure location `global` don't support managed identities.
-To learn more about Azure RBAC, see
-> [!div class="nextstepaction"]
-> [Azure role-based access control](../role-based-access-control/overview.md)
+First, you should [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) in the same location as the Azure Maps account.
+
+> [!TIP]
+> You should use the same location for both the managed identity and the Azure Maps account.
+
+Once a managed identity is created, you can create or update the Azure Maps account and attach it. See [Manage your Azure Maps account](./how-to-manage-account-keys.md) for more information.
+
+After the account has been successfully created or updated with the managed identity, assign role-based access control for the managed identity to an Azure Maps data role at the account scope. This enables the managed identity to be given access to the Azure Maps REST API for your map account.
+
+Next, you'll need to create a SAS token using the Azure Management SDK tooling, the List SAS operation on the Account Management API, or the Shared Access Signature page of the Azure Maps account resource in the Azure portal.
+
+SAS token parameters:
+
+| Parameter Name | Example Value | Description |
+| :---------- | :---------- | :---------- |
+| signingKey | `primaryKey` | Required, the string enum value for the signing key, either `primaryKey` or `secondaryKey`, that is used to create the signature of the SAS. |
+| principalId | `<GUID>` | Required, the principalId is the object (principal) ID of the user-assigned managed identity attached to the map account. |
+| regions | `[ "eastus", "westus2", "westcentralus" ]` | Optional, the default value is `null`. The regions control in which regions the SAS token is allowed to be used when calling the Azure Maps REST [data-plane](../azure-resource-manager/management/control-plane-and-data-plane.md) API. Omitting the regions parameter allows the SAS token to be used without any constraints. When used in combination with an Azure Maps data-plane geographic endpoint like `us.atlas.microsoft.com` or `eu.atlas.microsoft.com`, the application can control usage within the specified geography. This helps prevent usage in other geographies. |
+| maxRatePerSecond | 500 | Required, the specified approximate maximum requests per second that the SAS token is granted. Once the limit is reached, additional throughput will be rate limited with HTTP status code `429 (TooManyRequests)`. |
+| start | `2021-05-24T10:42:03.1567373Z` | Required, a UTC date that specifies the date and time the token becomes active. |
+| expiry | `2021-05-24T11:42:03.1567373Z` | Required, a UTC date that specifies the date and time the token expires. The duration between start and expiry cannot be more than 365 days. |
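For illustration, the parameter names above map to a request payload shaped roughly like the following Python dictionary; the exact management operation that accepts it (for example, the List SAS operation) is not shown here and should be taken from the Azure Maps Management REST API reference.

```python
# Illustrative only: a payload shaped after the parameter table above. The management
# operation that accepts it (for example, the account List SAS operation) is not shown;
# take the exact request from the Azure Maps Management REST API reference.
sas_parameters = {
    "signingKey": "primaryKey",
    "principalId": "<object-id-of-user-assigned-managed-identity>",  # placeholder
    "regions": ["eastus", "westus2", "westcentralus"],
    "maxRatePerSecond": 500,
    "start": "2021-05-24T10:42:03.1567373Z",
    "expiry": "2021-05-24T11:42:03.1567373Z",
}
```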
+
+### Configure an application with a SAS token
+
+After the application receives a SAS token, the Azure Maps SDK and/or applications send an HTTPS request with the following required HTTP header in addition to other REST API HTTP headers:
+
+| Header Name | Value |
+| :---------- | :---------- |
+| Authorization | jwt-sas eyJ0e….HNIVN |
+
+> [!NOTE]
+> `jwt-sas` is the authentication scheme that denotes use of a SAS token. Do not include `x-ms-client-id`, other `Authorization` headers, or the `subscription-key` query string parameter.
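A minimal Python sketch of such a request follows, assuming the `requests` package, a SAS token obtained as described earlier, and an illustrative geographic endpoint and API path.

```python
# A minimal sketch, assuming `pip install requests`, a SAS token created as described above,
# and an illustrative geographic endpoint and Search API path.
import requests

SAS_TOKEN = "<sas-token>"  # placeholder

response = requests.get(
    "https://us.atlas.microsoft.com/search/address/json",
    params={"api-version": "1.0", "query": "1 Microsoft Way, Redmond, WA"},
    # jwt-sas scheme only; no x-ms-client-id header and no subscription-key parameter.
    headers={"Authorization": f"jwt-sas {SAS_TOKEN}"},
)
print(response.status_code)
```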
+
+## Cross origin resource sharing (CORS)
++
+Cross Origin Resource Sharing (CORS) is in preview.
+
+### Prerequisites
+
+To prevent malicious code execution on the client, modern browsers block requests from web applications to resources running in a separate domain.
+
+- If you're unfamiliar with CORS, check out [Cross-origin resource sharing (CORS)](https://developer.mozilla.org/docs/Web/HTTP/CORS): it lets an `Access-Control-Allow-Origin` header declare which origins are allowed to call endpoints of an Azure Maps account. The CORS protocol is not specific to Azure Maps.
+
+### Account CORS
+
+[CORS](https://fetch.spec.whatwg.org/#http-cors-protocol) is an HTTP protocol that enables a web application running under one domain to access resources in another domain. Web browsers implement a security restriction known as [same-origin policy](https://www.w3.org/Security/wiki/Same_Origin_Policy) that prevents a web page from calling APIs in a different domain; CORS provides a secure way to allow one domain (the origin domain) to call APIs in another domain. The Azure Maps account resource supports configuring the allowed origins from which your app can access the Azure Maps REST API.
+
+> [!IMPORTANT]
+> CORS is not an authorization mechanism. Any request made to a map account using REST API, when CORS is enabled, also needs a valid map account authentication scheme such as Shared Key, Azure AD, or SAS token.
+>
+> CORS is supported for all map account pricing tiers, data-plane endpoints, and locations.
+
+### CORS requests
+
+A CORS request from an origin domain may consist of two separate requests:
+
+- A preflight request, which queries the CORS restrictions imposed by the service. The preflight request is required unless the request uses a standard method (GET, HEAD, or POST) or contains an `Authorization` request header.
+
+- The actual request, made against the desired resource.
+
+### Preflight request
+
+The preflight request is done not only as a security measure to ensure that the server understands the method and headers that will be sent in the actual request and that the server knows and trusts the source of the request, but it also queries the CORS restrictions that have been established for the map account. The web browser (or other user agent) sends an OPTIONS request that includes the request headers, method and origin domain. The map account service tries to fetch any CORS rules if account authentication is possible through the CORS preflight protocol.
+
+If authentication is not possible, the maps service evaluates a pre-configured set of CORS rules that specify which origin domains, request methods, and request headers may be specified on an actual request against the maps service. By default, a maps account is configured to allow all origins to enable seamless integration into web browsers.
+
+The service will respond to the preflight request with the required Access-Control headers if the following criteria are met:
+
+1. The OPTIONS request contains the required CORS headers (the `Origin` and `Access-Control-Request-Method` headers).
+1. Authentication was successful, and a CORS rule that matches the preflight request is enabled for the account.
+1. Authentication was skipped because the request requires `Authorization` request headers, which cannot be specified on a preflight request.
+
+When preflight request is successful, the service responds with status code `200 (OK)`, and includes the required Access-Control headers in the response.
+
+The service will reject preflight requests if the following conditions occur:
+
+1. If the OPTIONS request doesn't contain the required CORS headers (the Origin and Access-Control-Request-Method headers), the service will respond with status code `400 (Bad request)`.
+1. If authentication was successful on preflight request and no CORS rule matches the preflight request, the service will respond with status code `403 (Forbidden)`. This may occur if the CORS rule is configured to accept an origin which does not match the current browser client origin request header.
+
+> [!NOTE]
+> A preflight request is evaluated against the service and not against the requested resource. The account owner must have enabled CORS by setting the appropriate account properties in order for the request to succeed.
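Browsers issue the preflight automatically, but for debugging you can reproduce it manually; the following Python sketch (assuming the `requests` package and an illustrative endpoint and origin) sends the OPTIONS request and prints the returned Access-Control header.

```python
# A debugging sketch, assuming `pip install requests`; the endpoint and origin are illustrative.
# Browsers send this OPTIONS preflight automatically before a cross-origin request.
import requests

response = requests.options(
    "https://atlas.microsoft.com/search/address/json?api-version=1.0",
    headers={
        "Origin": "https://www.azure.com",
        "Access-Control-Request-Method": "GET",
    },
)
print(response.status_code)
print(response.headers.get("Access-Control-Allow-Origin"))
```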
+
+### Actual request
+
+Once the preflight request is accepted and the response is returned, the browser will dispatch the actual request against the map service. The browser will deny the actual request immediately if the preflight request is rejected.
+
+The actual request is treated as a normal request against the map service. The presence of the `Origin` header indicates that the request is a CORS request and the service will then validate against the CORS rules. If a match is found, the Access-Control headers are added to the response and sent back to the client. If a match is not found, the response will return a `403 (Forbidden)` indicating a CORS origin error.
+
+### Enable CORS policy
+
+When creating or updating an existing Azure Maps account, the account properties can specify the allowed origins to be configured. You can set a CORS rule on the Azure Maps account properties through the Azure Maps Management SDK, the Azure Maps Management REST API, or the Azure portal. Once you set the CORS rule for the service, a properly authorized request made to the service from a different domain is evaluated to determine whether it is allowed according to the rule you have specified. See an example below:
+
+```json
+{
+ "location": "eastus",
+ "sku": {
+ "name": "G2"
+ },
+ "kind": "Gen2",
+ "properties": {
+ "cors": {
+ "corsRules": [
+ {
+ "allowedOrigins": [
+ "https://www.azure.com",
+ "https://www.microsoft.com"
+ ]
+ }
+ ]
+ }
+ }
+}
+```
+
+Only one CORS rule, with its list of allowed origins, can be specified. Each allowed origin permits HTTP requests to the Azure Maps REST API from a web browser running on that origin.
+
+### Remove CORS policy
+
+You can remove CORS manually in the Azure portal, or programmatically using the Azure Maps SDK, Azure Maps management REST API or an [ARM template](/azure/azure-resource-manager/templates/overview).
+
+> [!TIP]
+> If you use the Azure Maps management REST API, use `PUT` or `PATCH` with an empty `corsRules` list in the request body.
+
+```json
+{
+ "location": "eastus",
+ "sku": {
+ "name": "G2"
+ },
+ "kind": "Gen2",
+ "properties": {
+ "cors": {
+ "corsRules": []
+ }
+ }
+}
+```
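+
+As a hedged illustration (not part of the original article), the following Node.js sketch sends the same empty `corsRules` body in a `PATCH` call to the Azure Maps management endpoint. The subscription ID, resource group, account name, and `api-version` value are placeholders or assumptions to replace with your own.
+
+```JavaScript
+// Sketch: clear the CORS rules on an Azure Maps account with a management-plane PATCH.
+// Assumes Node.js 18+ (built-in fetch) and the @azure/identity package.
+const { DefaultAzureCredential } = require("@azure/identity");
+
+async function clearCors() {
+    const subscriptionId = "<subscription-id>";     // placeholder
+    const resourceGroup = "<resource-group>";       // placeholder
+    const accountName = "<maps-account-name>";      // placeholder
+    const apiVersion = "2021-12-01-preview";        // assumed; use a current API version
+
+    const credential = new DefaultAzureCredential();
+    const token = await credential.getToken("https://management.azure.com/.default");
+
+    const url = `https://management.azure.com/subscriptions/${subscriptionId}` +
+        `/resourceGroups/${resourceGroup}/providers/Microsoft.Maps/accounts/${accountName}` +
+        `?api-version=${apiVersion}`;
+
+    const response = await fetch(url, {
+        method: "PATCH",
+        headers: {
+            "Authorization": `Bearer ${token.token}`,
+            "Content-Type": "application/json"
+        },
+        body: JSON.stringify({ properties: { cors: { corsRules: [] } } })
+    });
+
+    console.log("Management API responded with status", response.status);
+}
+
+clearCors();
+```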
+
+## Understand billing transactions
+
+Azure Maps does not count billing transactions for:
+
+- 5xx HTTP Status Codes
+- 401 (Unauthorized)
+- 403 (Forbidden)
+- 429 (TooManyRequests)
+- CORS preflight requests
+
+See [Azure Maps pricing](https://azure.microsoft.com/pricing/details/azure-maps) for more information about billing transactions and other Azure Maps pricing details.
+
+## Next steps
To learn more about authenticating an application with Azure AD and Azure Maps, see
-> [!div class="nextstepaction"]
+
+> [!div class="nextstepaction"]
> [Manage authentication in Azure Maps](./how-to-manage-authentication.md)

To learn more about authenticating the Azure Maps Map Control with Azure AD, see
-> [!div class="nextstepaction"]
-> [Use the Azure Maps Map Control](./how-to-use-map-control.md)
+
+> [!div class="nextstepaction"]
+> [Use the Azure Maps Map Control](./how-to-use-map-control.md)
azure-maps How To Manage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-manage-account-keys.md
You can manage your Azure Maps account through the Azure portal. After you have an account, you can implement the APIs in your website or mobile application.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+## Prerequisites
+
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you continue.
+- If you're unfamiliar with managed identities for Azure resources, which can influence your choice of account location, check out the [overview section](../active-directory/managed-identities-azure-resources/overview.md).
+
+## Account location
+
+Picking a location for your Azure Maps account that aligns with other resources in your subscription, like managed identities, may help to improve the level of service for [control-plane](../azure-resource-manager/management/control-plane-and-data-plane.md) operations.
+
+As an example, the managed identity infrastructure will communicate with and notify the Azure Maps management services of changes to the identity resource, such as credential renewal or deletion. Sharing the same Azure location enables consistent infrastructure provisioning for all resources.
+
+Azure Maps REST APIs on the `atlas.microsoft.com` or `*.atlas.microsoft.com` endpoints, or any other endpoints belonging to the Azure data plane, aren't affected by the choice of the Azure Maps account location.
+
+For more information about data-plane service coverage for Azure Maps services, see [Geographic coverage](./geographic-coverage.md).
## Create a new account
azure-maps How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-manage-authentication.md
custom.ms: subject-rbac-steps
# Manage authentication in Azure Maps
-When you create an Azure Maps account, keys and a client ID are generated. The keys and client ID are used to support Azure Active Directory (Azure AD) authentication and Shared Key authentication.
+When you create an Azure Maps account, your client ID is automatically generated along with primary and secondary keys that are required for authentication when using [Azure Active Directory (Azure AD)](../active-directory/fundamentals/active-directory-whatis.md) or [Shared Key authentication](./azure-maps-authentication.md#shared-key-authentication).
+
+## Prerequisites
+
+Sign in to the [Azure portal](https://portal.azure.com). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- Familiarity with [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). Be sure to understand the two [managed identity types](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and how they differ.
+- [An Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account).
+- Familiarity with [Azure Maps authentication](./azure-maps-authentication.md).
## View authentication details
- > [!IMPORTANT]
- > We recommend that you use the primary key as the subscription key when you use [Shared Key authentication](./azure-maps-authentication.md#shared-key-authentication) to call Azure Maps. It's best to use the secondary key in scenarios like rolling key changes. For more information, see [Authentication with Azure Maps](./azure-maps-authentication.md).
+> [!IMPORTANT]
+> We recommend that you use the primary key as the subscription key when you use Shared Key authentication to call Azure Maps. It's best to use the secondary key in scenarios like rolling key changes.
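+
+For example (a sketch, not from the original article), a client using Shared Key authentication could keep both keys available and fall back to the secondary key while the primary key is being rolled. The keys and fallback logic here are illustrative assumptions.
+
+```JavaScript
+// Sketch: call an Azure Maps REST endpoint with the primary subscription key, falling back
+// to the secondary key while the primary is being regenerated. Keys are placeholders.
+const primaryKey = "<primary-key>";
+const secondaryKey = "<secondary-key>";
+
+async function searchAddress(query) {
+    for (const key of [primaryKey, secondaryKey]) {
+        const url = "https://atlas.microsoft.com/search/address/json" +
+            "?api-version=1.0" +
+            `&subscription-key=${key}` +
+            `&query=${encodeURIComponent(query)}`;
+
+        const response = await fetch(url);
+        if (response.status !== 401 && response.status !== 403) {
+            return response.json();   // this key was accepted (or the call failed for another reason)
+        }
+    }
+    throw new Error("Both subscription keys were rejected.");
+}
+```
+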
To view your Azure Maps authentication details:

1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Navigate to the Azure portal menu. Select **All resources**, and then select your Azure Maps account.
+2. Select **All resources** in the **Azure services** section, then select your Azure Maps account.
- :::image type="content" border="true" source="./media/how-to-manage-authentication/select-all-resources.png" alt-text="Select Azure Maps account.":::
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/select-all-resources.png" alt-text="Select Azure Maps account.":::
-3. Under **Settings** in the left pane, select **Authentication**.
+3. Select **Authentication** in the settings section of the left pane.
- :::image type="content" border="true" source="./media/how-to-manage-authentication/view-authentication-keys.png" alt-text="Authentication details.":::
+ :::image type="content" border="true" source="./media/how-to-manage-authentication/view-authentication-keys.png" alt-text="Authentication details.":::
## Choose an authentication category

Depending on your application needs, there are specific pathways to application security. Azure AD defines specific authentication categories to support a wide range of authentication flows. To choose the best category for your application, see [application categories](../active-directory/develop/authentication-flows-app-scenarios.md#application-categories).

> [!NOTE]
-> Even if you use shared key authentication, understanding categories and scenarios helps you to secure the application.
+> Understanding categories and scenarios will help you secure your Azure Maps application, whether you use Azure Active Directory or shared key authentication.
+
+## How to add and remove managed identities
+
+To enable [Shared access signature (SAS) token authentication](./azure-maps-authentication.md#shared-access-signature-token-authentication) with the Azure Maps REST API, you need to add a user-assigned managed identity to your Azure Maps account.
+
+### Create a managed identity
+
+You can create a user-assigned managed identity before or after creating a map account. You can add the managed identity through the Azure portal, the Azure management SDKs, or an Azure Resource Manager (ARM) template. To add a user-assigned managed identity through an ARM template, specify the resource identifier of the user-assigned managed identity, as shown in the following example:
+
+```json
+"identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example/providers/Microsoft.ManagedIdentity/userAssignedIdentities/exampleidentity": {}
+ }
+}
+```
+
+### Remove a managed identity
+
+You can remove a system-assigned identity by disabling the feature through the portal or the Azure Resource Manager template in the same way that it was created. User-assigned identities can be removed individually. To remove all identities, set the identity type to `"None"`.
+
+Removing a system-assigned identity in this way will also delete it from Azure AD. System-assigned identities are also automatically removed from Azure AD when the Azure Maps account is deleted.
+
+To remove all identities by using the Azure Resource Manager template, update this section:
+
+```json
+"identity": {
+ "type": "None"
+}
+```
## Choose an authentication and authorization scenario
-This table outlines common authentication and authorization scenarios in Azure Maps. Use the links to learn detailed configuration information for each scenario.
+This table outlines common authentication and authorization scenarios in Azure Maps. Each scenario describes a type of app that can be used to access the Azure Maps REST API. Use the links to learn detailed configuration information for each scenario.
> [!IMPORTANT]
> For production applications, we recommend implementing Azure AD with Azure role-based access control (Azure RBAC).
-| Scenario | Authentication | Authorization | Development effort | Operational effort |
-| - | -- | - | | |
-| [Trusted daemon / non-interactive client application](./how-to-secure-daemon-app.md) | Shared Key | N/A | Medium | High |
-| [Trusted daemon / non-interactive client application](./how-to-secure-daemon-app.md) | Azure AD | High | Low | Medium |
-| [Web single page application with interactive single-sign-on](./how-to-secure-spa-users.md) | Azure AD | High | Medium | Medium |
-| [Web single page application with non-interactive sign-on](./how-to-secure-spa-app.md) | Azure AD | High | Medium | Medium |
-| [Web application with interactive single-sign-on](./how-to-secure-webapp-users.md) | Azure AD | High | High | Medium |
-| [IoT device / input constrained device](./how-to-secure-device-code.md) | Azure AD | High | Medium | Medium |
+| Scenario | Authentication | Authorization | Development effort | Operational effort |
+| -- | -- | -- | -- | -- |
+| [Trusted daemon app or non-interactive client app](./how-to-secure-daemon-app.md) | Shared Key | N/A | Medium | High |
+| [Trusted daemon or non-interactive client app](./how-to-secure-daemon-app.md) | Azure AD | High | Low | Medium |
+| [Web single page app with interactive single-sign-on](./how-to-secure-spa-users.md) | Azure AD | High | Medium | Medium |
+| [Web single page app with non-interactive sign-on](./how-to-secure-spa-app.md) | Azure AD | High | Medium | Medium |
+| [Web app, daemon app, or non-interactive sign-on app](./how-to-secure-sas-app.md) | SAS Token | High | Medium | Low |
+| [Web application with interactive single-sign-on](./how-to-secure-webapp-users.md) | Azure AD | High | High | Medium |
+| [IoT device or an input constrained application](./how-to-secure-device-code.md) | Azure AD | High | Medium | Medium |
## View built-in Azure Maps role definitions
Request a token from the Azure AD token endpoint. In your Azure AD request, use
| Azure public cloud | `https://login.microsoftonline.com` | `https://atlas.microsoft.com/` |
| Azure Government cloud | `https://login.microsoftonline.us` | `https://atlas.microsoft.com/` |
-For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md). To view specific scenarios, see [the table of scenarios](./how-to-manage-authentication.md#choose-an-authentication-and-authorization-scenario).
+For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md). To view specific scenarios, see [the table of scenarios](./how-to-manage-authentication.md#choose-an-authentication-and-authorization-scenario).
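+
+As a hedged illustration of that flow (not part of the original article), a daemon or script could acquire a token for the Azure Maps resource with the `@azure/identity` library and send it, together with the account's client ID, on each request. The `<maps-client-id>` value is a placeholder for the client ID shown on your account's Authentication page.
+
+```JavaScript
+// Sketch: acquire an Azure AD access token for Azure Maps and call a REST endpoint.
+// Assumes Node.js 18+ (built-in fetch) and the @azure/identity package.
+const { DefaultAzureCredential } = require("@azure/identity");
+
+async function callMapsWithAzureAd() {
+    const credential = new DefaultAzureCredential();
+    const accessToken = await credential.getToken("https://atlas.microsoft.com/.default");
+
+    const response = await fetch(
+        "https://atlas.microsoft.com/search/address/json?api-version=1.0&query=Seattle", {
+            headers: {
+                "Authorization": `Bearer ${accessToken.token}`,
+                "x-ms-client-id": "<maps-client-id>"    // placeholder: your Azure Maps client ID
+            }
+        });
+
+    console.log(await response.json());
+}
+
+callMapsWithAzureAd();
+```
+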
## Manage and rotate shared keys
To rotate your Azure Maps subscription keys in the Azure portal:
## Next steps

Find the API usage metrics for your Azure Maps account:

> [!div class="nextstepaction"]
> [View usage metrics](how-to-view-api-usage.md)
azure-maps How To Secure Sas App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-secure-sas-app.md
+
+ Title: How to secure an application in Microsoft Azure Maps with SAS token
+
+description: This article describes how to configure an application to be secured with SAS token authentication.
+ Last updated: 01/05/2022
+custom.ms: subject-rbac-steps
++
+# Secure an application with SAS token
+
+This article describes how to create an Azure Maps account with a SAS token that can be used to call the Azure Maps REST API.
+
+## Prerequisites
+
+This scenario assumes:
+
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you continue.
+- The current user must have the subscription `Owner` role on the Azure subscription in order to create an [Azure Key Vault](/azure/key-vault/general/basic-concepts) and a user-assigned managed identity, assign the managed identity a role, and create an Azure Maps account.
+- The Azure CLI is installed to deploy the resources. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+- The current user is signed in to the Azure CLI with an active Azure subscription by using `az login`.
+
+## Scenario: SAS token
+
+Applications that use SAS token authentication should store the keys in a secure store. A SAS token is a credential that grants the level of access specified during its creation to anyone who holds it, until the token expires or access is revoked. This scenario describes how to safely store your SAS token as a secret in Azure Key Vault and distribute the SAS token into a public client. Events in an application's lifecycle may generate new SAS tokens without interrupting active connections using existing tokens. To understand how to configure Azure Key Vault, see the [Azure Key Vault developer's guide](../key-vault/general/developers-guide.md).
+
+The following sample scenario uses two Azure Resource Manager (ARM) template deployments to perform these steps:
+
+- Create an Azure Key Vault.
+- Create a user-assigned managed identity.
+- Assign Azure RBAC `Azure Maps Data Reader` role to the user-assigned managed identity.
+- Create a map account with a CORS configuration and attach the user-assigned managed identity.
+- Create and save a SAS token into the Azure Key Vault.
+- Retrieve the SAS token secret from Azure Key Vault.
+- Create an Azure Maps REST API request using the SAS token.
+
+When completed, you should see the Azure Maps `Search Address (Non-Batch)` REST API results in PowerShell by using the Azure CLI. The Azure resources are deployed with permissions to connect to the Azure Maps account, along with controls for the maximum rate limit, allowed regions, a `localhost`-configured CORS policy, and Azure RBAC.
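+
+For context, once the SAS token is stored in Key Vault, a trusted back end could retrieve it with the `@azure/keyvault-secrets` client before handing it to a public client. This is a sketch under assumptions: the vault URL and secret name are placeholders for the values the templates below create.
+
+```JavaScript
+// Sketch: read the stored SAS token secret from Azure Key Vault on a trusted back end.
+// Assumes Node.js with the @azure/identity and @azure/keyvault-secrets packages installed.
+const { DefaultAzureCredential } = require("@azure/identity");
+const { SecretClient } = require("@azure/keyvault-secrets");
+
+async function getSasToken() {
+    const vaultUrl = "https://<key-vault-name>.vault.azure.net";      // placeholder
+    const client = new SecretClient(vaultUrl, new DefaultAzureCredential());
+
+    const secret = await client.getSecret("<maps-account-name>");     // placeholder secret name
+    return secret.value;   // the SAS token string saved by the deployment
+}
+```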
+
+### Azure resource deployment with Azure CLI
+
+The following steps describe how to create and configure an Azure Maps account with SAS token authentication. The Azure CLI is assumed to be running in a PowerShell instance.
+
+1. Register the Key Vault, Managed Identity, and Azure Maps resource providers for your subscription.
+
+ ```azurecli
+ az provider register --namespace Microsoft.KeyVault
+ az provider register --namespace Microsoft.ManagedIdentity
+ az provider register --namespace Microsoft.Maps
+ ```
+
+1. Retrieve your Azure AD object ID.
+
+ ```azurecli
+ $id = $(az rest --method GET --url 'https://graph.microsoft.com/v1.0/me?$select=id' --headers 'Content-Type=application/json' --query "id")
+ ```
+
+1. Create a template file `prereq.azuredeploy.json` with the following content.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Specifies the location for all the resources."
+ }
+ },
+ "keyVaultName": {
+ "type": "string",
+ "defaultValue": "[concat('vault', uniqueString(resourceGroup().id))]",
+ "metadata": {
+ "description": "Specifies the name of the key vault."
+ }
+ },
+ "userAssignedIdentityName": {
+ "type": "string",
+ "defaultValue": "[concat('identity', uniqueString(resourceGroup().id))]",
+ "metadata": {
+ "description": "The name for your managed identity resource."
+ }
+ },
+ "objectId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the object ID of a user, service principal or security group in the Azure Active Directory tenant for the vault. The object ID must be unique for the set of access policies. Get it by using Get-AzADUser or Get-AzADServicePrincipal cmdlets."
+ }
+ },
+ "secretsPermissions": {
+ "type": "array",
+ "defaultValue": [
+ "list",
+ "get",
+ "set"
+ ],
+ "metadata": {
+ "description": "Specifies the permissions to secrets in the vault. Valid values are: all, get, list, set, delete, backup, restore, recover, and purge."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
+ "name": "[parameters('userAssignedIdentityName')]",
+ "apiVersion": "2018-11-30",
+ "location": "[parameters('location')]"
+ },
+ {
+ "apiVersion": "2021-04-01-preview",
+ "type": "Microsoft.KeyVault/vaults",
+ "name": "[parameters('keyVaultName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "tenantId": "[subscription().tenantId]",
+ "sku": {
+ "name": "Standard",
+ "family": "A"
+ },
+ "enabledForTemplateDeployment": true,
+ "accessPolicies": [
+ {
+ "objectId": "[parameters('objectId')]",
+ "tenantId": "[subscription().tenantId]",
+ "permissions": {
+ "secrets": "[parameters('secretsPermissions')]"
+ }
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "userIdentityResourceId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedIdentityName'))]"
+ },
+ "userAssignedIdentityPrincipalId": {
+ "type": "string",
+ "value": "[reference(parameters('userAssignedIdentityName')).principalId]"
+ },
+ "keyVaultName": {
+ "type": "string",
+ "value": "[parameters('keyVaultName')]"
+ }
+ }
+ }
+
+ ```
+
+1. Deploy the prerequisite resources. Make sure to pick a location where Azure Maps accounts are enabled.
+
+ ```azurecli
+ az group create --name {group-name} --location "East US"
+ $outputs = $(az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
+ ```
+
+1. Create a template file `azuredeploy.json` to provision the Map account, role assignment, and SAS token.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Specifies the location for all the resources."
+ }
+ },
+ "keyVaultName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the resourceId of the key vault."
+ }
+ },
+ "accountName": {
+ "type": "string",
+ "defaultValue": "[concat('map', uniqueString(resourceGroup().id))]",
+ "metadata": {
+ "description": "The name for your Azure Maps account."
+ }
+ },
+ "userAssignedIdentityResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the resourceId for the user assigned managed identity resource."
+ }
+ },
+ "userAssignedIdentityPrincipalId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the resourceId for the user assigned managed identity resource."
+ }
+ },
+ "pricingTier": {
+ "type": "string",
+ "allowedValues": [
+ "S0",
+ "S1",
+ "G2"
+ ],
+ "defaultValue": "G2",
+ "metadata": {
+ "description": "The pricing tier for the account. Use S0 for small-scale development. Use S1 or G2 for large-scale applications."
+ }
+ },
+ "kind": {
+ "type": "string",
+ "allowedValues": [
+ "Gen1",
+ "Gen2"
+ ],
+ "defaultValue": "Gen2",
+ "metadata": {
+ "description": "The pricing tier for the account. Use Gen1 for small-scale development. Use Gen2 for large-scale applications."
+ }
+ },
+ "guid": {
+ "type": "string",
+ "defaultValue": "[guid(resourceGroup().id)]",
+ "metadata": {
+ "description": "Input string for new GUID associated with assigning built in role types"
+ }
+ },
+ "startDateTime": {
+ "type": "string",
+ "defaultValue": "[utcNow('u')]",
+ "metadata": {
+ "description": "Current Universal DateTime in ISO 8601 'u' format to be used as start of the SAS token."
+ }
+ },
+ "duration" : {
+ "type": "string",
+ "defaultValue": "P1Y",
+ "metadata": {
+ "description": "The duration of the SAS token, P1Y is maximum, ISO 8601 format is expected."
+ }
+ },
+ "maxRatePerSecond": {
+ "type": "int",
+ "defaultValue": 500,
+ "minValue": 1,
+ "maxValue": 500,
+ "metadata": {
+ "description": "The approximate maximum rate per second the SAS token can be used."
+ }
+ },
+ "signingKey": {
+ "type": "string",
+ "defaultValue": "primaryKey",
+ "allowedValues": [
+ "primaryKey",
+        "secondaryKey"
+ ],
+ "metadata": {
+ "description": "The specified signing key which will be used to create the SAS token."
+ }
+ },
+ "allowedOrigins": {
+ "type": "array",
+ "defaultValue": [],
+ "maxLength": 10,
+ "metadata": {
+ "description": "The specified application's web host header origins (example: https://www.azure.com) which the Maps account allows for Cross Origin Resource Sharing (CORS)."
+ }
+ },
+ "allowedRegions": {
+ "type": "array",
+ "defaultValue": [],
+ "metadata": {
+ "description": "The specified SAS token allowed locations which the token may be used."
+ }
+ }
+ },
+ "variables": {
+ "accountId": "[resourceId('Microsoft.Maps/accounts', parameters('accountName'))]",
+ "Azure Maps Data Reader": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '423170ca-a8f6-4b0f-8487-9e4eb8f49bfa')]",
+ "sasParameters": {
+ "signingKey": "[parameters('signingKey')]",
+ "principalId": "[parameters('userAssignedIdentityPrincipalId')]",
+ "maxRatePerSecond": "[parameters('maxRatePerSecond')]",
+ "start": "[parameters('startDateTime')]",
+ "expiry": "[dateTimeAdd(parameters('startDateTime'), parameters('duration'))]",
+ "regions": "[parameters('allowedRegions')]"
+ }
+ },
+ "resources": [
+ {
+ "name": "[parameters('accountName')]",
+ "type": "Microsoft.Maps/accounts",
+ "apiVersion": "2021-12-01-preview",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "[parameters('pricingTier')]"
+ },
+ "kind": "[parameters('kind')]",
+ "properties": {
+ "cors": {
+ "corsRules": [
+ {
+ "allowedOrigins": "[parameters('allowedOrigins')]"
+ }
+ ]
+ }
+ },
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "[parameters('userAssignedIdentityResourceId')]": {}
+ }
+ }
+ },
+ {
+ "apiVersion": "2020-04-01-preview",
+ "name": "[concat(parameters('accountName'), '/Microsoft.Authorization/', parameters('guid'))]",
+ "type": "Microsoft.Maps/accounts/providers/roleAssignments",
+ "dependsOn": [
+ "[parameters('accountName')]"
+ ],
+ "properties": {
+ "roleDefinitionId": "[variables('Azure Maps Data Reader')]",
+ "principalId": "[parameters('userAssignedIdentityPrincipalId')]",
+ "principalType": "ServicePrincipal"
+ }
+ },
+ {
+ "apiVersion": "2021-04-01-preview",
+ "type": "Microsoft.KeyVault/vaults/secrets",
+ "name": "[concat(parameters('keyVaultName'), '/', parameters('accountName'))]",
+ "dependsOn": [
+ "[variables('accountId')]"
+ ],
+ "tags": {
+ "signingKey": "[variables('sasParameters').signingKey]",
+ "start" : "[variables('sasParameters').start]",
+ "expiry" : "[variables('sasParameters').expiry]"
+ },
+ "properties": {
+ "value": "[listSas(variables('accountId'), '2021-12-01-preview', variables('sasParameters')).accountSasToken]"
+ }
+ }
+ ]
+ }
+ ```
+
+1. Deploy the template using ID parameters from the Azure Key Vault and managed identity resources created in the previous step. Note that when creating the SAS token, the `allowedRegions` parameter is set to `eastus`, `westus2`, and `westcentralus`. We use these locations because we plan to make HTTP requests to the `us.atlas.microsoft.com` endpoint.
+
+ > [!IMPORTANT]
+ > We save the SAS token into the Azure Key Vault to prevent its credentials from appearing in the Azure deployment logs. The Azure Key Vault SAS token secret's `tags` also contain the start, expiry, and signing key name to help understand when the SAS token will expire.
+
+ ```azurecli
+ az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
+ ```
+
+1. Locate, then save a copy of the single SAS token secret from Azure Key Vault.
+
+ ```azurecli
+ $secretId = $(az keyvault secret list --vault-name $outputs[0] --query "[? contains(name,'map')].id" --output tsv)
+ $sasToken = $(az keyvault secret show --id "$secretId" --query "value" --output tsv)
+ ```
+
+1. Test the SAS token by making a request to an Azure Maps endpoint. We specify the `us.atlas.microsoft.com` endpoint to ensure that our request is routed to the US geography, because the SAS token's allowed regions are within that geography.
+
+ ```azurecli
+ az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=15127 NE 24th Street, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
+ ```
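+
+The same test request could also be issued from JavaScript. This hedged sketch assumes the SAS token has already been retrieved (for example, from Key Vault) and reuses the `jwt-sas` authorization scheme shown in the `az rest` call above.
+
+```JavaScript
+// Sketch: call the Search Address API with a SAS token. <sas-token> is a placeholder; the
+// us.atlas.microsoft.com endpoint matches the token's allowed regions (eastus, westus2,
+// westcentralus).
+async function searchWithSas(sasToken) {
+    const url = "https://us.atlas.microsoft.com/search/address/json" +
+        "?api-version=1.0" +
+        "&query=" + encodeURIComponent("15127 NE 24th Street, Redmond, WA 98052");
+
+    const response = await fetch(url, {
+        headers: { "Authorization": `jwt-sas ${sasToken}` }
+    });
+
+    const body = await response.json();
+    console.log(body.results.map(r => r.address));
+}
+
+searchWithSas("<sas-token>");
+```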
+
+## Complete example
+
+In the current directory of the PowerShell session, you should have:
+
+- `prereq.azuredeploy.json`: Creates the Key Vault and the user-assigned managed identity.
+- `azuredeploy.json`: Creates the Azure Maps account, configures the role assignment and managed identity, then stores the SAS token in the Azure Key Vault.
+
+```powershell
+az login
+az provider register --namespace Microsoft.KeyVault
+az provider register --namespace Microsoft.ManagedIdentity
+az provider register --namespace Microsoft.Maps
+
+$id = $(az rest --method GET --url 'https://graph.microsoft.com/v1.0/me?$select=id' --headers 'Content-Type=application/json' --query "id")
+az group create --name {group-name} --location "East US"
+$outputs = $(az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./prereq.azuredeploy.json" --parameters objectId=$id --query "[properties.outputs.keyVaultName.value, properties.outputs.userAssignedIdentityPrincipalId.value, properties.outputs.userIdentityResourceId.value]" --output tsv)
+az deployment group create --name ExampleDeployment --resource-group {group-name} --template-file "./azuredeploy.json" --parameters keyVaultName="$($outputs[0])" userAssignedIdentityPrincipalId="$($outputs[1])" userAssignedIdentityResourceId="$($outputs[2])" allowedOrigins="['http://localhost']" allowedRegions="['eastus', 'westus2', 'westcentralus']" maxRatePerSecond="10"
+$secretId = $(az keyvault secret list --vault-name $outputs[0] --query "[? contains(name,'map')].id" --output tsv)
+$sasToken = $(az keyvault secret show --id "$secretId" --query "value" --output tsv)
+
+az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?api-version=1.0&query=15127 NE 24th Street, Redmond, WA 98052' --headers "Authorization=jwt-sas $($sasToken)" --query "results[].address"
+```
+
+## Clean up resources
+
+When you no longer need the Azure resources, you can delete them:
+
+```azurecli
+az group delete --name {group-name}
+```
+
+## Next steps
+
+For more detailed examples:
+> [!div class="nextstepaction"]
+> [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md)
+
+Find the API usage metrics for your Azure Maps account:
+> [!div class="nextstepaction"]
+> [View usage metrics](how-to-view-api-usage.md)
+
+Explore samples that show how to integrate Azure AD with Azure Maps:
+> [!div class="nextstepaction"]
+> [Azure Maps samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-create-store-locator.md
Title: 'Tutorial: Use Microsoft Azure Maps to create store locator web applications'+ description: Tutorial on how to use Microsoft Azure Maps to create store locator web applications. Previously updated : 06/07/2021 Last updated : 01/03/2022 - # Tutorial: Use Azure Maps to create a store locator
-This tutorial guides you through the process of creating a simple store locator using Azure Maps. In this tutorial, you'll learn how to:
+This tutorial guides you through the process of creating a simple store locator using Azure Maps.
+
+In this tutorial, you'll learn how to:
> [!div class="checklist"]
+>
> * Create a new webpage by using the Azure Map Control API.
> * Load custom data from a file and display it on a map.
> * Use the Azure Maps Search service to find an address or enter a query.
This tutorial guides you through the process of creating a simple store locator
## Prerequisites
-1. [Make an Azure Maps account in Gen 1 (S1) or Gen 2 pricing tier](quick-demo-map-app.md#create-an-azure-maps-account).
-2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
+1. An [Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) using the Gen 1 (S1) or Gen 2 pricing tier.
+2. An [Azure Maps primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account).
-For more information about Azure Maps authentication, see [manage authentication in Azure Maps](how-to-manage-authentication.md).
+For more information about Azure Maps authentication, see [Manage authentication in Azure Maps](how-to-manage-authentication.md).
-This tutorial uses the [Visual Studio Code](https://code.visualstudio.com/) application, but you can use a different coding environment.
+[Visual Studio Code](https://code.visualstudio.com/) is recommended for this tutorial, but you can use any suitable integrated development environment (IDE).
## Sample code
-In this tutorial, we'll create a store locator for a fictional company called Contoso Coffee. Also, the tutorial includes some tips to help you learn about extending the store locator with other optional functionalities.
+In this tutorial, you'll create a store locator for a fictional company named *Contoso Coffee*. Also, this tutorial includes some tips to help you learn about extending the store locator with other optional functionality.
-You can view the [Live store locator sample here](https://azuremapscodesamples.azurewebsites.net/?sample=Simple%20Store%20Locator).
+To see a live sample of what you will create in this tutorial, see [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Simple%20Store%20Locator) on the **Azure Maps Code Samples** site.
To more easily follow and engage with this tutorial, you'll need to download the following resources:
-* [Full source code for simple store locator sample](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator)
-* [Store location data to import into the store locator dataset](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data)
-* [Map images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/images)
+* Full source code for the [Simple Store Locator](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator) on GitHub.
+* [Store location data](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data) that you'll import into the store locator dataset.
+* The [Map images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/images).
## Store locator features
-This section lists the features that are supported in the Contoso Coffee store locator application.
+This section lists the Azure Maps features that are demonstrated in the Contoso Coffee store locator application created in this tutorial.
### User interface features
-* Store logo on the header
-* Map supports panning and zooming
-* A My Location button to search over the user's current location.
-* Page layout adjusts based on the width of the device screen
+* A store logo on the header
+* A map that supports panning and zooming
+* A **My Location** button to search over the user's current location.
+* A page layout that adjusts based on the width of the device's screen
* A search box and a search button

### Functionality features

* A `keypress` event added to the search box triggers a search when the user presses **Enter**.
-* When the map moves, the distance to each location from the center of the map calculates. The results list updates to display the closest locations at the top of the map.
+* When the map moves, the distance from the center of the map to each location is recalculated. The results list updates to display the closest locations at the top of the list.
* When the user selects a result in the results list, the map is centered over the selected location and information about the location appears in a pop-up window. * When the user selects a specific location, the map triggers a pop-up window. * When the user zooms out, locations are grouped in clusters. Each cluster is represented by a circle with a number inside the circle. Clusters form and separate as the user changes the zoom level.
This section lists the features that are supported in the Contoso Coffee store l
## Store locator design
-The following figure shows a wireframe of the general layout of our store locator. You can view the live wireframe [here](https://azuremapscodesamples.azurewebsites.net/?sample=Simple%20Store%20Locator).
+The following screenshot shows the general layout of the Contoso Coffee store locator application. To view and interact with the live sample, see the [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Simple%20Store%20Locator) sample application on the **Azure Maps Code Samples** site.
-To maximize the usefulness of this store locator, we include a responsive layout that adjusts when a user's screen width is smaller than 700 pixels wide. A responsive layout makes it easy to use the store locator on a small screen, like on a mobile device. Here's a wireframe of the small-screen layout:
+To maximize the usefulness of this store locator, we include a responsive layout that adjusts when a user's screen width is smaller than 700 pixels wide. A responsive layout makes it easy to use the store locator on a small screen, like on a mobile device. Here's a screenshot showing a sample of the small-screen layout:
<a id="create a data-set"></a>
This section describes how to create a dataset of the stores that you want to di
:::image type="content" source="./media/tutorial-create-store-locator/store-locator-data-spreadsheet.png" alt-text="Screenshot of the store locator data in an Excel workbook.":::
-To view the full dataset, [download the Excel workbook here](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data).
+The Excel file containing the full dataset for the Contoso Coffee locator sample application can be downloaded from the [data](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data) folder of the _Azure Maps code samples_ repository in GitHub.
-Looking at the screenshot of the data, we can make the following observations:
+From the above screenshot of the data, we can make the following observations:
-* Location information is stored by using the **AddressLine**, **City**, **Municipality** (county), **AdminDivision** (state/province), **PostCode** (postal code), and **Country** columns.
-* The **Latitude** and **Longitude** columns contain the coordinates for each Contoso Coffee location. If you don't have coordinates information, you can use the Search services in Azure Maps to determine the location coordinates.
+* Location information is stored in the following six columns: **AddressLine**, **City**, **Municipality** (county), **AdminDivision** (state/province), **PostCode** (postal code), and **Country**.
+* The **Latitude** and **Longitude** columns contain the coordinates for each Contoso Coffee location. If you don't have coordinate information, you can use the Azure Maps [Search service](/rest/api/maps/search) to determine the location coordinates.
* Some other columns contain metadata that's related to the coffee shops: a phone number, Boolean columns, and store opening and closing times in 24-hour format. The Boolean columns are for Wi-Fi and wheelchair accessibility. You can create your own columns that contain metadata that's more relevant to your location data. > [!NOTE]
-> Azure Maps renders data in the spherical Mercator projection "EPSG:3857" but reads data in "EPSG:4326" that use the WGS84 datum.
+> Azure Maps renders data in the [Spherical Mercator projection](glossary.md#spherical-mercator-projection) "[EPSG:3857](https://epsg.io/3857)" but reads data in "[EPSG:4326](https://epsg.io/4326)" that use the WGS84 datum.
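+
+For example (a small sketch using the `datasource` variable created later in this tutorial), a point feature built for the Web SDK uses `[longitude, latitude]` order:
+
+```JavaScript
+//Positions in the Azure Maps Web SDK are [longitude, latitude] (WGS84 / EPSG:4326).
+//The datasource variable refers to the data source created later in this tutorial.
+var store = new atlas.data.Feature(new atlas.data.Point([-122.1284, 47.6401]), {
+    Name: 'Contoso Coffee'
+});
+
+datasource.add(store);
+```
+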
-## Load the store location dataset
+## Load Contoso Coffee shop locator dataset
The Contoso Coffee shop locator dataset is small, so we'll convert the Excel worksheet into a tab-delimited text file. This file can then be downloaded by the browser when the application loads.
- >[!TIP]
->If your dataset is too large for client download, or is updated frequently, you might consider storing your dataset in a database. After your data is loaded into a database, you can then set up a web service that accepts queries for the data, and then sends the results to the user's browser.
+> [!TIP]
+> If your dataset is too large for client download, or is updated frequently, you might consider storing your dataset in a database. After your data is loaded into a database, you can then set up a web service that accepts queries for the data, then sends the results to the user's browser.
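+
+As a rough sketch of that tip (not part of the tutorial's sample code), a small Node.js service could accept a location and radius and return only nearby stores. Express, the `stores` data file, and the route shape are all assumptions you'd adapt to your own database.
+
+```JavaScript
+// Sketch: a minimal web service that returns stores near a point, assuming Node.js with
+// Express installed and a stores array (with latitude/longitude fields) already loaded.
+const express = require('express');
+const stores = require('./stores.json');   // placeholder data source
+
+const app = express();
+
+// Approximate great-circle distance in miles between two latitude/longitude pairs.
+function distanceInMiles(lat1, lon1, lat2, lon2) {
+    const toRad = d => d * Math.PI / 180;
+    const a = Math.sin(toRad(lat2 - lat1) / 2) ** 2 +
+        Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(toRad(lon2 - lon1) / 2) ** 2;
+    return 3959 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
+}
+
+// Example request: GET /stores?lat=47.64&lon=-122.13&radius=25
+app.get('/stores', (req, res) => {
+    const lat = parseFloat(req.query.lat);
+    const lon = parseFloat(req.query.lon);
+    const radius = parseFloat(req.query.radius) || 25;
+
+    res.json(stores.filter(s => distanceInMiles(lat, lon, s.latitude, s.longitude) <= radius));
+});
+
+app.listen(3000);
+```
+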
### Convert data to tab-delimited text file
-To convert the Contoso Coffee shop location data from an Excel workbook into a flat text file:
-
-1. [Download the Excel workbook](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data).
-
-2. Save the workbook to your hard drive.
+To convert the Contoso Coffee shop location data from an Excel workbook into a tab-delimited text file:
-3. Load the Excel app.
+1. Download the Excel workbook [ContosoCoffee.xlsx](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/data) and open it in Excel.
-4. Open the downloaded workbook.
+1. Select **File > Save As...**.
-5. Select **Save As**.
+1. In the **Save as type** drop-down list, select **Text (Tab delimited)(*.txt)**.
-6. In the **Save as type** drop-down list, select **Text (Tab delimited)(*.txt)**.
-
-7. Name the file *ContosoCoffee*.
+1. Name the file *ContosoCoffee*.
:::image type="content" source="./media/tutorial-create-store-locator/data-delimited-text.png" alt-text="Screenshot of the Save as type dialog box.":::
If you open the text file in Notepad, it looks similar to the following text:
## Set up the project
-1. Open the Visual Studio Code app.
+1. Open [Visual Studio Code](https://code.visualstudio.com/), or your development environment of choice.
-2. Select **File**, and then select **Open Workspace...**.
+2. Select **File > Open Workspace...**.
-3. Create a new folder and name it "ContosoCoffee".
+3. Create a new folder named *ContosoCoffee*.
-4. Select **CONTOSOCOFFEE** in the explorer.
+4. Select **ContosoCoffee** in the explorer.
5. Create the following three files that define the layout, style, and logic for the application:
If you open the text file in Notepad, it looks similar to the following text:
6. Create a folder named *data*.
-7. Add *ContosoCoffee.txt* to the *data* folder.
+7. Add the *ContosoCoffee.txt* file that you previously created from the Excel workbook _ContosoCoffee.xlsx_ to the *data* folder.
8. Create another folder named *images*.
-9. If you haven't already, [download these 10 images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/images).
-
-10. Add the downloaded images to the *images* folder.
+9. If you haven't already, download the 10 [Map images](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/images) from the images directory in the GitHub repository and add them to the *images* folder.
Your workspace folder should now look like the following screenshot:
- :::image type="content" source="./media/tutorial-create-store-locator/store-locator-workspace.png" alt-text="Screenshot of the Simple Store Locator workspace folder.":::
+ :::image type="content" source="./media/tutorial-create-store-locator/store-locator-workspace.png" alt-text="Screenshot of the images folder in the Contoso Coffee directory.":::
## Create the HTML
To create the HTML:
2. Add references to the Azure Maps web control JavaScript and CSS files: ```HTML
+ <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> ```
-3. Add a reference to the Azure Maps Services module. The module is a JavaScript library that wraps the Azure Maps REST services and makes them easy to use in JavaScript. The module is useful for powering search functionality.
+3. Next, add a reference to the Azure Maps Services module. This module is a JavaScript library that wraps the Azure Maps REST services, making them easy to use in JavaScript. The Services module is useful for powering search functionality.
```HTML
+ <!-- Add a reference to the Azure Maps Services Module JavaScript file. -->
<script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script> ``` 4. Add references to *index.js* and *index.css*. ```HTML
+ <!-- Add references to the store locator JavaScript and CSS files. -->
<link rel="stylesheet" href="index.css" type="text/css"> <script src="index.js"></script> ```
To create the HTML:
After you finish, *index.html* should look like [this example index.html file](https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator/index.html).
-## Define the CSS Styles
+## Define the CSS styles
The next step is to define the CSS styles. CSS styles define how the application components are laid out and the application's appearance.
The next step is to define the CSS styles. CSS styles define how the application
} ```
-Run the application. You'll see the header, search box, and search button. However, the map isn't visible because it hasn't been loaded yet. If you try to do a search, nothing happens. We need to set up the JavaScript logic, which is described in the next section. This logic accesses all the functionality of the store locator.
+Run the application. You'll see the header, search box, and search button. However, the map isn't visible because it hasn't been loaded yet. If you try to do a search, nothing happens. We need to add the JavaScript logic described in the next section. This logic accesses all the functionality of the store locator.
## Add JavaScript code
The JavaScript code in the Contoso Coffee shop locator app enables the following
1. Adds an [event listener](/javascript/api/azure-maps-control/atlas.map#events) called `ready` to wait until the page has completed its loading process. When the page loading is complete, the event handler creates more event listeners to monitor the loading of the map, and give functionality to the search and **My location** buttons.
-2. When the user selects the search button, or types a location in the search box then presses enter, a fuzzy search against the user's query is started. The code passes in an array of country/region ISO 2 values to the `countrySet` option to limit the search results to those countries/regions. Limiting the countries/regions to search helps increase the accuracy of the results that are returned.
+2. When the user selects the search button, or types a location in the search box then presses enter, a fuzzy search against the user's query begins. The code passes in an array of country/region ISO 2 values to the `countrySet` option to limit the search results to those countries/regions. Limiting the countries/regions to search helps increase the accuracy of the results that are returned.
-3. Once the search is finished, the first location result is used as the center focus of the map camera. When the user selects the My Location button, the code retrieves the user's location using the *HTML5 Geolocation API* that's built into the browser. After retrieving the location, the code centers the map over the user's location.
+3. Once the search completes, the first location result is used as the center focus of the map. When the user selects the My Location button, the code retrieves the user's location using the *HTML5 Geolocation API* that's built into the browser. After retrieving the location, the code centers the map over the user's location.
To add the JavaScript:
To add the JavaScript:
```JavaScript //The maximum zoom level to cluster data point data on the map. var maxClusterZoomLevel = 11;-
+
//The URL to the store location data. var storeLocationDataUrl = 'data/ContosoCoffee.txt';-
- //The URL to the icon image.
+
+ //The URL to the icon image.
var iconImageUrl = 'images/CoffeeIcon.png';
+
+ //An array of country region ISO2 values to limit searches to.
+ var countrySet = ['US', 'CA', 'GB', 'FR','DE','IT','ES','NL','DK'];
+
+ //
var map, popup, datasource, iconLayer, centerMarker, searchURL;+
+ // Used in function updateListItems
+ var listItemTemplate = '<div class="listItem" onclick="itemSelected(\'{id}\')"><div class="listItem-title">{title}</div>{city}<br />Open until {closes}<br />{distance} miles away</div>';
+ ``` 3. Add the following initialization code. Make sure to replace `<Your Azure Maps Key>` with your primary subscription key.
To add the JavaScript:
//Create a pop-up window, but leave it closed so we can update it and display it later. popup = new atlas.Popup();
- //Use SubscriptionKeyCredential with a subscription key
- const subscriptionKeyCredential = new atlas.service.SubscriptionKeyCredential(atlas.getSubscriptionKey());
-
- //Use subscriptionKeyCredential to create a pipeline
- const pipeline = atlas.service.MapsURL.newPipeline(subscriptionKeyCredential, {
- retryOptions: { maxTries: 4 } // Retry options
- });
+ //Use MapControlCredential to share authentication between a map control and the service module.
+ var pipeline = atlas.service.MapsURL.newPipeline(new atlas.service.MapControlCredential(map));
//Create an instance of the SearchURL client. searchURL = new atlas.service.SearchURL(pipeline);
To add the JavaScript:
} };
- //If the user selects the My Location button, use the Geolocation API (Preview) to get the user's location. Center and zoom the map on that location.
+ //If the user selects the My Location button, use the Geolocation API to get the user's location. Center and zoom the map on that location.
document.getElementById('myLocationBtn').onclick = setMapToUserLocation; //Wait until the map resources are ready. map.events.add('ready', function() {
- //Add your post-map load functionality.
+ //Add your maps post load functionality.
}); }
- //Create an array of country/region ISO 2 values to limit searches to.
- var countrySet = ['US', 'CA', 'GB', 'FR','DE','IT','ES','NL','DK'];
- function performSearch() { var query = document.getElementById('searchTbx').value; //Perform a fuzzy search on the users query. searchURL.searchFuzzy(atlas.service.Aborter.timeout(3000), query, { //Pass in the array of country/region ISO2 for which we want to limit the search to.
- countrySet: countrySet
+ countrySet: countrySet,
+ view: 'Auto'
}).then(results => { //Parse the response into GeoJSON so that the map can understand. var data = results.geojson.getFeatures();
To add the JavaScript:
function setMapToUserLocation() { //Request the user's location. navigator.geolocation.getCurrentPosition(function(position) {
- //Convert the Geolocation API (Preview) position to a longitude and latitude position value that the map can interpret and center the map over it.
+ //Convert the geolocation API position into a longitude/latitude position value the map can understand and center the map over it.
map.setCamera({ center: [position.coords.longitude, position.coords.latitude], zoom: maxClusterZoomLevel + 1 }); }, function(error) {
- //If an error occurs when the API tries to access the user's position information, display an error message.
+ //If an error occurs when trying to access the users position information, display an error message.
switch (error.code) { case error.PERMISSION_DENIED: alert('User denied the request for geolocation.');
To add the JavaScript:
window.onload = initialize; ```
-4. In the map's `ready` event listener, add a zoom control and an HTML marker to display the center of a search area.
+4. In the map's `ready` event handler, add a zoom control and an HTML marker to display the center of a search area.
```JavaScript //Add a zoom control to the map.
To add the JavaScript:
map.markers.add(centerMarker); ```
-5. In the map's `ready` event listener, add a data source. Then, make a call to load and parse the dataset. Enable clustering on the data source. Clustering on the data source groups overlapping points together in a cluster. As the user zooms in, the clusters separate into individual points. This behavior provides a better user experience and improves performance.
+5. In the map's `ready` event handler, add a data source. Then, make a call to load and parse the dataset. Enable clustering on the data source. Clustering on the data source groups overlapping points together in a cluster. As the user zooms in, the clusters separate into individual points. This behavior provides a better user experience and improves performance.
```JavaScript //Create a data source, add it to the map, and then enable clustering.
To add the JavaScript:
map.sources.add(datasource);
- //Load all the store data now that the data source is defined.
+ //Load all the store data now that the data source has been defined.
loadStoreData(); ```
-6. After the dataset loads in the map's `ready` event listener, define a set of layers to render the data. A bubble layer renders clustered data points. A symbol layer renders the number of points in each cluster above the bubble layer. A second symbol layer renders a custom icon for individual locations on the map.
+6. After the dataset loads in the map's `ready` event handler, define a set of layers to render the data. A bubble layer renders clustered data points. A symbol layer renders the number of points in each cluster above the bubble layer. A second symbol layer renders a custom icon for individual locations on the map.
Add `mouseover` and `mouseout` events to the bubble and icon layers to change the mouse cursor when the user hovers over a cluster or icon on the map. Add a `click` event to the cluster bubble layer. This `click` event zooms in the map two levels and centers the map over a cluster when the user selects any cluster. Add a `click` event to the icon layer. This `click` event displays a pop-up window that shows the details of a coffee shop when a user selects an individual location icon. Add an event to the map to monitor when the map is finished moving. When this event fires, update the items in the list panel.
To add the JavaScript:
showPopup(e.shapes[0]); });
- //Add an event to monitor when the map is finished rendering the map after it has moved.
+ //Add an event to monitor when the map has finished rendering.
map.events.add('render', function() { //Update the data in the list. updateListItems();
To add the JavaScript:
}); ```
-7. When the coffee shop dataset is loaded, it must first be downloaded. Then, the text file must be split into lines. The first line contains the header information. To make the code easier to follow, we parse the header into an object, which we can then use to look up the cell index of each property. After the first line, loop through the remaining lines and create a point feature. Add the point feature to the data source. Finally, update the list panel.
+7. When the coffee shop dataset is needed, it must first be downloaded. Once downloaded, the file must be split into lines. The first line contains the header information. To make the code easier to follow, we parse the header into an object, which we can then use to look up the cell index of each property. After the first line, loop through the remaining lines and create a point feature. Add the point feature to the data source. Finally, update the list panel.
```JavaScript function loadStoreData() {
To add the JavaScript:
var camera = map.getCamera(); var listPanel = document.getElementById('listPanel');
- //Check to see whether the user is zoomed out a substantial distance. If they are, tell the user to zoom in and to perform a search or select the My Location button.
+ //Check to see if the user is zoomed out a substantial distance. If they are, tell them to zoom in and to perform a search or select the My Location button.
if (camera.zoom < maxClusterZoomLevel) { //Close the pop-up window; clusters might be displayed on the map. popup.close();
To add the JavaScript:
} ```
-Now, you have a fully functional store locator. In a web browser, open the *https://docsupdatetracker.net/index.html* file for the store locator. When the clusters appear on the map, you can search for a location by using the search box, by selecting the My Location button, by selecting a cluster, or by zooming in on the map to see individual locations.
+Now, you have a fully functional store locator. Open the *index.html* file in a web browser. When the clusters appear on the map, you can search for a location using any of the following methods:
+
+1. Use the search box.
+1. Select the **My Location** button.
+1. Select a cluster.
+1. Zoom in on the map to see individual locations.
The first time a user selects the My Location button, the browser displays a security warning that asks for permission to access the user's location. If the user agrees to share their location, the map zooms in on the user's location, and nearby coffee shops are shown.
If you resize the browser window to fewer than 700 pixels wide or open the appli
![Screenshot of the small-screen version of the store locator](./media/tutorial-create-store-locator/finished-simple-store-locator-mobile.png)
-In this tutorial, you learned how to create a basic store locator by using Azure Maps. The store locator you create in this tutorial might have all the functionality you need. You can add features to your store locator or use more advance features for a more custom user experience:
-
- * Enable [suggestions as you type](https://azuremapscodesamples.azurewebsites.net/?sample=Search%20Autosuggest%20and%20JQuery%20UI) in the search box.
- * Add [support for multiple languages](https://azuremapscodesamples.azurewebsites.net/?sample=Map%20Localization).
- * Allow the user to [filter locations along a route](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Data%20Along%20Route).
- * Add the ability to [set filters](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Symbols%20by%20Property).
- * Add support to specify an initial search value by using a query string. When you include this option in your store locator, users are then able to bookmark and share searches. It also provides an easy method for you to pass searches to this page from another page.
- * Deploy your store locator as an [Azure App Service Web App](../app-service/quickstart-html.md).
- * Store your data in a database and search for nearby locations. To learn more, see the [SQL Server spatial data types overview](/sql/relational-databases/spatial/spatial-data-types-overview?preserve-view=true&view=sql-server-2017) and [Query spatial data for the nearest neighbor](/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor?preserve-view=true&view=sql-server-2017).
+In this tutorial, you learned how to create a basic store locator by using Azure Maps. The store locator you create in this tutorial might have all the functionality you need. You can add features to your store locator or use more advanced features for a more customized user experience:
-You can [view full source code here](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator). [View the live sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Simple%20Store%20Locator) and learn more about the coverage and capabilities of Azure Maps by using [Zoom levels and tile grid](zoom-levels-and-tile-grid.md). You can also [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md) to apply to your business logic.
+* Enable [suggestions as you type](https://azuremapscodesamples.azurewebsites.net/?sample=Search%20Autosuggest%20and%20JQuery%20UI) in the search box.
+* Add [support for multiple languages](https://azuremapscodesamples.azurewebsites.net/?sample=Map%20Localization).
+* Allow the user to [filter locations along a route](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Data%20Along%20Route).
+* Add the ability to [set filters](https://azuremapscodesamples.azurewebsites.net/?sample=Filter%20Symbols%20by%20Property).
+* Add support to specify an initial search value by using a query string. When you include this option in your store locator, users are then able to bookmark and share searches. It also provides an easy method for you to pass searches to this page from another page.
+* Deploy your store locator as an [Azure App Service Web App](../app-service/quickstart-html.md).
+* Store your data in a database and search for nearby locations. To learn more, see the [SQL Server spatial data types overview](/sql/relational-databases/spatial/spatial-data-types-overview?preserve-view=true&view=sql-server-2017) and [Query spatial data for the nearest neighbor](/sql/relational-databases/spatial/query-spatial-data-for-nearest-neighbor?preserve-view=true&view=sql-server-2017).
-## Clean up resources
+## Additional information
-There are no resources that require cleanup.
+* For the completed code used in this tutorial, see [Simple Store Locator](https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/AzureMapsCodeSamples/Tutorials/Simple%20Store%20Locator) on GitHub.
+* To view this sample live, see [Simple Store Locator](https://azuremapscodesamples.azurewebsites.net/index.html?sample=Simple%20Store%20Locator) on the **Azure Maps Code Samples** site.
+* To learn more about the coverage and capabilities of Azure Maps, see [Zoom levels and tile grid](zoom-levels-and-tile-grid.md).
+* You can also apply [data-driven style expressions](data-driven-style-expressions-web-sdk.md) to your business logic.
## Next steps
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
The Azure Monitor agent replaces the [legacy agents for Azure Monitor](agents-ov
If the Azure Monitor agent has all the core capabilities you require, consider transitioning to it. If there are critical features that you require, continue with the current agent until the Azure Monitor agent reaches parity. - **Tolerance for rework:** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If the setup will take a significant amount of work, consider setting up your new environment with the new agent as it's now generally available.
- Azure Monitor's Log Analytics agent is retiring on 31 August 2024. The current agents will be supported for several years after deprecation begins.
+ Azure Monitor's Log Analytics agent is retiring on 31 August 2024. The current agents will be supported until the retirement date.
## Supported resource types Azure virtual machines, virtual machine scale sets, and Azure Arc-enabled servers are currently supported. Azure Kubernetes Service and other compute resource types aren't currently supported.
The following table shows the current support for the Azure Monitor agent with o
| Azure service | Current support | More information |
|:|:|:|
| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Private preview | [Sign-up link](https://aka.ms/AMAgent) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Event Forwarding (WEF): Private preview</li><li>Windows Security Events: [Public preview](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>[Sign-up link](https://aka.ms/AMAgent) </li><li>No sign-up needed</li></ul> |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Event Forwarding (WEF): [Public preview](/azure/sentinel/data-connectors-reference#windows-forwarded-events-preview)</li><li>Windows Security Events: [GA](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
The following table shows the current support for the Azure Monitor agent with Azure Monitor features.
azure-monitor Alerts Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-activity-log.md
The following fields are the options that you can use in the Azure Resource Mana
1. `level`: Level of the activity in the activity log event that the alert should be generated on. For example: `Critical`, `Error`, `Warning`, `Informational`, or `Verbose`.
1. `operationName`: The name of the operation in the activity log event. For example: `Microsoft.Resources/deployments/write`.
1. `resourceGroup`: Name of the resource group for the impacted resource in the activity log event.
-1. `resourceProvider`: For more information, see [Azure resource providers and types](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fazure-resource-manager%2Fmanagement%2Fresource-providers-and-types&data=02%7C01%7CNoga.Lavi%40microsoft.com%7C90b7c2308c0647c0347908d7c9a2918d%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637199572373543634&sdata=4RjpTkO5jsdOgPdt%2F%2FDOlYjIFE2%2B%2BuoHq5%2F7lHpCwQw%3D&reserved=0). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fazure%2Fazure-resource-manager%2Fmanagement%2Fazure-services-resource-providers&data=02%7C01%7CNoga.Lavi%40microsoft.com%7C90b7c2308c0647c0347908d7c9a2918d%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637199572373553639&sdata=0ZgJPK7BYuJsRifBKFytqphMOxMrkfkEwDqgVH1g8lw%3D&reserved=0).
+1. `resourceProvider`: For more information, see [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](/azure/azure-resource-manager/management/azure-services-resource-providers).
1. `status`: String describing the status of the operation in the activity event. For example: `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, or `Resolved`.
1. `subStatus`: Usually, this field is the HTTP status code of the corresponding REST call. But it can also include other strings describing a substatus. Examples of HTTP status codes include `OK` (HTTP Status Code: 200), `No Content` (HTTP Status Code: 204), and `Service Unavailable` (HTTP Status Code: 503), among many others.
1. `resourceType`: The type of the resource that was affected by the event. For example: `Microsoft.Resources/deployments`.
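As a quick way to see which values these condition fields take in your subscription, the following minimal PowerShell sketch lists recent activity log events and projects the matching properties. It assumes the Az.Monitor module and an authenticated session; the nested `.Value` property shape can vary between module versions.

```powershell
# Minimal sketch, assuming the Az.Monitor module and that Connect-AzAccount has been run.
# Lists recent activity log events and shows the fields an alert condition can filter on.
Get-AzActivityLog -StartTime (Get-Date).AddDays(-1) -MaxRecord 50 |
    Select-Object Level,
        @{ Name = 'operationName';    Expression = { $_.OperationName.Value } },
        @{ Name = 'resourceGroup';    Expression = { $_.ResourceGroupName } },
        @{ Name = 'resourceProvider'; Expression = { $_.ResourceProviderName.Value } },
        @{ Name = 'status';           Expression = { $_.Status.Value } },
        @{ Name = 'subStatus';        Expression = { $_.SubStatus.Value } },
        @{ Name = 'resourceType';     Expression = { $_.ResourceType.Value } } |
    Format-Table -AutoSize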
azure-monitor Alerts Metric Near Real Time https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric-near-real-time.md
Here's the full list of Azure Monitor metric sources supported by the newer aler
|Resource type |Dimensions Supported |Multi-resource alerts| Metrics Available|
|-|-|-|-|
-|Microsoft.Aadiam/azureADMetrics | Yes | No | [Azure AD](../essentials/metrics-supported.md#microsoftaadiamazureadmetrics) |
+|Microsoft.Aadiam/azureADMetrics | Yes | No | Azure Active Directory (metrics in private preview) |
|Microsoft.ApiManagement/service | Yes | No | [API Management](../essentials/metrics-supported.md#microsoftapimanagementservice) | |Microsoft.AppConfiguration/configurationStores |Yes | No | [App Configuration](../essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | |Microsoft.AppPlatform/spring | Yes | No | [Azure Spring Cloud](../essentials/metrics-supported.md#microsoftappplatformspring) |
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-supported.md
The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analyti
This latest update adds a new column and reorders the metrics to be alphabetical. The additional information means that the tables might have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you seem to be missing information, use the scroll bar to see the entirety of the table.
-## microsoft.aadiam/azureADMetrics
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|ThrottledRequests|No|ThrottledRequests|Count|Average|azureADMetrics type metric|No Dimensions|
- ## Microsoft.AnalysisServices/servers
This latest update adds a new column and reorders the metrics to be alphabetical
- [Read about metrics in Azure Monitor](../data-platform.md) - [Create alerts on metrics](../alerts/alerts-overview.md)-- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
+- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-categories.md
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export|
|-|-|-|
|AmlComputeClusterEvent|AmlComputeClusterEvent|No|
-|AmlComputeClusterNodeEvent|AmlComputeClusterNodeEvent|No|
+|AmlComputeClusterNodeEvent (deprecated) |AmlComputeClusterNodeEvent|No|
|AmlComputeCpuGpuUtilization|AmlComputeCpuGpuUtilization|No| |AmlComputeJobEvent|AmlComputeJobEvent|No| |AmlRunStatusChangedEvent|AmlRunStatusChangedEvent|No|
If you think something is missing, you can open a GitHub comment at the bottom o
|RunEvent|RunEvent|Yes| |RunReadEvent|RunReadEvent|Yes|
+> [!NOTE]
+> Effective February 2022, the AmlComputeClusterNodeEvent category will be deprecated. We recommend that you instead use the AmlComputeClusterEvent category.
+ ## Microsoft.Media/mediaservices
If you think something is missing, you can open a GitHub comment at the bottom o
* [Learn more about resource logs](../essentials/platform-logs-overview.md) * [Stream resource resource logs to **Event Hubs**](./resource-logs.md#send-to-azure-event-hubs) * [Change resource log diagnostic settings using the Azure Monitor REST API](/rest/api/monitor/diagnosticsettings)
-* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
+* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
azure-monitor Stream Monitoring Data Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/stream-monitoring-data-event-hubs.md
Routing your monitoring data to an event hub with Azure Monitor enables you to e
| Tool | Hosted in Azure | Description |
|:|:|:|
-| IBM QRadar | No | The Microsoft Azure DSM and Microsoft Azure Event Hub Protocol are available for download from [the IBM support website](https://www.ibm.com/support). You can learn more about the integration with Azure at [QRadar DSM configuration](https://www.ibm.com/docs/en/dsm?topic=options-configuring-microsoft-azure-event-hubs-communicate-qradar). |
+| IBM QRadar | No | The Microsoft Azure DSM and Microsoft Azure Event Hub Protocol are available for download from [the IBM support website](https://www.ibm.com/support). |
| Splunk | No | [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/) is an open source project available in Splunkbase. <br><br> If you cannot install an add-on in your Splunk instance, if for example you're using a proxy or running on Splunk Cloud, you can forward these events to the Splunk HTTP Event Collector using [Azure Function For Splunk](https://github.com/Microsoft/AzureFunctionforSplunkVS), which is triggered by new messages in the event hub. | | SumoLogic | No | Instructions for setting up SumoLogic to consume data from an event hub are available at [Collect Logs for the Azure Audit App from Event Hub](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure-Audit/02Collect-Logs-for-Azure-Audit-from-Event-Hub). | | ArcSight | No | The ArcSight Azure Event Hub smart connector is available as part of [the ArcSight smart connector collection](https://community.microfocus.com/cyberres/arcsight/f/arcsight-product-announcements/163662/announcing-general-availability-of-arcsight-smart-connectors-7-10-0-8114-0). |
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
If you have configured your storage account to allow access from selected networ
[![Storage account firewalls and networks](media/logs-data-export/storage-account-network.png "Screenshot of allow trusted Microsoft services.")](media/logs-data-export/storage-account-network.png#lightbox)
-### Create or update data export rule
-A data export rule defines the tables for which data is exported and destination. You can have 10 enabled rules in your workspace, more rules can be added in 'disable' state. Storage account must be unique across all export rules in workspace, but you can use the same event hub namespace in multiple rules.
-
-> [!NOTE]
-> - If export rule includes unsupported tables, no data will be exported for that tables until the tables becomes supported.
-> - A separate container is created for tables in storage account export.
-> - If event hub name isn't provided in rule, a separate event hub is created for tables in event hub namespace. The [number of supported event hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
+### Destinations monitoring
> [!IMPORTANT] > Export destinations have limits and should be monitored to minimize throttling, failures, and latency. See [storage accounts scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts) and [event hub namespace quota](../../event-hubs/event-hubs-quotas.md).
-#### Monitoring storage account
+**Monitoring storage account**
1. Use separate storage account for export
-1. Configure alert on the metric below:
+2. Configure alert on the metric below:
| Scope | Metric Namespace | Metric | Aggregation | Threshold |
|:|:|:|:|:|
| storage-name | Account | Ingress | Sum | 80% of max ingress per alert evaluation period. For example: limit is 60 Gbps for general-purpose v2 in West US. Threshold is 14,400 Gb per 5-minutes evaluation period |
-1. Alert remediation actions
+3. Alert remediation actions
   - Use separate storage account for export that isn't shared with non-monitoring data.
   - Azure Storage standard accounts support higher ingress limit by request. To request an increase, contact [Azure Support](https://azure.microsoft.com/support/faq/).
   - Split tables between more storage accounts.
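For illustration, here is a minimal PowerShell sketch of such a metric alert on the export storage account's **Ingress** metric, assuming the Az.Monitor module. The resource IDs, action group, and threshold are placeholders; the threshold approximates the 14,400 Gb per 5-minute example in the table above, expressed in bytes. Parameter names can vary slightly between Az.Monitor versions.

```powershell
# Minimal sketch, assuming the Az.Monitor module; IDs and names below are placeholders.
$storageId   = '/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Storage/storageAccounts/<storage-name>'
$actionGroup = '/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/microsoft.insights/actionGroups/<action-group-name>'

# Ingress is reported in bytes. 14,400 gigabits over 5 minutes is roughly 1.8e12 bytes; adjust to your account's limit.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName 'Ingress' `
    -MetricNamespace 'Microsoft.Storage/storageAccounts' `
    -TimeAggregation Total -Operator GreaterThan -Threshold 1.8e12

New-AzMetricAlertRuleV2 -Name 'export-storage-ingress-alert' `
    -ResourceGroupName '<rg-name>' `
    -TargetResourceId $storageId `
    -WindowSize 00:05:00 -Frequency 00:05:00 `
    -Condition $criteria -ActionGroupId $actionGroup -Severity 3
```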
-#### Monitoring event hub
+**Monitoring event hub**
1. Configure alerts on the [metrics](../../event-hubs/monitor-event-hubs-reference.md) below:
A data export rule defines the tables for which data is exported and destination
| namespaces-name | Event Hub standard metrics | Incoming requests | Count | 80% of max events per alert evaluation period. For example, limit is 1000/s per unit (TU or PU) and five units used. Threshold is 1200000 per 5-minutes evaluation period |
| namespaces-name | Event Hub standard metrics | Quota Exceeded Errors | Count | Between 1% of request. For example, requests per 5 minutes is 600000. Threshold is 6000 per 5-minutes evaluation period |
-1. Alert remediation actions
+2. Alert remediation actions
   - Use separate event hub namespace for export that isn't shared with non-monitoring data.
   - Configure [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) feature to automatically scale up and increase the number of throughput units to meet usage needs
   - Verify increase of throughput units to accommodate data volume
   - Split tables between more namespaces
   - Use 'Premium' or 'Dedicated' tiers for higher throughput
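As one possible remediation sketch, the following PowerShell enables Auto-inflate on the export namespace so throughput units scale up automatically. It is a minimal sketch assuming the Az.EventHub module; names and the throughput ceiling are placeholders, and parameter availability can differ between module versions.

```powershell
# Minimal sketch, assuming the Az.EventHub module; names and the throughput ceiling are placeholders.
# Parameter names can vary by module version.
Set-AzEventHubNamespace -ResourceGroupName '<rg-name>' `
    -Name '<namespace-name>' `
    -Location '<region>' `
    -SkuName Standard `
    -EnableAutoInflate `
    -MaximumThroughputUnits 20
```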
+### Create or update data export rule
+A data export rule defines the destination and the tables for which data is exported. You can create 10 rules in the 'enabled' state in your workspace; more rules are allowed in the 'disabled' state. The storage account destination must be unique across all export rules in the workspace, but multiple rules can export to the same event hub namespace, each to a separate event hub.
+
+> [!NOTE]
+> - You can include tables that aren't yet supported in export, and no data will be exported for these until the tables are supported.
+> - The current custom log tables won't be supported in export. The next generation of custom logs, available in preview in early 2022, will be supported.
+> - Export to storage account - a separate container is created in storage account for each table.
+> - Export to event hub - if event hub name isn't provided, a separate event hub is created for each table. The [number of supported event hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
+ # [Azure portal](#tab/portal) In the **Log Analytics workspace** menu in the Azure portal, select **Data Export** from the **Settings** section and click **New export rule** from the top of the middle pane.
Follow the steps, then click **Create**.
Use the following command to create a data export rule to a storage account using PowerShell. A separate container is created for each table. ```powershell
-$storageAccountResourceId = 'subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Storage/storageAccounts/storage-account-name'
-New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -ResourceId $storageAccountResourceId
+$storageAccountResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Storage/storageAccounts/storage-account-name'
+New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -ResourceId $storageAccountResourceId
``` Use the following command to create a data export rule to a specific event hub using PowerShell. All tables are exported to the provided event hub name and can be filtered by "Type" field to separate tables. ```powershell
-$eventHubResourceId = 'subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name'
-New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -ResourceId $eventHubResourceId -EventHubName EventhubName
+$eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name/eventhubs/eventhub-name'
+New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -ResourceId $eventHubResourceId -EventHubName EventhubName
``` Use the following command to create a data export rule to an event hub using PowerShell. When specific event hub name isn't provided, a separate container is created for each table up to the [number of supported event hubs for your event hub tier](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you have more tables to export, provide event hub name to export any number of tables, or set another rule to export the remaining tables to another event hub namespace. ```powershell
-$eventHubResourceId = 'subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name'
-New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -ResourceId $eventHubResourceId
+$eventHubResourceId = '/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.EventHub/namespaces/namespaces-name'
+New-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -ResourceId $eventHubResourceId
``` # [Azure CLI](#tab/azure-cli)
Export rules can be disabled to let you stop the export for a certain period suc
Export rules can be disabled to let you stop the export for a certain period such as when testing is being held. Use the following command to disable or update rule parameters using PowerShell. ```powershell
-Update-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent, Heartbeat' -Enable: $false
+Update-AzOperationalInsightsDataExport -ResourceGroupName resourceGroupName -WorkspaceName workspaceName -DataExportName 'ruleName' -TableName 'SecurityEvent,Heartbeat' -Enable: $false
``` # [Azure CLI](#tab/azure-cli)
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/monitor-reference.md
The following table lists Azure services and the data they collect into Azure Mo
| Service | Resource Provider Namespace | Has Metrics | Has Logs | Insight | Notes |
|-|-|-|-|-|-|
| [Azure Active Directory Domain Services](../active-directory-domain-services/index.yml) | Microsoft.AAD/DomainServices | No | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftaaddomainservices) | | |
- | [Azure Active Directory](../active-directory/index.yml) | Microsoft.Aadiam/azureADMetrics | [**Yes**](./essentials/metrics-supported.md#microsoftaadiamazureadmetrics) | No | [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | |
+ | [Azure Active Directory](../active-directory/index.yml) | Microsoft.Aadiam/azureADMetrics | No | No | [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | |
| [Azure Analysis Services](../analysis-services/index.yml) | Microsoft.AnalysisServices/servers | [**Yes**](./essentials/metrics-supported.md#microsoftanalysisservicesservers) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftanalysisservicesservers) | | | | [API Management](../api-management/index.yml) | Microsoft.ApiManagement/service | [**Yes**](./essentials/metrics-supported.md#microsoftapimanagementservice) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftapimanagementservice) | | | | [Azure App Configuration](../azure-app-configuration/index.yml) | Microsoft.AppConfiguration/configurationStores | [**Yes**](./essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | [**Yes**](/azure/azure-monitor/essentials/resource-logs-categories#microsoftappconfigurationconfigurationstores) | | |
azure-netapp-files Backup Configure Policy Based https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/backup-configure-policy-based.md
na Previously updated : 10/13/2021 Last updated : 01/05/2022 # Configure policy-based backups for Azure NetApp Files
You need to create a snapshot policy and associate the snapshot policy to the vo
Currently, the backup functionality can back up only daily, weekly, and monthly snapshots. (Hourly backups are not supported).
- * For a daily snapshot configuration, specify the time of the day when you want the snapshot created.
- * For a weekly snapshot configuration, specify the day of the week and time of the day when you want the snapshot created.
- * For a monthly snapshot configuration, specify the day of the month and time of the day when you want the snapshot created.
+ * For a *daily* snapshot configuration, specify the time of the day when you want the snapshot created.
+ * For a *weekly* snapshot configuration, specify the day of the week and time of the day when you want the snapshot created.
+ * For a *monthly* snapshot configuration, specify the day of the month and time of the day when you want the snapshot created.
+
+ > [!IMPORTANT]
+ > Be sure to specify a day that will work for all intended months. If you intend for the monthly snapshot configuration to work for all months in the year, pick a day of the month between 1 and 28. For example, if you specify `31` (day of the month), the monthly snapshot configuration is skipped for the months that have fewer than 31 days.
+
* For each snapshot configuration, specify the number of snapshots that you want to keep.
- For example, if you want to have daily backups, you must configure a snapshot policy with a daily snapshot schedule and snapshot count, and then apply that daily snapshot policy to the volume. If you change the snapshot policy or delete the daily snapshot configuration, new daily snapshots will not be created, resulting in daily backups not taking place. The same process and behavior apply to weekly, and monthly backups.
+ For example, if you want to have daily backups, you must configure a snapshot policy with a daily snapshot schedule and snapshot count, and then apply that daily snapshot policy to the volume. If you change the snapshot policy or delete the daily snapshot configuration, new daily snapshots will not be created, resulting in daily backups not taking place. The same process and behavior apply to weekly and monthly backups.
Ensure that each snapshot has a unique snapshot schedule configuration. By design, Azure NetApp Files prevents you from deleting the latest backup. If multiple snapshots have the same time (for example, the same daily and weekly schedule configuration), Azure NetApp Files considers them as the latest snapshots, and deleting those backups is prevented.
azure-netapp-files Snapshots Manage Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/snapshots-manage-policy.md
na Previously updated : 09/16/2021 Last updated : 01/05/2022
A snapshot policy enables you to specify the snapshot creation frequency in hour
3. Click the **Hourly**, **Daily**, **Weekly**, or **Monthly** tab to create hourly, daily, weekly, or monthly snapshot policies. Specify the **Number of snapshots to keep**.
+ > [!IMPORTANT]
+ > For *monthly* snapshot policy definition, be sure to specify a day that will work for all intended months. If you intend for the monthly snapshot configuration to work for all months in the year, pick a day of the month between 1 and 28. For example, if you specify `31` (day of the month), the monthly snapshot configuration is skipped for the months that have fewer than 31 days.
+ >
See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) about the maximum number of snapshots allowed for a volume. The following example shows hourly snapshot policy configuration.
You can delete a snapshot policy that you no longer want to keep.
* [Troubleshoot snapshot policies](troubleshoot-snapshot-policies.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
-* [Learn more about snapshots](snapshots-introduction.md)
+* [Learn more about snapshots](snapshots-introduction.md)
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/overview.md
Last updated 01/03/2022
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner.
-Bicep provides concise syntax, reliable type safety, and support for code reuse. We believe Bicep offers the best authoring experience for your infrastructure-as-code solutions in Azure.
+Bicep provides concise syntax, reliable type safety, and support for code reuse. We believe Bicep offers the best authoring experience for your [infrastructure-as-code](/devops/deliver/what-is-infrastructure-as-code) solutions in Azure.
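As a minimal illustration of that workflow, the following PowerShell sketch deploys a Bicep file to a resource group. It assumes the Az module with the Bicep CLI installed; the resource group and `main.bicep` file names are hypothetical placeholders.

```powershell
# Minimal sketch, assuming the Az module and the Bicep CLI are installed; names are placeholders.
New-AzResourceGroup -Name 'demo-rg' -Location 'eastus'

# Deploy the (hypothetical) main.bicep file; Azure PowerShell transpiles it to an ARM template during deployment.
New-AzResourceGroupDeployment `
    -ResourceGroupName 'demo-rg' `
    -TemplateFile './main.bicep'
```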
## Benefits of Bicep versus other tools
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) | | Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) | | Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) |
-| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> |
+| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> |
| Microsoft.Notebooks | [Azure Notebooks](https://notebooks.azure.com/help/introduction) | | Microsoft.NotificationHubs | [Notification Hubs](../../notification-hubs/index.yml) | | Microsoft.ObjectStore | Object Store |
azure-sql Azure Sql Iaas Vs Paas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/azure-sql-iaas-vs-paas-what-is-overview.md
Azure SQL is built upon the familiar SQL Server engine, so you can migrate appli
Learn how each product fits into Microsoft's Azure SQL data platform to match the right option for your business requirements. Whether you prioritize cost savings or minimal administration, this article can help you decide which approach delivers against the business requirements you care about most.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- If you're new to Azure SQL, check out the *What is Azure SQL* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner): > [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/What-is-Azure-SQL-3-of-61/player]
azure-sql Authentication Aad Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-configure.md
Last updated 12/15/2021
This article shows you how to create and populate an Azure Active Directory (Azure AD) instance, and then use Azure AD with [Azure SQL Database](sql-database-paas-overview.md), [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md), and [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md). For an overview, see [Azure Active Directory authentication](authentication-aad-overview.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Azure AD authentication methods Azure AD authentication supports the following authentication methods:
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
The auto-failover groups feature allows you to manage the replication and failov
> [!NOTE] > Auto-failover groups support geo-replication of all databases in the group to only one secondary server or instance in a different region. If you need to create multiple Azure SQL Database geo-secondary replicas (in the same or different regions) for the same primary replica, use [active geo-replication](active-geo-replication-overview.md). >
-> Auto-failover groups are not currently supported in the [Hyperscale](service-tier-hyperscale.md) service tier. For geographic failover of a Hyperscale database, use [active geo-replication](active-geo-replication-overview.md).
When you are using auto-failover groups with automatic failover policy, an outage that impacts one or several of the databases in the group will result in an automatic geo-failover. Typically, these are outages that cannot be automatically mitigated by the built-in high availability infrastructure. Examples of geo-failover triggers include an incident caused by a SQL Database tenant ring or control ring being down due to an OS kernel memory leak on compute nodes, or an incident caused by one or more tenant rings being down because a wrong network cable was accidentally cut during routine hardware decommissioning. For more information, see [SQL Database High Availability](high-availability-sla.md).
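For context, here is a minimal PowerShell sketch of creating a failover group with an automatic failover policy and adding a database to it. It assumes the Az.Sql module; the server, database, and group names are placeholders.

```powershell
# Minimal sketch, assuming the Az.Sql module; all names are placeholders.
New-AzSqlDatabaseFailoverGroup `
    -ResourceGroupName 'primary-rg' `
    -ServerName 'primary-server' `
    -PartnerResourceGroupName 'secondary-rg' `
    -PartnerServerName 'secondary-server' `
    -FailoverGroupName 'demo-failover-group' `
    -FailoverPolicy Automatic `
    -GracePeriodWithDataLossHours 1

# Add a database to the group so it is replicated to the secondary server.
$db = Get-AzSqlDatabase -ResourceGroupName 'primary-rg' -ServerName 'primary-server' -DatabaseName 'demo-db'
Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName 'primary-rg' -ServerName 'primary-server' `
    -FailoverGroupName 'demo-failover-group' -Database $db
```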
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
Last updated 08/28/2021
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-intro-sentence.md)]
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## What is a database backup? Database backups are an essential part of any business continuity and disaster recovery strategy, because they protect your data from corruption or deletion. These backups enable database restore to a point in time within the configured retention period. If your data protection rules require that your backups are available for an extended time (up to 10 years), you can configure [long-term retention](long-term-retention-overview.md) for both single and pooled databases.
azure-sql Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connectivity-architecture.md
This article explains architecture of various components that direct network tra
This article does *not* apply to **Azure SQL Managed Instance**. Refer to [Connectivity architecture for a managed instance](../managed-instance/connectivity-architecture-overview.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Connectivity architecture The following diagram provides a high-level overview of the connectivity architecture.
azure-sql Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/cost-management.md
This article describes how you plan for and manage costs for Azure SQL Database.
First, you use the Azure pricing calculator to add Azure resources, and review the estimated costs. After you've started using Azure SQL Database resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure SQL Database are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure SQL Database, you're billed for all Azure services and resources used in your Azure subscription, including any third-party services.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Prerequisites Cost analysis supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account.
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
This article summarizes the documentation changes associated with new features a
For Azure SQL Managed Instance, see [What's new](../managed-instance/doc-changes-updates-release-notes-whats-new.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
-- ## Preview
azure-sql Elastic Pool Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-overview.md
Last updated 06/23/2021
Azure SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single server and share a set number of resources at a set price. Elastic pools in Azure SQL Database enable SaaS developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## What are SQL elastic pools SaaS developers build applications on top of large scale data-tiers consisting of multiple databases. A common application pattern is to provision a single database for each customer. But different customers often have varying and unpredictable usage patterns, and it's difficult to predict the resource requirements of each individual database user. Traditionally, you had two options:
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
If you need more details about the differences, you can find them in the separat
- [Azure SQL Database vs. SQL Server differences](transact-sql-tsql-differences-sql-server.md) - [Azure SQL Managed Instance vs. SQL Server differences](../managed-instance/transact-sql-tsql-differences-sql-server.md)
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
-- ## Features of SQL Database and SQL Managed Instance The following table lists the major features of SQL Server and provides information about whether the feature is partially or fully supported in Azure SQL Database and Azure SQL Managed Instance, with a link to more information about the feature.
azure-sql Firewall Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/firewall-configure.md
When you create a new server in Azure SQL Database or Azure Synapse Analytics na
> Azure Synapse only supports server-level IP firewall rules. It doesn't support database-level IP firewall rules.
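For reference, a minimal PowerShell sketch of creating a server-level IP firewall rule follows. It assumes the Az.Sql module; the resource names and IP range are placeholders.

```powershell
# Minimal sketch, assuming the Az.Sql module; names and the IP range are placeholders.
New-AzSqlServerFirewallRule `
    -ResourceGroupName 'demo-rg' `
    -ServerName 'demo-server' `
    -FirewallRuleName 'AllowClientRange' `
    -StartIpAddress '203.0.113.0' `
    -EndIpAddress '203.0.113.31'
```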
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## How the firewall works Connection attempts from the internet and Azure must pass through the firewall before they reach your server or database, as the following diagram shows.
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-availability-sla.md
There are two high availability architectural models:
SQL Database and SQL Managed Instance both run on the latest stable version of the SQL Server database engine and Windows operating system, and most users would not notice that upgrades are performed continuously.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Basic, Standard, and General Purpose service tier locally redundant availability The Basic, Standard, and General Purpose service tiers leverage the standard availability architecture for both serverless and provisioned compute. The following figure shows four different nodes with the separated compute and storage layers.
azure-sql Logins Create Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/logins-create-manage.md
In this article, you learn about:
> [!IMPORTANT] > Databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse are referred to collectively in the remainder of this article as databases, and the server is referring to the [server](logical-servers.md) that manages databases for Azure SQL Database and Azure Synapse.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Authentication and authorization [**Authentication**](security-overview.md#authentication) is the process of proving the user is who they claim to be. A user connects to a database using a user account.
azure-sql Monitor Tune Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/monitor-tune-overview.md
Azure SQL Database and Azure SQL Managed Instance provide advanced monitoring an
SQL Server has its own monitoring and diagnostic capabilities that SQL Database and SQL Managed Instance leverage, such as [query store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store) and [dynamic management views (DMVs)](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views). See [Monitoring using DMVs](monitoring-with-dmvs.md) for scripts to monitor for a variety of performance issues.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Monitoring and tuning capabilities in the Azure portal In the Azure portal, Azure SQL Database and Azure SQL Managed Instance provide monitoring of resource metrics. Azure SQL Database provides database advisors, and Query Performance Insight provides query tuning recommendations and query performance analysis. In the Azure portal, you can enable automatic tuning for [logical SQL servers](logical-servers.md) and their single and pooled databases.
azure-sql Purchasing Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/purchasing-models.md
Azure SQL Database and Azure SQL Managed Instance let you easily purchase a full
- [Database transaction unit (DTU)-based purchasing model](service-tiers-dtu.md). This purchasing model provides bundled compute and storage packages balanced for common workloads.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
-- There are two purchasing models: - [vCore-based purchasing model](service-tiers-vcore.md) is available for both [Azure SQL Database](sql-database-paas-overview.md) and [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md). The [Hyperscale service tier](service-tier-hyperscale.md) is available for single databases that are using the [vCore-based purchasing model](service-tiers-vcore.md).
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
The Hyperscale service tier in Azure SQL Database is the newest service tier in
> - For details on the General Purpose and Business Critical service tiers in the vCore-based purchasing model, see [General Purpose](service-tier-general-purpose.md) and [Business Critical](service-tier-business-critical.md) service tiers. For a comparison of the vCore-based purchasing model with the DTU-based purchasing model, see [Azure SQL Database purchasing models and resources](purchasing-models.md). > - The Hyperscale service tier is currently only available for Azure SQL Database, and not Azure SQL Managed Instance. -
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## What are the Hyperscale capabilities The Hyperscale service tier in Azure SQL Database provides the following additional capabilities:
These are the current limitations to the Hyperscale service tier as of GA. We'r
| Elastic Pools | Elastic Pools aren't currently supported with Hyperscale.| | Migration to Hyperscale is currently a one-way operation | Once a database is migrated to Hyperscale, it can't be migrated directly to a non-Hyperscale service tier. At present, the only way to migrate a database from Hyperscale to non-Hyperscale is to export/import using a bacpac file or other data movement technologies (Bulk Copy, Azure Data Factory, Azure Databricks, SSIS, etc.) Bacpac export/import from Azure portal, from PowerShell using [New-AzSqlDatabaseExport](/powershell/module/az.sql/new-azsqldatabaseexport) or [New-AzSqlDatabaseImport](/powershell/module/az.sql/new-azsqldatabaseimport), from Azure CLI using [az sql db export](/cli/azure/sql/db#az_sql_db_export) and [az sql db import](/cli/azure/sql/db#az_sql_db_import), and from [REST API](/rest/api/sql/) is not supported. Bacpac import/export for smaller Hyperscale databases (up to 200 GB) is supported using SSMS and [SqlPackage](/sql/tools/sqlpackage) version 18.4 and later. For larger databases, bacpac export/import may take a long time, and may fail for various reasons.| | Migration of databases with In-Memory OLTP objects | Hyperscale supports a subset of In-Memory OLTP objects, including memory-optimized table types, table variables, and natively compiled modules. However, when any kind of In-Memory OLTP objects are present in the database being migrated, migration from Premium and Business Critical service tiers to Hyperscale is not supported. To migrate such a database to Hyperscale, all In-Memory OLTP objects and their dependencies must be dropped. After the database is migrated, these objects can be recreated. Durable and non-durable memory-optimized tables are not currently supported in Hyperscale, and must be changed to disk tables.|
-| Geo-replication | [Geo-replication](active-geo-replication-overview.md) on Hyperscale is now in public preview. |
+| Geo-replication | [Geo-replication](active-geo-replication-overview.md) and [auto-failover groups](auto-failover-group-overview.md) on Hyperscale are now in public preview. |
| Intelligent Database Features | With the exception of the "Force Plan" option, all other Automatic Tuning options aren't yet supported on Hyperscale: options may appear to be enabled, but there won't be any recommendations or actions made. | | Query Performance Insights | Query Performance Insights is currently not supported for Hyperscale databases. | | Shrink Database | DBCC SHRINKDATABASE or DBCC SHRINKFILE isn't currently supported for Hyperscale databases. |
azure-sql Service Tiers Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-vcore.md
The virtual core (vCore) purchasing model used by Azure SQL Database and Azure S
For more information on choosing between the vCore and DTU purchase models, see [Choose between the vCore and DTU purchasing models](purchasing-models.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Service tiers The following articles provide specific information on the vCore purchase model in each product.
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-create-quickstart.md
Last updated 12/09/2021
In this quickstart, you create a [single database](single-database-overview.md) in Azure SQL Database using either the Azure portal, a PowerShell script, or an Azure CLI script. You then query the database using **Query editor** in the Azure portal.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Prerequisites - An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
azure-sql Single Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-overview.md
The single database resource type creates a database in Azure SQL Database with
Single database is a deployment model for Azure SQL Database. The other is [elastic pools](elastic-pool-overview.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
-- ## Dynamic scalability You can build your first app on a small, single database at low cost in the serverless compute tier or a small compute size in the provisioned compute tier. You change the [compute or service tier](single-database-scale.md) manually or programmatically at any time to meet the needs of your solution. You can adjust performance without downtime to your app or to your customers. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements and enables you to only pay for the resources that you need when you need them.
azure-sql Sql Database Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-paas-overview.md
Azure SQL Database is based on the latest stable version of the [Microsoft SQL S
SQL Database enables you to easily define and scale performance within two different purchasing models: a [vCore-based purchasing model](service-tiers-vcore.md) and a [DTU-based purchasing model](service-tiers-dtu.md). SQL Database is a fully managed service that has built-in high availability, backups, and other common maintenance operations. Microsoft handles all patching and updating of the SQL and operating system code. You don't have to manage the underlying infrastructure.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- If you're new to Azure SQL Database, check out the *Azure SQL Database Overview* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner): > [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/Azure-SQL-Database-Overview-7-of-61/player]
azure-sql Troubleshoot Common Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-common-errors-issues.md
The Azure infrastructure has the ability to dynamically reconfigure servers when
| 49919 |16 |Cannot process create or update request. Too many create or update operations in progress for subscription "%ld".<br/><br/>The service is busy processing multiple create or update requests for your subscription or server. Requests are currently blocked for resource optimization. Query [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) for pending operations. Wait until pending create or update requests are complete or delete one of your pending requests and retry your request later. For more information, see: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md). | | 49920 |16 |Cannot process request. Too many operations in progress for subscription "%ld".<br/><br/>The service is busy processing multiple requests for this subscription. Requests are currently blocked for resource optimization. Query [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) for operation status. Wait until pending requests are complete or delete one of your pending requests and retry your request later. For more information, see: <br/>&bull; &nbsp;[Logical SQL server resource limits](resource-limits-logical-server.md)<br/>&bull; &nbsp;[DTU-based limits for single databases](service-tiers-dtu.md)<br/>&bull; &nbsp;[DTU-based limits for elastic pools](resource-limits-dtu-elastic-pools.md)<br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md). | | 4221 |16 |Login to read-secondary failed due to long wait on 'HADR_DATABASE_WAIT_FOR_TRANSITION_TO_VERSIONING'. The replica is not available for login because row versions are missing for transactions that were in-flight when the replica was recycled. The issue can be resolved by rolling back or committing the active transactions on the primary replica. Occurrences of this condition can be minimized by avoiding long write transactions on the primary. |
+| 615 | 21 | Could not find database ID %d, name '%.&#x2a;ls'. Error Code 615. <br/> This means the in-memory cache is out of sync with the SQL Server instance, and lookups are retrieving a stale database ID. <br/> <br/>SQL logins use an in-memory cache to get the database name-to-ID mapping. The cache should be in sync with the backend database and updated whenever a database is attached to or detached from the SQL Server instance. <br/>You receive this error when the detach workflow fails to clean up the in-memory cache in time and subsequent lookups to the database point to a stale database ID. <br/><br/>Try reconnecting to SQL Database until the resources are available and the connection is established again. For more information, see [Transient errors](troubleshoot-common-connectivity-issues.md#transient-errors-transient-faults).|
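Because errors such as 615 are transient, a simple retry loop usually resolves them. The following minimal PowerShell sketch retries the connection with a short backoff; it assumes Windows PowerShell, where the .NET `System.Data.SqlClient` client is available, and the connection string values are placeholders.

```powershell
# Minimal sketch: retry a SQL Database connection on transient failures such as error 615.
# Assumes Windows PowerShell (System.Data.SqlClient available); connection string values are placeholders.
$connectionString = 'Server=tcp:<server-name>.database.windows.net,1433;Database=<db-name>;User ID=<user>;Password=<password>;Encrypt=True;'
$maxAttempts = 5

for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        $connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
        $connection.Open()
        Write-Output "Connected on attempt $attempt."
        $connection.Close()
        break
    }
    catch {
        Write-Warning "Attempt ${attempt} failed: $($_.Exception.Message)"
        if ($attempt -eq $maxAttempts) { throw }
        Start-Sleep -Seconds (5 * $attempt)   # simple linear backoff before retrying
    }
}
```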
### Steps to resolve transient connectivity issues
azure-sql Connectivity Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/connectivity-architecture-overview.md
SQL Managed Instance is placed inside the Azure virtual network and the subnet t
- The ability to connect SQL Managed Instance to a linked server or another on-premises data store. - The ability to connect SQL Managed Instance to Azure resources.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- ## Communication overview The following diagram shows entities that connect to SQL Managed Instance. It also shows the resources that need to communicate with a managed instance. The communication process at the bottom of the diagram represents customer applications and tools that connect to SQL Managed Instance as data sources.
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 11/30/2021 Last updated : 01/05/2022 # What's new in Azure SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqlmi.md)]
This article summarizes the documentation changes associated with new features a
For Azure SQL Database, see [What's new](../database/doc-changes-updates-release-notes-whats-new.md).
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
-- ## Preview The following table lists the features of Azure SQL Managed Instance that are currently in preview:
The following changes were added to SQL Managed Instance and the documentation i
| **TDE-encrypted backup performance improvements** | It's now possible to set the point-in-time restore (PITR) backup retention period, and automated compression of backups encrypted with transparent data encryption (TDE) are now 30 percent more efficient in consuming backup storage space, saving costs for the end user. See [Change PITR](../database/automated-backups-overview.md?tabs=managed-instance#change-the-short-term-retention-policy) to learn more. | | **Azure AD authentication improvements** | Automate user creation using Azure AD applications and create individual Azure AD guest users (preview). To learn more, see [Directory readers in Azure AD](../database/authentication-aad-directory-readers-role.md)| | **Global VNet peering support** | Global virtual network peering support has been added to SQL Managed Instance, improving the geo-replication experience. See [geo-replication between managed instances](../database/auto-failover-group-overview.md?tabs=azure-powershell#enabling-geo-replication-between-managed-instances-and-their-vnets). |
-| **Hosting SSRS catalog databases** | SQL Managed Instance can now host catalog databases for all supported versions of SQL Server Reporting Services (SSRS). |
+| **Hosting SSRS catalog databases** | SQL Managed Instance can now host catalog databases of SQL Server Reporting Services (SSRS) for versions 2017 and newer. |
| **Major performance improvements** | Introducing improvements to SQL Managed Instance performance, including improved transaction log write throughput, improved data and log IOPS for business critical instances, and improved TempDB performance. See the [improved performance](https://techcommunity.microsoft.com/t5/azure-sql/announcing-major-performance-improvements-for-azure-sql-database/ba-p/1701256) tech community blog to learn more. | **Enhanced management experience** | Using the new [OPERATIONS API](/rest/api/sql/2021-02-01-preview/managed-instance-operations), it's now possible to check the progress of long-running instance operations. To learn more, see [Management operations](management-operations-overview.md?tabs=azure-portal). | **Machine learning support** | Machine Learning Services with support for R and Python languages now include preview support on Azure SQL Managed Instance (Preview). To learn more, see [Machine learning with SQL Managed Instance](machine-learning-services-overview.md). |
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
Last updated 01/14/2021
Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the broadest SQL Server database engine compatibility with all the benefits of a fully managed and evergreen platform as a service. SQL Managed Instance has near 100% compatibility with the latest SQL Server (Enterprise Edition) database engine, providing a native [virtual network (VNet)](../../virtual-network/virtual-networks-overview.md) implementation that addresses common security concerns, and a [business model](https://azure.microsoft.com/pricing/details/sql-database/) favorable for existing SQL Server customers. SQL Managed Instance allows existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal application and database changes. At the same time, SQL Managed Instance preserves all PaaS capabilities (automatic patching and version updates, [automated backups](../database/automated-backups-overview.md), [high availability](../database/high-availability-sla.md)) that drastically reduce management overhead and TCO.
-> [!div class="nextstepaction"]
-> [Survey to improve Azure SQL!](https://aka.ms/AzureSQLSurveyNov2021)
- If you're new to Azure SQL Managed Instance, check out the *Azure SQL Managed Instance* video from our in-depth [Azure SQL video series](https://channel9.msdn.com/Series/Azure-SQL-for-Beginners?WT.mc_id=azuresql4beg_azuresql-ch9-niner): > [!VIDEO https://channel9.msdn.com/Series/Azure-SQL-for-Beginners/Azure-SQL-Managed-Instance-Overview-6-of-61/player]
azure-video-analyzer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md
Title: What is Azure Video Analyzer for Media (formerly Video Indexer)?- description: This article gives an overview of the Azure Video Analyzer for Media (formerly Video Indexer) service. Last updated 12/10/2021
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-build-chat.md
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
```bash dotnet add package Microsoft.Extensions.Azure
- dotnet user-secrets init
- dotnet user-secrets set Azure:WebPubSub:ConnectionString "<connection-string>"
``` 2. Use dependency injection (DI) to add the service client inside `ConfigureServices`, and don't forget to replace `<connection_string>` with the connection string of your service.
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
{ services.AddAzureClients(builder => {
- builder.AddWebPubSubServiceClient(Configuration["Azure:WebPubSub:ConnectionString"], "chat");
+ builder.AddWebPubSubServiceClient("<connection_string>", "chat");
}); } ```
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
await context.Response.WriteAsync("missing user id"); return; }
- var serviceClient = context.RequestServices.GetRequiredService<WebPubSubServiceClient>();
+ var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
await context.Response.WriteAsync(serviceClient.GetClientAccessUri(userId: id).AbsoluteUri); }); });
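As an illustrative aside (not part of the original C# tutorial), the same negotiate step, generating a client access URL scoped to a user, might look like the sketch below in Python. The `azure-messaging-webpubsubservice` package and its `get_client_access_token` method are assumptions here, not something the tutorial prescribes.

```python
# A hedged sketch: issue a client access URL for a user connecting to the "chat" hub.
# Assumes the azure-messaging-webpubsubservice package; replace the placeholder first.
from azure.messaging.webpubsubservice import WebPubSubServiceClient

connection_string = "<connection_string>"  # placeholder, as in the tutorial
service = WebPubSubServiceClient.from_connection_string(connection_string, hub="chat")

# The returned dictionary includes the WebSocket URL the client uses to connect.
token = service.get_client_access_token(user_id="user1")
print(token["url"])
```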
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
// abuse protection endpoints.Map("/eventhandler/{*path}", async context => {
- var serviceClient = context.RequestServices.GetRequiredService<WebPubSubServiceClient>();
+ var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
if (context.Request.Method == "OPTIONS") { if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
```csharp app.UseEndpoints(endpoints => {
+ var serviceClient = context.RequestServices.GetRequiredService<Azure.Messaging.WebPubSub.WebPubSubServiceClient>();
// abuse protection endpoints.Map("/eventhandler/{*path}", async context => {
azure-web-pubsub Tutorial Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-subprotocol.md
Now let's create a web application using the `json.webpubsub.azure.v1` subprotoc
# [Java](#tab/java) Create an HTML page with below content and save it to */src/main/resources/public/index.html*:+ + ```html
backup Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-overview.md
Title: What is Azure Backup? description: Provides an overview of the Azure Backup service, and how it contributes to your business continuity and disaster recovery (BCDR) strategy. Previously updated : 07/28/2021 Last updated : 01/04/2022 # What is the Azure Backup service?
Azure Backup delivers these key benefits:
- [Geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage) is the default and recommended replication option. GRS replicates your data to a secondary region (hundreds of miles away from the primary location of the source data). GRS costs more than LRS, but GRS provides a higher level of durability for your data, even if there's a regional outage. - [Zone-redundant storage (ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage) replicates your data in [availability zones](../availability-zones/az-overview.md#availability-zones), guaranteeing data residency and resiliency in the same region. ZRS has no downtime. So your critical workloads that require [data residency](https://azure.microsoft.com/resources/achieving-compliant-data-residency-and-security-with-azure/), and must have no downtime, can be backed up in ZRS.
+## How Azure Backup protects against ransomware
+
+Azure Backup helps protect your critical business systems and backup data against a ransomware attack by implementing preventive measures and providing tools that protect your organization from every step that attackers take to infiltrate your systems. It provides security to your backup environment, both when your data is in transit and at rest. [Learn more](/azure/security/fundamentals/backup-plan-to-protect-against-ransomware).
+ ## Next steps - [Review](backup-architecture.md) the architecture and components for different backup scenarios.
backup Backup Rbac Rs Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-rbac-rs-vault.md
The following table captures the Backup management actions and corresponding min
| Create Recovery Services vault | Backup Contributor | Resource group containing the vault | | | Enable backup of Azure VMs | Backup Operator | Resource group containing the vault | | | | Virtual Machine Contributor | VM resource | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read |
+| Enable backup of Azure VMs (from VM blade) | Backup Operator | Resource group containing the vault | |
+| | Backup Operator | Resource group containing the virtual machine | |
+| | Virtual Machine Contributor | VM resource | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/instanceView/read |
| On-demand backup of VM | Backup Operator | Recovery Services vault | | | Restore VM | Backup Operator | Recovery Services vault | | | | Contributor | Resource group in which VM will be deployed | Alternatively, instead of a built-in-role, you can consider a custom role which has the following permissions: Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.DomainRegistration/domains/write, Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/read Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action |
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
The following table describes the features of Recovery Services vaults:
**Move vaults** | You can [move vaults](./backup-azure-move-recovery-services-vault.md) across subscriptions or between resource groups in the same subscription. However, moving vaults across regions isn't supported. **Move data between vaults** | Moving backed-up data between vaults isn't supported. **Modify vault storage type** | You can modify the storage replication type (either geo-redundant storage or locally redundant storage) for a vault before backups are stored. After backups begin in the vault, the replication type can't be modified.
-**Zone-redundant storage (ZRS)** | Supported in preview in UK South, South East Asia, Australia East, North Europe, Central US, East US 2, Brazil South, South Central US, Korea Central, Norway East, France Central, West Europe, East Asia, Sweden Central, Canada Central and Japan East.
+**Zone-redundant storage (ZRS)** | Supported in preview in UK South, South East Asia, Australia East, North Europe, Central US, East US 2, Brazil South, South Central US, Korea Central, Norway East, France Central, West Europe, East Asia, Sweden Central, Canada Central, Japan East and West US 3.
**Private Endpoints** | See [this section](./private-endpoints.md#before-you-start) for requirements to create private endpoints for a recovery service vault. ## On-premises backup support
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sql-support-matrix.md
Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 10/22/2021 Last updated : 01/04/2022 +++ # Support matrix for SQL Server Backup in Azure VMs
You can use Azure Backup to back up SQL Server databases in Azure VMs hosted on
| **Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server manually installed) VMs are supported. **Supported regions** | Azure Backup for SQL Server databases is available in all regions, except France South (FRS), UK North (UKN), UK South 2 (UKS2), UG IOWA (UGI), and Germany (Black Forest).
-**Supported operating systems** | Windows Server 2019, Windows Server 2016, Windows Server 2012, Windows Server 2008 R2 SP1 <br/><br/> Linux isn't currently supported.
+**Supported operating systems** | Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012, Windows Server 2008 R2 SP1 <br/><br/> Linux isn't currently supported.
**Supported SQL Server versions** | SQL Server 2019, SQL Server 2017 as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202017), SQL Server 2016 and SPs as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202016%20service%20pack), SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008 <br/><br/> Enterprise, Standard, Web, Developer, Express.<br><br>Express Local DB versions aren't supported. **Supported .NET versions** | .NET Framework 4.5.2 or later installed on the VM **Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server that is manually installed) VMs are supported. Support for standalone instances is always on [availability groups](backup-sql-server-on-availability-groups.md).
_*The database size limit depends on the data transfer rate that we support and
* TDE - enabled database backup is supported. To restore a TDE-encrypted database to another SQL Server, you need to first [restore the certificate to the destination server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server). The backup compression for TDE-enabled databases for SQL Server 2016 and newer versions is available, but at lower transfer size as explained [here](https://techcommunity.microsoft.com/t5/sql-server/backup-compression-for-tde-enabled-databases-important-fixes-in/ba-p/385593). * The backup and restore operations for mirror databases and database snapshots aren't supported. * SQL Server **Failover Cluster Instance (FCI)** isn't supported.
+* Azure Backup supports backup only of database files with the following extensions: _.ad_, _.cs_, and _.master_. Database files with other extensions, such as _.dll_, aren't backed up because the IIS server performs [file extension request filtering](/iis/configuration/system.webserver/security/requestfiltering/fileextensions).
## Backup throughput performance
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/connect-native-client-windows.md
Currently, this feature has the following limitations:
Before you begin, verify that you have met the following criteria:
-* The latest version of the CLI commands (version 2.30 or later) is installed. For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli).
+* The latest version of the CLI commands (version 2.32 or later) is installed. For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli).
* An Azure virtual network. * A virtual machine in the virtual network. * If you plan to sign in to your virtual machine using your Azure AD credentials, make sure your virtual machine is set up using one of the following methods:
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/quickstart-host-portal.md
# Quickstart: Configure Azure Bastion from VM settings
-This quickstart article shows you how to configure Azure Bastion based on your VM settings in the Azure portal, and then connect to a VM via private IP address. Once the service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network. The VM doesn't need a public IP address, client software, agent, or a special configuration. If you don't need the public IP address on your VM for anything else, you can remove it. You then connect to your VM through the portal using the private IP address. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+This quickstart article shows you how to configure Azure Bastion based on your VM settings, and then connect to the VM via private IP address using the Azure portal. Once the Bastion service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network.
+
+When connecting via Azure Bastion, your VM doesn't need a public IP address, client software, agent, or a special configuration. Additionally, if you don't need the public IP address on your VM for anything else, you can remove it and connect to your VM through the portal using the private IP address. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
## <a name="prereq"></a>Prerequisites
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/chaos-studio/chaos-studio-fault-library.md
description: Understand the available actions you can use with Chaos Studio incl
Previously updated : 11/10/2021 Last updated : 01/05/2022
Known issues on Linux:
"value": "{\"action\":\"delay\",\"mode\":\"one\",\"selector\":{\"namespaces\":[\"default\"],\"labelSelectors\":{\"app\":\"web-show\"}},\"delay\":{\"latency\":\"10ms\",\"correlation\":\"100\",\"jitter\":\"0ms\"}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"action\":\"pod-failure\",\"mode\":\"one\",\"duration\":\"30s\",\"selector\":{\"labelSelectors\":{\"app.kubernetes.io\/component\":\"tikv\"}}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"app1\"}},\"stressors\":{\"memory\":{\"workers\":4,\"size\":\"256MB\"}}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"action\":\"latency\",\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"etcd\"}},\"volumePath\":\"\/var\/run\/etcd\",\"path\":\"\/var\/run\/etcd\/**\/*\",\"delay\":\"100ms\",\"percent\":50,\"duration\":\"400s\"}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"mode\":\"one\",\"selector\":{\"labelSelectors\":{\"app\":\"app1\"}},\"timeOffset\":\"-10m100ns\"}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"mode\":\"one\",\"selector\":{\"namespaces\":[\"chaos-mount\"]},\"failKernRequest\":{\"callchain\":[{\"funcname\":\"__x64_sys_mount\"}],\"failtype\":0}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"mode\":\"all\",\"selector\":{\"labelSelectors\":{\"app\":\"nginx\"}},\"target\":\"Request\",\"port\":80,\"method\":\"GET\",\"path\":\"\/api\",\"abort\":true,\"duration\":\"5m\",\"scheduler\":{\"cron\":\"@every 10m\"}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
Known issues on Linux:
"value": "{\"action\":\"random\",\"mode\":\"all\",\"patterns\":[\"google.com\",\"chaos-mesh.*\",\"github.?om\"],\"selector\":{\"namespaces\":[\"busybox\"]}}" } ],
- "duration": "PT10M",
"selectorid": "myResources" } ]
cognitive-services Howtocallvisionapi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/HowToCallVisionAPI.md
Previously updated : 09/09/2019 Last updated : 01/05/2022
This article demonstrates how to call the Image Analysis API to return information about an image's visual features.
-This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">create a Computer Vision resource </a> and obtained a subscription key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
+This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Computer Vision resource" target="_blank">created a Computer Vision resource </a> and obtained a subscription key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/image-analysis-client-library.md) to get started.
## Submit data to the service
The [Analyze API](https://westus.dev.cognitive.microsoft.com/docs/services/compu
|URL parameter | Value | Description| |||--| |`visualFeatures`|`Adult` | detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (aka racy content) is also detected.|
-||`Brands` | detects various brands within an image, including the approximate location. The Brands argument is only available in English.|
-||`Categories` | categorizes image content according to a taxonomy defined in documentation. This is the default value of `visualFeatures`.|
-||`Color` | determines the accent color, dominant color, and whether an image is black&white.|
-||`Description` | describes the image content with a complete sentence in supported languages.|
-||`Faces` | detects if faces are present. If present, generate coordinates, gender and age.|
-||`ImageType` | detects if image is clip art or a line drawing.|
-||`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
-||`Tags` | tags the image with a detailed list of words related to the image content.|
+|`visualFeatures`|`Brands` | detects various brands within an image, including the approximate location. The Brands argument is only available in English.|
+|`visualFeatures`|`Categories` | categorizes image content according to a taxonomy defined in documentation. This is the default value of `visualFeatures`.|
+|`visualFeatures`|`Color` | determines the accent color, dominant color, and whether an image is black&white.|
+|`visualFeatures`|`Description` | describes the image content with a complete sentence in supported languages.|
+|`visualFeatures`|`Faces` | detects if faces are present. If present, generate coordinates, gender and age.|
+|`visualFeatures`|`ImageType` | detects if image is clip art or a line drawing.|
+|`visualFeatures`|`Objects` | detects various objects within an image, including the approximate location. The Objects argument is only available in English.|
+|`visualFeatures`|`Tags` | tags the image with a detailed list of words related to the image content.|
|`details`| `Celebrities` | identifies celebrities if detected in the image.|
-||`Landmarks` |identifies landmarks if detected in the image.|
+|`details`|`Landmarks` |identifies landmarks if detected in the image.|
A populated URL might look like the following:
You can also specify the language of the returned data. The following URL query
|URL parameter | Value | Description| |||--| |`language`|`en` | English|
-||`es` | Spanish|
-||`ja` | Japanese|
-||`pt` | Portuguese|
-||`zh` | Simplified Chinese|
+|`language`|`es` | Spanish|
+|`language`|`ja` | Japanese|
+|`language`|`pt` | Portuguese|
+|`language`|`zh` | Simplified Chinese|
A populated URL might look like the following:
description.captions[].confidence | `number` | The confidence score for th
See the following list of possible errors and their causes: * 400
- * InvalidImageUrl - Image URL is badly formatted or not accessible.
- * InvalidImageFormat - Input data is not a valid image.
- * InvalidImageSize - Input image is too large.
- * NotSupportedVisualFeature - Specified feature type is not valid.
- * NotSupportedImage - Unsupported image, e.g. child pornography.
- * InvalidDetails - Unsupported `detail` parameter value.
- * NotSupportedLanguage - The requested operation is not supported in the language specified.
- * BadArgument - Additional details are provided in the error message.
+ * `InvalidImageUrl` - Image URL is badly formatted or not accessible.
+ * `InvalidImageFormat` - Input data is not a valid image.
+ * `InvalidImageSize` - Input image is too large.
+ * `NotSupportedVisualFeature` - Specified feature type is not valid.
+ * `NotSupportedImage` - Unsupported image, for example child pornography.
+ * `InvalidDetails` - Unsupported `detail` parameter value.
+ * `NotSupportedLanguage` - The requested operation is not supported in the language specified.
+ * `BadArgument` - Additional details are provided in the error message.
* 415 - Unsupported media type error. The Content-Type is not in the allowed types:
- * For an image URL: Content-Type should be application/json
- * For a binary image data: Content-Type should be application/octet-stream or multipart/form-data
+ * For an image URL, Content-Type should be `application/json`
+ * For a binary image data, Content-Type should be `application/octet-stream` or `multipart/form-data`
* 500
- * FailedToProcess
- * Timeout - Image processing timed out.
- * InternalServerError
+ * `FailedToProcess`
+ * `Timeout` - Image processing timed out.
+ * `InternalServerError`
> [!TIP] > While working with Computer Vision, you might encounter transient failures caused by [rate limits](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) enforced by the service, or other transient problems like network outages. For information about handling these types of failures, see [Retry pattern](/azure/architecture/patterns/retry) in the Cloud Design Patterns guide, and the related [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker).
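To make the query parameters and error codes above concrete, here's a minimal sketch (not from the article) of calling the Analyze API with the `visualFeatures` and `language` parameters. The API version path (`v3.2`), resource endpoint, key, and image URL are placeholders/assumptions.

```python
# Minimal sketch of an Image Analysis request; replace the placeholders before running.
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
key = "<your-subscription-key>"

analyze_url = f"{endpoint}/vision/v3.2/analyze"
params = {"visualFeatures": "Description,Tags,Brands", "language": "en"}
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
body = {"url": "https://example.com/sample.jpg"}  # remote image to analyze

response = requests.post(analyze_url, params=params, headers=headers, json=body)
if response.ok:
    analysis = response.json()
    # Captions come back with confidence scores, as described in the response fields above.
    for caption in analysis.get("description", {}).get("captions", []):
        print(caption["text"], caption["confidence"])
else:
    # 400/415/500 responses carry error codes such as InvalidImageUrl or Timeout.
    print(response.status_code, response.text)
```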
cognitive-services Concept Brand Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-brand-detection.md
Previously updated : 08/08/2019 Last updated : 01/05/2022
Brand detection is a specialized mode of [object detection](concept-object-detection.md) that uses a database of thousands of global logos to identify commercial brands in images or video. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement.
-The Computer Vision service detects whether there are brand logos in a given image; if so, it returns the brand name, a confidence score, and the coordinates of a bounding box around the logo.
+The Computer Vision service detects whether there are brand logos in a given image; if there are, it returns the brand name, a confidence score, and the coordinates of a bounding box around the logo.
-The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the brand you're looking for is not detected by the Computer Vision service, you may be better served creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.
+The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the brand you're looking for is not detected by the Computer Vision service, you could also try creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.
## Brand detection example
cognitive-services Concept Describing Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-describing-images.md
Previously updated : 02/11/2019 Last updated : 01/05/2022 # Describe images with human-readable language
-Computer Vision can analyze an image and generate a human-readable sentence that describes its contents. The algorithm actually returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
+Computer Vision can analyze an image and generate a human-readable phrase that describes its contents. The algorithm returns several descriptions based on different visual features, and each description is given a confidence score. The final output is a list of descriptions ordered from highest to lowest confidence.
+
+At this time, English is the only supported language for image description.
## Image description example
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
Previously updated : 04/17/2019 Last updated : 01/05/2022 # Face detection with Computer Vision
-Computer Vision can detect human faces within an image and generate the age, gender, and rectangle for each detected face.
+Computer Vision can detect human faces within an image and generate rectangle coordinates for each detected face.
> [!NOTE]
-> This feature is also offered by the Azure [Face](../face/index.yml) service. See this alternative for more detailed face analysis, including face identification and pose detection.
+> This feature is also offered by the Azure [Face](../face/index.yml) service. Use this alternative for more detailed face analysis, including face identification and head pose detection.
## Face detection examples
cognitive-services Concept Tagging Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/concept-tagging-images.md
Previously updated : 02/08/2019 Last updated : 01/05/2022
-# Applying content tags to images
+# Apply content tags to images
-Computer Vision returns tags based on thousands of recognizable objects, living beings, scenery, and actions. When tags are ambiguous or not common knowledge, the API response provides 'hints' to clarify the meaning of the tag in context of a known setting. Tags are not organized as a taxonomy and no inheritance hierarchies exist. A collection of content tags forms the foundation for an image 'description' displayed as human readable language formatted in complete sentences. Note, that at this point English is the only supported language for image description.
+Computer Vision can return content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images. Tags are not organized as a taxonomy and do not have inheritance hierarchies. A collection of content tags forms the foundation for an image [description](./concept-describing-images.md) displayed as human readable language formatted in complete sentences. When tags are ambiguous or not common knowledge, the API response provides hints to clarify the meaning of the tag in context of a known setting.
-After uploading an image or specifying an image URL, Computer Vision algorithms output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets etc.
+After you upload an image or specify an image URL, the Computer Vision algorithm can output tags based on the objects, living beings, and actions identified in the image. Tagging is not limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
## Image tagging example
cognitive-services Export Model Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/export-model-python.md
Previously updated : 11/23/2020 Last updated : 01/05/2022 ms.devlang: python
-# Tutorial: Run TensorFlow model in Python
+# Tutorial: Run a TensorFlow model in Python
After you have [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this quickstart will show you how to use this model locally to classify images.
After you have [exported your TensorFlow model](./export-your-model.md) from the
## Prerequisites
-To use the tutorial, you need to do the following:
+To use the tutorial, first do the following:
- Install either Python 2.7+ or Python 3.6+. - Install pip.
pip install opencv-python
## Load your model and tags
-The downloaded .zip file contains a _model.pb_ and a _labels.txt_ file. These files represent the trained model and the classification labels. The first step is to load the model into your project. Add the following code to a new Python script.
+The downloaded _.zip_ file contains a _model.pb_ and a _labels.txt_ file. These files represent the trained model and the classification labels. The first step is to load the model into your project. Add the following code to a new Python script.
```Python import tensorflow as tf
with open(labels_filename, 'rt') as lf:
## Prepare an image for prediction
-There are a few steps you need to take to prepare the image for prediction. These steps mimic the image manipulation performed during training:
+There are a few steps you need to take to prepare the image for prediction. These steps mimic the image manipulation performed during training.
### Open the file and create an image in the BGR color space
def update_orientation(image):
## Classify an image
-Once the image is prepared as a tensor, we can send it through the model for a prediction:
+Once the image is prepared as a tensor, we can send it through the model for a prediction.
```Python
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites - A set of images with which to train your detector model. You can use the set of [sample images](https://github.com/Azure-Samples/cognitive-services-python-sdk-samples/tree/master/samples/vision/images) on GitHub. Or, you can choose your own images using the tips below.-- A [supported web browser](overview.md#supported-browsers-for-custom-vision-website)
+- A [supported web browser](overview.md#supported-browsers-for-custom-vision-web-portal)
## Create Custom Vision resources
cognitive-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites - A set of images with which to train your classifier. See below for tips on choosing images.-- A [supported web browser](overview.md#supported-browsers-for-custom-vision-website)
+- A [supported web browser](overview.md#supported-browsers-for-custom-vision-web-portal)
## Create Custom Vision resources
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/overview.md
Additionally, you can choose from several variations of the Custom Vision algori
The Custom Vision Service is available as a set of native SDKs as well as through a web-based interface on the [Custom Vision website](https://customvision.ai/). You can create, test, and train a model through either interface or use both together.
-### Supported browsers for Custom Vision website
+### Supported browsers for Custom Vision web portal
The Custom Vision web interface can be used by the following web browsers: - Microsoft Edge (latest version)
cognitive-services Select Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/select-domain.md
Previously updated : 03/06/2020 Last updated : 01/05/2022 # Select a domain for a Custom Vision project
-From the settings tab of your Custom Vision project, you can select a domain for your project. Choose the domain that is closest to your scenario. If you're accessing Custom Vision through a client library or REST API, you'll need to specify a domain ID when creating the project. You can get a list of domain IDs with [Get Domains](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeab), or use the table below.
+From the **settings** tab of your project on the Custom Vision web portal, you can select a model domain for your project. You'll want to choose the domain that's closest to your scenario. If you're accessing Custom Vision through a client library or REST API, you'll need to specify a domain ID when creating the project. You can get a list of domain IDs with [Get Domains](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeab), or use the table below.
## Image Classification
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-gpu.md
When deploying GPU resources, set CPU and memory resources appropriate for the w
* **CUDA drivers** - Container instances with GPU resources are pre-provisioned with NVIDIA CUDA drivers and container runtimes, so you can use container images developed for CUDA workloads.
- We support only CUDA 9.0 at this stage. For example, you can use the following base images for your Dockerfile:
- * [nvidia/cuda:9.0-base-ubuntu16.04](https://hub.docker.com/r/nvidia/cuda/)
- * [tensorflow/tensorflow: 1.12.0-gpu-py3](https://hub.docker.com/r/tensorflow/tensorflow)
+ We support up through CUDA 11 at this stage. For example, you can use the following base images for your Dockerfile:
+ * [nvidia/cuda:11.4.2-base-ubuntu20.04](https://hub.docker.com/r/nvidia/cuda/)
+ * [tensorflow/tensorflow:devel-gpu](https://hub.docker.com/r/tensorflow/tensorflow)
> [!NOTE] > To improve reliability when using a public container image from Docker Hub, import and manage the image in a private Azure container registry, and update your Dockerfile to use your privately managed base image. [Learn more about working with public images](../container-registry/buffer-gate-public-content.md).
One way to add GPU resources is to deploy a container group by using a [YAML fil
```yaml additional_properties: {}
-apiVersion: '2019-12-01'
+apiVersion: '2021-09-01'
name: gpucontainergroup properties: containers:
Another way to deploy a container group with GPU resources is by using a [Resour
{ "name": "[parameters('containerGroupName')]", "type": "Microsoft.ContainerInstance/containerGroups",
- "apiVersion": "2019-12-01",
+ "apiVersion": "2021-09-01",
"location": "[resourceGroup().location]", "properties": { "containers": [
cosmos-db Troubleshoot Nohostavailable Exception https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra/troubleshoot-nohostavailable-exception.md
Title: Troubleshooting NoHostAvailableException and NoNodeAvailableException
-description: This article discusses the different possible reasons for having a NoHostException and ways to handle it.
+ Title: Troubleshoot NoHostAvailableException and NoNodeAvailableException
+description: This article discusses the various reasons for having a NoHostException and ways to handle it.
ms.devlang: csharp, java
-# Troubleshooting NoHostAvailableException and NoNodeAvailableException
-The NoHostAvailableException is a top-level wrapper exception with many possible causes and inner exceptions, many of which can be client-related. This exception tends to occur if there are some issues with cluster, connection settings or one or more Cassandra nodes is unavailable. Here we explore possible reasons for this exception along with details specific to the client driver being used.
+# Troubleshoot NoHostAvailableException and NoNodeAvailableException
+NoHostAvailableException is a top-level wrapper exception with many possible causes and inner exceptions, many of which can be client-related. This exception tends to occur if there are some issues with the cluster or connection settings, or if one or more Cassandra nodes are unavailable.
-## Driver Settings
-One of the most common causes of a NoHostAvailableException is because of the default driver settings. We advised the following [settings](#code-sample).
+This article explores possible reasons for this exception, and it discusses specific details about the client driver that's being used.
-- The default value of the connections per host is 1, which is not recommended for CosmosDB, a minimum value of 10 is advised. While more aggregated RUs are provisioned, increase connection count. The general guideline is 10 connections per 200k RU.-- Use cosmos retry policy to handle intermittent throttling responses, please reference [cosmosdb extension library](https://github.com/Azure/azure-cosmos-cassandra-extensions)(https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1)-- For multi-region account, CosmosDB load-balancing policy in the extension should be used.-- Read request timeout should be set greater than 1 minute. We recommend 90 seconds.
+## Driver settings
+One of the most common causes of NoHostAvailableException is the default driver settings. We recommend that you use the [settings](#code-sample) listed at the end of this article. Here is some explanatory information:
-## Exception Messages
-If exception still persists after the recommended settings, review the exception messages below. Follow the recommendation, if your error log contains any of these messages.
+- The default value of connections per host is 1, which we don't recommend for Azure Cosmos DB. We recommend a minimum value of 10. As you provision more aggregated Request Units (RU), increase the connection count. The general guideline is 10 connections per 200,000 RU (see the sketch after this list).
+- Use the Azure Cosmos DB retry policy to handle intermittent throttling responses. For more information, see the Azure Cosmos DB extension libraries:
+ - [Driver 3 extension library](https://github.com/Azure/azure-cosmos-cassandra-extensions)
+ - [Driver 4 extension library](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1)
+- For multi-region accounts, use the Azure Cosmos DB load-balancing policy in the extension.
+- The read request timeout should be set at greater than 1 minute. We recommend 90 seconds.
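As a rough, editorial illustration of the connection guideline above (not an official formula from the article), the recommended per-host connection count scales with provisioned throughput:

```python
# Illustrative only: at least 10 connections per host, plus roughly 10 per 200,000 RU.
import math

def recommended_connections_per_host(provisioned_ru: int) -> int:
    return max(10, math.ceil(provisioned_ru / 200_000) * 10)

print(recommended_connections_per_host(100_000))    # 10
print(recommended_connections_per_host(400_000))    # 20
print(recommended_connections_per_host(1_000_000))  # 50
```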
+
+## Exception messages
+If the exception persists after you've made the recommended changes, review the exception messages in the next three sections. If your error log contains any of these exception messages, follow the recommendation for that exception.
### BusyPoolException
-This client-side error indicates that the maximum number of request connections for a host has been reached. If unable to remove, request from the queue, you might see this error. If the connection per host has been set to minimum of 10, this could be caused by high server-side latency.
+This client-side error indicates that the maximum number of request connections for a host has been reached. If you're unable to remove the request from the queue, you might see this error. If the connections per host have been set to a minimum of 10, the exception could be caused by high server-side latency.
```
-Java driver v3 exception:
+Java driver v3 exception:
All host(s) tried for query failed (tried: :10350 (com.datastax.driver.core.exceptions.BusyPoolException: [:10350] Pool is busy (no available connection and the queue has reached its max size 256))) All host(s) tried for query failed (tried: :10350 (com.datastax.driver.core.exceptions.BusyPoolException: [:10350] Pool is busy (no available connection and timed out after 5000 MILLISECONDS))) ```
C# driver 3:
All hosts tried for query failed (tried :10350: BusyPoolException 'All connections to host :10350 are busy, 2048 requests are in-flight on each 10 connection(s)') ``` #### Recommendation
-Instead of tuning the `max requests per connection`, we advise making sure the `connections per host` is set to a minimum of 10. See the [code sample section](#code-sample).
+Instead of tuning `max requests per connection`, make sure that `connections per host` is set to a minimum of 10. See the [code sample section](#code-sample).
### TooManyRequest(429)
-OverloadException is thrown when the request rate is too large. Which may be because of insufficient throughput being provisioned for the table and the RU budget being exceeded. Learn more about [large request](../sql/troubleshoot-request-rate-too-large.md#request-rate-is-large) and [server-side retry](prevent-rate-limiting-errors.md)
+OverloadException is thrown when the request rate is too great, which might happen when insufficient throughput is provisioned for the table and the RU budget is exceeded. For more information, see [large request](../sql/troubleshoot-request-rate-too-large.md#request-rate-is-large) and [server-side retry](prevent-rate-limiting-errors.md).
#### Recommendation
-We recommend using either of the following options:
-- If throttling is persistent, increase provisioned RU.-- If throttling is intermittent, use the CosmosRetryPolicy.-- If the extension library cannot be referenced [enable server side retry](prevent-rate-limiting-errors.md).
+Apply one of the following options:
+- If throttling is persistent, increase the provisioned RU.
+- If throttling is intermittent, use the Azure Cosmos DB retry policy.
+- If the extension library can't be referenced, [enable server-side retry](prevent-rate-limiting-errors.md).
### All hosts tried for query failed
-When the client is set to connect to a different region other than the primary contact point region, you will get below exception during the initial a few seconds upon start-up.
+When the client is set to connect to a region other than the primary contact point region, during the initial few seconds at startup, you'll get one of the following exception messages:
-Exception message with a Java driver 3: `Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)at cassandra.driver.core@3.10.2/com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:83)`
+- For Java driver 3: `Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)at cassandra.driver.core@3.10.2/com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:83)`
-Exception message with a Java driver 4: `No node was available to execute the query`
+- For Java driver 4: `No node was available to execute the query`
-Exception message with a C# driver 3: `System.ArgumentException: Datacenter West US does not match any of the nodes, available datacenters: West US 2`
+- For C# driver 3: `System.ArgumentException: Datacenter West US does not match any of the nodes, available datacenters: West US 2`
#### Recommendation
-We advise using the CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-cosmos-cassandra-extensions) and [Java driver 4](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1). This policy falls back to the ContactPoint of the primary write region where the specified local data is unavailable.
+Use CosmosLoadBalancingPolicy in [Java driver 3](https://github.com/Azure/azure-cosmos-cassandra-extensions) and [Java driver 4](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.1). This policy falls back to the contact point of the primary write region where the specified local data is unavailable.
> [!NOTE]
-> Please reach out to Azure Cosmos DB support with details around - exception message, exception stacktrace, datastax driver log, universal time of failure, consistent or intermittent failures, failing keyspace and table, request type that failed, SDK version if none of the above recommendations help resolve your issue.
+> If the preceding recommendations don't help resolve your issue, contact Azure Cosmos DB support. Be sure to provide the following details: exception message, exception stacktrace, datastax driver log, universal time of failure, consistent or intermittent failures, failing keyspace and table, request type that failed, and SDK version.
-## Code Sample
+## Code sample
-#### Java Driver 3 Settings
+#### Java driver 3 settings
``` java // socket options with default values // https://docs.datastax.com/en/developer/java-driver/3.6/manual/socket_options/
We advise using the CosmosLoadBalancingPolicy in [Java driver 3](https://github.
.build(); ```
-#### Java Driver 4 Settings
+#### Java driver 4 settings
```java // driver configurations // https://docs.datastax.com/en/developer/java-driver/4.6/manual/core/configuration/
We advise using the CosmosLoadBalancingPolicy in [Java driver 3](https://github.
.build(); ```
-#### C# v3 Driver Settings
+#### C# v3 driver settings
```dotnetcli PoolingOptions poolingOptions = PoolingOptions.Create() .SetCoreConnectionsPerHost(HostDistance.Local, 10) // default 2
We advise using the CosmosLoadBalancingPolicy in [Java driver 3](https://github.
``` ## Next steps
-* [Server-side diagnostics](error-codes-solution.md) to understand different error codes and their meaning.
-* [Diagnose and troubleshoot](../sql/troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* To understand the various error codes and their meaning, see [Server-side diagnostics](error-codes-solution.md).
+* See [Diagnose and troubleshoot issues with the Azure Cosmos DB .NET SDK](../sql/troubleshoot-dot-net-sdk.md).
* Learn about performance guidelines for [.NET v3](../sql/performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](../sql/performance-tips.md).
-* [Diagnose and troubleshoot](../sql/troubleshoot-java-sdk-v4-sql.md) issues when you use the Azure Cosmos DB Java v4 SDK.
-* Learn about performance guidelines for [Java v4 SDK](../sql/performance-tips-java-sdk-v4-sql.md).
+* See [Troubleshoot issues with the Azure Cosmos DB Java SDK v4 with SQL API accounts](../sql/troubleshoot-java-sdk-v4-sql.md).
+* See [Performance tips for the Azure Cosmos DB Java SDK v4](../sql/performance-tips-java-sdk-v4-sql.md).
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/dedicated-gateway.md
The dedicated gateway is available in the following sizes:
> [!NOTE] > Once created, you can't modify the size of the dedicated gateway nodes. However, you can add or remove nodes.
+There are many different ways to provision a dedicated gateway:
+
+- [Provision a dedicated gateway using the Azure Portal](how-to-configure-integrated-cache.md#provision-a-dedicated-gateway-cluster)
+- [Use Azure Cosmos DB's REST API](https://docs.microsoft.com/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)
+- [Azure CLI](https://docs.microsoft.com/cli/azure/cosmosdb/service?view=azure-cli-latest#az_cosmosdb_service_create)
+- [ARM template](https://docs.microsoft.com/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep)
+ - Note: You cannot deprovision a dedicated gateway using ARM templates
+ ## Dedicated gateway in multi-region accounts When you provision a dedicated gateway cluster in multi-region accounts, identical dedicated gateway clusters are provisioned in each region. For example, consider an Azure Cosmos DB account in East US and North Europe. If you provision a dedicated gateway cluster with two D8 nodes in this account, you'd have four D8 nodes in total - two in East US and two in North Europe. You don't need to explicitly configure dedicated gateways in each region and your connection string remains the same. There are also no changes to best practices for performing failovers.
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/partial-document-update-getting-started.md
if (response.isSuccessStatusCode()) {
} ```
-## Node
+## Node.js
+
+Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](sql/sql-api-sdk-node.md) is available from version *3.15.0* onwards. You can download it from the [NPM Registry](https://www.npmjs.com/package/@azure/cosmos/v/3.15.0).
+
+> [!NOTE]
+> A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub.
**Executing a single patch operation**
Partial Document Update operations can also be [executed on the server-side](sto
); }; ```
+> [!NOTE]
+> Definition of validateOptionsAndCallback can be found in the [.js DocDbWrapperScript](https://github.com/Azure/azure-cosmosdb-js-server/blob/1dbe69893d09a5da29328c14ec087ef168038009/utils/DocDbWrapperScript.js#L289) on GitHub.
++
+**Sample parameter for patch operation**
+
+```javascript
+function () {
+    var doc = {
+        "id": "exampleDoc",
+        "field1": {
+            "field2": 10,
+            "field3": 20
+        }
+    };
+    // Create the document first, then patch it from inside the create callback.
+    var isCreateAccepted = __.createDocument(__.getSelfLink(), doc, (err, createdDoc) => {
+        if (err) throw err;
+        var patchSpec = [
+            {"op": "add", "path": "/field1/field2", "value": 20},
+            {"op": "remove", "path": "/field1/field3"}
+        ];
+        var isPatchAccepted = __.patchDocument(createdDoc._self, patchSpec, (err, patchedDoc) => {
+            if (err) throw err;
+            // Return the patched document in the response body.
+            getContext().getResponse().setBody(patchedDoc);
+        });
+        if (!isPatchAccepted) throw new Error("patch wasn't accepted");
+    });
+    if (!isCreateAccepted) throw new Error("create wasn't accepted");
+}
+```
## Troubleshooting
cosmos-db Sql Api Sdk Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-node.md
|Resource |Link | |||
-|Download SDK | [NPM](https://www.npmjs.com/package/@azure/cosmos)
+|Download SDK | [@azure/cosmos](https://www.npmjs.com/package/@azure/cosmos)
|API Documentation | [JavaScript SDK reference documentation](/javascript/api/%40azure/cosmos/)
-|SDK installation instructions | [Installation instructions](https://github.com/Azure/azure-sdk-for-js)
-|Contribute to SDK | [GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main)
+|SDK installation instructions | `npm install @azure/cosmos`
+|Contribute to SDK | [Contributing guide for azure-sdk-for-js repo](https://github.com/Azure/azure-sdk-for-js/blob/main/CONTRIBUTING.md)
| Samples | [Node.js code samples](sql-api-nodejs-samples.md) | Getting started tutorial | [Get started with the JavaScript SDK](sql-api-nodejs-get-started.md) | Web app tutorial | [Build a Node.js web application using Azure Cosmos DB](sql-api-nodejs-application.md)
-| Current supported platform | [Node.js v12.x](https://nodejs.org/en/blog/release/v12.7.0/) - SDK Version 3.x.x<br/>[Node.js v10.x](https://nodejs.org/en/blog/release/v10.6.0/) - SDK Version 3.x.x<br/>[Node.js v8.x](https://nodejs.org/en/blog/release/v8.16.0/) - SDK Version 3.x.x<br/>[Node.js v6.x](https://nodejs.org/en/blog/release/v6.10.3/) - SDK Version 2.x.x<br/>[Node.js v4.2.0](https://nodejs.org/en/blog/release/v4.2.0/)- SDK Version 1.x.x<br/> [Node.js v0.12](https://nodejs.org/en/blog/release/v0.12.0/)- SDK Version 1.x.x<br/> [Node.js v0.10](https://nodejs.org/en/blog/release/v0.10.0/)- SDK Version 1.x.x
+| Current supported Node.js platforms | [LTS versions of Node.js](https://nodejs.org/about/releases/)
## Release notes
Not always the most visible changes, but they help our team ship better code, fa
## Release & Retirement Dates
-Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version. New features and functionality and optimizations are only added to the current SDK, as such it is recommended that you always upgrade to the latest SDK version as early as possible.
+Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer/supported version. New features, functionality, and optimizations are only added to the current SDK; as such, we recommend that you always upgrade to the latest SDK version as early as possible. Read the [Microsoft Support Policy for SDKs](https://github.com/Azure/azure-sdk-for-js/blob/main/SUPPORT.md#microsoft-support-policy) for more details.
| Version | Release Date | Retirement Date | | | | |
-| 3.4.2 | November 7, 2019 | |
-| 3.4.1 | November 5, 2019 | |
-| 3.4.0 | October 28, 2019 | |
-| 3.3.6 | October 14, 2019 | |
-| 3.3.5 | October 14, 2019 | |
-| 3.3.4 | October 14, 2019 | |
-| 3.3.3 | October 3, 2019 | |
-| 3.3.2 | October 3, 2019 | |
-| 3.3.1 | October 1, 2019 | |
-| 3.3.0 | September 24, 2019 | |
-| 3.2.0 | August 26, 2019 | |
-| 3.1.1 | August 7, 2019 | |
-| 3.1.0 |July 26, 2019 | |
-| 3.0.4 |July 22, 2019 | |
-| 3.0.3 |July 17, 2019 | |
-| 3.0.2 |July 9, 2019 | |
-| 3.0.0 |June 28, 2019 | |
-| 2.1.5 |March 20, 2019 | |
-| 2.1.4 |March 15, 2019 | |
-| 2.1.3 |March 8, 2019 | |
-| 2.1.2 |January 28, 2019 | |
-| 2.1.1 |December 5, 2018 | |
-| 2.1.0 |December 4, 2018 | |
-| 2.0.5 |November 7, 2018 | |
-| 2.0.4 |October 30, 2018 | |
-| 2.0.3 |October 30, 2018 | |
-| 2.0.2 |October 10, 2018 | |
-| 2.0.1 |September 25, 2018 | |
-| 2.0.0 |September 24, 2018 | |
-| 2.0.0-3 (RC) |August 2, 2018 | |
-| 1.14.4 |May 03, 2018 |August 30, 2020 |
-| 1.14.3 |May 03, 2018 |August 30, 2020 |
-| 1.14.2 |December 21, 2017 |August 30, 2020 |
-| 1.14.1 |November 10, 2017 |August 30, 2020 |
-| 1.14.0 |November 9, 2017 |August 30, 2020 |
-| 1.13.0 |October 11, 2017 |August 30, 2020 |
-| 1.12.2 |August 10, 2017 |August 30, 2020 |
-| 1.12.1 |August 10, 2017 |August 30, 2020 |
-| 1.12.0 |May 10, 2017 |August 30, 2020 |
-| 1.11.0 |March 16, 2017 |August 30, 2020 |
-| 1.10.2 |January 27, 2017 |August 30, 2020 |
-| 1.10.1 |December 22, 2016 |August 30, 2020 |
-| 1.10.0 |October 03, 2016 |August 30, 2020 |
-| 1.9.0 |July 07, 2016 |August 30, 2020 |
-| 1.8.0 |June 14, 2016 |August 30, 2020 |
-| 1.7.0 |April 26, 2016 |August 30, 2020 |
-| 1.6.0 |March 29, 2016 |August 30, 2020 |
-| 1.5.6 |March 08, 2016 |August 30, 2020 |
-| 1.5.5 |February 02, 2016 |August 30, 2020 |
-| 1.5.4 |February 01, 2016 |August 30, 2020 |
-| 1.5.2 |January 26, 2016 |August 30, 2020 |
-| 1.5.2 |January 22, 2016 |August 30, 2020 |
-| 1.5.1 |January 4, 2016 |August 30, 2020 |
-| 1.5.0 |December 31, 2015 |August 30, 2020 |
-| 1.4.0 |October 06, 2015 |August 30, 2020 |
-| 1.3.0 |October 06, 2015 |August 30, 2020 |
-| 1.2.2 |September 10, 2015 |August 30, 2020 |
-| 1.2.1 |August 15, 2015 |August 30, 2020 |
-| 1.2.0 |August 05, 2015 |August 30, 2020 |
-| 1.1.0 |July 09, 2015 |August 30, 2020 |
-| 1.0.3 |June 04, 2015 |August 30, 2020 |
-| 1.0.2 |May 23, 2015 |August 30, 2020 |
-| 1.0.1 |May 15, 2015 |August 30, 2020 |
-| 1.0.0 |April 08, 2015 |August 30, 2020 |
+| v3 | June 28, 2019 | |
+| v2 | September 24, 2018 | September 24, 2021 |
+| v1 | April 08, 2015 | August 30, 2020 |
## FAQ [!INCLUDE [cosmos-db-sdk-faq](../includes/cosmos-db-sdk-faq.md)]
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/mpa-request-ownership.md
tags: billing
Previously updated : 11/17/2021 Last updated : 01/05/2022
Access for existing users, groups, or service principals that was assigned using
The partners should work with the customer to get access to subscriptions. The partners need to get either [Admin on Behalf Of - AOBO](https://channel9.msdn.com/Series/cspdev/Module-11-Admin-On-Behalf-Of-AOBO) or [Azure Lighthouse](../../lighthouse/concepts/cloud-solution-provider.md) access to open support tickets.
+### Power BI connectivity
+
+The Azure Cost Management connector for Power BI doesn't currently support Microsoft Partner Agreements. The connector only supports Enterprise Agreements and direct Microsoft Customer Agreements. For more information about Azure Cost Management connector support, see [Create visuals and reports with the Azure Cost Management connector in Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management). After you transfer a subscription from one of the agreements to a Microsoft Partner Agreement, your Power BI reports stop working.
+
+As an alternative, you can always use Exports in Cost Management to save the consumption and usage information and then use it in Power BI. For more information, see [Create and manage exported data](../costs/tutorial-export-acm-data.md).
+ ### Azure support plan Azure support doesn't transfer with the subscriptions. If the user transfers all Azure subscriptions, ask them to cancel their support plan. After the transfer, the CSP partner is responsible for support. The customer should work with the CSP partner for any support requests.
data-share Concepts Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/concepts-pricing.md
Previously updated : 08/11/2020 Last updated : 01/03/2022 # Understand Azure Data Share pricing
data-share Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/concepts-roles-permissions.md
Follow these steps to register the Microsoft.DataShare resource provider into yo
To learn more about resource provider, refer to [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
+## Custom roles for Data Share
+This section describes the custom roles, and the permissions required within those roles, for sharing and receiving data specific to a storage account. There are also prerequisites that are independent of the custom role or the Azure Data Share role.
+
+### Prerequisites for Data Share, in addition to the custom role
+* For storage and data lake snapshot-based sharing, to add a dataset in Azure Data Share, the provider data share resource's managed identity needs to be granted access to the source Azure data store. For example, in the case of a storage account, the data share resource's managed identity is granted the Storage Blob Data Reader role.
+* To receive data into a storage account, the consumer data share resource's managed identity needs to be granted access to the target storage account. The data share resource's managed identity needs to be granted the Storage Blob Data Contributor role.
+* See the [Data Provider](#data-provider) and [Data Consumer](#data-consumer) sections of this article for more specific steps.
+* You may also need to manually register the Microsoft.DataShare resource provider into your Azure subscription for some scenarios. See the [Resource provider registration](#resource-provider-registration) section of this article for specific details.
+
+### Create custom roles and required permissions
+Custom roles can be created in a subscription or resource group for sharing and receiving data. Users and groups can then be assigned the custom role.
+
+* For creating a custom role, there are actions required for Storage, Data Share, Resources group, and Authorization. Please see the [Azure resource provider operations document](../role-based-access-control/resource-provider-operations.md#microsoftdatashare) for Data Share to understand the different levels of permissions and choose the ones relevant for your custom role.
+* Alternatively, you can use the Azure portal: navigate to IAM, select Custom role, select Add permissions, and search for Microsoft.DataShare permissions to see the list of available actions.
+* To learn more about custom role assignment, refer to [Azure custom roles](../role-based-access-control/custom-roles.md). Once you have your custom role, test it to verify that it works as you expect.
+
+The following example shows how the required actions are listed in the JSON view for a custom role to share and receive data.
+
+```json
+{
+  "Actions": [
+    "Microsoft.Storage/storageAccounts/read",
+    "Microsoft.Storage/storageAccounts/write",
+    "Microsoft.Storage/storageAccounts/blobServices/containers/read",
+    "Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action",
+    "Microsoft.DataShare/accounts/read",
+    "Microsoft.DataShare/accounts/providers/Microsoft.Insights/metricDefinitions/read",
+    "Microsoft.DataShare/accounts/shares/listSynchronizations/action",
+    "Microsoft.DataShare/accounts/shares/synchronizationSettings/read",
+    "Microsoft.DataShare/accounts/shares/synchronizationSettings/write",
+    "Microsoft.DataShare/accounts/shares/synchronizationSettings/delete",
+    "Microsoft.DataShare/accounts/shareSubscriptions/*",
+    "Microsoft.DataShare/listInvitations/read",
+    "Microsoft.DataShare/locations/rejectInvitation/action",
+    "Microsoft.DataShare/locations/consumerInvitations/read",
+    "Microsoft.DataShare/locations/operationResults/read",
+    "Microsoft.Resources/subscriptions/resourceGroups/read",
+    "Microsoft.Resources/subscriptions/resourcegroups/resources/read",
+    "Microsoft.Authorization/roleAssignments/read"
+  ]
+}
+```
+ ## Next steps -- Learn more about roles in Azure - [Understand Azure role definitions](../role-based-access-control/role-definitions.md)
+- Learn more about roles in Azure - [Understand Azure role definitions](../role-based-access-control/role-definitions.md)
data-share Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/disaster-recovery.md
Previously updated : 07/30/2020 Last updated : 01/03/2022 # Disaster recovery for Azure Data Share
Data consumers can either have an active share subscription that is idle for DR
## Next steps
-To learn how to start sharing data, continue to the [share your data](share-your-data.md) tutorial.
+To learn how to start sharing data, continue to the [share your data](share-your-data.md) tutorial.
data-share How To Add Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-add-datasets.md
Previously updated : 07/30/2020 Last updated : 01/03/2022 # How to add datasets to an existing share in Azure Data Share
Without snapshot settings configured, the consumer must manually trigger a full
For more information on snapshots, see [Snapshots](terminology.md). ## Next steps
-Learn more about how to [add recipients to an existing data share](how-to-add-recipients.md).
+Learn more about how to [add recipients to an existing data share](how-to-add-recipients.md).
data-share How To Configure Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-configure-mapping.md
Previously updated : 08/14/2020 Last updated : 01/03/2022 # How to configure a dataset mapping for a received share in Azure Data Share
data-share How To Delete Invitation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-delete-invitation.md
Previously updated : 07/30/2020 Last updated : 01/03/2022 # How to delete an invitation to a recipient in Azure Data Share
In Azure Data Share, navigate to your sent share and select the **Invitations**
![Delete Invitation](./media/how-to/how-to-delete-invitation/delete-invitation.png) ## Next steps
-Learn more about how to [revoke a share subscription](how-to-revoke-share-subscription.md).
+Learn more about how to [revoke a share subscription](how-to-revoke-share-subscription.md).
data-share How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-monitor.md
Previously updated : 07/30/2020 Last updated : 01/03/2022 # Monitor Azure Data Share
You can configure diagnostic setting to save log data or events. Navigate to Mon
## Next Steps
-Learn more about [Azure Data Share terminology](terminology.md)
+Learn more about [Azure Data Share terminology](terminology.md)
data-share How To Revoke Share Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/how-to-revoke-share-subscription.md
Previously updated : 07/30/2020 Last updated : 01/03/2022 # How to revoke a consumer's share subscription in Azure Data Share
In Azure Data Share, navigate to your sent share and select the **Share Subscrip
Check the boxes next to the recipients whose share subscriptions you would like to delete and then click **Revoke**. The consumer will no longer get updates to their data. ## Next steps
-Learn more about how to [monitor your data shares](how-to-monitor.md).
+Learn more about how to [monitor your data shares](how-to-monitor.md).
data-share Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/samples-powershell.md
Previously updated : 07/06/2019 Last updated : 01/03/2022 # Azure PowerShell samples for Azure Data Share
data-share Accept Share Invitations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/accept-share-invitations-powershell.md
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Add Datasets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/add-datasets-powershell.md
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Create New Share Account Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/create-new-share-account-powershell.md
description: This PowerShell script creates a new Data Share account.
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Create New Share Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/create-new-share-powershell.md
description: This PowerShell script creates a new data share within an existing
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Create Share Invitation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/create-share-invitation-powershell.md
description: This PowerShell script sends a data share invitation.
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Create View Trigger Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/create-view-trigger-powershell.md
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Monitor Usage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/monitor-usage-powershell.md
description: This PowerShell script retrieves usage metrics of a sent data share
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Set View Synchronizations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/set-view-synchronizations-powershell.md
description: This PowerShell script sets and gets share synchronization settings
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share View Sent Invitations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/view-sent-invitations-powershell.md
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share View Share Details Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/scripts/powershell/view-share-details-powershell.md
Previously updated : 07/07/2019 Last updated : 01/03/2022
data-share Share Your Data Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/share-your-data-arm.md
Last updated : 01/03/2022 Previously updated : 08/19/2020 # Quickstart: Share data using Azure Data Share and ARM template
data-share Share Your Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/share-your-data-portal.md
Previously updated : 10/30/2020 Last updated : 01/03/2022 # Quickstart: Share data using Azure Data Share in the Azure portal
Create an Azure Data Share resource in an Azure resource group.
1. Select the dataset type that you would like to add. You will see a different list of dataset types depending on the share type (snapshot or in-place) you have selected in the previous step.
- ![AddDatasets](./media/add-datasets.png "Add Datasets")
+ ![AddDatasets](./media/add-datasets-updated.png "Add Datasets")
1. Navigate to the object you would like to share and select 'Add Datasets'.
data-share Terminology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/terminology.md
Previously updated : 07/10/2019 Last updated : 01/03/2022 # Azure Data Share Concepts
databox-online Azure Stack Edge Pro R Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-pro-r-overview.md
Title: Microsoft Azure Stack Edge Pro R overview | Microsoft Docs
-description: Describes Azure Stack Edge Pro R devices, a storage solution that uses a physical device for network-based transfer into Azure and the solution can deployed in harsh environments.
+ Title: Microsoft Azure Stack Edge Pro R overview
+description: Describes Azure Stack Edge Pro R devices, a storage solution that uses a physical device for network-based transfer into Azure and the solution can be deployed in harsh environments.
Previously updated : 10/05/2021 Last updated : 01/05/2022 #Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro R is and how it works so I can use it to process and transform data before sending to Azure.
Azure Stack Edge Pro R has the following capabilities:
|Disconnected mode| Device and service can be optionally managed via Azure Stack Hub. Deploy, run, manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios.| |Supported file transfer protocols |Support for standard SMB, NFS, and REST protocols for data ingestion. <br> For more information on supported versions, go to [Azure Stack Edge Pro R system requirements](azure-stack-edge-gpu-system-requirements.md).| |Data refresh | Ability to refresh local files with the latest from cloud. <br> For more information, see [Refresh a share on your Azure Stack Edge](azure-stack-edge-gpu-manage-shares.md#refresh-shares).|
-|Double encryption | Use of self-encrypting drives provides the first layer of encryption. VPN provides the second layer of encryption. BitLocker support to locally encrypt data and secure data transfer to cloud over *https* . <br> For more information, see [Configure VPN on your Azure Stack Edge Pro R device](azure-stack-edge-mini-r-configure-vpn-powershell.md).|
+|Double encryption | Use of self-encrypting drives provides the first layer of encryption. VPN provides the second layer of encryption. BitLocker support to locally encrypt data and secure data transfer to cloud over *https*. <br> For more information, see [Configure VPN on your Azure Stack Edge Pro R device](azure-stack-edge-mini-r-configure-vpn-powershell.md).|
|Bandwidth throttling| Throttle to limit bandwidth usage during peak hours. <br> For more information, see [Manage bandwidth schedules on your Azure Stack Edge](azure-stack-edge-gpu-manage-bandwidth-schedules.md).| |Easy ordering| Bulk ordering and tracking of the device via Azure Edge Hardware Center (Preview). <br> For more information, see [Order a device via Azure Edge Hardware Center](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource). |
databox Data Box Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-troubleshoot.md
Previously updated : 08/11/2021 Last updated : 01/04/2022
For help troubleshooting issues with accessing the shares on your device, see [T
The errors in Data Box and Data Box Heavy are summarized as follows:
-| Error category* | Description | Recommended action |
+| Error category | Description | Recommended action |
|-||--|
-| Container or share names | The container or share names do not follow the Azure naming rules. |Download the error lists. <br> Rename the containers or shares. [Learn more](#container-or-share-name-errors). |
-| Container or share size limit | The total data in containers or shares exceeds the Azure limit. |Download the error lists. <br> Reduce the overall data in the container or share. [Learn more](#container-or-share-size-limit-errors).|
-| Object or file size limit | The object or files in containers or shares exceeds the Azure limit.|Download the error lists. <br> Reduce the file size in the container or share. [Learn more](#object-or-file-size-limit-errors). |
-| Data or file type | The data format or the file type is not supported. |Download the error lists. <br> For page blobs or managed disks, ensure the data is 512-bytes aligned and copied to the pre-created folders. [Learn more](#data-or-file-type-errors). |
-| Folder or file internal errors | The file or folder have an internal error. |Download the error lists. <br> Remove the file and copy again. For a folder, modify it by renaming or adding or deleting a file. The error should go away in 30 minutes. [Learn more](#folder-or-file-internal-errors). |
+| Container or share names<sup>*</sup> | The container or share names do not follow the Azure naming rules. |Download the error lists. <br> Rename the containers or shares. [Learn more](#container-or-share-name-errors). |
+| Container or share size limit<sup>*</sup> | The total data in containers or shares exceeds the Azure limit. |Download the error lists. <br> Reduce the overall data in the container or share. [Learn more](#container-or-share-size-limit-errors).|
+| Object or file size limit<sup>*</sup> | The objects or files in containers or shares exceed the Azure limit.|Download the error lists. <br> Reduce the file size in the container or share. [Learn more](#object-or-file-size-limit-errors). |
+| Data or file type<sup>*</sup> | The data format or the file type is not supported. |Download the error lists. <br> For page blobs or managed disks, ensure the data is 512-bytes aligned and copied to the pre-created folders. [Learn more](#data-or-file-type-errors). |
+| Folder or file internal errors<sup>*</sup> | The file or folder has an internal error. |Download the error lists. <br> Remove the file and copy again. For a folder, modify it by renaming or adding or deleting a file. The error should go away in 30 minutes. [Learn more](#folder-or-file-internal-errors). |
+| General error<sup>*</sup> | Internal exceptions or error paths in the code caused a critical error. | Reboot the device and rerun the **Prepare to Ship** operation. If the error doesn't go away, contact Microsoft Support. [Learn more](#general-errors). |
| Non-critical blob or file errors | The blob or file names do not follow the Azure naming rules or the file type is not supported. | These blobs or files may not be copied or the names may be changed. [Learn how to fix these errors](#non-critical-blob-or-file-errors). |
-\* The first five error categories are critical errors and must be fixed before you can proceed to prepare to ship.
+<sup>*</sup> Errors in this category are critical errors that must be fixed before you can proceed to **Prepare to Ship**.
## Container or share name errors
-These are errors related to container and share names.
+These errors are related to container and share names.
-### ERROR_CONTAINER_OR_SHARE_NAME_LENGTH
+### ERROR_CONTAINER_OR_SHARE_NAME_LENGTH
**Error description:** The container or share name must be between 3 and 63 characters.
For more information, see the Azure naming conventions for [directories](/rest/
## Container or share size limit errors
-These are errors related to data exceeding the size of data allowed in a container or a share.
+These errors are related to data exceeding the size of data allowed in a container or a share.
### ERROR_CONTAINER_OR_SHARE_CAPACITY_EXCEEDED
These are errors related to data exceeding the size of data allowed in a contain
**Suggested resolution:** On the **Connect and copy** page of the local web UI, download, and review the error files. - Identify the folders that have this issue from the error logs and make sure that the files in that folder are under 5 TiB.-- The 5 TiB limit does not apply to a storage account that allows large file shares. However, you must have large file shares configured when you place your order.
+- The 5-TiB limit does not apply to a storage account that allows large file shares. However, you must have large file shares configured when you place your order.
- Contact [Microsoft Support](data-box-disk-contact-microsoft-support.md) and request a new shipping label. - [Enable large file shares on the storage account](../storage/files/storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account) - [Expand the file shares in the storage account](../storage/files/storage-how-to-create-file-share.md#expand-existing-file-shares) and set the quota to 100 TiB.
These are errors related to data exceeding the size of data allowed in a contain
## Object or file size limit errors
-These are errors related to data exceeding the maximum size of object or the file that is allowed in Azure.
+These errors are related to data exceeding the maximum size of an object or a file that is allowed in Azure.
### ERROR_BLOB_OR_FILE_SIZE_LIMIT
These are errors related to data exceeding the maximum size of object or the fil
## Data or file type errors
-These are errors related to unsupported file type or data type found in the container or share.
+These errors are related to unsupported file type or data type found in the container or share.
### ERROR_BLOB_OR_FILE_SIZE_ALIGNMENT
For more information, see [Copy to managed disks](data-box-deploy-copy-data-from
**Suggested resolution:** If this is a file, remove the file and copy it again. If this is a folder, modify the folder. Either rename the folder or add or delete a file from the folder. The error should clear on its own in 30 minutes. Contact Microsoft Support if the error persists.
+## General errors
+
+General errors are caused by internal exceptions or error paths in the code.
+
+### ERROR_GENERAL
+
+**Error description:** This general error is caused by internal exceptions or error paths in the code.
+
+**Suggested resolution:** Reboot the device and rerun the **Prepare to Ship** operation. If the error doesn't go away, [contact Microsoft Support](data-box-disk-contact-microsoft-support.md).
+ ## Non-critical blob or file errors All the non-critical errors related to names of blobs, files, or containers that are seen during data copy are summarized in the following section. If these errors are present, then the names will be modified to conform to the Azure naming conventions. The corresponding order status for data upload will be **Completed with warnings**.
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/adaptive-network-hardening.md
To add an adaptive network hardening rule:
1. From the top toolbar, select **Add rule**.
- ![add rule.](./media/adaptive-network-hardening/add-hard-rule.png)
+ ![add rule.](./media/adaptive-network-hardening/add-new-hard-rule.png)
1. In the **New rule** window, enter the details and select **Add**.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 12/12/2021 Last updated : 01/05/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |-||
-| [Deprecating a preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses](#deprecating-a-preview-alert-armmcas_activityfromanonymousipaddresses) | December 2021 |
-| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) | December 2021 |
-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | December 2021 |
+| [Deprecating a preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses](#deprecating-a-preview-alert-armmcas_activityfromanonymousipaddresses) | January 2022 |
+| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) | January 2022 |
+| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | February 2022 |
+| [Deprecating the recommendation to use service principals to protect your subscriptions](#deprecating-the-recommendation-to-use-service-principals-to-protect-your-subscriptions) | February 2022 |
+| [Deprecating the recommendations to install the network traffic data collection agent](#deprecating-the-recommendations-to-install-the-network-traffic-data-collection-agent) | February 2022 |
| [Enhancements to recommendation to classify sensitive data in SQL databases](#enhancements-to-recommendation-to-classify-sensitive-data-in-sql-databases) | Q1 2022 | | [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | March 2022 | | | | ### Deprecating a preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses
-**Estimated date for change:** December 2021
+**Estimated date for change:** January 2022
We'll be deprecating the following preview alert:
We've created new alerts that provide this information and add to it. In additio
### Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013
-**Estimated date for change:** November 2021
+**Estimated date for change:** January 2022
The legacy implementation of ISO 27001 will be removed from Defender for Cloud's regulatory compliance dashboard. If you're tracking your ISO 27001 compliance with Defender for Cloud, onboard the new ISO 27001:2013 standard for all relevant management groups or subscriptions, and the current legacy ISO 27001 will soon be removed from the dashboard.
The legacy implementation of ISO 27001 will be removed from Defender for Cloud's
### Multiple changes to identity recommendations
-**Estimated date for change:** December 2021
+**Estimated date for change:** February 2022
Defender for Cloud includes multiple recommendations for improving the management of users and accounts. In February 2022, we'll be making the changes outlined below.
Defender for Cloud includes multiple recommendations for improving the managemen
|Description |User accounts that have been blocked from signing in, should be removed from your subscriptions.<br>These accounts can be targets for attackers looking to find ways to access your data without being noticed.|User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed.<br>Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md).| |Related policy |[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474)|Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions| |||
-
++
+### Deprecating the recommendation to use service principals to protect your subscriptions
+
+**Estimated date for change:** February 2022
+
+As organizations move away from using management certificates to manage their subscriptions, and in light of [our recent announcement that we're retiring the Cloud Services (classic) deployment model](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/), we'll be deprecating the following Defender for Cloud recommendation and its related policy:
+
+|Recommendation |Description |Severity |
+||||
+|[Service principals should be used to protect your subscriptions instead of Management Certificates](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/2acd365d-e8b5-4094-bce4-244b7c51d67c) |Management certificates allow anyone who authenticates with them to manage the subscription(s) they are associated with. To manage subscriptions more securely, using service principals with Resource Manager is recommended to limit the blast radius in the case of a certificate compromise. It also automates resource management. <br />(Related policy: [Service principals should be used to protect your subscriptions instead of management certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6646a0bd-e110-40ca-bb97-84fcee63c414)) |Medium |
+|||
+
+Learn more:
+
+- [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/)
+- [Overview of Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md)
+- [Workflow of Windows Azure classic VM Architecture - including RDFE workflow basics](../cloud-services/cloud-services-workflow-process.md)
++
+### Deprecating the recommendations to install the network traffic data collection agent
+
+**Estimated date for change:** February 2022
+
+Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. Consequently, we'll be deprecating the following two recommendations and their related policies.
+
+|Recommendation |Description |Severity |
+||||
+|[Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/8c3e93d3-0276-4d06-b20a-9a9f3012742c) |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats.<br />(Related policy: [Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f04c4380f-3fae-46e8-96c9-30193528f602)) |Medium |
+|[Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24d8af06-d441-40b4-a49c-311421aa9f58) |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats.<br />(Related policy: [Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2f2ee1de-44aa-4762-b6bd-0893fc3f306d)) |Medium |
+|||
+++ ### Enhancements to recommendation to classify sensitive data in SQL databases
defender-for-iot Tutorial Configure Micro Agent Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/tutorial-configure-micro-agent-twin.md
description: In this tutorial, you will learn how to configure a micro agent twi
Previously updated : 12/22/2021 Last updated : 01/05/2022
In this tutorial, you learn how to:
- A Defender for IoT subscription. -- An existing IoT Hub with:-
- - [A connected device](quickstart-standalone-agent-binary-installation.md).
-
- - [A micro agent module twin](quickstart-create-micro-agent-module-twin.md).
+- An existing IoT Hub with a [connected device](quickstart-standalone-agent-binary-installation.md) and a [micro agent module twin](quickstart-create-micro-agent-module-twin.md).
## Micro agent configuration
-To view and update the micro agent twin configuration:
+**To view and update the micro agent twin configuration**:
1. Navigate to the [Azure portal](https://ms.portal.azure.com).
devtest Troubleshoot Expired Removed Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest/offer/troubleshoot-expired-removed-subscription.md
If your Visual Studio subscription expires or is removed, all the subscription b
> [!IMPORTANT] > You must transfer your resources to another Azure subscription before your current Azure subscription is disabled or you will lose access to your data. >
-> If you don't take one of these actions, your Azure subscription will be disabled at the time specified in your email notification. If the subscription is disabled, you can reenable it as a pay-as-you-go subscription by following [these steps](/azure/cost-management-billing/manage/switch-azure-offer.md).
+> If you don't take one of these actions, your Azure subscription will be disabled at the time specified in your email notification. If the subscription is disabled, you can reenable it as a pay-as-you-go subscription by following [these steps](/azure/cost-management-billing/manage/switch-azure-offer).
## Maintain a subscription to use monthly credits
There are several ways to continue using a monthly credit for Azure. To save you
- [Visual Studio Test Professional](https://www.microsoft.com/p/visual-studio-test-professional-subscription/dg7gmgf0dst6?activetab=pivot%3aoverviewtab) -- **If someone in your organization purchases subscriptions for your organization**, [contact your Visual Studio subscription admin](/visualstudio/subscriptions/contact-my-admin.md) and request a subscription that provides the monthly credit that you need.
+- **If someone in your organization purchases subscriptions for your organization**, [contact your Visual Studio subscription admin](/visualstudio/subscriptions/contact-my-admin) and request a subscription that provides the monthly credit that you need.
- **If you have another active Visual Studio subscription** at the same subscription level, you can use it to set up a new Azure credit subscription. ## Convert your Azure subscription to pay-as-you-go If you no longer need a Visual Studio subscription or credit but you want to continue using your Azure resources, convert your Azure subscription to pay-as-you-go pricing by [removing your spending limit](/azure/cost-management-billing/manage/spending-limit#remove-the-spending-limit-in-azure-portal).-
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
To complete this tutorial, you need to:
Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 10 can migrate to Azure Database for PostgreSQL 10, or 11, but not to Azure Database for PostgreSQL 9.6.
-* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/quickstart-create-hyperscale-portal.md) as the target database server to migrate data into.
+* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/hyperscale/quickstart-create-portal.md) as the target database server to migrate data into.
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model. For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details. * Ensure that the Network Security Group (NSG) rules for your virtual network don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
To complete all the database objects like table schemas, indexes and stored proc
2. Create an empty database in your target environment, which is Azure Database for PostgreSQL.
- For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/quickstart-create-hyperscale-portal.md).
+ For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/hyperscale/quickstart-create-portal.md).
> [!NOTE] > An instance of Azure Database for PostgreSQL - Hyperscale (Citus) has only a single database: **citus**.
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
To complete this tutorial, you need to:
Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 9.6 can migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5.
-* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/quickstart-create-hyperscale-portal.md).
+* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/hyperscale/quickstart-create-portal.md).
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details. > [!NOTE]
To complete all the database objects like table schemas, indexes and stored proc
2. Create an empty database in your target environment, which is Azure Database for PostgreSQL.
- For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/quickstart-create-hyperscale-portal.md).
+ For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/hyperscale/quickstart-create-portal.md).
> [!NOTE] > An instance of Azure Database for PostgreSQL - Hyperscale (Citus) has only a single database: **citus**.
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-postgresql-azure-postgresql-online.md
To complete this tutorial, you need to:
Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 9.6 can only migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5.
-* [Create an instance in Azure Database for PostgreSQL](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/quickstart-create-hyperscale-portal.md).
+* [Create an instance in Azure Database for PostgreSQL](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/hyperscale/quickstart-create-portal.md).
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details. > [!NOTE]
To complete all the database objects like table schemas, indexes and stored proc
2. Create an empty database in your target environment, which is Azure Database for PostgreSQL.
- For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/quickstart-create-hyperscale-portal.md).
+ For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/hyperscale/quickstart-create-portal.md).
3. Import the schema into the target database you created by restoring the schema dump file.
If you need to cancel or delete any DMS task, project, or service, perform the c
* For information about known issues and limitations when performing online migrations to Azure Database for PostgreSQL, see the article [Known issues and workarounds with Azure Database for PostgreSQL online migrations](known-issues-azure-postgresql-online.md). * For information about the Azure Database Migration Service, see the article [What is the Azure Database Migration Service?](./dms-overview.md).
-* For information about Azure Database for PostgreSQL, see the article [What is Azure Database for PostgreSQL?](../postgresql/overview.md).
+* For information about Azure Database for PostgreSQL, see the article [What is Azure Database for PostgreSQL?](../postgresql/overview.md).
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
To complete this tutorial, you need to:
Also note that the target Azure Database for PostgreSQL version must be equal to or later than the RDS PostgreSQL version. For example, RDS PostgreSQL 9.6 can only migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5.
-* Create an instance of [Azure Database for PostgreSQL](../postgresql/quickstart-create-server-database-portal.md) or [Azure Database for PostgreSQL - Hyperscale (Citus)](../postgresql/quickstart-create-hyperscale-portal.md). Refer to this [section](../postgresql/quickstart-create-server-database-portal.md#connect-to-the-server-with-psql) of the document for detail on how to connect to the PostgreSQL Server using pgAdmin.
+* Create an instance of [Azure Database for PostgreSQL](../postgresql/quickstart-create-server-database-portal.md) or [Azure Database for PostgreSQL - Hyperscale (Citus)](../postgresql/hyperscale/quickstart-create-portal.md). Refer to this [section](../postgresql/quickstart-create-server-database-portal.md#connect-to-the-server-with-psql) of the document for detail on how to connect to the PostgreSQL Server using pgAdmin.
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details. * Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md). * Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
To complete this tutorial, you need to:
2. Create an empty database in the target service, which is Azure Database for PostgreSQL. To connect and create a database, refer to one of the following articles: * [Create an Azure Database for PostgreSQL server by using the Azure portal](../postgresql/quickstart-create-server-database-portal.md)
- * [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server using the Azure portal](../postgresql/quickstart-create-hyperscale-portal.md)
+ * [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server using the Azure portal](../postgresql/hyperscale/quickstart-create-portal.md)
3. Import the schema to target service, which is Azure Database for PostgreSQL. To restore the schema dump file, run the following command:
event-grid Authenticate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/authenticate-with-active-directory.md
Title: Authenticate Event Grid publishing clients using Azure Active Directory (Preview)
+ Title: Authenticate Event Grid publishing clients using Azure Active Directory
description: This article describes how to authenticate Azure Event Grid publishing client using Azure Active Directory. Previously updated : 08/10/2021 Last updated : 01/05/2022
-# Authentication and authorization with Azure Active Directory (Preview)
+# Authentication and authorization with Azure Active Directory
This article describes how to authenticate Azure Event Grid publishing clients using Azure Active Directory (Azure AD). ## Overview
Following are the prerequisites to authenticate to Event Grid.
### Publish events using Azure AD Authentication
-To send events to a topic, domain, or partner namespace, you can build the client in the following way. The api version that first provided support for Azure AD authentication is ``2021-06-01-preview``. Use that API version or a more recent version in your application.
+To send events to a topic, domain, or partner namespace, you can build the client in the following way. The API version that first provided support for Azure AD authentication is ``2018-01-01``. Use that API version or a more recent version in your application.
-```java
- DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();
- EventGridPublisherClient cloudEventClient = new EventGridPublisherClientBuilder()
- .endpoint("<your-event-grid-topic-domain-or-partner-namespace-endpoint>?api-version=2021-06-01-preview")
- .credential(credential)
- .buildCloudEventPublisherClient();
-```
-If you're using a security principal associated with a client publishing application, you have to configure environmental variables as shown in the [Java SDK readme article](/java/api/overview/azure/identity-readme#environment-variables). The `DefaultCredentialBuilder` reads those environment variables to use the right identity. For more information, see [Java API overview](/java/api/overview/azure/identity-readme#defaultazurecredential).
+Sample:
+
+This C# snippet creates an Event Grid publisher client using an application (service principal) with a client secret. To enable the `DefaultAzureCredential` method, you need to add the [Azure.Identity library](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md). If you're using the official SDK, it handles the version for you.
+```csharp
+Environment.SetEnvironmentVariable("AZURE_CLIENT_ID", "");
+Environment.SetEnvironmentVariable("AZURE_TENANT_ID", "");
+Environment.SetEnvironmentVariable("AZURE_CLIENT_SECRET", "");
+
+EventGridPublisherClient client = new EventGridPublisherClient(new Uri("your-event-grid-topic-domain-or-partner-namespace-endpoint"), new DefaultAzureCredential());
+```
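For comparison, a rough JavaScript equivalent is sketched below. It assumes that your installed version of the `@azure/eventgrid` package accepts an Azure AD `TokenCredential` (such as `DefaultAzureCredential` from `@azure/identity`) and that the same service principal environment variables are set; the endpoint and event fields are placeholders.

```javascript
// Rough sketch (assumption: the installed @azure/eventgrid version supports TokenCredential for Azure AD auth).
// AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_CLIENT_SECRET are expected as environment variables.
const { EventGridPublisherClient } = require("@azure/eventgrid");
const { DefaultAzureCredential } = require("@azure/identity");

const client = new EventGridPublisherClient(
  "<your-event-grid-topic-domain-or-partner-namespace-endpoint>",
  "EventGrid", // input schema of the target topic
  new DefaultAzureCredential()
);

async function publish() {
  await client.send([
    {
      eventType: "Contoso.Items.ItemReceived",
      subject: "items/contoso",
      dataVersion: "1.0",
      data: { itemSku: "Contoso-Item-1" }
    }
  ]);
}

publish().catch(console.error);
```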
For more information, see the following articles:
event-grid Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/authentication-overview.md
Title: Authenticate clients publishing events to Event Grid custom topics, domains, and partner namespaces. description: This article describes different ways of authenticating clients publishing events to Event Grid custom topics, domains, and partner namespaces. Previously updated : 08/10/2021 Last updated : 01/05/2022 # Client authentication when publishing events to Event Grid
Authentication for clients publishing events to Event Grid is supported using th
- Azure Active Directory (Azure AD) - Access key or shared access signature (SAS)
-## Authenticate using Azure Active Directory (preview)
+## Authenticate using Azure Active Directory
Azure AD integration for Event Grid resources provides Azure role-based access control (RBAC) for fine-grained control over a client's access to resources. You can use Azure RBAC to grant permissions to a security principal, which may be a user, a group, or an application service principal. The security principal is authenticated by Azure AD to return an OAuth 2.0 token. The token can be used to authorize a request to access Event Grid resources (topics, domains, or partner namespaces). For detailed information, see [Authenticate and authorize with the Microsoft Identity platform](authenticate-with-active-directory.md).
Azure AD integration for Event Grid resources provides Azure role-based access c
> Authenticating and authorizing users or applications using Azure AD identities provides superior security and ease of use over key-based and shared access signatures (SAS) authentication. With Azure AD, there is no need to store secrets used for authentication in your code and risk potential security vulnerabilities. We strongly recommend that you use Azure AD with your Azure Event Grid event publishing applications. > [!NOTE]
-> Azure AD authentication support by Azure Event Grid has been released as preview.
> Azure Event Grid on Kubernetes does not support Azure AD authentication yet. ## Authenticate using access keys and shared access signatures
event-grid Enable Diagnostic Logs Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/enable-diagnostic-logs-topic.md
Last updated 11/11/2021
This article provides step-by-step instructions for enabling diagnostic settings for Event Grid resources. These settings allow you to capture and view diagnostic information so that you can troubleshoot any failures. The following table shows the settings available for different types of Event Grid resources - custom topics, system topics, and domains.
-| Diagnostic setting | Event Grid topics | Event Grid system topics | Event Grid domains |
-| - | | -- | -- |
-| [DeliveryFailures](diagnostic-logs.md#schema-for-publishdelivery-failure-logs) | Yes | Yes | Yes |
-| [PublishFailures](diagnostic-logs.md#schema-for-publishdelivery-failure-logs) | Yes | No | Yes |
-| [DataPlaneRequests](diagnostic-logs.md#schema-for-data-plane-requests) | Yes | No | Yes |
+| Diagnostic setting | Event Grid topics | Event Grid system topics | Event domains | Event Grid partner namespaces |
+| - | | -- | -- | -- |
+| [DeliveryFailures](diagnostic-logs.md#schema-for-publishdelivery-failure-logs) | Yes | Yes | Yes | No |
+| [PublishFailures](diagnostic-logs.md#schema-for-publishdelivery-failure-logs) | Yes | No | Yes | Yes |
+| [DataPlaneRequests](diagnostic-logs.md#schema-for-data-plane-requests) | Yes | No | Yes | Yes |
> [!IMPORTANT] > For schemas of delivery failures, publish failures, and data plane requests, see [Diagnostic logs](diagnostic-logs.md).
Then, it creates a diagnostic setting on the topic to send diagnostic informatio
Event Grid can publish audit traces for data plane operations. To enable the feature, select **audit** in the **Category groups** section or select **DataPlaneRequests** in the **Categories** section.
-The audit trace can be used to ensure that data access is allowed only for authorized purposes. It collects information about security control such as resource name, operation type, network access, level, region and more. For more information about how to enable the diagnostic setting, see [Diagnostic logs in Event Grid topics and Event Grid domains](enable-diagnostic-logs-topic.md#enable-diagnostic-logs-for-event-grid-topics-and-domains).
+The audit trace can be used to ensure that data access is allowed only for authorized purposes. It collects information about security control such as resource name, operation type, network access, level, region and more. For more information about how to enable the diagnostic setting, see [Diagnostic logs in Event Grid topics and Event domains](enable-diagnostic-logs-topic.md#enable-diagnostic-logs-for-event-grid-topics-and-domains).
![Select the audit traces](./media/enable-diagnostic-logs-topic/enable-audit-logs.png) > [!IMPORTANT]
event-grid Monitor Virtual Machine Changes Event Grid Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/monitor-virtual-machine-changes-event-grid-logic-app.md
Previously updated : 07/01/2021 Last updated : 01/01/2022 # Tutorial: Monitor virtual machine changes by using Azure Event Grid and Logic Apps
For example, here are some events that publishers can send to subscribers throug
* A new message appears in a queue.
-This tutorial creates a logic app resource that runs in [*multi-tenant* Azure Logic Apps](../logic-apps/logic-apps-overview.md) and is based on the [Consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). Using this logic app resource, you create a workflow that monitors changes to a virtual machine, and sends emails about those changes. When you create a workflow that has an event subscription to an Azure resource, events flow from that resource through an event grid to the workflow. For more information about multi-tenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](../logic-apps/single-tenant-overview-compare.md).
+This tutorial creates a Consumption logic app resource that runs in [*multi-tenant* Azure Logic Apps](../logic-apps/logic-apps-overview.md) and is based on the [Consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). Using this logic app resource, you create a workflow that monitors changes to a virtual machine, and sends emails about those changes. When you create a workflow that has an event subscription to an Azure resource, events flow from that resource through an event grid to the workflow.
![Screenshot showing the workflow designer with a workflow that monitors a virtual machine using Azure Event Grid.](./media/monitor-virtual-machine-changes-event-grid-logic-app/monitor-virtual-machine-event-grid-logic-app-overview.png)
In this tutorial, you learn how to:
1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-1. From the main Azure menu, select **Create a resource** > **Integration** > **Logic App**.
+1. From the Azure home page, select **Create a resource** > **Integration** > **Logic App**.
![Screenshot of Azure portal, showing button to create a logic app resource.](./media/monitor-virtual-machine-changes-event-grid-logic-app/azure-portal-create-logic-app.png)
-1. Under **Logic App**, provide information about your logic app resource. When you're done, select **Create**.
+1. Under **Create Logic App**, provide information about your logic app resource:
![Screenshot of logic apps creation menu, showing details like name, subscription, resource group, and location.](./media/monitor-virtual-machine-changes-event-grid-logic-app/create-logic-app-for-event-grid.png) | Property | Required | Value | Description | |-|-|-|-|
- | **Name** | Yes | <*logic-app-name*> | Provide a unique name for your logic app. |
| **Subscription** | Yes | <*Azure-subscription-name*> | Select the same Azure subscription for all the services in this tutorial. |
- | **Resource group** | Yes | <*Azure-resource-group*> | The Azure resource group name for your logic app, which you can select for all the services in this tutorial. |
- | **Location** | Yes | <*Azure-region*> | Select the same region for all services in this tutorial. |
- |||
+ | **Resource Group** | Yes | <*Azure-resource-group*> | The Azure resource group name for your logic app, which you can select for all the services in this tutorial. |
+ | **Type** | Yes | Consumption | The resource type for your logic app. For this tutorial, make sure that you select **Consumption**. |
+ | **Logic App name** | Yes | <*logic-app-name*> | Provide a unique name for your logic app. |
+ | **Publish** | Yes | Workflow | Select the deployment destination for your logic app. For this tutorial, make sure that you select **Workflow**, which deploys to Azure. |
+ | **Region** | Yes | <*Azure-region*> | Select the same region for all services in this tutorial. |
+ |||||
+
+ > [!NOTE]
+ > If you later want to use the Event Grid operations with a Standard logic app resource instead, make sure that you create a *stateful* workflow, not a stateless workflow.
+ > To add the Event Grid operations to your workflow in the designer, on the operations picker pane, make sure that you select the **Azure** tab.
+ > For more information about multi-tenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](../logic-apps/single-tenant-overview-compare.md).
+
+1. When you're done, select **Review + create**. On the next pane, confirm the provided information, and select **Create**.
+
+1. After Azure deploys your logic app, select **Go to resource**.
-1. After Azure deploys your logic app, the workflow designer shows a page with an introduction video and commonly used triggers. Scroll past the video and triggers.
+ The workflow designer shows a page with an introduction video and commonly used triggers.
+
+1. Scroll past the video window and commonly used triggers section.
1. Under **Templates**, select **Blank Logic App**.
event-grid Post To Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/post-to-custom-topic.md
For custom topics, the top-level data contains the same fields as standard resou
] ```
-For a description of these properties, see [Azure Event Grid event schema](event-schema.md). When posting events to an event grid topic, the array can have a total size of up to 1 MB. The maximum allowed size for an event is also 1 MB. Events over 64 KB are charged in 64-KB increments.
+For a description of these properties, see [Azure Event Grid event schema](event-schema.md). When posting events to an event grid topic, the array can have a total size of up to 1 MB. The maximum allowed size for an event is also 1 MB. Events over 64 KB are charged in 64-KB increments. When receiving events in a batch, the maximum allowed number of events is 5,000 per batch.
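As a rough illustration of staying within these limits, a publisher can check the serialized size of a batch before posting it. The helper below is only a sketch and isn't part of the Event Grid SDK; it assumes `System.Text.Json` for serialization:

```csharp
using System.Collections.Generic;
using System.Text.Json;

// Returns true when the serialized event array fits the documented 1-MB request limit.
static bool FitsInOneRequest(IReadOnlyList<object> events)
{
    const int maxRequestBytes = 1024 * 1024;
    byte[] payload = JsonSerializer.SerializeToUtf8Bytes(events);
    return payload.Length <= maxRequestBytes;
}
```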
For example, a valid event data schema is:
event-hubs Event Hubs Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-ip-filtering.md
Title: Azure Event Hubs Firewall Rules | Microsoft Docs description: Use Firewall Rules to allow connections from specific IP addresses to Azure Event Hubs. Previously updated : 05/10/2021 Last updated : 10/28/2021 # Allow access to Azure Event Hubs namespaces from specific IP addresses or ranges
This section shows you how to use the Azure portal to create IP firewall rules f
1. Navigate to your **Event Hubs namespace** in the [Azure portal](https://portal.azure.com). 4. Select **Networking** under **Settings** on the left menu.
+1. On the **Networking** page, for **Public network access**, you can set one of the three following options. Choose the **Selected networks** option to allow access only from specified IP addresses.
+ - **Disabled**. This option disables any public access to the namespace. The namespace will be accessible only through [private endpoints](private-link-service.md).
+
+ :::image type="content" source="./media/event-hubs-firewall/public-access-disabled.png" alt-text="Networking page - public access tab - public network access is disabled.":::
+ - **Selected networks**. This option enables public access to the namespace using an access key from selected networks.
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
- > [!WARNING]
- > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed via **public internet** (using the access key).
-
- :::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networks tab - selected networks option" lightbox="./media/event-hubs-firewall/selected-networks.png":::
-
- If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
-
- ![Screenshot that shows the "Firewall and virtual networks" page with the "All networks" option selected.](./media/event-hubs-firewall/firewall-all-networks-selected.png)
-1. To restrict access to specific IP addresses, confirm that the **Selected networks** option is selected. In the **Firewall** section, follow these steps:
- 1. Select **Add your client IP address** option to give your current client IP the access to the namespace.
- 2. For **address range**, enter a specific IPv4 address or a range of IPv4 address in CIDR notation.
-
- >[!WARNING]
- > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key).
-1. Specify whether you want to **allow trusted Microsoft services to bypass this firewall**. See [Trusted Microsoft services](#trusted-microsoft-services) for details.
-
- ![Firewall - All networks option selected](./media/event-hubs-firewall/firewall-selected-networks-trusted-access-disabled.png)
+ :::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/event-hubs-firewall/selected-networks.png":::
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+
+ :::image type="content" source="./media/event-hubs-firewall/firewall-all-networks-selected.png" lightbox="./media/event-hubs-firewall/firewall-all-networks-selected.png" alt-text="Screenshot that shows the Public access page with the All networks option selected.":::
+1. To restrict access to **specific IP addresses**, follow these steps:
+ 1. In the **Firewall** section, select the **Add your client IP address** option to give your current client IP access to the namespace.
+ 2. For **address range**, enter a specific IPv4 address or a range of IPv4 addresses in CIDR notation.
+
+ To restrict access to **specific virtual networks**, see [Allow access from specific networks](event-hubs-service-endpoints.md).
+ 1. Specify whether you want to **allow trusted Microsoft services to bypass this firewall**. See [Trusted Microsoft services](#trusted-microsoft-services) for details.
+
+ :::image type="content" source="./media/event-hubs-firewall/firewall-selected-networks-trusted-access-disabled.png" lightbox="./media/event-hubs-firewall/firewall-selected-networks-trusted-access-disabled.png" alt-text="Firewall section highlighted in the Public access tab of the Networking page.":::
3. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up on the portal notifications. > [!NOTE]
event-hubs Event Hubs Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-service-endpoints.md
Title: Virtual Network service endpoints - Azure Event Hubs | Microsoft Docs description: This article provides information on how to add a Microsoft.EventHub service endpoint to a virtual network. Previously updated : 05/10/2021 Last updated : 10/28/2021 # Allow access to Azure Event Hubs namespaces from specific virtual networks
The integration of Event Hubs with [Virtual Network (VNet) Service Endpoints][vn
Once configured to bound to at least one virtual network subnet service endpoint, the respective Event Hubs namespace no longer accepts traffic from anywhere but authorized subnets in virtual networks. From the virtual network perspective, binding an Event Hubs namespace to a service endpoint configures an isolated networking tunnel from the virtual network subnet to the messaging service.
-The result is a private and isolated relationship between the workloads bound to the subnet and the respective Event Hubs namespace, in spite of the observable network address of the messaging service endpoint being in a public IP range. There's an exception to this behavior. Enabling a service endpoint, by default, enables the `denyall` rule in the [IP firewall](event-hubs-ip-filtering.md) associated with the virtual network. You can add specific IP addresses in the IP firewall to enable access to the Event Hub public endpoint.
+The result is a private and isolated relationship between the workloads bound to the subnet and the respective Event Hubs namespace, in spite of the observable network address of the messaging service endpoint being in a public IP range. There's an exception to this behavior. Enabling a service endpoint, by default, enables the `denyall` rule in the [IP firewall](event-hubs-ip-filtering.md) associated with the virtual network. You can add specific IP addresses in the IP firewall to enable access to the Event Hubs public endpoint.
## Important points - This feature isn't supported in the **basic** tier.
This section shows you how to use Azure portal to add a virtual network service
1. Navigate to your **Event Hubs namespace** in the [Azure portal](https://portal.azure.com). 4. Select **Networking** under **Settings** on the left menu. -
- > [!WARNING]
- > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed via **public internet** (using the access key).
-
- :::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networks tab - selected networks option" lightbox="./media/event-hubs-firewall/selected-networks.png":::
-
- If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
-
- ![Firewall - All networks option selected](./media/event-hubs-firewall/firewall-all-networks-selected.png)
-1. To restrict access to specific networks, select the **Selected Networks** option at the top of the page if it isn't already selected.
-2. In the **Virtual Network** section of the page, select **+Add existing virtual network***. Select **+ Create new virtual network** if you want to create a new VNet.
-
- ![add existing virtual network](./media/event-hubs-tutorial-vnet-and-firewalls/add-vnet-menu.png)
-
- >[!WARNING]
- > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key).
+1. On the **Networking** page, for **Public network access**, you can set one of the three following options. Choose the **Selected networks** option to allow access only from specific virtual networks.
+ - **Disabled**. This option disables any public access to the namespace. The namespace will be accessible only through [private endpoints](private-link-service.md).
+
+ :::image type="content" source="./media/event-hubs-firewall/public-access-disabled.png" alt-text="Networking page - public access tab - public network access is disabled.":::
+ - **Selected networks**. This option enables public access to the namespace using an access key from selected networks.
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
+
+ :::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/event-hubs-firewall/selected-networks.png":::
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+
+ :::image type="content" source="./media/event-hubs-firewall/firewall-all-networks-selected.png" lightbox="./media/event-hubs-firewall/firewall-all-networks-selected.png" alt-text="Screenshot that shows the Public access page with the All networks option selected.":::
+1. To restrict access to specific networks, choose the **Selected Networks** option at the top of the page if it isn't already selected.
+2. In the **Virtual networks** section of the page, select **+ Add existing virtual network**. Select **+ Create new virtual network** if you want to create a new VNet.
+
+ :::image type="content" source="./media/event-hubs-tutorial-vnet-and-firewalls/add-vnet-menu.png" lightbox="./media/event-hubs-tutorial-vnet-and-firewalls/add-vnet-menu.png" alt-text="Selection of Add existing virtual network menu item.":::
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
3. Select the virtual network from the list of virtual networks, and then pick the **subnet**. You have to enable the service endpoint before adding the virtual network to the list. If the service endpoint isn't enabled, the portal will prompt you to enable it.
- ![select subnet](./media/event-hubs-tutorial-vnet-and-firewalls/select-subnet.png)
-
+ :::image type="content" source="./media/event-hubs-tutorial-vnet-and-firewalls/select-subnet.png" lightbox="./media/event-hubs-tutorial-vnet-and-firewalls/select-subnet.png" alt-text="Image showing the selection of a subnet.":::
4. You should see the following successful message after the service endpoint for the subnet is enabled for **Microsoft.EventHub**. Select **Add** at the bottom of the page to add the network.
- ![select subnet and enable endpoint](./media/event-hubs-tutorial-vnet-and-firewalls/subnet-service-endpoint-enabled.png)
+ :::image type="content" source="./media/event-hubs-tutorial-vnet-and-firewalls/subnet-service-endpoint-enabled.png" lightbox="./media/event-hubs-tutorial-vnet-and-firewalls/subnet-service-endpoint-enabled.png" alt-text="Image showing the selection of a subnet and enabling an endpoint.":::
> [!NOTE] > If you are unable to enable the service endpoint, you may ignore the missing virtual network service endpoint using the Resource Manager template. This functionality is not available on the portal. 5. Specify whether you want to **allow trusted Microsoft services to bypass this firewall**. See [Trusted Microsoft services](#trusted-microsoft-services) for details. 6. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up on the portal notifications.
- ![Save network](./media/event-hubs-tutorial-vnet-and-firewalls/save-vnet.png)
+ :::image type="content" source="./media/event-hubs-tutorial-vnet-and-firewalls/save-vnet.png" lightbox="./media/event-hubs-tutorial-vnet-and-firewalls/save-vnet.png" alt-text="Image showing the saving of virtual network.":::
> [!NOTE] > To restrict access to specific IP addresses or ranges, see [Allow access from specific IP addresses or ranges](event-hubs-ip-filtering.md).
event-hubs Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/private-link-service.md
If you already have an Event Hubs namespace, you can create a private link conne
1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the search bar, type in **event hubs**. 3. Select the **namespace** from the list to which you want to add a private endpoint.
-4. Select **Networking** under **Settings** on the left menu.
-
- :::image type="content" source="./media/private-link-service/selected-networks-page.png" alt-text="Networks tab - selected networks option" lightbox="./media/private-link-service/selected-networks-page.png":::
+1. On the **Networking** page, for **Public network access**, you can set one of the three following options. Select **Disabled** if you want the namespace to be accessed only via private endpoints.
+ - **Disabled**. This option disables any public access to the namespace. The namespace will be accessible only through [private endpoints](private-link-service.md).
+
+ :::image type="content" source="./media/event-hubs-firewall/public-access-disabled.png" alt-text="Networking page - public access tab - public network access is disabled.":::
+ - **Selected networks**. This option enables public access to the namespace using an access key from selected networks.
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
+
+ :::image type="content" source="./media/event-hubs-firewall/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/event-hubs-firewall/selected-networks.png":::
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
- > [!WARNING]
- > By default, the **Selected networks** option is selected. If you don't specify an IP firewall rule or add a virtual network, the namespace can be accessed via public internet (using the access key).
-1. Select the **Private endpoint connections** tab at the top of the page.
+ :::image type="content" source="./media/event-hubs-firewall/firewall-all-networks-selected.png" lightbox="./media/event-hubs-firewall/firewall-all-networks-selected.png" alt-text="Screenshot that shows the Public access page with the All networks option selected.":::
+1. Switch to the **Private endpoint connections** tab.
1. Select the **+ Private Endpoint** button at the top of the page.
- :::image type="content" source="./media/private-link-service/private-link-service-3.png" alt-text="Networking page - Private endpoint connections tab - Add private endpoint link":::
+ :::image type="content" source="./media/private-link-service/private-link-service-3.png" lightbox="./media/private-link-service/private-link-service-3.png" alt-text="Networking page - Private endpoint connections tab - Add private endpoint link.":::
7. On the **Basics** page, follow these steps: 1. Select the **Azure subscription** in which you want to create the private endpoint. 2. Select the **resource group** for the private endpoint resource.
$privateEndpointConnection = New-AzPrivateLinkServiceConnection `
-PrivateLinkServiceId $namespaceResource.ResourceId ` -GroupId "namespace"
-# get subnet object that you will use later
+# get subnet object that you'll use later
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $rgName -Name $vnetName $subnet = $virtualNetwork | Select -ExpandProperty subnets ` | Where-Object {$_.Name -eq $subnetName}
There are four provisioning states:
5. Go to the appropriate section below based on the operation you want to: approve, reject, or remove. ### Approve a private endpoint connection
-1. If there are any connections that are pending, you will see a connection listed with **Pending** in the provisioning state.
+1. If there are any connections that are pending, you'll see a connection listed with **Pending** in the provisioning state.
2. Select the **private endpoint** you wish to approve 3. Select the **Approve** button.
There are four provisioning states:
### Reject a private endpoint connection
-1. If there are any private endpoint connections you want to reject, whether it is a pending request or existing connection, select the connection and click the **Reject** button.
+1. If there are any private endpoint connections you want to reject, whether it's a pending request or existing connection, select the connection and click the **Reject** button.
![Reject private endpoint](./media/private-link-service/private-endpoint-reject-button.png) 2. On the **Reject connection** page, enter a comment (optional), and select **Yes**. If you select **No**, nothing happens.
There are four provisioning states:
1. To remove a private endpoint connection, select it in the list, and select **Remove** on the toolbar. 2. On the **Delete connection** page, select **Yes** to confirm the deletion of the private endpoint. If you select **No**, nothing happens.
-3. You should see the status changed to **Disconnected**. Then, you will see the endpoint disappear from the list.
+3. You should see the status changed to **Disconnected**. Then, you'll see the endpoint disappear from the list.
## Validate that the private link connection works
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
If you are remote and don't have fiber connectivity or you want to explore other
| **[Masergy](https://www.masergy.com/solutions/hybrid-networking/cloud-marketplace/microsoft-azure)** | Equinix | Washington DC | | **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | Teraco | Cape Town, Johannesburg | | **[NexGen Networks](https://www.nexgen-net.com/nexgen-networks-direct-connect-microsoft-azure-expressroute.html)** | Interxion | London |
-| **[Nianet](https://nianet.dk/produkter/internet/microsoft-expressroute)** |Equinix | Amsterdam, Frankfurt |
+| **[Nianet](https://www.globalconnect.dk/)** |Equinix | Amsterdam, Frankfurt |
| **[Oncore Cloud Service Inc](https://www.oncore.cloud/services/ue-for-expressroute)**| Equinix | Toronto | | **[POST Telecom Luxembourg](https://www.teralinksolutions.com/cloud-connectivity/cloudbridge-to-azure-expressroute/)**|Equinix | Amsterdam | | **[Proximus](https://www.proximus.be/en/id_b_cl_proximus_external_cloud_connect/companies-and-public-sector/discover/magazines/expert-blog/proximus-external-cloud-connect.html)**|Equinix | Amsterdam, Dublin, London, Paris |
governance Guest Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration-custom.md
Before you begin, it's a good idea to read the overview of
[A video walk-through of this document is available](https://youtu.be/nYd55FiKpgs). Guest configuration uses
-[Desired State Configuration (DSC)](/powershell/dsc/overview/overview)
+[Desired State Configuration (DSC)](/powershell/dsc/overview)
version 3 to audit and configure machines. The DSC configuration defines the state that the machine should be in. There are many notable differences in how DSC is implemented in guest configuration.
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
these tools automatically.
|Operating system|Validation tool|Notes| |-|-|-|
-|Windows|[PowerShell Desired State Configuration](/powershell/dsc/overview/overview) v3| Side-loaded to a folder only used by Azure Policy. Won't conflict with Windows PowerShell DSC. PowerShell Core isn't added to system path.|
-|Linux|[PowerShell Desired State Configuration](/powershell/dsc/overview/overview) v3| Side-loaded to a folder only used by Azure Policy. PowerShell Core isn't added to system path.|
+|Windows|[PowerShell Desired State Configuration](/powershell/dsc/overview) v3| Side-loaded to a folder only used by Azure Policy. Won't conflict with Windows PowerShell DSC. PowerShell Core isn't added to system path.|
+|Linux|[PowerShell Desired State Configuration](/powershell/dsc/overview) v3| Side-loaded to a folder only used by Azure Policy. PowerShell Core isn't added to system path.|
|Linux|[Chef InSpec](https://www.chef.io/inspec/) | Installs Chef InSpec version 2.2.61 in default location and added to system path. Dependencies for the InSpec package including Ruby and Python are installed as well. | ### Validation frequency
iot-develop Quickstart Devkit Nxp Mimxrt1050 Evkb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-nxp-mimxrt1050-evkb.md
[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/NXP/MIMXRT1050-EVKB/)
-In this quickstart, you use Azure RTOS to connect an NXP MIMXRT1050-EVKB Evaluation kit (hereafter, NXP EVK) to Azure IoT.
+In this quickstart, you use Azure RTOS to connect an NXP MIMXRT1050-EVKB Evaluation kit (from now on, NXP EVK) to Azure IoT.
-You will complete the following tasks:
+You'll complete the following tasks:
* Install a set of embedded development tools for programming an NXP EVK in C * Build an image and flash it onto the NXP EVK
iot-hub Iot Hub Devguide Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-jobs.md
Jobs are initiated by the solution back end and maintained by IoT Hub. You can i
> [!NOTE] > When you initiate a job, property names and values can only contain US-ASCII printable alphanumeric, except any in the following set: `$ ( ) < > @ , ; : \ " / [ ] ? = { } SP HT`
+> [!NOTE]
+> The `jobId` field must be 64 characters or less and can only contain US-ASCII letters, numbers, and the dash (`-`) character.
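As a quick illustration of that constraint, a client can validate a job ID before submitting the job. The helper below is only a sketch of the documented rule and isn't part of any IoT Hub SDK:

```csharp
using System.Text.RegularExpressions;

// jobId: 1-64 characters, limited to US-ASCII letters, digits, and the dash character.
static bool IsValidJobId(string jobId) =>
    !string.IsNullOrEmpty(jobId) && Regex.IsMatch(jobId, "^[A-Za-z0-9-]{1,64}$");
```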
+ ## Jobs to execute direct methods The following snippet shows the HTTPS 1.1 request details for executing a [direct method](iot-hub-devguide-direct-methods.md) on a set of devices using a job:
Other reference topics in the IoT Hub developer guide include:
To try out some of the concepts described in this article, see the following IoT Hub tutorial:
-* [Schedule and broadcast jobs](iot-hub-node-node-schedule-jobs.md)
+* [Schedule and broadcast jobs](iot-hub-node-node-schedule-jobs.md)
key-vault Rbac Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/rbac-migration.md
Access policy predefined permission templates:
| Exchange Online Customer Key | Keys: get, list, wrap key, unwrap key | Key Vault Crypto Service Encryption User| | Azure Information BYOK | Keys: get, decrypt, sign | N/A<br>Custom role required|
+> [!NOTE]
+> Azure App Service certificate configuration does not support Key Vault RBAC permission model.
## Assignment scopes mapping
The vault access policy permission model is limited to assigning policies only a
In general, it's best practice to have one key vault per application and manage access at key vault level. There are scenarios when managing access at other scopes can simplify access management. -- **Infrastructure, security administrators and operators: managing group of key vaults at management group, subscription or resource group level with vault access policies requires maintaining policies for each key vault. Azure RBAC allows creating one role assignment at management group, subscription, or resource group. That assignment will apply to any new key vaults created under the same scope. In this scenario, it's recommended to use Privileged Identity Management with just-in time access over providing permanent access.
+- **Infrastructure, security administrators, and operators**: managing a group of key vaults at the management group, subscription, or resource group level with vault access policies requires maintaining policies for each key vault. Azure RBAC allows creating one role assignment at the management group, subscription, or resource group scope. That assignment will apply to any new key vaults created under the same scope. In this scenario, it's recommended to use Privileged Identity Management with just-in-time access over providing permanent access.
-- **Applications: there are scenarios when application would need to share secret with other application. Using vault access polices separate key vault had to be created to avoid giving access to all secrets. Azure RBAC allows assign role with scope for individual secret instead using single key vault.
+- **Applications**: there are scenarios when an application needs to share a secret with another application. With vault access policies, a separate key vault had to be created to avoid giving access to all secrets. Azure RBAC allows assigning a role scoped to an individual secret, so a single key vault can be used instead.
## Vault access policy to Azure RBAC migration steps There are many differences between Azure RBAC and vault access policy permission model. In order, to avoid outages during migration, below steps are recommended.
For more information, see
## Troubleshooting - Role assignment not working after several minutes - there are situations when role assignments can take longer. It's important to write retry logic in code to cover those cases.-- Role assignments disappeared when Key Vault was deleted (soft-delete) and recovered - it's currently a limitation of soft-delete feature across all Azure services. It's required to recreate all role assignments after recovery.
+- Role assignments disappeared when Key Vault was deleted (soft-delete) and recovered - it's currently a limitation of soft-delete feature across all Azure services. It's required to recreate all role assignments after recovery.
## Learn more
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/how-to-configure-key-rotation.md
Key rotation policy can also be configured using ARM templates.
"description": "The name of the key to be created." } },
- "rotateTimeAfterCreation": {
+ "rotatationTimeAfterCreate": {
"defaultValue": "P18M", "type": "String", "metadata": {
Key rotation policy can also be configured using ARM templates.
"lifetimeActions": [ { "trigger": {
- "timeAfterCreate": "[parameters('rotateTimeAfterCreation')]",
+ "timeAfterCreate": "[parameters('rotatationTimeAfterCreate')]",
"timeBeforeExpiry": "" }, "action": {
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
Now that you have the application deployed and running, you can run your first l
## Configure and create the load test
-In this section, you'll create a load test by using an existing Apache JMeter test script.
+In this section, you'll create a load test by using a sample Apache JMeter test script.
### Configure the Apache JMeter script
-The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls on each test iteration:
+The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls to the web app on each test iteration:
* `add`: Carries out a data insert operation on Azure Cosmos DB for the number of visitors on the web app. * `get`: Carries out a GET operation from Azure Cosmos DB to retrieve the count. * `lasttimestamp`: Updates the time stamp since the last user went to the website.
-In this section, you'll update the Apache JMeter script with the URL of the sample web app that you just deployed.
+> [!NOTE]
+> The sample Apache JMeter script requires two plugins: `Custom Thread Groups` and `Throughput Shaping Timer`. To open the script on your local Apache JMeter instance, you need to install both plugins. You can use the [Apache JMeter Plugins Manager](https://jmeter-plugins.org/install/Install/) to do this.
+
+To load test the sample web app that you deployed previously, you need to update the API URLs in the Apache JMeter script.
1. Open the directory of the cloned sample app in Visual Studio Code:
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-using-sap-connector.md
An ISE provides access to resources that are protected by an Azure virtual netwo
1. If you don't already have an Azure Storage account with a blob container, create a container using either the [Azure portal](../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Storage Explorer](../storage/blobs/quickstart-storage-explorer.md).
-1. [Download and install the latest SAP client library](#sap-client-library-prerequisites) on your local computer. You should have the following assembly files:
+1. [Download and install the latest SAP client library](#sap-client-library-prerequisites) on your local computer. You should have the following assembly (.dll) files:
* libicudecnumber.dll
The following list describes the prerequisites for the SAP client library that y
* You must have the 64-bit version of the SAP client library installed, because the data gateway only runs on 64-bit systems. Installing the unsupported 32-bit version results in a "bad image" error.
-* Copy the assembly files from the default installation folder to another location, based on your scenario as follows.
+* From the client library's default installation folder, copy the assembly (.dll) files to another location, based on your scenario as follows:
* For a logic app workflow that runs in an ISE, follow the [ISE prerequisites](#ise-prerequisites) instead.
- * For a logic app workflow that runs in multi-tenant Azure and uses your on-premises data gateway, copy the assembly files to the data gateway installation folder.
+ * For a logic app workflow that runs in multi-tenant Azure and uses your on-premises data gateway, copy the DLL files to the on-premises data gateway installation folder, for example, "C:\Program Files\On-Premises Data Gateway".
> [!NOTE] > If your SAP connection fails with the error message, **Please check your account info and/or permissions and try again**,
- > make sure you copied the assembly files to the data gateway installation folder.
+ > make sure you copied the assembly (.dll) files to the data gateway installation folder, for example, "C:\Program Files\On-Premises Data Gateway".
> > You can troubleshoot further issues using the [.NET assembly binding log viewer](/dotnet/framework/tools/fuslogvw-exe-assembly-binding-log-viewer). > This tool lets you check that your assembly files are in the correct location.
If you're enabling SNC through an external security product, copy the SNC librar
> The version of your SNC library and its dependencies must be compatible with your SAP environment. > > * You must use `sapgenpse.exe` specifically as the SAPGENPSE utility.
-> * If you use an on-premises data gateway, also copy these same binary files to the installation folder there.
+> * If you use an on-premises data gateway, also copy these same binary files to the installation folder there, for example, "C:\Program Files\On-Premises Data Gateway".
> * If PSE is provided in your connection, you don't need to copy and set up PSE and SECUDIR for your on-premises data gateway. > * You can also use your on-premises data gateway to troubleshoot any library compatibility issues.
To enable sending SAP telemetry to Application insights, follow these steps:
1. Download the NuGet package for **Microsoft.ApplicationInsights.EventSourceListener.dll** from this location: [https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/2.14.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/2.14.0).
-1. Add the downloaded file to your on-premises data gateway installation directory.
+1. Add the downloaded file to your on-premises data gateway installation directory, for example, "C:\Program Files\On-Premises Data Gateway".
1. In your on-premises data gateway installation directory, check that the **Microsoft.ApplicationInsights.dll** file has the same version number as the **Microsoft.ApplicationInsights.EventSourceListener.dll** file that you added. The gateway currently uses version 2.14.0.
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-workspace.md
Previously updated : 10/21/2021 Last updated : 01/04/2022 #Customer intent: As a data scientist, I want to understand the purpose of a workspace for Azure Machine Learning.
When you create a new workspace, it automatically creates several Azure resource
> [!NOTE] > If your subscription setting requires adding tags to resources under it, Azure Container Registry (ACR) created by Azure Machine Learning will fail, since we cannot set tags to ACR.
-+ [Azure Application Insights](https://azure.microsoft.com/services/application-insights/): Stores monitoring information about your models.
++ [Azure Application Insights](https://azure.microsoft.com/services/application-insights/): Stores monitoring and diagnostics information. For more information, see [Monitor and collect data from Machine Learning web service endpoints](../../articles/machine-learning/how-to-enable-app-insights.md).+
+ > [!NOTE]
+ > You can delete the Application Insights instance after cluster creation if you want. Deleting it limits the information gathered from the workspace, and may make it more difficult to troubleshoot problems. __If you delete the Application Insights instance created by the workspace, you cannot re-create it without deleting and recreating the workspace__.
+ [Azure Key Vault](https://azure.microsoft.com/services/key-vault/): Stores secrets that are used by compute targets and other sensitive information that's needed by the workspace.
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-data.md
# Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute to train my machine learning models.
-# Connect to storage services on Azure
+# Connect to storage services on Azure with datastores
In this article, learn how to connect to data storage services on Azure with Azure Machine Learning datastores and the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-private-link.md
Previously updated : 10/21/2021 Last updated : 01/05/2022 # Configure a private endpoint for an Azure Machine Learning workspace
Finally, select __Create__ to create the private endpoint.
## Remove a private endpoint
-Use one of the following methods to remove a private endpoint from a workspace:
+You can remove one or all private endpoints for a workspace. Removing a private endpoint removes the workspace from the VNet that the endpoint was associated with. This may prevent the workspace from accessing resources in that VNet, or resources in the VNet from accessing the workspace. For example, if the VNet does not allow access to or from the public internet.
-> [!IMPORTANT]
-> Public access is not enabled when you delete a private endpoint for a workspace. To enable public access, see the [Enable public access section](how-to-configure-private-link.md#enable-public-access).
+> [!WARNING]
+> Removing the private endpoints for a workspace __doesn't make it publicly accessible__. To make the workspace publicly accessible, use the steps in the [Enable public access](#enable-public-access) section.
+
+To remove a private endpoint, use the following information:
# [Python](#tab/python)
-Use [Workspace.delete_private_endpoint_connection](/python/api/azureml-core/azureml.core.workspace(class)#delete-private-endpoint-connection-private-endpoint-connection-name-) to remove a private endpoint.
+To remove a private endpoint, use [Workspace.delete_private_endpoint_connection](/python/api/azureml-core/azureml.core.workspace(class)#delete-private-endpoint-connection-private-endpoint-connection-name-). The following example demonstrates how to remove a private endpoint:
```python from azureml.core import Workspace
The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learn
# [Portal](#tab/azure-portal)
-From the Azure Machine Learning workspace in the portal, select __Private endpoint connections__, and then select the endpoint you want to remove. Finally, select __Remove__.
+1. From the [Azure portal](https://portal.azure.com), select your Azure Machine Learning workspace.
+1. From the left side of the page, select __Networking__ and then select the __Private endpoint connections__ tab.
+1. Select the endpoint to remove and then select __Remove__.
++++
+## Enable public access
+
+In some situations, you may want to allow someone to connect to your secured workspace over a public endpoint, instead of through the VNet. Or you may want to remove the workspace from the VNet and re-enable public access.
+
+> [!IMPORTANT]
+> Enabling public access doesn't remove any private endpoints that exist. All communications between components behind the VNet that the private endpoint(s) connect to is still secured. It enables public access only to the workspace, in addition to the private access through any private endpoints.
+
+> [!WARNING]
+> When connecting over the public endpoint while the workspace uses a private endpoint to communicate with other resources:
+> * __Some features of studio will fail to access your data__. This problem happens when the _data is stored on a service that is secured behind the VNet_. For example, an Azure Storage Account.
+> * Using Jupyter, JupyterLab, and RStudio on a compute instance, including running notebooks, __is not supported__.
+
+To enable public access, use the following steps:
+
+# [Python](#tab/python)
+
+To enable public access, use [Workspace.update](/python/api/azureml-core/azureml.core.workspace(class)#update-friendly-name-none--description-none--tags-none--image-build-compute-none--service-managed-resources-settings-none--primary-user-assigned-identity-none--allow-public-access-when-behind-vnet-none-) and set `allow_public_access_when_behind_vnet=True`.
+
+```python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+ws.update(allow_public_access_when_behind_vnet=True)
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az_ml_workspace_update) command. To enable public access to the workspace, add the parameter `--allow-public-access true`.
+
+# [Portal](#tab/azure-portal)
+
+1. From the [Azure portal](https://portal.azure.com), select your Azure Machine Learning workspace.
+1. From the left side of the page, select __Networking__ and then select the __Public access__ tab.
+1. Select __All networks__, and then select __Save__.
+
If you want to create an isolated Azure Kubernetes Service used by the workspace
:::image type="content" source="./media/how-to-configure-private-link/multiple-private-endpoint-workspace-aks.png" alt-text="Diagram of isolated AKS VNet":::
-## Enable public access
-
-In some situations, you may want to allow someone to connect to your secured workspace over a public endpoint, instead of through the VNet. After configuring a workspace with a private endpoint, you can optionally enable public access to the workspace. Doing so does not remove the private endpoint. All communications between components behind the VNet is still secured. It enables public access only to the workspace, in addition to the private access through the VNet.
-
-> [!WARNING]
-> When connecting over the public endpoint:
-> * __Some features of studio will fail to access your data__. This problem happens when the _data is stored on a service that is secured behind the VNet_. For example, an Azure Storage Account.
-> * Using Jupyter, JupyterLab, and RStudio on a compute instance, including running notebooks, __is not supported__.
-
-To enable public access to a private endpoint-enabled workspace, use the following steps:
-
-# [Python](#tab/python)
-
-Use [Workspace.delete_private_endpoint_connection](/python/api/azureml-core/azureml.core.workspace(class)#delete-private-endpoint-connection-private-endpoint-connection-name-) to remove a private endpoint.
-
-```python
-from azureml.core import Workspace
-
-ws = Workspace.from_config()
-ws.update(allow_public_access_when_behind_vnet=True)
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az_ml_workspace_update) command. To enable public access to the workspace, add the parameter `--allow-public-access true`.
-
-# [Portal](#tab/azure-portal)
-
-Currently there is no way to enable this functionality using the portal.
--- ## Next steps * For more information on securing your Azure Machine Learning workspace, see the [Virtual network isolation and privacy overview](how-to-network-security-overview.md) article.
machine-learning How To Enable App Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-enable-app-insights.md
Previously updated : 10/21/2021 Last updated : 01/04/2022
In this article, you learn how to collect data from models deployed to web servi
The [enable-app-insights-in-production-service.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb) notebook demonstrates concepts in this article. [!INCLUDE [aml-clone-in-azure-notebook](../../includes/aml-clone-for-examples.md)]+
+> [!IMPORTANT]
+> The information in this article relies on the Azure Application Insights instance that was created with your workspace. If you deleted this Application Insights instance, there is no way to re-create it other than deleting and recreating the workspace.
## Prerequisites
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-workspace-cli.md
Previously updated : 09/23/2021 Last updated : 01/05/2022
In this article, you learn how to create and manage Azure Machine Learning works
[!INCLUDE [register-namespace](../../includes/machine-learning-register-namespace.md)] + ## Connect the CLI to your Azure subscription > [!IMPORTANT]
machine-learning How To Manage Workspace Terraform https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-workspace-terraform.md
Previously updated : 10/21/2021 Last updated : 01/05/2022
A Terraform configuration is a document that defines the resources that are need
* An installed version of the [Azure CLI](/cli/azure/). * Configure Terraform: follow the directions in this article and the [Terraform and configure access to Azure](/azure/developer/terraform/get-started-cloud-shell) article.
+## Limitations
+++ ## Declare the Azure provider Create the Terraform configuration file that declares the Azure provider:
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-manage-workspace.md
As your needs change or requirements for automation increase you can also manage
By default, creating a workspace also creates an Azure Container Registry (ACR). Since ACR does not currently support unicode characters in resource group names, use a resource group that does not contain these characters. + ## Create a workspace # [Python](#tab/python)
marketplace Co Sell Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-configure.md
description: The information you provide on the Co-sell with Microsoft tab for y
--++ Last updated 1/04/2021
marketplace Co Sell Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-overview.md
description: The Microsoft Partner Center Co-sell program for partners can help
--++ Last updated 12/03/2021
marketplace Co Sell Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-requirements.md
description: Learn about the requirements an offer in the Microsoft commercial m
--++ Last updated 12/03/2021
marketplace Co Sell Solution Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-solution-migration.md
description: Migrate Co-sell solutions from OCP GTM to Partner Center (Azure Mar
--++ Last updated 09/27/2021
marketplace Co Sell Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-status.md
description: Learn how to verify the co-sell status of an offer in the Microsoft
--++ Last updated 09/27/2021
marketplace Commercial Marketplace Co Sell Countries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/commercial-marketplace-co-sell-countries.md
description: Use these two-letter country/region codes when providing contact in
--+++ Last updated 04/27/2021
marketplace Commercial Marketplace Co Sell States https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/commercial-marketplace-co-sell-states.md
description: Get the available state and province codes when providing contact i
--+++ Last updated 04/27/2021
orbital Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/contact-profile.md
Configure a contact profile with Azure Orbital to save and reuse contact configu
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Create a contact profile resource
-1. Select **Create a resource** in the upper left-hand corner of the portal.
-2. In the search box, enter **Contact profile**. Select **Contact profile** in the search results.
-3. In the **Contact profile** page, select **Create**.
-4. In **Create contact profile resource**, enter or select this information in the **Basics** tab:
+1. In the Azure portal search box, enter **Contact profile**. Select **Contact profile** in the search results.
+2. In the **Contact profile** page, select **Create**.
+3. In **Create contact profile resource**, enter or select this information in the **Basics** tab:
| **Field** | **Value** |
|--|--|
| Subscription | Select your subscription |
| Resource group | Select your resource group |
- | Name | Enter contact profile name. Specify the antenna provider and mission information here. *i.e. Microsoft_Aqua_Uplink+Downlink_1* |
+ | Name | Enter a contact profile name. Specify the antenna provider and mission information here. For example, *Microsoft_Aqua_Uplink+Downlink_1*. |
| Region | Select **West US 2** |
- | Minimum viable contact duration | Define the minimum duration of the contact as a prerequisite to show you available time slots to communicate with your spacecraft. If an available time window is less than this time, it won't show in the list of available options. Provide minimum contact duration in ISO 8601 format. *i.e. PT1M* |
+ | Minimum viable contact duration | Define the minimum duration of the contact as a prerequisite to show you available time slots to communicate with your spacecraft. If an available time window is less than this time, it won't show in the list of available options. Provide the minimum contact duration in ISO 8601 format. For example, *PT1M*. |
| Minimum elevation | Define the minimum elevation of the contact, after acquisition of signal (AOS), as a prerequisite to show you available time slots to communicate with your spacecraft. Using a higher value can reduce the duration of the contact. Provide the minimum viable elevation in decimal degrees. |
| Auto track configuration | Select the frequency band to be used for autotracking during the contact: X band, S band, or Disabled. |
- | Event Hubs Namespace | Select an Event Hubs Namespace to which you will send telemetry data of your contacts. Select a Subscription before you can select an Event Hubs Namespace. |
+ | Event Hubs Namespace | Select an Event Hubs Namespace to which you'll send telemetry data of your contacts. Select a Subscription before you can select an Event Hubs Namespace. |
| Event Hubs Instance | Select an Event Hubs Instance that belongs to the previously selected Namespace. *This field will only appear if an Event Hubs Namespace is selected first*. |

:::image type="content" source="media/orbital-eos-contact-profile.png" alt-text="Contact Profile Resource Page" lightbox="media/orbital-eos-contact-profile.png":::
-5. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
-6. In the **Links** page, select **Add new Link**
-7. In the **Add Link** page, enter, or select this information per link direction:
+4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
+5. In the **Links** page, select **Add new Link**.
+6. In the **Add Link** page, enter or select this information per link direction:
| **Field** | **Value** |
|--|--|
Sign in to the [Azure portal](https://portal.azure.com).
:::image type="content" source="media/orbital-eos-contact-link.png" alt-text="Contact Profile Links Page" lightbox="media/orbital-eos-contact-link.png":::
-8. Select the **Submit** button
-9. Select the **Review + create** tab or select the **Review + create** button
-10. Select the **Create** button
+7. Select the **Submit** button.
+8. Select the **Review + create** tab or select the **Review + create** button.
+9. Select the **Create** button.
## Next steps
orbital Delete Contact https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/delete-contact.md
To cancel a scheduled contact, the contact entry must be deleted on the **Contac
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Delete a scheduled contact entry
orbital Orbital Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/orbital-preview.md
# Onboard to the Azure Orbital Preview
-Azure Orbital is now on preview, to get access an Azure subscription must be onboarded. Without this onboarding process, the Azure Orbital resources won't be available in the Azure portal.
+Azure Orbital is now in preview. To get access, an Azure subscription must be onboarded. Without this onboarding process, the Azure Orbital resources won't be available in the portal.
## Prerequisites
Azure Orbital is now on preview, to get access an Azure subscription must be onb
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Register the Resource Provider
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/register-spacecraft.md
To contact a satellite, it must be registered as a spacecraft resource with the
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Create spacecraft resource
-1. Select **Create a resource** in the upper left-hand corner of the portal.
-2. In the search box, enter **Spacecrafts*. Select **Spacecrafts** in the search results.
-3. In the **Spacecrafts** page, select Create.
-4. In **Create spacecraft resource**, enter or select this information in the Basics tab:
+> [!NOTE]
+> These steps must be followed as is or you won't be able to find the resources. Please use the specific link above to sign in directly to the Azure Orbital Preview page.
+
+1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
+2. In the **Spacecrafts** page, select **Create**.
+3. In **Create spacecraft resource**, enter or select this information in the **Basics** tab:
| **Field** | **Value** |
|--|--|
Sign in to the [Azure portal](https://portal.azure.com).
:::image type="content" source="media/orbital-eos-register-bird.png" alt-text="Register Spacecraft Resource Page" lightbox="media/orbital-eos-register-bird.png":::
-5. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
-6. In the **Links** page, enter or select this information:
+4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
+5. In the **Links** page, enter or select this information:
| **Field** | **Value** |
|--|--|
Sign in to the [Azure portal](https://portal.azure.com).
:::image type="content" source="media/orbital-eos-register-links.png" alt-text="Spacecraft Links Resource Page" lightbox="media/orbital-eos-register-links.png":::
-7. Select the **Review + create** tab, or select the **Review + create** button.
-8. Select **Create**
+6. Select the **Review + create** tab, or select the **Review + create** button.
+7. Select **Create**.
## Authorize the new spacecraft resource
orbital Schedule Contact https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/schedule-contact.md
Schedule a contact with the selected satellite for data retrieval/delivery on Az
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Select an available contact
orbital Update Tle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/orbital/update-tle.md
Update the TLE of an existing spacecraft resource.
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Update the spacecraft TLE
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concept-reserved-pricing.md
Azure Database for PostgreSQL now helps you save money by prepaying for compute
You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. An already running Azure Database for PostgreSQL server (or one that is newly deployed) automatically gets the benefit of reserved pricing. By purchasing a reservation, you're pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure Database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation does not cover software, networking, or storage charges associated with the PostgreSQL database servers. At the end of the reservation term, the billing benefit expires, and the Azure Database for PostgreSQL servers are billed at the pay-as-you-go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/).

> [!IMPORTANT]
-> Reserved capacity pricing is available for the Azure Database for PostgreSQL in [Single server](./overview.md#azure-database-for-postgresqlsingle-server), [Flexible Server](flexible-server/overview.md), and [Hyperscale Citus](./overview.md#azure-database-for-postgresql--hyperscale-citus) deployment options. For information about RI pricing on Hyperscale (Citus), see [this page](concepts-hyperscale-reserved-pricing.md).
+> Reserved capacity pricing is available for the Azure Database for PostgreSQL in [Single server](./overview.md#azure-database-for-postgresqlsingle-server), [Flexible Server](flexible-server/overview.md), and [Hyperscale (Citus)](./overview.md#azure-database-for-postgresql--hyperscale-citus) deployment options. For information about RI pricing on Hyperscale (Citus), see [this page](hyperscale/concepts-reserved-pricing.md).
You can buy Azure Database for PostgreSQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
To learn more about Azure Reservations, see the following articles:
* [Understand Azure Reservations discount](../cost-management-billing/reservations/understand-reservation-charges.md) * [Understand reservation usage for your Pay-As-You-Go subscription](../cost-management-billing/reservations/understand-reservation-charges-postgresql.md) * [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
-* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
+* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-extensions.md
Now you can run pg_dump on the original database and then do pg_restore. After t
```SQL SELECT timescaledb_post_restore(); ```
-For more details on restore method wiith Timescae enabled database see [Timescale documentation](https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restore-your-entire-database-from-backup)
+For more details on the restore method with a Timescale-enabled database, see the [Timescale documentation](https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restore-your-entire-database-from-backup).
### Restoring a Timescale database using timescaledb-backup
For more details on restore method wiith Timescae enabled database see [Timescal
4. Grant the azure_pg_admin [role](https://www.postgresql.org/docs/11/database-roles.html) to the user that will be used by [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) 5. Run [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) to restore the database
- More details on hese utilities can be found [here](https://github.com/timescale/timescaledb-backup).
+ More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup).
> [!NOTE] > When using `timescale-backup` utilities to restore to Azure, note that database user names for non-flexible Azure Database for PostgreSQL must use the `<user@db-name>` format, so you need to replace `@` with the `%40` character encoding.
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-version-policy.md
Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.pos
## Next steps - See Azure Database for PostgreSQL - Single Server [supported versions](./concepts-supported-versions.md) - See Azure Database for PostgreSQL - Flexible Server [supported versions](flexible-server/concepts-supported-versions.md)-- See Azure Database for PostgreSQL - Hyperscale (Citus) [supported versions](concepts-hyperscale-versions.md)
+- See Azure Database for PostgreSQL - Hyperscale (Citus) [supported versions](hyperscale/concepts-versions.md)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| West US 3 | :heavy_check_mark: | :x: | :x: | <!-- We continue to add more regions for flexible server. -->
+> [!NOTE]
+> If your application requires zone-redundant HA and it's not available in your preferred Azure region, consider using other regions within the same geography where zone-redundant HA is available, such as East US for East US 2, Central US for North Central US, and so on.
## Migration
postgresql Concepts App Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-app-type.md
+
+ Title: Determine application type - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Identify your application for effective distributed data modeling
+++++ Last updated : 07/17/2020++
+# Determining Application Type
+
+Running efficient queries on a Hyperscale (Citus) server group requires that
+tables be properly distributed across servers. The recommended distribution
+varies by the type of application and its query patterns.
+
+There are broadly two kinds of applications that work well on Hyperscale
+(Citus). The first step in data modeling is to identify which of them more
+closely resembles your application.
+
+## At a Glance
+
+| Multi-Tenant Applications | Real-Time Applications |
+|--|-|
+| Sometimes dozens or hundreds of tables in schema | Small number of tables |
+| Queries relating to one tenant (company/store) at a time | Relatively simple analytics queries with aggregations |
+| OLTP workloads for serving web clients | High ingest volume of mostly immutable data |
+| OLAP workloads that serve per-tenant analytical queries | Often centering around large table of events |
+
+## Examples and Characteristics
+
+**Multi-Tenant Application**
+
+> These are typically SaaS applications that serve other companies,
+> accounts, or organizations. Most SaaS applications are inherently
+> relational. They have a natural dimension on which to distribute data
+> across nodes: just shard by tenant\_id.
+>
+> Hyperscale (Citus) enables you to scale out your database to millions of
+> tenants without having to re-architect your application. You can keep the
+> relational semantics you need, like joins, foreign key constraints,
+> transactions, ACID, and consistency.
+>
+> - **Examples**: Websites which host store-fronts for other
+> businesses, such as a digital marketing solution, or a sales
+> automation tool.
+> - **Characteristics**: Queries relating to a single tenant rather
+> than joining information across tenants. This includes OLTP
+> workloads for serving web clients, and OLAP workloads that serve
+> per-tenant analytical queries. Having dozens or hundreds of tables
+> in your database schema is also an indicator for the multi-tenant
+> data model.
+>
+> Scaling a multi-tenant app with Hyperscale (Citus) also requires minimal
+> changes to application code. We have support for popular frameworks like Ruby
+> on Rails and Django.
+
+**Real-Time Analytics**
+
+> Applications needing massive parallelism, coordinating hundreds of cores for
+> fast results to numerical, statistical, or counting queries. By sharding and
+> parallelizing SQL queries across multiple nodes, Hyperscale (Citus) makes it
+> possible to perform real-time queries across billions of records in under a
+> second.
+>
+> Tables in real-time analytics data models are typically distributed by
+> columns like user\_id, host\_id, or device\_id.
+>
+> - **Examples**: Customer-facing analytics dashboards requiring
+> sub-second response times.
+> - **Characteristics**: Few tables, often centering around a big
+> table of device-, site- or user-events and requiring high ingest
+> volume of mostly immutable data. Relatively simple (but
+> computationally intensive) analytics queries involving several
+> aggregations and GROUP BYs.
+
+If your situation resembles either case above, then the next step is to decide
+how to shard your data in the server group. The database administrator's
+choice of distribution columns needs to match the access patterns of typical
+queries to ensure performance.
+
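As a rough sketch of the two patterns (the table and column names here are hypothetical, not part of the article), each pattern distributes its tables with the Citus `create_distributed_table` function on the dimension described above:

```sql
-- Multi-tenant pattern: shard every table by the tenant ID.
SELECT create_distributed_table('orders',   'company_id');
SELECT create_distributed_table('invoices', 'company_id');

-- Real-time pattern: shard the large event table by an entity ID such as a device ID.
SELECT create_distributed_table('device_events', 'device_id');
```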
+## Next steps
+
+* [Choose a distribution
+ column](concepts-choose-distribution-column.md) for tables in your
+ application to distribute data efficiently
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-audit.md
+
+ Title: Audit logging - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Hyperscale (Citus).
+++++ Last updated : 08/03/2021++
+# Audit logging in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+> [!IMPORTANT]
+> The pgAudit extension in Hyperscale (Citus) is currently in preview. This
+> preview version is provided without a service level agreement, and it's not
+> recommended for production workloads. Certain features might not be supported
+> or might have constrained capabilities.
+>
+> You can see a complete list of other new features in [preview features for
+> Hyperscale (Citus)](product-updates.md).
+
+Audit logging of database activities in Azure Database for PostgreSQL - Hyperscale (Citus) is available through the PostgreSQL Audit extension: [pgAudit](https://www.pgaudit.org/). pgAudit provides detailed session or object audit logging.
+
+If you want Azure resource-level logs for operations like compute and storage scaling, see the [Azure Activity Log](../../azure-monitor/essentials/platform-logs-overview.md).
+
+## Usage considerations
+By default, pgAudit log statements are emitted along with your regular log statements by using Postgres's standard logging facility. In Azure Database for PostgreSQL - Hyperscale (Citus), you can configure all logs to be sent to Azure Monitor Log store for later analytics in Log Analytics. If you enable Azure Monitor resource logging, your logs will be automatically sent (in JSON format) to Azure Storage, Event Hubs, or Azure Monitor logs, depending on your choice.
+
+## Enabling pgAudit
+
+The pgAudit extension is pre-installed and enabled on a limited number of
+Hyperscale (Citus) server groups at this time. It may or may not be available
+for preview yet on your server group.
+
+## pgAudit settings
+
+pgAudit allows you to configure session or object audit logging. [Session audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#session-audit-logging) emits detailed logs of executed statements. [Object audit logging](https://github.com/pgaudit/pgaudit/blob/master/README.md#object-audit-logging) is audit scoped to specific relations. You can choose to set up one or both types of logging.
+
+> [!NOTE]
+> pgAudit settings are specified globally and cannot be specified at a database or role level.
+>
+> Also, pgAudit settings are specified per-node in a server group. To make a change on all nodes, you must apply it to each node individually.
+
+You must configure pgAudit parameters to start logging. The [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#settings) provides the definition of each parameter. Test the parameters first and confirm that you're getting the expected behavior.
+
+> [!NOTE]
+> Setting `pgaudit.log_client` to ON will redirect logs to a client process (like psql) instead of being written to file. This setting should generally be left disabled. <br> <br>
+> `pgaudit.log_level` is only enabled when `pgaudit.log_client` is on.
+
+> [!NOTE]
+> In Azure Database for PostgreSQL - Hyperscale (Citus), `pgaudit.log` cannot be set using a `-` (minus) sign shortcut as described in the pgAudit documentation. All required statement classes (READ, WRITE, etc.) should be individually specified.
+
+## Audit log format
+Each audit entry is indicated by `AUDIT:` near the beginning of the log line. The format of the rest of the entry is detailed in the [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/README.md#format).
+
+## Getting started
+To quickly get started, set `pgaudit.log` to `WRITE`, and open your server logs to review the output.
+
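For illustration only, and assuming `WRITE` is included in `pgaudit.log`, a simple statement and the kind of entry it produces might look like the following (the table name is hypothetical; see the pgAudit documentation referenced above for the authoritative format):

```sql
-- Statement executed while WRITE-class auditing is enabled:
INSERT INTO accounts (id, name) VALUES (1, 'contoso');

-- Illustrative log entry emitted by pgAudit for the statement above:
-- AUDIT: SESSION,1,1,WRITE,INSERT,,,"INSERT INTO accounts (id, name) VALUES (1, 'contoso');",<not logged>
```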
+## Viewing audit logs
+The way you access the logs depends on which endpoint you choose. For Azure Storage, see the [logs storage account](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) article. For Event Hubs, see the [stream Azure logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs) article.
+
+For Azure Monitor Logs, logs are sent to the workspace you selected. The Postgres logs use the **AzureDiagnostics** collection mode, so they can be queried from the AzureDiagnostics table. The fields in the table are described below. Learn more about querying and alerting in the [Azure Monitor Logs query](../../azure-monitor/logs/log-query-overview.md) overview.
+
+You can use this query to get started. You can configure alerts based on queries.
+
+Search for all pgAudit entries in Postgres logs for a particular server in the last day
+```kusto
+AzureDiagnostics
+| where LogicalServerName_s == "myservername"
+| where TimeGenerated > ago(1d)
+| where Message contains "AUDIT:"
+```
+
+## Next steps
+
+- [Learn how to setup logging in Azure Database for PostgreSQL - Hyperscale (Citus) and how to access logs](howto-logging.md)
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-backup.md
+
+ Title: Backup and restore – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Protecting data from accidental corruption or deletion
+++++ Last updated : 04/14/2021++
+# Backup and restore in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+Azure Database for PostgreSQL – Hyperscale (Citus) automatically creates
+backups of each node and stores them in locally redundant storage. Backups can
+be used to restore your Hyperscale (Citus) server group to a specified time.
+Backup and restore are an essential part of any business continuity strategy
+because they protect your data from accidental corruption or deletion.
+
+## Backups
+
+At least once a day, Azure Database for PostgreSQL takes snapshot backups of
+data files and the database transaction log. The backups allow you to restore a
+server to any point in time within the retention period. (The retention period
+is currently 35 days for all server groups.) All backups are encrypted using
+AES 256-bit encryption.
+
+In Azure regions that support availability zones, backup snapshots are stored
+in three availability zones. As long as at least one availability zone is
+online, the Hyperscale (Citus) server group is restorable.
+
+Backup files can't be exported. They may only be used for restore operations
+in Azure Database for PostgreSQL.
+
+### Backup storage cost
+
+For current backup storage pricing, see the Azure Database for PostgreSQL -
+Hyperscale (Citus) [pricing
+page](https://azure.microsoft.com/pricing/details/postgresql/hyperscale-citus/).
+
+## Restore
+
+You can restore a Hyperscale (Citus) server group to any point in time within
+the last 35 days. Point-in-time restore is useful in multiple scenarios. For
+example, when a user accidentally deletes data, drops an important table or
+database, or if an application accidentally overwrites good data with bad data.
+
+> [!IMPORTANT]
+> Deleted Hyperscale (Citus) server groups can't be restored. If you delete the
+> server group, all nodes that belong to the server group are deleted and can't
+> be recovered. To protect server group resources, post deployment, from
+> accidental deletion or unexpected changes, administrators can leverage
+> [management locks](../../azure-resource-manager/management/lock-resources.md).
+
+The restore process creates a new server group in the same Azure region,
+subscription, and resource group as the original. The server group has the
+original's configuration: the same number of nodes, number of vCores, storage
+size, user roles, PostgreSQL version, and version of the Citus extension.
+
+Firewall settings and PostgreSQL server parameters are not preserved from the
+original server group; they are reset to default values. The firewall will
+prevent all connections. You will need to manually adjust these settings after
+restore. In general, see our list of suggested [post-restore
+tasks](howto-restore-portal.md#post-restore-tasks).
+
+## Next steps
+
+* See the steps to [restore a server group](howto-restore-portal.md)
+ in the Azure portal.
+* Learn about [Azure availability zones](../../availability-zones/az-overview.md).
postgresql Concepts Choose Distribution Column https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-choose-distribution-column.md
+
+ Title: Choose distribution columns – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn how to choose distribution columns in common scenarios in Azure Database for PostgreSQL - Hyperscale (Citus).
+++++ Last updated : 12/06/2021++
+# Choose distribution columns in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+Choosing each table's distribution column is one of the most important modeling decisions you'll make. Azure Database for PostgreSQL – Hyperscale (Citus) stores rows in shards based on the value of the rows' distribution column.
+
+The correct choice groups related data together on the same physical nodes, which makes queries fast and adds support for all SQL features. An incorrect choice makes the system run slowly and won't support all SQL features across nodes.
+
+This article gives distribution column tips for the two most common Hyperscale (Citus) scenarios.
+
+### Multi-tenant apps
+
+The multi-tenant architecture uses a form of hierarchical database modeling to
+distribute queries across nodes in the server group. The top of the data
+hierarchy is known as the *tenant ID* and needs to be stored in a column on
+each table.
+
+Hyperscale (Citus) inspects queries to see which tenant ID they involve and finds the matching table shard. It
+routes the query to a single worker node that contains the shard. Running a query with
+all relevant data placed on the same node is called colocation.
+
+The following diagram illustrates colocation in the multi-tenant data
+model. It contains two tables, Accounts and Campaigns, each distributed
+by `account_id`. The shaded boxes represent shards. Green shards are stored
+together on one worker node, and blue shards are stored on another worker node. Notice how a join
+query between Accounts and Campaigns has all the necessary data
+together on one node when both tables are restricted to the same
+account\_id.
+
+![Multi-tenant
+colocation](../media/concepts-hyperscale-choosing-distribution-column/multi-tenant-colocation.png)
+
+To apply this design in your own schema, identify
+what constitutes a tenant in your application. Common instances include
+company, account, organization, or customer. The column name will be
+something like `company_id` or `customer_id`. Examine each of your
+queries and ask yourself, would it work if it had additional WHERE
+clauses to restrict all tables involved to rows with the same tenant ID?
+Queries in the multi-tenant model are scoped to a tenant. For
+instance, queries on sales or inventory are scoped within a certain
+store.
+
+#### Best practices
+
+- **Partition distributed tables by a common tenant\_id column.** For
+ instance, in a SaaS application where tenants are companies, the
+ tenant\_id is likely to be the company\_id.
+- **Convert small cross-tenant tables to reference tables.** When
+ multiple tenants share a small table of information, distribute it
+ as a reference table.
+- **Filter all application queries by tenant\_id.** Each
+ query should request information for one tenant at a time.
+
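A minimal sketch of these practices, using hypothetical table names: the per-tenant tables are distributed by `company_id`, a small shared lookup table becomes a reference table, and application queries always filter by the tenant ID.

```sql
-- Distribute the per-tenant tables by the common tenant ID column.
SELECT create_distributed_table('stores', 'company_id');
SELECT create_distributed_table('orders', 'company_id');

-- A small table shared across tenants becomes a reference table.
SELECT create_reference_table('countries');

-- Application queries request information for one tenant at a time.
SELECT count(*) FROM orders WHERE company_id = 42;
```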
+Read the [multi-tenant
+tutorial](./tutorial-design-database-multi-tenant.md) for an example of how to
+build this kind of application.
+
+### Real-time apps
+
+The multi-tenant architecture introduces a hierarchical structure
+and uses data colocation to route queries per tenant. By contrast, real-time
+architectures depend on specific distribution properties of their data
+to achieve highly parallel processing.
+
+We use "entity ID" as a term for distribution columns in the real-time
+model. Typical entities are users, hosts, or devices.
+
+Real-time queries typically ask for numeric aggregates grouped by date or
+category. Hyperscale (Citus) sends these queries to each shard for partial results and
+assembles the final answer on the coordinator node. Queries run fastest when as
+many nodes contribute as possible, and when no single node must do a
+disproportionate amount of work.
+
+#### Best practices
+
+- **Choose a column with high cardinality as the distribution
+ column.** For comparison, a Status field on an order table with
+ values New, Paid, and Shipped is a poor choice of
+ distribution column. It assumes only those few values, which limits the number of shards that can hold
+ the data, and the number of nodes that can process it. Among columns
+ with high cardinality, it's also good to choose those columns that
+ are frequently used in group-by clauses or as join keys.
+- **Choose a column with even distribution.** If you distribute a
+ table on a column skewed to certain common values, data in the
+ table tends to accumulate in certain shards. The nodes that hold
+ those shards end up doing more work than other nodes.
+- **Distribute fact and dimension tables on their common columns.**
+ Your fact table can have only one distribution key. Tables that join
+ on another key won't be colocated with the fact table. Choose
+ one dimension to colocate based on how frequently it's joined and
+ the size of the joining rows.
+- **Change some dimension tables into reference tables.** If a
+ dimension table can't be colocated with the fact table, you can
+ improve query performance by distributing copies of the dimension
+ table to all of the nodes in the form of a reference table.
+
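A sketch of these practices with a hypothetical schema: the fact table and its most frequently joined dimension are distributed on the same column, and a dimension that doesn't share that key is replicated as a reference table instead.

```sql
-- Distribute fact and dimension tables on their common, high-cardinality column.
SELECT create_distributed_table('page_views', 'site_id');
SELECT create_distributed_table('sites', 'site_id', colocate_with => 'page_views');

-- A dimension table without the site_id key becomes a reference table.
SELECT create_reference_table('browsers');
```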
+Read the [real-time dashboard
+tutorial](./tutorial-design-database-realtime.md) for an example of how to build this kind of application.
+
+### Time-series data
+
+In a time-series workload, applications query recent information while they
+archive old information.
+
+The most common mistake in modeling time-series information in Hyperscale (Citus) is to
+use the timestamp itself as a distribution column. A hash distribution based
+on time distributes times seemingly at random into different shards rather
+than keeping ranges of time together in shards. Queries that involve time
+generally reference ranges of time, for example, the most recent data. This type of
+hash distribution leads to network overhead.
+
+#### Best practices
+
+- **Don't choose a timestamp as the distribution column.** Choose a
+ different distribution column. In a multi-tenant app, use the tenant
+ ID, or in a real-time app use the entity ID.
+- **Use PostgreSQL table partitioning for time instead.** Use table
+ partitioning to break a large table of time-ordered data into
+ multiple inherited tables with each table containing different time
+ ranges. Distributing a Postgres-partitioned table in Hyperscale (Citus)
+ creates shards for the inherited tables.
+
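A minimal sketch of this approach, with hypothetical names: the events table is range-partitioned by time with native PostgreSQL partitioning, and the partitioned parent is distributed by the device ID.

```sql
-- Range partitioning on the time column keeps ranges of time together...
CREATE TABLE events (
    device_id  bigint,
    event_time timestamptz,
    payload    jsonb
) PARTITION BY RANGE (event_time);

CREATE TABLE events_2022_01 PARTITION OF events
    FOR VALUES FROM ('2022-01-01') TO ('2022-02-01');

-- ...while the distribution column stays on a non-time column.
SELECT create_distributed_table('events', 'device_id');
```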
+## Next steps
+
+- Learn how [colocation](concepts-colocation.md) between distributed data helps queries run fast.
+- Discover the distribution column of a distributed table, and other [useful diagnostic queries](howto-useful-diagnostic-queries.md).
postgresql Concepts Colocation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-colocation.md
+
+ Title: Table colocation - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: How to store related information together for faster queries
+++++ Last updated : 05/06/2019++
+# Table colocation in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+Colocation means storing related information together on the same nodes. Queries can go fast when all the necessary data is available without any network traffic. Colocating related data on different nodes allows queries to run efficiently in parallel on each node.
+
+## Data colocation for hash-distributed tables
+
+In Azure Database for PostgreSQL – Hyperscale (Citus), a row is stored in a shard if the hash of the value in the distribution column falls within the shard's hash range. Shards with the same hash range are always placed on the same node. Rows with equal distribution column values are always on the same node across tables.
++
+## A practical example of colocation
+
+Consider the following tables that might be part of a multi-tenant web
+analytics SaaS:
+
+```sql
+CREATE TABLE event (
+ tenant_id int,
+ event_id bigint,
+ page_id int,
+ payload jsonb,
+ primary key (tenant_id, event_id)
+);
+
+CREATE TABLE page (
+ tenant_id int,
+ page_id int,
+ path text,
+ primary key (tenant_id, page_id)
+);
+```
+
+Now we want to answer queries that might be issued by a customer-facing
+dashboard. An example query is "Return the number of visits in the past week for
+all pages starting with '/blog' in tenant six."
+
+If our data was in the Single-Server deployment option, we could easily express
+our query by using the rich set of relational operations offered by SQL:
+
+```sql
+SELECT page_id, count(event_id)
+FROM
+ page
+LEFT JOIN (
+ SELECT * FROM event
+ WHERE (payload->>'time')::timestamptz >= now() - interval '1 week'
+) recent
+USING (tenant_id, page_id)
+WHERE tenant_id = 6 AND path LIKE '/blog%'
+GROUP BY page_id;
+```
+
+As long as the [working set](https://en.wikipedia.org/wiki/Working_set) for this query fits in memory, a single-server table is an appropriate solution. Let's consider the opportunities of scaling the data model with the Hyperscale (Citus) deployment option.
+
+### Distribute tables by ID
+
+Single-server queries start slowing down as the number of tenants and the data stored for each tenant grows. The working set stops fitting in memory and CPU becomes a bottleneck.
+
+In this case, we can shard the data across many nodes by using Hyperscale (Citus). The
+first and most important choice we need to make when we decide to shard is the
+distribution column. Let's start with a naive choice of using `event_id` for
+the event table and `page_id` for the `page` table:
+
+```sql
+-- naively use event_id and page_id as distribution columns
+
+SELECT create_distributed_table('event', 'event_id');
+SELECT create_distributed_table('page', 'page_id');
+```
+
+When data is dispersed across different workers, we can't perform a join like we would on a single PostgreSQL node. Instead, we need to issue two queries:
+
+```sql
+-- (Q1) get the relevant page_ids
+SELECT page_id FROM page WHERE path LIKE '/blog%' AND tenant_id = 6;
+
+-- (Q2) get the counts
+SELECT page_id, count(*) AS count
+FROM event
+WHERE page_id IN (/*…page IDs from first query…*/)
+ AND tenant_id = 6
+ AND (payload->>'time')::date >= now() - interval '1 week'
+GROUP BY page_id ORDER BY count DESC LIMIT 10;
+```
+
+Afterwards, the results from the two steps need to be combined by the
+application.
+
+Running these queries requires consulting data in shards scattered across nodes.
++
+In this case, the data distribution creates substantial drawbacks:
+
+- Overhead from querying each shard and running multiple queries.
+- Overhead of Q1 returning many rows to the client.
+- Q2 becomes large.
+- The need to write queries in multiple steps requires changes in the application.
+
+The data is dispersed, so the queries can be parallelized. It's
+only beneficial if the amount of work that the query does is substantially
+greater than the overhead of querying many shards.
+
+### Distribute tables by tenant
+
+In Hyperscale (Citus), rows with the same distribution column value are guaranteed to
+be on the same node. Starting over, we can create our tables with `tenant_id`
+as the distribution column.
+
+```sql
+-- co-locate tables by using a common distribution column
+SELECT create_distributed_table('event', 'tenant_id');
+SELECT create_distributed_table('page', 'tenant_id', colocate_with => 'event');
+```
+
+Now Hyperscale (Citus) can answer the original single-server query without modification (Q1):
+
+```sql
+SELECT page_id, count(event_id)
+FROM
+ page
+LEFT JOIN (
+ SELECT * FROM event
+ WHERE (payload->>'time')::timestamptz >= now() - interval '1 week'
+) recent
+USING (tenant_id, page_id)
+WHERE tenant_id = 6 AND path LIKE '/blog%'
+GROUP BY page_id;
+```
+
+Because of the filter and join on tenant_id, Hyperscale (Citus) knows that the entire
+query can be answered by using the set of colocated shards that contain the data
+for that particular tenant. A single PostgreSQL node can answer the query in
+a single step.
++
+In some cases, queries and table schemas must be changed to include the tenant ID in unique constraints and join conditions. This change is usually straightforward.
+
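For example, a unique constraint on a distributed table has to include the distribution column, so a per-tenant uniqueness rule on the `page` table could be expressed like this (the constraint itself is hypothetical):

```sql
-- Uniqueness is scoped per tenant by including tenant_id in the constraint.
ALTER TABLE page ADD CONSTRAINT page_path_unique UNIQUE (tenant_id, path);
```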
+## Next steps
+
+- See how tenant data is colocated in the [multi-tenant tutorial](tutorial-design-database-multi-tenant.md).
postgresql Concepts Columnar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-columnar.md
+
+ Title: Columnar table storage - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Compressing data using columnar storage
+++++ Last updated : 08/03/2021++
+# Columnar table storage
+
+Azure Database for PostgreSQL - Hyperscale (Citus) supports append-only
+columnar table storage for analytic and data warehousing workloads. When
+columns (rather than rows) are stored contiguously on disk, data becomes more
+compressible, and queries can request a subset of columns more quickly.
+
+## Usage
+
+To use columnar storage, specify `USING columnar` when creating a table:
+
+```postgresql
+CREATE TABLE contestant (
+ handle TEXT,
+ birthdate DATE,
+ rating INT,
+ percentile FLOAT,
+ country CHAR(3),
+ achievements TEXT[]
+) USING columnar;
+```
+
+Hyperscale (Citus) converts rows to columnar storage in "stripes" during
+insertion. Each stripe holds one transaction's worth of data, or 150000 rows,
+whichever is less. (The stripe size and other parameters of a columnar table
+can be changed with the
+[alter_columnar_table_set](reference-functions.md#alter_columnar_table_set)
+function.)
+
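As a hedged sketch (the parameter names follow the Citus columnar documentation and may differ between versions), the stripe size and compression of an existing columnar table can be adjusted like this:

```sql
-- Adjust columnar settings for the contestant table; the values are illustrative.
SELECT alter_columnar_table_set(
    'contestant',
    stripe_row_limit => 100000,
    compression      => 'zstd');
```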
+For example, the following statement puts all five rows into the same stripe,
+because all values are inserted in a single transaction:
+
+```postgresql
+-- insert these values into a single columnar stripe
+
+INSERT INTO contestant VALUES
+ ('a','1990-01-10',2090,97.1,'XA','{a}'),
+ ('b','1990-11-01',2203,98.1,'XA','{a,b}'),
+ ('c','1988-11-01',2907,99.4,'XB','{w,y}'),
+ ('d','1985-05-05',2314,98.3,'XB','{}'),
+ ('e','1995-05-05',2236,98.2,'XC','{a}');
+```
+
+It's best to make large stripes when possible, because Hyperscale (Citus)
+compresses columnar data separately per stripe. We can see facts about our
+columnar table like compression rate, number of stripes, and average rows per
+stripe by using `VACUUM VERBOSE`:
+
+```postgresql
+VACUUM VERBOSE contestant;
+```
+```
+INFO: statistics for "contestant":
+storage id: 10000000000
+total file size: 24576, total data size: 248
+compression rate: 1.31x
+total row count: 5, stripe count: 1, average rows per stripe: 5
+chunk count: 6, containing data for dropped columns: 0, zstd compressed: 6
+```
+
+The output shows that Hyperscale (Citus) used the zstd compression algorithm to
+obtain 1.31x data compression. The compression rate compares a) the size of
+inserted data as it was staged in memory against b) the size of that data
+compressed in its eventual stripe.
+
+Because of how it's measured, the compression rate may or may not match the
+size difference between row and columnar storage for a table. The only way
+to truly find that difference is to construct a row and columnar table that
+contain the same data, and compare.
+
+## Measuring compression
+
+Let's create a new example with more data to benchmark the compression savings.
+
+```postgresql
+-- first a wide table using row storage
+CREATE TABLE perf_row(
+ c00 int8, c01 int8, c02 int8, c03 int8, c04 int8, c05 int8, c06 int8, c07 int8, c08 int8, c09 int8,
+ c10 int8, c11 int8, c12 int8, c13 int8, c14 int8, c15 int8, c16 int8, c17 int8, c18 int8, c19 int8,
+ c20 int8, c21 int8, c22 int8, c23 int8, c24 int8, c25 int8, c26 int8, c27 int8, c28 int8, c29 int8,
+ c30 int8, c31 int8, c32 int8, c33 int8, c34 int8, c35 int8, c36 int8, c37 int8, c38 int8, c39 int8,
+ c40 int8, c41 int8, c42 int8, c43 int8, c44 int8, c45 int8, c46 int8, c47 int8, c48 int8, c49 int8,
+ c50 int8, c51 int8, c52 int8, c53 int8, c54 int8, c55 int8, c56 int8, c57 int8, c58 int8, c59 int8,
+ c60 int8, c61 int8, c62 int8, c63 int8, c64 int8, c65 int8, c66 int8, c67 int8, c68 int8, c69 int8,
+ c70 int8, c71 int8, c72 int8, c73 int8, c74 int8, c75 int8, c76 int8, c77 int8, c78 int8, c79 int8,
+ c80 int8, c81 int8, c82 int8, c83 int8, c84 int8, c85 int8, c86 int8, c87 int8, c88 int8, c89 int8,
+ c90 int8, c91 int8, c92 int8, c93 int8, c94 int8, c95 int8, c96 int8, c97 int8, c98 int8, c99 int8
+);
+
+-- next a table with identical columns using columnar storage
+CREATE TABLE perf_columnar(LIKE perf_row) USING COLUMNAR;
+```
+
+Fill both tables with the same large dataset:
+
+```postgresql
+INSERT INTO perf_row
+ SELECT
+ g % 00500, g % 01000, g % 01500, g % 02000, g % 02500, g % 03000, g % 03500, g % 04000, g % 04500, g % 05000,
+ g % 05500, g % 06000, g % 06500, g % 07000, g % 07500, g % 08000, g % 08500, g % 09000, g % 09500, g % 10000,
+ g % 10500, g % 11000, g % 11500, g % 12000, g % 12500, g % 13000, g % 13500, g % 14000, g % 14500, g % 15000,
+ g % 15500, g % 16000, g % 16500, g % 17000, g % 17500, g % 18000, g % 18500, g % 19000, g % 19500, g % 20000,
+ g % 20500, g % 21000, g % 21500, g % 22000, g % 22500, g % 23000, g % 23500, g % 24000, g % 24500, g % 25000,
+ g % 25500, g % 26000, g % 26500, g % 27000, g % 27500, g % 28000, g % 28500, g % 29000, g % 29500, g % 30000,
+ g % 30500, g % 31000, g % 31500, g % 32000, g % 32500, g % 33000, g % 33500, g % 34000, g % 34500, g % 35000,
+ g % 35500, g % 36000, g % 36500, g % 37000, g % 37500, g % 38000, g % 38500, g % 39000, g % 39500, g % 40000,
+ g % 40500, g % 41000, g % 41500, g % 42000, g % 42500, g % 43000, g % 43500, g % 44000, g % 44500, g % 45000,
+ g % 45500, g % 46000, g % 46500, g % 47000, g % 47500, g % 48000, g % 48500, g % 49000, g % 49500, g % 50000
+ FROM generate_series(1,50000000) g;
+
+INSERT INTO perf_columnar
+ SELECT
+ g % 00500, g % 01000, g % 01500, g % 02000, g % 02500, g % 03000, g % 03500, g % 04000, g % 04500, g % 05000,
+ g % 05500, g % 06000, g % 06500, g % 07000, g % 07500, g % 08000, g % 08500, g % 09000, g % 09500, g % 10000,
+ g % 10500, g % 11000, g % 11500, g % 12000, g % 12500, g % 13000, g % 13500, g % 14000, g % 14500, g % 15000,
+ g % 15500, g % 16000, g % 16500, g % 17000, g % 17500, g % 18000, g % 18500, g % 19000, g % 19500, g % 20000,
+ g % 20500, g % 21000, g % 21500, g % 22000, g % 22500, g % 23000, g % 23500, g % 24000, g % 24500, g % 25000,
+ g % 25500, g % 26000, g % 26500, g % 27000, g % 27500, g % 28000, g % 28500, g % 29000, g % 29500, g % 30000,
+ g % 30500, g % 31000, g % 31500, g % 32000, g % 32500, g % 33000, g % 33500, g % 34000, g % 34500, g % 35000,
+ g % 35500, g % 36000, g % 36500, g % 37000, g % 37500, g % 38000, g % 38500, g % 39000, g % 39500, g % 40000,
+ g % 40500, g % 41000, g % 41500, g % 42000, g % 42500, g % 43000, g % 43500, g % 44000, g % 44500, g % 45000,
+ g % 45500, g % 46000, g % 46500, g % 47000, g % 47500, g % 48000, g % 48500, g % 49000, g % 49500, g % 50000
+ FROM generate_series(1,50000000) g;
+
+VACUUM (FREEZE, ANALYZE) perf_row;
+VACUUM (FREEZE, ANALYZE) perf_columnar;
+```
+
+For this data, you can see a compression ratio of better than 8X in the
+columnar table.
+
+```postgresql
+SELECT pg_total_relation_size('perf_row')::numeric/
+ pg_total_relation_size('perf_columnar') AS compression_ratio;
+ compression_ratio
+-------------------
+ 8.0196135873627944
+(1 row)
+```
+
+## Example
+
+Columnar storage works well with table partitioning. For an example, see the
+Citus Engine community documentation, [archiving with columnar
+storage](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archiving-with-columnar-storage).
+
+## Gotchas
+
+* Columnar storage compresses per stripe. Stripes are created per transaction,
+ so inserting one row per transaction will put single rows into their own
+ stripes. Compression and performance of single row stripes will be worse than
+ a row table. Always insert in bulk to a columnar table.
+* If you mess up and columnarize a bunch of tiny stripes, you're stuck.
+ The only fix is to create a new columnar table and copy
+ data from the original in one transaction:
+ ```postgresql
+ BEGIN;
+ CREATE TABLE foo_compacted (LIKE foo) USING columnar;
+ INSERT INTO foo_compacted SELECT * FROM foo;
+ DROP TABLE foo;
+ ALTER TABLE foo_compacted RENAME TO foo;
+ COMMIT;
+ ```
+* Fundamentally non-compressible data can be a problem, although columnar
+ storage is still useful when selecting specific columns. It doesn't need
+ to load the other columns into memory.
+* On a partitioned table with a mix of row and column partitions, updates must
+ be carefully targeted. Filter them to hit only the row partitions.
+ * If the operation is targeted at a specific row partition (for example,
+ `UPDATE p2 SET i = i + 1`), it will succeed; if targeted at a specified columnar
+ partition (for example, `UPDATE p1 SET i = i + 1`), it will fail.
+ * If the operation is targeted at the partitioned table and has a WHERE
+ clause that excludes all columnar partitions (for example
+ `UPDATE parent SET i = i + 1 WHERE timestamp = '2020-03-15'`),
+ it will succeed.
+ * If the operation is targeted at the partitioned table, but does not
+ filter on the partition key columns, it will fail. Even if there are
+ WHERE clauses that match rows in only columnar partitions, it's not
+ enough--the partition key must also be filtered.
+
+## Limitations
+
+This feature still has significant limitations. See [Hyperscale
+(Citus) limits and limitations](concepts-limits.md#columnar-storage).
+
+## Next steps
+
+* See an example of columnar storage in a Citus [time series
+ tutorial](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archiving-with-columnar-storage)
+ (external link).
postgresql Concepts Configuration Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-configuration-options.md
+
+ Title: Configuration options – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Options for a Hyperscale (Citus) server group, including node compute, storage, and regions.
++++++ Last updated : 12/17/2021++
+# Azure Database for PostgreSQL – Hyperscale (Citus) configuration options
+
+## Compute and storage
+
+You can select the compute and storage settings independently for
+worker nodes and the coordinator node in a Hyperscale (Citus) server
+group. Compute resources are provided as vCores, which represent
+the logical CPU of the underlying hardware. The storage size for
+provisioning refers to the capacity available to the coordinator
+and worker nodes in your Hyperscale (Citus) server group. The storage
+includes database files, temporary files, transaction logs, and
+the Postgres server logs.
+
+### Standard tier
+
+| Resource | Worker node | Coordinator node |
+|--|--|--|
+| Compute, vCores | 4, 8, 16, 32, 64 | 4, 8, 16, 32, 64 |
+| Memory per vCore, GiB | 8 | 4 |
+| Storage size, TiB | 0.5, 1, 2 | 0.5, 1, 2 |
+| Storage type | General purpose (SSD) | General purpose (SSD) |
+| IOPS | Up to 3 IOPS/GiB | Up to 3 IOPS/GiB |
+
+The total amount of RAM in a single Hyperscale (Citus) node is based on the
+selected number of vCores.
+
+| vCores | One worker node, GiB RAM | Coordinator node, GiB RAM |
+|--|--|--|
+| 4 | 32 | 16 |
+| 8 | 64 | 32 |
+| 16 | 128 | 64 |
+| 32 | 256 | 128 |
+| 64 | 432 | 256 |
+
+The total amount of storage you provision also defines the I/O capacity
+available to each worker and coordinator node.
+
+| Storage size, TiB | Maximum IOPS |
+|-|--|
+| 0.5 | 1,536 |
+| 1 | 3,072 |
+| 2 | 6,148 |
+
+For the entire Hyperscale (Citus) cluster, the aggregated IOPS work out to the
+following values:
+
+| Worker nodes | 0.5 TiB, total IOPS | 1 TiB, total IOPS | 2 TiB, total IOPS |
+|--|--|-|-|
+| 2 | 3,072 | 6,144 | 12,296 |
+| 3 | 4,608 | 9,216 | 18,444 |
+| 4 | 6,144 | 12,288 | 24,592 |
+| 5 | 7,680 | 15,360 | 30,740 |
+| 6 | 9,216 | 18,432 | 36,888 |
+| 7 | 10,752 | 21,504 | 43,036 |
+| 8 | 12,288 | 24,576 | 49,184 |
+| 9 | 13,824 | 27,648 | 55,332 |
+| 10 | 15,360 | 30,720 | 61,480 |
+| 11 | 16,896 | 33,792 | 67,628 |
+| 12 | 18,432 | 36,864 | 73,776 |
+| 13 | 19,968 | 39,936 | 79,924 |
+| 14 | 21,504 | 43,008 | 86,072 |
+| 15 | 23,040 | 46,080 | 92,220 |
+| 16 | 24,576 | 49,152 | 98,368 |
+| 17 | 26,112 | 52,224 | 104,516 |
+| 18 | 27,648 | 55,296 | 110,664 |
+| 19 | 29,184 | 58,368 | 116,812 |
+| 20 | 30,720 | 61,440 | 122,960 |
+
+### Basic tier
+
+The Hyperscale (Citus) [basic tier](concepts-tiers.md) is a server
+group with just one node. Because there isn't a distinction between
+coordinator and worker nodes, it's less complicated to choose compute and
+storage resources.
+
+| Resource | Available options |
+|--|--|
+| Compute, vCores | 2, 4, 8 |
+| Memory per vCore, GiB | 4 |
+| Storage size, GiB | 128, 256, 512 |
+| Storage type | General purpose (SSD) |
+| IOPS | Up to 3 IOPS/GiB |
+
+The total amount of RAM in a single Hyperscale (Citus) node is based on the
+selected number of vCores.
+
+| vCores | GiB RAM |
+|--|--|
+| 2 | 8 |
+| 4 | 16 |
+| 8 | 32 |
+
+The total amount of storage you provision also defines the I/O capacity
+available to the basic tier node.
+
+| Storage size, GiB | Maximum IOPS |
+|-|--|
+| 128 | 384 |
+| 256 | 768 |
+| 512 | 1,536 |
+
+## Regions
+Hyperscale (Citus) server groups are available in the following Azure regions:
+
+* Americas:
+ * Brazil South
+ * Canada Central
+ * Central US
+ * East US
+ * East US 2
+ * North Central US
+ * West US 2
+* Asia Pacific:
+ * Australia East
+ * Central India
+ * East Asia
+ * Japan East
+ * Japan West
+ * Korea Central
+ * Southeast Asia
+* Europe:
+ * France Central
+ * Germany West Central
+ * North Europe
+ * Switzerland North
+ * UK South
+ * West Europe
+
+Some of these regions may not be initially activated on all Azure
+subscriptions. If you want to use a region from the list above and don't see it
+in your subscription, or if you want to use a region not on this list, open a
+[support
+request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Pricing
+For the most up-to-date pricing information, see the service
+[pricing page](https://azure.microsoft.com/pricing/details/postgresql/).
+To see the cost for the configuration you want, the
+[Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer)
+shows the monthly cost on the **Configure** tab based on the options you
+select. If you don't have an Azure subscription, you can use the Azure pricing
+calculator to get an estimated price. On the
+[Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
+website, select **Add items**, expand the **Databases** category, and choose
+**Azure Database for PostgreSQL – Hyperscale (Citus)** to customize the
+options.
+
+## Next steps
+Learn how to [create a Hyperscale (Citus) server group in the portal](quickstart-create-portal.md).
postgresql Concepts Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-connection-pool.md
+
+ Title: Connection pooling – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Scaling client database connections
+++++ Last updated : 08/03/2021++
+# Azure Database for PostgreSQL – Hyperscale (Citus) connection pooling
+
+Establishing new connections takes time. That works against most applications,
+which request many short-lived connections. We recommend using a connection
+pooler, both to reduce idle transactions and reuse existing connections. To
+learn more, visit our [blog
+post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717).
+
+You can run your own connection pooler, or use PgBouncer managed by Azure.
+
+## Managed PgBouncer
+
+Connection poolers such as PgBouncer allow more clients to connect to the
+coordinator node at once. Applications connect to the pooler, and the pooler
+relays commands to the destination database.
+
+When clients connect through PgBouncer, the number of connections that can
+actively run in the database doesn't change. Instead, PgBouncer queues excess
+connections and runs them when the database is ready.
+
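One way to observe this behavior (a generic PostgreSQL query, not specific to PgBouncer) is to compare the number of sessions the database actually holds with the number of client connections opened against the pooler:

```sql
-- Count the sessions currently established on the coordinator.
SELECT count(*) AS active_server_connections
FROM pg_stat_activity
WHERE datname = current_database();
```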
+Hyperscale (Citus) is now offering a managed instance of PgBouncer for server
+groups. It supports up to 2,000 simultaneous client connections. To connect
+through PgBouncer, follow these steps:
+
+1. Go to the **Connection strings** page for your server group in the Azure
+ portal.
+2. Enable the checkbox **PgBouncer connection strings**. (The listed connection
+ strings will change.)
+
+ > [!IMPORTANT]
+ >
+ > If the checkbox does not exist, PgBouncer isn't enabled for your server
+ > group yet. Managed PgBouncer is being rolled out to all [supported
+ > regions](concepts-configuration-options.md#regions). Once
+ > enabled in a region, it'll be added to existing server groups in the
+ > region during a [scheduled
+ > maintenance](concepts-maintenance.md) event.
+
+3. Update client applications to connect with the new string.
+
+## Next steps
+
+Discover more about the [limits and limitations](concepts-limits.md)
+of Hyperscale (Citus).
postgresql Concepts Distributed Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-distributed-data.md
+
+ Title: Distributed data – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn about distributed tables, reference tables, local tables, and shards in Azure Database for PostgreSQL.
+++++ Last updated : 05/06/2019++
+# Distributed data in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+This article outlines the three table types in Azure Database for PostgreSQL – Hyperscale (Citus).
+It shows how distributed tables are stored as shards, and the way that shards are placed on nodes.
+
+## Table types
+
+There are three types of tables in a Hyperscale (Citus) server group, each
+used for different purposes.
+
+### Type 1: Distributed tables
+
+The first type, and most common, is distributed tables. They
+appear to be normal tables to SQL statements, but they're horizontally
+partitioned across worker nodes. What this means is that the rows
+of the table are stored on different nodes, in fragment tables called
+shards.
+
+Hyperscale (Citus) runs not only SQL but DDL statements throughout a cluster.
+Changing the schema of a distributed table cascades to update
+all the table's shards across workers.
+
+#### Distribution column
+
+Hyperscale (Citus) uses algorithmic sharding to assign rows to shards. The assignment is made deterministically based on the value
+of a table column called the distribution column. The cluster
+administrator must designate this column when distributing a table.
+Making the right choice is important for performance and functionality.
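+
+As a sketch, a table is distributed by calling the `create_distributed_table`
+function and naming its distribution column. The table and column names below
+are illustrative:
+
+```sql
+-- Distribute an existing table across worker nodes, sharding by user_id.
+SELECT create_distributed_table('github_events', 'user_id');
+```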
+
+### Type 2: Reference tables
+
+A reference table is a type of distributed table whose entire
+contents are concentrated into a single shard. The shard is replicated on every worker. Queries on any worker can access the reference information locally, without the network overhead of requesting rows from another node. Reference tables have no distribution column
+because there's no need to distinguish separate shards per row.
+
+Reference tables are typically small and are used to store data that's
+relevant to queries running on any worker node. An example is enumerated
+values like order statuses or product categories.
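+
+As a sketch, a table is turned into a reference table with the
+`create_reference_table` function (the table name here is illustrative):
+
+```sql
+-- Replicate a small lookup table to every worker node.
+SELECT create_reference_table('order_statuses');
+```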
+
+### Type 3: Local tables
+
+When you use Hyperscale (Citus), the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them.
+
+A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication.
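+
+A local table needs no special handling; for example, a minimal sketch of a
+users table created directly on the coordinator (schema is illustrative):
+
+```sql
+-- An ordinary PostgreSQL table that lives only on the coordinator.
+CREATE TABLE app_users (
+    user_id bigserial PRIMARY KEY,
+    email   text NOT NULL UNIQUE
+);
+```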
+
+## Shards
+
+The previous section described how distributed tables are stored as shards on
+worker nodes. This section discusses more technical details.
+
+The `pg_dist_shard` metadata table on the coordinator contains a
+row for each shard of each distributed table in the system. The row
+matches a shard ID with a range of integers in a hash space
+(shardminvalue, shardmaxvalue).
+
+```sql
+SELECT * from pg_dist_shard;
+ logicalrelid  | shardid | shardstorage | shardminvalue | shardmaxvalue
+---------------+---------+--------------+---------------+---------------
+ github_events |  102026 | t            |     268435456 |     402653183
+ github_events |  102027 | t            |     402653184 |     536870911
+ github_events |  102028 | t            |     536870912 |     671088639
+ github_events |  102029 | t            |     671088640 |     805306367
+(4 rows)
+```
+
+If the coordinator node wants to determine which shard holds a row of
+`github_events`, it hashes the value of the distribution column in the
+row. Then the node checks which shard's range contains the hashed value. The
+ranges are defined so that the image of the hash function is their
+disjoint union.
+
+### Shard placements
+
+Suppose that shard 102027 is associated with the row in question. The row
+is read or written in a table called `github_events_102027` in one of
+the workers. Which worker? That's determined entirely by the metadata
+tables. The mapping of shard to worker is known as the shard placement.
+
+The coordinator node
+rewrites queries into fragments that refer to the specific tables
+like `github_events_102027` and runs those fragments on the
+appropriate workers. Here's an example of a query run behind the scenes to find the node holding shard ID 102027.
+
+```sql
+SELECT
+ shardid,
+ node.nodename,
+ node.nodeport
+FROM pg_dist_placement placement
+JOIN pg_dist_node node
+ ON placement.groupid = node.groupid
+ AND node.noderole = 'primary'::noderole
+WHERE shardid = 102027;
+```
+
+```output
+┌─────────┬───────────┬──────────┐
+│ shardid │ nodename  │ nodeport │
+├─────────┼───────────┼──────────┤
+│  102027 │ localhost │     5433 │
+└─────────┴───────────┴──────────┘
+```
+
+## Next steps
+
+- Learn how to [choose a distribution column](concepts-choose-distribution-column.md) for distributed tables.
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-extensions.md
+
+ Title: Extensions – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Describes the ability to extend the functionality of your database by using extensions in Azure Database for PostgreSQL - Hyperscale (Citus)
+ Last updated : 10/01/2021
+# PostgreSQL extensions in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+PostgreSQL provides the ability to extend the functionality of your database by using extensions. Extensions allow for bundling multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions can function like built-in features. For more information on PostgreSQL extensions, see [Package related objects into an extension](https://www.postgresql.org/docs/current/static/extend-extensions.html).
+
+## Use PostgreSQL extensions
+
+PostgreSQL extensions must be installed in your database before you can use them. To install a particular extension, run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command from the psql tool to load the packaged objects into your database.
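+
+For example, loading one of the supported extensions listed below (the choice
+of `hll` here is only illustrative) looks like this:
+
+```sql
+-- Load the extension; IF NOT EXISTS makes the command safe to rerun.
+CREATE EXTENSION IF NOT EXISTS hll;
+```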
+
+> [!NOTE]
+> If `CREATE EXTENSION` fails with a permission denied error, try the
+> `create_extension()` function instead. For instance:
+>
+> ```sql
+> SELECT create_extension('postgis');
+> ```
+
+Azure Database for PostgreSQL - Hyperscale (Citus) currently supports a subset of key extensions as listed here. Extensions other than the ones listed aren't supported. You can't create your own extension with Azure Database for PostgreSQL.
+
+## Extensions supported by Azure Database for PostgreSQL
+
+The following tables list the standard PostgreSQL extensions that are currently supported by Azure Database for PostgreSQL. This information is also available by running `SELECT * FROM pg_available_extensions;`.
+
+The versions of each extension installed in a server group sometimes differ based on the version of PostgreSQL (11, 12, 13, or 14). The tables list extension versions per database version.
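+
+For example, to check which versions are available and installed for specific
+extensions (the names in the filter are illustrative):
+
+```sql
+-- Compare available and installed versions for selected extensions.
+SELECT name, default_version, installed_version
+FROM pg_available_extensions
+WHERE name IN ('citus', 'postgis');
+```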
+
+### Citus extension
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5 | 10.0.5 | 10.2.1 | 10.2.1 |
+
+### Data types extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 |
+> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 |
+> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.15 | 2.15 | 2.16 | 2.16 |
+> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 |
+> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 |
+> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.0 | 1.0 | 1.2.0 | 1.2.0 |
+> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.3.1 | 2.3.1 | 2.4.0 | 2.4.0 |
+
+### Full-text search extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 |
+
+### Functions extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 |
+> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.5.1 | 4.5.1 | 4.5.1 | 4.5.1 |
+> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 |
+> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | |
+> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 |
+
+### Index types extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 |
+
+### Language extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 |
+
+### Miscellaneous extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [adminpack](https://www.postgresql.org/docs/current/adminpack.html) | Administrative functions for PostgreSQL. | 2.0 | 2.0 | 2.1 | 2.1 |
+> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 |
+> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [file\_fdw](https://www.postgresql.org/docs/current/file-fdw.html) | Foreign-data wrapper for flat file access. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 |
+> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.3 | 1.3 | 1.3 | 1.4 |
+> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 |
+> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 |
+> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 |
++
+### PostGIS extensions
+
+> [!div class="mx-tableFixed"]
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> |---|---|---|---|---|---|
+> | [PostGIS](https://www.postgis.net/), postgis\_topology, postgis\_tiger\_geocoder, postgis\_sfcgal | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.3 | 3.0.3 | 3.1.4 |
+> | address\_standardizer, address\_standardizer\_data\_us | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.3 | 3.0.3 | 3.1.4 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.3 | 3.0.3 | 3.1.4 |
+> | postgis\_tiger\_geocoder | PostGIS tiger geocoder and reverse geocoder. | 2.5.5 | 3.0.3 | 3.0.3 | 3.1.4 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.3 | 3.0.3 | 3.1.4 |
++
+## pg_stat_statements
+The [pg\_stat\_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Database for PostgreSQL server to provide you with a means of tracking execution statistics of SQL statements.
+
+The setting `pg_stat_statements.track` controls what statements are counted by the extension. It defaults to `top`, which means that all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`. This setting is configurable as a server parameter through the [Azure portal](../howto-configure-server-parameters-using-portal.md) or the [Azure CLI](../howto-configure-server-parameters-using-cli.md).
+
+There's a tradeoff between the query execution information pg_stat_statements provides and the effect on server performance as it logs each SQL statement. If you aren't actively using the pg_stat_statements extension, we recommend that you set `pg_stat_statements.track` to `none`. Some third-party monitoring services might rely on pg_stat_statements to deliver query performance insights, so confirm whether that applies to you before disabling it.
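+
+As a sketch, once statements have been tracked you can inspect the collected
+statistics; `query` and `calls` are columns available across the PostgreSQL
+versions listed above:
+
+```sql
+-- Five most frequently executed statements recorded by pg_stat_statements.
+SELECT query, calls
+FROM pg_stat_statements
+ORDER BY calls DESC
+LIMIT 5;
+```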
+
+## dblink and postgres_fdw
+
+You can use dblink and postgres\_fdw to connect from one PostgreSQL server to
+another, or to another database in the same server. The receiving server needs
+to allow connections from the sending server through its firewall. To use
+these extensions to connect between Azure Database for PostgreSQL servers or
+Hyperscale (Citus) server groups, set **Allow Azure services and resources to
+access this server group (or server)** to ON. You also need to turn this
+setting ON if you want to use the extensions to loop back to the same server.
+The **Allow Azure services and resources to access this server group** setting
+can be found in the Azure portal page for the Hyperscale (Citus) server group
+under **Networking**. Currently, outbound connections from Azure Database for
+PostgreSQL Single server and Hyperscale (Citus) aren't supported, except for
+connections to other Azure Database for PostgreSQL servers and Hyperscale
+(Citus) server groups.
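+
+For illustration only, a dblink query against another server might look like
+the following sketch; the host name, credentials, and remote query are
+placeholders:
+
+```sql
+-- Requires: CREATE EXTENSION dblink;
+-- Run a query on a remote server and read the result locally.
+SELECT *
+FROM dblink(
+       'host=c.mygroup01.postgres.database.azure.com port=5432 user=citus dbname=citus password=<password> sslmode=require',
+       'SELECT count(*) FROM github_events')
+     AS remote(event_count bigint);
+```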
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-firewall-rules.md
+
+ Title: Public access - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: This article describes public access for Azure Database for PostgreSQL - Hyperscale (Citus).
+ Last updated : 10/15/2021
+# Public access in Azure Database for PostgreSQL - Hyperscale (Citus)
++
+This page describes the public access option. For private access, see
+[here](concepts-private-access.md).
+
+## Firewall overview
+
+The Azure Database for PostgreSQL server firewall prevents all access to your Hyperscale (Citus) coordinator node until you specify which computers have permission. The firewall grants access to the server based on the originating IP address of each request.
+To configure your firewall, you create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
+
+**Firewall rules:** These rules enable clients to access your Hyperscale (Citus) coordinator node, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal. To create server-level firewall rules, you must be the subscription owner or a subscription contributor.
+
+All database access to your coordinator node is blocked by the firewall by default. To begin using your server from another computer, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify which IP address ranges from the Internet to allow. Access to the Azure portal website itself is not impacted by the firewall rules.
+Connection attempts from the Internet and Azure must first pass through the firewall before they can reach your PostgreSQL Database, as shown in the following diagram:
++
+## Connecting from the Internet and from Azure
+
+A Hyperscale (Citus) server group firewall controls who can connect to the group's coordinator node. The firewall determines access by consulting a configurable list of rules. Each rule is an IP address, or range of addresses, that are allowed in.
+
+When the firewall blocks connections, it can cause application errors. The PostgreSQL JDBC driver, for instance, raises an error like this:
+
+> java.util.concurrent.ExecutionException: java.lang.RuntimeException:
+> org.postgresql.util.PSQLException: FATAL: no pg\_hba.conf entry for host "123.45.67.890", user "citus", database "citus", SSL
+
+See [Create and manage firewall rules](howto-manage-firewall-using-portal.md) to learn how the rules are defined.
+
+## Troubleshooting the database server firewall
+When access to the Microsoft Azure Database for PostgreSQL - Hyperscale (Citus) service doesn't behave as you expect, consider these points:
+
+* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Hyperscale (Citus) firewall configuration to take effect.
+
+* **The user is not authorized or an incorrect password was used:** If a user does not have permissions on the server or the password used is incorrect, the connection to the server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must still provide the necessary security credentials.
+
+For example, using a JDBC client, the following error may appear.
+> java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "yourusername"
+
+* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you could try one of the following solutions:
+
+* Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Hyperscale (Citus) coordinator node, and then add the IP address range as a firewall rule.
+
+* Get static IP addressing instead for your client computers, and then add the static IP address as a firewall rule.
+
+## Next steps
+For articles on creating server-level and database-level firewall rules, see:
+* [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](howto-manage-firewall-using-portal.md)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-high-availability.md
+
+ Title: High availability – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: High availability and disaster recovery concepts
+ Last updated : 11/15/2021
+# High availability in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+High availability (HA) avoids database downtime by maintaining standby replicas
+of every node in a server group. If a node goes down, Hyperscale (Citus) switches
+incoming connections from the failed node to its standby. Failover happens
+within a few minutes, and promoted nodes always have fresh data through
+PostgreSQL synchronous streaming replication.
+
+Even without HA enabled, each Hyperscale (Citus) node has its own locally
+redundant storage (LRS) with three synchronous replicas maintained by Azure
+Storage service. If there's a single replica failure, it's detected by Azure
+Storage service and is transparently re-created. For LRS storage durability,
+see metrics [on this
+page](../../storage/common/storage-redundancy.md#summary-of-redundancy-options).
+
+When HA *is* enabled, Hyperscale (Citus) runs one standby node for each primary
+node in the server group. The primary and its standby use synchronous
+PostgreSQL replication. This replication allows customers to have predictable
+downtime if a primary node fails. In a nutshell, our service detects a failure
+on primary nodes, and fails over to standby nodes with zero data loss.
+
+To take advantage of HA on the coordinator node, database applications need to
+detect and retry dropped connections and failed transactions. The newly
+promoted coordinator will be accessible with the same connection string.
+
+Recovery can be broken into three stages: detection, failover, and full
+recovery. Hyperscale (Citus) runs periodic health checks on every node, and after four
+failed checks it determines that a node is down. Hyperscale (Citus) then promotes a
+standby to primary node status (failover), and provisions a new standby-to-be.
+Streaming replication begins, bringing the new node up-to-date. When all data
+has been replicated, the node has reached full recovery.
+
+## Next steps
+
+- Learn how to [enable high
+ availability](howto-high-availability.md) in a Hyperscale (Citus) server
+ group.
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-limits.md
+
+ Title: Limits and limitations – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Current limits for Hyperscale (Citus) server groups
+ Last updated : 12/10/2021
+# Azure Database for PostgreSQL – Hyperscale (Citus) limits and limitations
+
+The following section describes capacity and functional limits in the
+Hyperscale (Citus) service.
+
+## Networking
+
+### Maximum connections
+
+Every PostgreSQL connection (even idle ones) uses at least 10 MB of memory, so
+it's important to limit simultaneous connections. Here are the limits we chose
+to keep nodes healthy:
+
+* Coordinator node
+ * Maximum connections
+ * 300 for 0-3 vCores
+ * 500 for 4-15 vCores
+ * 1000 for 16+ vCores
+ * Maximum user connections
+ * 297 for 0-3 vCores
+ * 497 for 4-15 vCores
+ * 997 for 16+ vCores
+* Worker node
+ * Maximum connections
+ * 600
+
+Attempts to connect beyond these limits fail with an error. The system
+reserves three connections for monitoring nodes, which is why the maximum
+number of user connections is three fewer than the total maximum connections.
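+
+To see how close a node is to its limit, you can count current connections on
+that node; a minimal sketch:
+
+```sql
+-- Current connections grouped by state.
+SELECT state, count(*)
+FROM pg_stat_activity
+GROUP BY state;
+```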
+
+#### Connection pooling
+
+You can scale connections further using [connection
+pooling](concepts-connection-pool.md). Hyperscale (Citus) offers a
+managed PgBouncer connection pooler configured for up to 2,000 simultaneous
+client connections.
+
+### Private access (preview)
+
+#### Server group name
+
+To be compatible with [private access](concepts-private-access.md),
+a Hyperscale (Citus) server group must have a name that is 40 characters or
+shorter.
+
+#### Regions
+
+The private access feature is available in preview in only these regions:
+
+* Americas
+ * East US
+ * East US 2
+ * West US 2
+* Asia Pacific
+ * Japan East
+ * Japan West
+ * Korea Central
+* Europe
+ * Germany West Central
+ * UK South
+ * West Europe
+
+## Storage
+
+### Storage scaling
+
+Storage on coordinator and worker nodes can be scaled up (increased) but can't
+be scaled down (decreased).
+
+### Storage size
+
+Up to 2 TiB of storage is supported on coordinator and worker nodes. See the
+available storage options and IOPS calculation
+[above](concepts-configuration-options.md#compute-and-storage) for
+node and cluster sizes.
+
+## Compute
+
+### Subscription vCore limits
+
+Azure enforces a vCore quota per subscription per region. There are two
+independently adjustable quotas: vCores for coordinator nodes, and vCores for
+worker nodes. The default quota should be more than enough to experiment with
+Hyperscale (Citus). If you do need more vCores for a region in your
+subscription, see how to [adjust compute
+quotas](howto-compute-quota.md).
+
+## PostgreSQL
+
+### Database creation
+
+The Azure portal provides credentials to connect to exactly one database per
+Hyperscale (Citus) server group, the `citus` database. Creating another
+database is currently not allowed, and the CREATE DATABASE command will fail
+with an error.
+
+### Columnar storage
+
+Hyperscale (Citus) currently has these limitations with [columnar
+tables](concepts-columnar.md):
+
+* Compression is on disk, not in memory
+* Append-only (no UPDATE/DELETE support)
+* No space reclamation (for example, rolled-back transactions may still consume
+ disk space)
+* No index support, index scans, or bitmap index scans
+* No tidscans
+* No sample scans
+* No TOAST support (large values supported inline)
+* No support for ON CONFLICT statements (except DO NOTHING actions with no
+ target specified).
+* No support for tuple locks (SELECT ... FOR SHARE, SELECT ... FOR UPDATE)
+* No support for serializable isolation level
+* Support for PostgreSQL server versions 12+ only
+* No support for foreign keys, unique constraints, or exclusion constraints
+* No support for logical decoding
+* No support for intra-node parallel scans
+* No support for AFTER ... FOR EACH ROW triggers
+* No UNLOGGED columnar tables
+* No TEMPORARY columnar tables
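+
+For reference, a columnar table is declared with the `USING columnar` access
+method; a minimal sketch with an illustrative schema:
+
+```sql
+-- Create an append-only columnar table (PostgreSQL 12+ only).
+CREATE TABLE events_columnar (
+    event_id   bigint,
+    event_time timestamptz,
+    payload    jsonb
+) USING columnar;
+```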
+
+## Next steps
+
+* Learn how to [create a Hyperscale (Citus) server group in the
+ portal](quickstart-create-portal.md).
+* Learn to enable [connection pooling](concepts-connection-pool.md).
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-maintenance.md
+
+ Title: Scheduled maintenance - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: This article describes the scheduled maintenance feature in Azure Database for PostgreSQL - Hyperscale (Citus).
+ Last updated : 04/07/2021
+# Scheduled maintenance in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+Azure Database for PostgreSQL - Hyperscale (Citus) does periodic maintenance to
+keep your managed database secure, stable, and up-to-date. During maintenance,
+all nodes in the server group get new features, updates, and patches.
+
+The key features of scheduled maintenance for Hyperscale (Citus) are:
+
+* Updates are applied at the same time on all nodes in the server group
+* Notifications about upcoming maintenance are posted to Azure Service Health
+ five days in advance
+* Usually there are at least 30 days between successful maintenance events for
+ a server group
+* Preferred day of the week and time window within that day for maintenance
+ start can be defined for each server group individually
+
+## Selecting a maintenance window and notification about upcoming maintenance
+
+You can schedule maintenance during a specific day of the week and a time
+window within that day. Or you can let the system pick a day and a time window
+for you automatically. Either way, the system will alert you five days before
+running any maintenance. The system will also let you know when maintenance is
+started, and when it's successfully completed.
+
+Notifications about upcoming scheduled maintenance are posted to Azure Service
+Health and can be:
+
+* Emailed to a specific address
+* Emailed to an Azure Resource Manager Role
+* Sent in a text message (SMS) to mobile devices
+* Pushed as a notification to an Azure app
+* Delivered as a voice message
+
+When specifying preferences for the maintenance schedule, you can pick a day of
+the week and a time window. If you don't specify, the system will pick times
+between 11pm and 7am in your server group's region time. You can define
+different schedules for each Hyperscale (Citus) server group in your Azure
+subscription.
+
+> [!IMPORTANT]
+> Normally there are at least 30 days between successful scheduled maintenance
+> events for a server group.
+>
+> However, in case of a critical emergency update such as a severe
+> vulnerability, the notification window could be shorter than five days. The
+> critical update may be applied to your server even if a successful scheduled
+> maintenance was performed in the last 30 days.
+
+You can update scheduling settings at any time. If there's maintenance
+scheduled for your Hyperscale (Citus) server group and you update the schedule,
+existing events will continue as originally scheduled. The settings change will
+take effect after successful completion of existing events.
+
+If maintenance fails or gets canceled, the system will create a notification.
+It will try maintenance again according to current scheduling settings, and
+notify you five days before the next maintenance event.
+
+## Next steps
+
+* Learn how to [change the maintenance schedule](howto-maintenance.md)
+* Learn how to [get notifications about upcoming maintenance](../../service-health/service-notifications.md) using Azure Service Health
+* Learn how to [set up alerts about upcoming scheduled maintenance events](../../service-health/resource-health-alert-monitor-guide.md)
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-monitoring.md
+
+ Title: Monitor and tune - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: This article describes monitoring and tuning features in Azure Database for PostgreSQL - Hyperscale (Citus)
+ Last updated : 12/06/2021
+# Monitor and tune Azure Database for PostgreSQL - Hyperscale (Citus)
+
+Monitoring data about your servers helps you troubleshoot and optimize for your
+workload. Hyperscale (Citus) provides various monitoring options to provide
+insight into the behavior of nodes in a server group.
+
+## Metrics
+
+Hyperscale (Citus) provides metrics for nodes in a server group, and aggregate
+metrics for the group as a whole. The metrics give insight into the behavior of
+supporting resources. Each metric is emitted at a one-minute frequency, and has
+up to 30 days of history.
+
+In addition to viewing graphs of the metrics, you can configure alerts. For
+step-by-step guidance, see [How to set up
+alerts](howto-alert-on-metric.md). Other tasks include setting up
+automated actions, running advanced analytics, and archiving history. For more
+information, see the [Azure Metrics
+Overview](../../azure-monitor/data-platform.md).
+
+### Per node vs aggregate
+
+By default, the Azure portal aggregates Hyperscale (Citus) metrics across nodes
+in a server group. However, some metrics, such as disk usage percentage, are
+more informative on a per-node basis. To see metrics for nodes displayed
+individually, use Azure Monitor [metric
+splitting](howto-monitoring.md#view-metrics-per-node) by server
+name.
+
+> [!NOTE]
+>
+> Some Hyperscale (Citus) server groups do not support metric splitting. On
+> these server groups, you can view metrics for individual nodes by clicking
+> the node name in the server group **Overview** page. Then open the
+> **Metrics** page for the node.
+
+### List of metrics
+
+These metrics are available for Hyperscale (Citus) nodes:
+
+|Metric|Metric Display Name|Unit|Description|
+|---|---|---|---|
+|active_connections|Active Connections|Count|The number of active connections to the server.|
+|cpu_percent|CPU percent|Percent|The percentage of CPU in use.|
+|iops|IOPS|Count|See the [IOPS definition](../../virtual-machines/premium-storage-performance.md#iops) and [Hyperscale (Citus) throughput](concepts-configuration-options.md)|
+|memory_percent|Memory percent|Percent|The percentage of memory in use.|
+|network_bytes_ingress|Network In|Bytes|Network In across active connections.|
+|network_bytes_egress|Network Out|Bytes|Network Out across active connections.|
+|storage_percent|Storage percentage|Percent|The percentage of storage used out of the server's maximum.|
+|storage_used|Storage used|Bytes|The amount of storage in use. The storage used by the service may include the database files, transaction logs, and the server logs.|
+
+Azure supplies no aggregate metrics for the cluster as a whole, but metrics for
+multiple nodes can be placed on the same graph.
+
+## Next steps
+
+- Learn how to [view metrics](howto-monitoring.md) for a
+ Hyperscale (Citus) server group.
+- See [how to set up alerts](howto-alert-on-metric.md) for guidance
+ on creating an alert on a metric.
+- Learn how to do [metric
+ splitting](../../azure-monitor/essentials/metrics-charts.md#metric-splitting) to
+ inspect metrics per node in a server group.
+- See other measures of database health with [useful diagnostic queries](howto-useful-diagnostic-queries.md).
postgresql Concepts Nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-nodes.md
+
+ Title: Nodes – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn about the types of nodes and tables in a server group in Azure Database for PostgreSQL.
+ Last updated : 07/28/2019
+# Nodes and tables in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+## Nodes
+
+The Hyperscale (Citus) hosting type allows Azure Database for PostgreSQL
+servers (called nodes) to coordinate with one another in a "shared nothing"
+architecture. The nodes in a server group collectively hold more data and use
+more CPU cores than would be possible on a single server. The architecture also
+allows the database to scale by adding more nodes to the server group.
+
+### Coordinator and workers
+
+Every server group has a coordinator node and multiple workers. Applications
+send their queries to the coordinator node, which relays them to the relevant
+workers and accumulates their results. Applications are not able to connect
+directly to workers.
+
+Hyperscale (Citus) allows the database administrator to *distribute* tables,
+storing different rows on different worker nodes. Distributed tables are the
+key to Hyperscale (Citus) performance. Failing to distribute tables leaves them entirely
+on the coordinator node, where they can't take advantage of cross-machine parallelism.
+
+For each query on distributed tables, the coordinator either routes it to a
+single worker node, or parallelizes it across several depending on whether the
+required data lives on a single node or multiple. The coordinator decides what
+to do by consulting metadata tables. These tables track the DNS names and
+health of worker nodes, and the distribution of data across nodes.
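+
+As a sketch, you can inspect that worker metadata yourself by querying the
+`pg_dist_node` table on the coordinator:
+
+```sql
+-- List the nodes the coordinator knows about and their health.
+SELECT nodeid, nodename, nodeport, noderole, isactive
+FROM pg_dist_node;
+```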
+
+## Table types
+
+There are three types of tables in a Hyperscale (Citus) server group, each
+stored differently on nodes and used for different purposes.
+
+### Type 1: Distributed tables
+
+The first type, and most common, is distributed tables. They
+appear to be normal tables to SQL statements, but they're horizontally
+partitioned across worker nodes. What this means is that the rows
+of the table are stored on different nodes, in fragment tables called
+shards.
+
+Hyperscale (Citus) runs not only SQL but DDL statements throughout a cluster.
+Changing the schema of a distributed table cascades to update
+all the table's shards across workers.
+
+#### Distribution column
+
+Hyperscale (Citus) uses algorithmic sharding to assign rows to shards. The assignment is made deterministically based on the value
+of a table column called the distribution column. The cluster
+administrator must designate this column when distributing a table.
+Making the right choice is important for performance and functionality.
+
+### Type 2: Reference tables
+
+A reference table is a type of distributed table whose entire
+contents are concentrated into a single shard. The shard is replicated on every worker. Queries on any worker can access the reference information locally, without the network overhead of requesting rows from another node. Reference tables have no distribution column
+because there's no need to distinguish separate shards per row.
+
+Reference tables are typically small and are used to store data that's
+relevant to queries running on any worker node. An example is enumerated
+values like order statuses or product categories.
+
+### Type 3: Local tables
+
+When you use Hyperscale (Citus), the coordinator node you connect to is a regular PostgreSQL database. You can create ordinary tables on the coordinator and choose not to shard them.
+
+A good candidate for local tables would be small administrative tables that don't participate in join queries. An example is a users table for application sign-in and authentication.
+
+## Shards
+
+The previous section described how distributed tables are stored as shards on
+worker nodes. This section discusses more technical details.
+
+The `pg_dist_shard` metadata table on the coordinator contains a
+row for each shard of each distributed table in the system. The row
+matches a shard ID with a range of integers in a hash space
+(shardminvalue, shardmaxvalue).
+
+```sql
+SELECT * from pg_dist_shard;
+ logicalrelid  | shardid | shardstorage | shardminvalue | shardmaxvalue
+---------------+---------+--------------+---------------+---------------
+ github_events |  102026 | t            |     268435456 |     402653183
+ github_events |  102027 | t            |     402653184 |     536870911
+ github_events |  102028 | t            |     536870912 |     671088639
+ github_events |  102029 | t            |     671088640 |     805306367
+(4 rows)
+```
+
+If the coordinator node wants to determine which shard holds a row of
+`github_events`, it hashes the value of the distribution column in the
+row. Then the node checks which shard's range contains the hashed value. The
+ranges are defined so that the image of the hash function is their
+disjoint union.
+
+### Shard placements
+
+Suppose that shard 102027 is associated with the row in question. The row
+is read or written in a table called `github_events_102027` in one of
+the workers. Which worker? That's determined entirely by the metadata
+tables. The mapping of shard to worker is known as the shard placement.
+
+The coordinator node
+rewrites queries into fragments that refer to the specific tables
+like `github_events_102027` and runs those fragments on the
+appropriate workers. Here's an example of a query run behind the scenes to find the node holding shard ID 102027.
+
+```sql
+SELECT
+ shardid,
+ node.nodename,
+ node.nodeport
+FROM pg_dist_placement placement
+JOIN pg_dist_node node
+ ON placement.groupid = node.groupid
+ AND node.noderole = 'primary'::noderole
+WHERE shardid = 102027;
+```
+
+```output
+┌─────────┬───────────┬──────────┐
+│ shardid │ nodename  │ nodeport │
+├─────────┼───────────┼──────────┤
+│  102027 │ localhost │     5433 │
+└─────────┴───────────┴──────────┘
+```
+
+## Next steps
+
+- [Determine your application's type](concepts-app-type.md) to prepare for data modeling
postgresql Concepts Private Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-private-access.md
+
+ Title: Private access (preview) - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: This article describes private access for Azure Database for PostgreSQL - Hyperscale (Citus).
+ Last updated : 10/15/2021
+# Private access (preview) in Azure Database for PostgreSQL - Hyperscale (Citus)
++
+This page describes the private access option. For public access, see
+[here](concepts-firewall-rules.md).
+
+> [!NOTE]
+>
+> Private access is available for preview in only [certain
+> regions](concepts-limits.md#regions).
+>
+> If the private access option is not selectable for your server group even
+> though your server group is within an allowed region, please open an Azure
+> [support
+> request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest),
+> and include your Azure subscription ID, to get access.
+
+## Definitions
+
+**Virtual network**. An Azure Virtual Network (VNet) is the fundamental
+building block for private networking in Azure. Virtual networks enable many
+types of Azure resources, such as database servers and Azure Virtual Machines
+(VM), to securely communicate with each other. Virtual networks support
+on-premises connections, allow hosts in multiple virtual networks to interact with each
+other through peering, and provide added benefits of scale, security options,
+and isolation. Each private endpoint for a Hyperscale (Citus) server group
+requires an associated virtual network.
+
+**Subnet**. Subnets segment a virtual network into one or more subnetworks.
+Each subnetwork gets a portion of the address space, improving address
+allocation efficiency. You can secure resources within subnets using Network
+Security Groups. For more information, see Network security groups.
+
+When you select a subnet for a Hyperscale (Citus) server group's private endpoint, make sure
+enough private IP addresses are available in that subnet for your current and
+future needs.
+
+**Private endpoint**. A private endpoint is a network interface that uses a
+private IP address from a virtual network. This network interface connects
+privately and securely to a service powered by Azure Private Link. Private
+endpoints bring the services into your virtual network.
+
+Enabling private access for Hyperscale (Citus) creates a private endpoint for
+the server group's coordinator node. The endpoint allows hosts in the selected
+virtual network to access the coordinator. You can optionally create private
+endpoints for worker nodes too.
+
+**Private DNS zone**. An Azure private DNS zone resolves hostnames within a
+linked virtual network, and within any peered virtual network. Domain records
+for Hyperscale (Citus) nodes are created in a private DNS zone selected for
+their server group. Be sure to use fully qualified domain names (FQDN) for
+nodes' PostgreSQL connection strings.
+
+## Private link
+
+You can use [private endpoints](../../private-link/private-endpoint-overview.md)
+for your Hyperscale (Citus) server groups to allow hosts on a virtual network
+(VNet) to securely access data over a [Private
+Link](../../private-link/private-link-overview.md).
+
+The server group's private endpoint uses an IP address from the virtual
+network's address space. Traffic between hosts on the virtual network and
+Hyperscale (Citus) nodes goes over a private link on the Microsoft backbone
+network, eliminating exposure to the public Internet.
+
+Applications in the virtual network can connect to the Hyperscale (Citus) nodes
+over the private endpoint seamlessly, using the same connection strings and
+authorization mechanisms that they would use otherwise.
+
+You can select private access during Hyperscale (Citus) server group creation,
+and you can switch from public access to private access at any point.
+
+### Using a private DNS zone
+
+A new private DNS zone is automatically provisioned for each private endpoint,
+unless you select one of the private DNS zones previously created by Hyperscale
+(Citus). For more information, see the [private DNS zones
+overview](../../dns/private-dns-overview.md).
+
+Hyperscale (Citus) service creates DNS records such as
+`c.privatelink.mygroup01.postgres.database.azure.com` in the selected private
+DNS zone for each node with a private endpoint. When you connect to a
+Hyperscale (Citus) node from an Azure VM via private endpoint, Azure DNS
+resolves the node's FQDN into a private IP address.
+
+Private DNS zone settings and virtual network peering are independent of each
+other. If you want to connect to a node in the server group from a client
+that's provisioned in another virtual network (from the same region or a
+different region), you have to link the private DNS zone with the virtual
+network. For more information, see [Link the virtual
+network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network).
+
+> [!NOTE]
+>
+> The service also always creates public CNAME records such as
+> `c.mygroup01.postgres.database.azure.com` for every node. However, selected
+> computers on the public internet can connect to the public hostname only if
+> the database administrator enables [public
+> access](concepts-firewall-rules.md) to the server group.
+
+If you're using a custom DNS server, you must use a DNS forwarder to resolve
+the FQDN of Hyperscale (Citus) nodes. The forwarder IP address should be
+168.63.129.16. The custom DNS server should be inside the virtual network or
+reachable via the virtual network's DNS server setting. To learn more, see
+[Name resolution that uses your own DNS
+server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).
+
+### Recommendations
+
+When you enable private access for your Hyperscale (Citus) server group,
+consider:
+
+* **Subnet size**: When selecting a subnet size for a Hyperscale (Citus)
+  server group, consider current needs such as IP addresses for the
+  coordinator or all nodes in that server group, and future needs such as
+  growth of that server group. Make sure you have enough private IP addresses
+  for current and future needs. Keep in mind that Azure reserves five IP
+  addresses in each subnet. See more details [in this
+  FAQ](../../virtual-network/virtual-networks-faq.md#configuration).
+* **Private DNS zone**: DNS records with private IP addresses are maintained
+  by the Hyperscale (Citus) service. Make sure you don't delete the private
+  DNS zone used for Hyperscale (Citus) server groups.
+
+## Limits and limitations
+
+See Hyperscale (Citus) [limits and limitations](concepts-limits.md)
+page.
+
+## Next steps
+
+* Learn how to [enable and manage private
+ access](howto-private-access.md) (preview)
+* Follow a [tutorial](tutorial-private-access.md) to see
+ private access (preview) in action.
+* Learn about [private
+ endpoints](../../private-link/private-endpoint-overview.md)
+* Learn about [virtual
+ networks](../../virtual-network/concepts-and-best-practices.md)
+* Learn about [private DNS zones](../../dns/private-dns-overview.md)
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-read-replicas.md
+
+ Title: Read replicas - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: This article describes the read replica feature in Azure Database for PostgreSQL - Hyperscale (Citus).
+ Last updated : 08/03/2021
+# Read replicas in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+The read replica feature allows you to replicate data from a Hyperscale (Citus)
+server group to a read-only server group. Replicas are updated
+**asynchronously** with PostgreSQL physical replication technology. You can
+replicate from the primary server to an unlimited number of replicas.
+
+Replicas are new server groups that you manage similar to regular Hyperscale
+(Citus) server groups. For each read replica, you're billed for the provisioned
+compute in vCores and storage in GB/month.
+
+Learn how to [create and manage
+replicas](howto-read-replicas-portal.md).
+
+## When to use a read replica
+
+The read replica feature helps to improve the performance and scale of
+read-intensive workloads. Read workloads can be isolated to the replicas, while
+write workloads can be directed to the primary.
+
+A common scenario is to have BI and analytical workloads use the read replica
+as the data source for reporting.
+
+Because replicas are read-only, they don't directly reduce write-capacity
+burdens on the primary.
+
+### Considerations
+
+The feature is meant for scenarios where replication lag is acceptable, and is
+meant for offloading queries. It isn't meant for synchronous replication
+scenarios where replica data is expected to be up to date. There will be a
+measurable delay between the primary and the replica. The delay can be minutes
+or even hours depending on the workload and the latency between the primary and
+the replica. The data on the replica eventually becomes consistent with the
+data on the primary. Use this feature for workloads that can accommodate this
+delay.
+
+## Create a replica
+
+When you start the create replica workflow, a blank Hyperscale (Citus) server
+group is created. The new group is filled with the data that was on the primary
+server group. The creation time depends on the amount of data on the primary
+and the time since the last weekly full backup. The time can range from a few
+minutes to several hours.
+
+The read replica feature uses PostgreSQL physical replication, not logical
+replication. The default mode is streaming replication using replication slots.
+When necessary, log shipping is used to catch up.
+
+Learn how to [create a read replica in the Azure
+portal](howto-read-replicas-portal.md).
+
+## Connect to a replica
+
+When you create a replica, it doesn't inherit firewall rules from the primary
+server group. These rules must be set up independently for the replica.
+
+The replica inherits the admin ("citus") account from the primary server group.
+All user accounts are replicated to the read replicas. You can only connect to
+a read replica by using the user accounts that are available on the primary
+server.
+
+You can connect to the replica's coordinator node by using its hostname and a
+valid user account, as you would on a regular Hyperscale (Citus) server group.
+For a server named **myreplica** with the admin username **citus**, you can
+connect to the coordinator node of the replica by using psql:
+
+```bash
+psql -h c.myreplica.postgres.database.azure.com -U citus@myreplica -d postgres
+```
+
+At the prompt, enter the password for the user account.
+
+## Considerations
+
+This section summarizes considerations about the read replica feature.
+
+### New replicas
+
+A read replica is created as a new Hyperscale (Citus) server group. An existing
+server group can't be made into a replica. You can't create a replica of
+another read replica.
+
+### Replica configuration
+
+A replica is created by using the same compute, storage, and worker node
+settings as the primary. After a replica is created, several settings can be
+changed, including storage and backup retention period. Other settings can't be
+changed in replicas, such as storage size and number of worker nodes.
+
+Remember to keep replicas strong enough to keep up with changes arriving from the
+primary. For instance, be sure to upscale compute power in replicas if you
+upscale it on the primary.
+
+Firewall rules and parameter settings are not inherited from the primary server
+to the replica when the replica is created or afterwards.
+
+### Regions
+
+Hyperscale (Citus) server groups support only same-region replication.
+
+## Next steps
+
+* Learn how to [create and manage read replicas in the Azure
+ portal](howto-read-replicas-portal.md).
postgresql Concepts Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-reserved-pricing.md
+
+ Title: Reserved compute pricing - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Prepay for Azure Database for PostgreSQL - Hyperscale (Citus) compute resources with reserved capacity.
+ Last updated : 06/15/2020
+# Prepay for Azure Database for PostgreSQL - Hyperscale (Citus) compute resources with reserved capacity
+
+Azure Database for PostgreSQL – Hyperscale (Citus) now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Hyperscale (Citus) reserved capacity, you make an upfront commitment on a Hyperscale (Citus) server group for a one- or three-year period to get a significant discount on the compute costs. To purchase Hyperscale (Citus) reserved capacity, you need to specify the Azure region, reservation term, and billing frequency.
+
+> [!IMPORTANT]
+> This article is about reserved capacity for Azure Database for PostgreSQL – Hyperscale (Citus). For information about reserved capacity for Azure Database for PostgreSQL – Single Server, see [Prepay for Azure Database for PostgreSQL – Single Server compute resources with reserved capacity](../concept-reserved-pricing.md).
+
+You don't need to assign the reservation to specific Hyperscale (Citus) server groups. An already running Hyperscale (Citus) server group or ones that are newly deployed automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for one year or three years. As soon as you buy a reservation, the Hyperscale (Citus) compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates.
+
+A reservation doesn't cover software, networking, or storage charges associated with the Hyperscale (Citus) server groups. At the end of the reservation term, the billing benefit expires, and the Hyperscale (Citus) server groups are billed at the pay-as-you-go price. Reservations don't autorenew. For pricing information, see the [Azure Database for PostgreSQL – Hyperscale (Citus) reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/hyperscale-citus/).
+
+You can buy Hyperscale (Citus) reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
+
+* You must be in the owner role for at least one Enterprise Agreement (EA) or individual subscription with pay-as-you-go rates.
+* For Enterprise Agreement subscriptions, **Add Reserved Instances** must be enabled in the [EA Portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an Enterprise Agreement admin on the subscription.
+* For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Hyperscale (Citus) reserved capacity.
+
+For information on how Enterprise Agreement customers and pay-as-you-go customers are charged for reservation purchases, see:
+- [Understand Azure reservation usage for your Enterprise Agreement enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
+- [Understand Azure reservation usage for your pay-as-you-go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md)
+
+## Determine the right server group size before purchase
+
+The size of the reservation is based on the total amount of compute used by the existing or soon-to-be-deployed coordinator and worker nodes in Hyperscale (Citus) server groups within a specific region.
+
+For example, let's suppose you're running one Hyperscale (Citus) server group with a 16 vCore coordinator and three 8 vCore worker nodes. Further, let's assume you plan to deploy an additional Hyperscale (Citus) server group within the next month, with a 32 vCore coordinator and two 4 vCore worker nodes. Let's also suppose you need these resources for at least one year.
+
+In this case, purchase a one-year reservation for:
+
+* Total 16 vCores + 32 vCores = 48 vCores for coordinator nodes
+* Total 3 nodes x 8 vCores + 2 nodes x 4 vCores = 24 + 8 = 32 vCores for worker nodes
+
+## Buy Azure Database for PostgreSQL reserved capacity
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Select **All services** > **Reservations**.
+1. Select **Add**. In the **Purchase reservations** pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
+1. Select the **Hyperscale (Citus) Compute** type to purchase, and click **Select**.
+1. Review the quantity for the selected compute type on the **Products** tab.
+1. Continue to the **Buy + Review** tab to finish your purchase.
+
+The following table describes required fields.
+
+| Field | Description |
+|--|--|
+| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL reserved capacity reservation. The subscription type must be an Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an Enterprise Agreement subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription. |
+| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select **Shared**, the vCore reservation discount is applied to Hyperscale (Citus) server groups running in any subscriptions within your billing context. For Enterprise Agreement customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For pay-as-you-go customers, the shared scope is all pay-as-you-go subscriptions created by the account administrator. If you select **Management group**, the reservation discount is applied to Hyperscale (Citus) server groups running in any subscriptions that are a part of both the management group and billing scope. If you select **Single subscription**, the vCore reservation discount is applied to Hyperscale (Citus) server groups in this subscription. If you select **Single resource group**, the reservation discount is applied to Hyperscale (Citus) server groups in the selected subscription and the selected resource group within that subscription. |
+| Region | The Azure region that's covered by the Azure Database for PostgreSQL ΓÇô Hyperscale (Citus) reserved capacity reservation. |
+| Term | One year or three years. |
+| Quantity | The amount of compute resources being purchased within the Hyperscale (Citus) reserved capacity reservation. In particular, the number of coordinator or worker node vCores in the selected Azure region that are being reserved and which will get the billing discount. For example, if you're running (or plan to run) Hyperscale (Citus) server groups with the total compute capacity of 64 coordinator node vCores and 32 worker node vCores in the East US region, specify the quantity as 64 and 32 for coordinator and worker nodes, respectively, to maximize the benefit for all servers. |
+++
+## Cancel, exchange, or refund reservations
+
+You can cancel, exchange, or refund reservations with certain limitations. For more information, see [Self-service exchanges and refunds for Azure reservations](../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
+
+## vCore size flexibility
+
+vCore size flexibility helps you scale up or down coordinator and worker nodes within a region, without losing the reserved capacity benefit.
+
+## Need help? Contact us
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+The vCore reservation discount is applied automatically to the number of Hyperscale (Citus) server groups that match the Azure Database for PostgreSQL reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for PostgreSQL ΓÇô Hyperscale (Citus) reserved capacity reservation through the Azure portal, PowerShell, the Azure CLI, or the API.
+
+To learn more about Azure reservations, see the following articles:
+
+* [What are Azure reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)
+* [Manage Azure reservations](../../cost-management-billing/reservations/manage-reserved-vm-instance.md)
+* [Understand reservation usage for your Enterprise Agreement enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
postgresql Concepts Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-security-overview.md
+
+ Title: Security overview - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Information protection and network security for Azure Database for PostgreSQL - Hyperscale (Citus).
+++++ Last updated : 10/15/2021++
+# Security in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+This page outlines the multiple layers of security available to protect the data in your
+Hyperscale (Citus) server group.
+
+## Information protection and encryption
+
+### In transit
+
+Whenever data is ingested into a node, Hyperscale (Citus) secures your data by
+encrypting it in transit with Transport Layer Security (TLS) 1.2. Encryption
+(SSL/TLS) is always enforced, and can't be disabled.
+
+### At rest
+
+The Hyperscale (Citus) service uses the FIPS 140-2 validated cryptographic
+module for storage encryption of data at-rest. Data, including backups, are
+encrypted on disk, including the temporary files created while running queries.
+The service uses the AES 256-bit cipher included in Azure storage encryption,
+and the keys are system-managed. Storage encryption is always on, and can't be
+disabled.
+
+## Network security
++
+## Limits and limitations
+
+See the Hyperscale (Citus) [limits and limitations](concepts-limits.md)
+page.
+
+## Next steps
+
+* Learn how to [enable and manage private
+ access](howto-private-access.md) (preview)
+* Learn about [private
+ endpoints](../../private-link/private-endpoint-overview.md)
+* Learn about [virtual
+ networks](../../virtual-network/concepts-and-best-practices.md)
+* Learn about [private DNS zones](../../dns/private-dns-overview.md)
postgresql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-ssl-connection-security.md
+
+ Title: Transport Layer Security (TLS) - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Instructions and information to configure Azure Database for PostgreSQL - Hyperscale (Citus) and associated applications to properly use TLS connections.
+++++ Last updated : 07/16/2020+
+# Configure TLS in Azure Database for PostgreSQL - Hyperscale (Citus)
+The Hyperscale (Citus) coordinator node requires client applications to connect with Transport Layer Security (TLS). Enforcing TLS between the database server and client applications helps keep data confidential in transit. Extra verification settings described below also protect against "man-in-the-middle" attacks.
+
+## Enforcing TLS connections
+Applications use a "connection string" to identify the destination database and settings for a connection. Different clients require different settings. To see a list of connection strings used by common clients, consult the **Connection Strings** section for your server group in the Azure portal.
+
+The TLS parameters `ssl` and `sslmode` vary based on the capabilities of the connector; for example, `ssl=true`, `sslmode=require`, or `sslmode=required`.
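+
+For example, a minimal psql connection string that enforces TLS looks like the following. This is a sketch only; it assumes a server group named **mydemoserver** and the default **citus** admin user.
+
+```bash
+# Require an encrypted (TLS) connection; the server certificate isn't verified in this mode
+psql "host=c.mydemoserver.postgres.database.azure.com port=5432 dbname=citus user=citus sslmode=require"
+```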
+
+## Ensure your application or framework supports TLS connections
+Some application frameworks don't enable TLS by default for PostgreSQL connections. However, without a secure connection an application can't connect to a Hyperscale (Citus) coordinator node. Consult your application's documentation to learn how to enable TLS connections.
+
+## Applications that require certificate verification for TLS connectivity
+In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file (.cer) to connect securely. The certificate for connecting to an Azure Database for PostgreSQL - Hyperscale (Citus) server group is located at https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem. Download the certificate file and save it to your preferred location.
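+
+If you work from a shell, one way to download the file (an illustrative sketch; downloading it with a browser works just as well) is with curl:
+
+```bash
+# Download the DigiCert Global Root CA certificate in PEM format
+curl -O https://cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
+```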
+
+> [!NOTE]
+>
+> To check the certificate's authenticity, you can verify its SHA-256
+> fingerprint using the OpenSSL command line tool:
+>
+> ```sh
+> openssl x509 -in DigiCertGlobalRootCA.crt.pem -noout -sha256 -fingerprint
+>
+> # should output:
+> # 43:48:A0:E9:44:4C:78:CB:26:5E:05:8D:5E:89:44:B4:D8:4F:96:62:BD:26:DB:25:7F:89:34:A4:43:C7:01:61
+> ```
+
+### Connect using psql
+The following example shows how to connect to your Hyperscale (Citus) coordinator node using the psql command-line utility. Use the `sslmode=verify-full` connection string setting to enforce TLS certificate verification. Pass the local certificate file path to the `sslrootcert` parameter.
+
+Below is an example of the psql connection string:
+```
+psql "sslmode=verify-full sslrootcert=DigiCertGlobalRootCA.crt.pem host=mydemoserver.postgres.database.azure.com dbname=citus user=citus password=your_pass"
+```
+> [!TIP]
+> Confirm that the value passed to `sslrootcert` matches the file path for the certificate you saved.
+
+## Next steps
+Increase security further with [Firewall rules in Azure Database for PostgreSQL - Hyperscale (Citus)](concepts-firewall-rules.md).
postgresql Concepts Tiers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-tiers.md
+
+ Title: Basic tier - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: The single node basic tier for Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 08/03/2021++
+# Basic tier
+
+The basic tier in Azure Database for PostgreSQL - Hyperscale (Citus) is a
+simple way to create a small server group that you can scale later. While
+server groups in the standard tier have a coordinator node and at least two
+worker nodes, the basic tier runs everything in a single database node.
+
+Other than using fewer nodes, the basic tier has all the features of the
+standard tier. Like the standard tier, it supports high availability, read
+replicas, and columnar table storage, among other features.
+
+## Choosing basic vs standard tier
+
+The basic tier can be an economical and convenient deployment option for
+initial development, testing, and continuous integration. It uses a single
+database node and presents the same SQL API as the standard tier. You can test
+applications with the basic tier and later [graduate to the standard
+tier](howto-scale-grow.md#add-worker-nodes) with confidence that the
+interface remains the same.
+
+The basic tier is also appropriate for smaller workloads in production. There
+is room to scale vertically *within* the basic tier by increasing the number of
+server vCores.
+
+When greater scale is required right away, use the standard tier. Its smallest
+allowed server group has one coordinator node and two workers. You can choose
+to use more nodes based on your use-case, as described in our [initial
+sizing](howto-scale-initial.md) how-to.
+
+## Next steps
+
+* Learn to [provision the basic tier](quickstart-create-basic-tier.md)
+* When you're ready, see [how to graduate](howto-scale-grow.md#add-worker-nodes) from the basic tier to the standard tier
+* The [columnar storage](concepts-columnar.md) option is available in both the basic and standard tier
postgresql Concepts Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/concepts-versions.md
+
+ Title: Supported versions – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: PostgreSQL versions available in Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 10/01/2021++
+# Supported database versions in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+## PostgreSQL versions
+
+The version of PostgreSQL running in a Hyperscale (Citus) server group is
+customizable during creation. Hyperscale (Citus) currently supports the
+following major versions:
+
+### PostgreSQL version 14
+
+The current minor release is 14.0. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/14/release-14.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 13
+
+The current minor release is 13.4. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/13/release-13-4.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 12
+
+The current minor release is 12.8. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/12/release-12-8.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 11
+
+The current minor release is 11.13. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/11/release-11-13.html) to
+learn more about improvements and fixes in this minor release.
+
+### PostgreSQL version 10 and older
+
+We don't support PostgreSQL version 10 and older for Azure Database for
+PostgreSQL - Hyperscale (Citus).
+
+## Citus and other extension versions
+
+Depending on which version of PostgreSQL is running in a server group,
+different [versions of Postgres extensions](concepts-extensions.md)
+will be installed as well. In particular, Postgres versions 12-14 come with
+Citus 10, and earlier Postgres versions come with Citus 9.5.
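+
+To confirm which versions a particular server group is running, you can query the coordinator node directly. Here's a quick sketch, run from psql or any SQL client; `citus_version()` is provided by the Citus extension:
+
+```sql
+-- PostgreSQL server version
+SELECT version();
+
+-- Citus extension version on the coordinator
+SELECT citus_version();
+```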
+
+## Next steps
+
+* See which [extensions](concepts-extensions.md) are installed in
+ which versions.
+* Learn to [create a Hyperscale (Citus) server
+ group](quickstart-create-portal.md).
postgresql Howto Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-alert-on-metric.md
+
+ Title: Configure alerts - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: This article describes how to configure and access metric alerts for Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 3/16/2020++
+# Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Hyperscale (Citus)
+
+This article shows you how to set up Azure Database for PostgreSQL alerts using the Azure portal. You can receive an alert based on [monitoring metrics](concepts-monitoring.md) for your Azure services.
+
+We'll set up an alert to trigger when the value of a specified metric crosses a threshold. The alert triggers when the condition is first met, and continues to trigger afterwards.
+
+You can configure an alert to do the following actions when it triggers:
+* Send email notifications to the service administrator and coadministrators.
+* Send email to additional emails that you specify.
+* Call a webhook.
+
+You can configure and get information about alert rules using:
+* [Azure portal](../../azure-monitor/alerts/alerts-metric.md#create-with-azure-portal)
+* [Azure CLI](../../azure-monitor/alerts/alerts-metric.md#with-azure-cli)
+* [Azure Monitor REST API](/rest/api/monitor/metricalerts)
+
+## Create an alert rule on a metric from the Azure portal
+1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for PostgreSQL server you want to monitor.
+
+2. Under the **Monitoring** section of the sidebar, select **Alerts** as shown:
+
+ :::image type="content" source="../media/howto-hyperscale-alert-on-metric/2-alert-rules.png" alt-text="Select Alert Rules":::
+
+3. Select **New alert rule** (+ icon).
+
+4. The **Create rule** page opens as shown below. Fill in the required information:
+
+ :::image type="content" source="../media/howto-hyperscale-alert-on-metric/4-add-rule-form.png" alt-text="Add metric alert form":::
+
+5. Within the **Condition** section, select **Add**.
+
+6. Select a metric from the list of signals to be alerted on. In this example, select "Storage percent".
+
+ :::image type="content" source="../media/howto-hyperscale-alert-on-metric/6-configure-signal-logic.png" alt-text="Screenshot shows the Configure signal logic page where you can view several signals.":::
+
+7. Configure the alert logic:
+
+    * **Operator** (for example, "Greater than")
+    * **Threshold value** (for example, 85 percent)
+    * **Aggregation granularity**: the amount of time the metric rule must be satisfied before the alert triggers (for example, "Over the last 30 minutes")
+    * **Frequency of evaluation** (for example, "1 minute")
+
+    Select **Done** when complete.
+
+ :::image type="content" source="../media/howto-hyperscale-alert-on-metric/7-set-threshold-time.png" alt-text="Screenshot shows the pane where you can configure Alert logic.":::
+
+8. Within the **Action Groups** section, select **Create New** to create a new group to receive notifications on the alert.
+
+9. Fill out the "Add action group" form with a name, short name, subscription, and resource group.
+
+ :::image type="content" source="../media/howto-hyperscale-alert-on-metric/9-add-action-group.png" alt-text="Screenshot shows the Add action group form where you can enter the described values.":::
+
+10. Configure an **Email/SMS/Push/Voice** action type.
+
+ Choose "Email Azure Resource Manager Role" to send notifications to subscription owners, contributors, and readers.
+
+ Select **OK** when completed.
+
+ :::image type="content" source="../media/howto-hyperscale-alert-on-metric/10-action-group-type.png" alt-text="Screenshot shows the Email/S M S/Push/Voice pane.":::
+
+11. Specify an Alert rule name, Description, and Severity.
+
+ :::image type="content" source="../media/howto-hyperscale-alert-on-metric/11-name-description-severity.png" alt-text="Screenshot shows the Alert Details pane.":::
+
+12. Select **Create alert rule** to create the alert.
+
+ Within a few minutes, the alert is active and triggers as previously described.
+
+### Managing alerts
+
+Once you've created an alert, you can select it and do the following actions:
+
+* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert.
+* **Edit** or **Delete** the alert rule.
+* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications.
+
+## Suggested alerts
+
+### Disk space
+
+Monitoring and alerting is important for every production Hyperscale (Citus) server group. The underlying PostgreSQL database requires free disk space to operate correctly. If the disk becomes full, the database server node will go offline and refuse to start until space is available. At that point, it requires a Microsoft support request to fix the situation.
+
+We recommend setting disk space alerts on every node in every server group, even for non-production usage. Disk space usage alerts provide the advance warning needed to intervene and keep nodes healthy. For best results, try a series of alerts at 75%, 85%, and 95% usage. The percentages to choose depend on data ingestion speed, since fast data ingestion fills up the disk faster.
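+
+If you script alert rules with the Azure CLI instead of the portal, a rule like the 85% alert above might be created as follows. This is a sketch only: it assumes the metric is named `storage_percent`, that an action group already exists, and a placeholder resource ID; adjust all names for your environment.
+
+```bash
+# Create a metric alert that fires when storage usage exceeds 85%
+az monitor metrics alert create \
+  --name storage-above-85-percent \
+  --resource-group my-resource-group \
+  --scopes "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.DBforPostgreSQL/serverGroupsv2/my-server-group" \
+  --condition "max storage_percent > 85" \
+  --window-size 30m \
+  --evaluation-frequency 1m \
+  --action my-action-group \
+  --description "Server group storage above 85 percent"
+```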
+
+As the disk approaches its space limit, try these techniques to get more free space:
+
+* Review data retention policy. Move older data to cold storage if feasible.
+* Consider [adding nodes](howto-scale-grow.md#add-worker-nodes) to the server group and rebalancing shards. Rebalancing distributes the data across more computers.
+* Consider [growing the capacity](howto-scale-grow.md#increase-or-decrease-vcores-on-nodes) of worker nodes. Each worker can have up to 2 TiB of storage. However, try adding nodes before resizing them, because adding nodes completes faster.
+
+### CPU usage
+
+Monitoring CPU usage is useful to establish a baseline for performance. For example, you may notice that CPU usage is usually around 40-60%. If CPU usage suddenly begins hovering around 95%, you can recognize an anomaly. The CPU usage may reflect organic growth, but it may also reveal a stray query. When creating a CPU alert, set a long aggregation granularity to catch prolonged increases and ignore momentary spikes.
+
+## Next steps
+* Learn more about [configuring webhooks in alerts](../../azure-monitor/alerts/alerts-webhooks.md).
+* Get an [overview of metrics collection](../../azure-monitor/data-platform.md) to make sure your service is available and responsive.
postgresql Howto Compute Quota https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-compute-quota.md
+
+ Title: Change compute quotas - Azure portal - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Learn how to increase vCore quotas per region in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal.
+++++ Last updated : 12/10/2021++
+# Change compute quotas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal
+
+Azure enforces a vCore quota per subscription per region. There are two
+independently adjustable limits: vCores for coordinator nodes, and vCores for
+worker nodes.
+
+## Request quota increase
+
+1. Select **New support request** in the Azure portal menu for your Hyperscale
+ (Citus) server group.
+2. Fill out **Summary** with the quota increase request for your region, for
+ example "Quota increase in West Europe region."
+3. These fields should be autoselected, but verify:
+   * **Issue type** should be "Technical", with your subscription selected
+   * **Service type** should be "Azure Database for PostgreSQL"
+4. Select "Create, Update, and Drop Resources" for **Problem type**.
+5. Select "Node compute or storage scaling" for **Problem subtype**.
+6. Select **Next: Solutions >>** then **Next: Details >>**
+7. In the problem description include two pieces of information:
+ * The region where you want the quota(s) increased
+ * Quota increase details, for example "Need to increase worker node quota
+ in West Europe to 512 vCores"
+
+![support request in Azure portal](../media/howto-hyperscale-compute-quota/support-request.png)
+
+## Next steps
+
+* Learn about other Hyperscale (Citus) [quotas and limits](concepts-limits.md).
postgresql Howto Create Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-create-users.md
+
+ Title: Create users - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Hyperscale (Citus).
+++++ Last updated : 1/8/2019++
+# Create users in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+## The server admin account
+
+The PostgreSQL engine uses
+[roles](https://www.postgresql.org/docs/current/sql-createrole.html) to control
+access to database objects, and a newly created Hyperscale (Citus) server group
+comes with several roles pre-defined:
+
+* The [default PostgreSQL roles](https://www.postgresql.org/docs/current/default-roles.html)
+* `azure_pg_admin`
+* `postgres`
+* `citus`
+
+Since Hyperscale (Citus) is a managed PaaS service, only Microsoft can sign in with the
+`postgres` super user role. For limited administrative access, Hyperscale (Citus)
+provides the `citus` role.
+
+Permissions for the `citus` role:
+
+* Read all configuration variables, even variables normally visible only to
+ superusers.
+* Read all pg\_stat\_\* views and use various statistics-related extensions --
+ even views or extensions normally visible only to superusers.
+* Execute monitoring functions that may take ACCESS SHARE locks on tables,
+ potentially for a long time.
+* [Create PostgreSQL extensions](concepts-extensions.md) (because
+ the role is a member of `azure_pg_admin`).
+
+Notably, the `citus` role has some restrictions:
+
+* Can't create roles
+* Can't create databases
+
+## How to create additional user roles
+
+As mentioned, the `citus` admin account lacks permission to create additional
+users. To add a user, use the Azure portal interface.
+
+1. Go to the **Roles** page for your Hyperscale (Citus) server group, and click **+ Add**:
+
+ :::image type="content" source="../media/howto-hyperscale-create-users/1-role-page.png" alt-text="The roles page":::
+
+2. Enter the role name and password. Click **Save**.
+
+ :::image type="content" source="../media/howto-hyperscale-create-users/2-add-user-fields.png" alt-text="Add role":::
+
+The user will be created on the coordinator node of the server group,
+and propagated to all the worker nodes. Roles created through the Azure
+portal have the `LOGIN` attribute, which means they are true users who
+can sign in to the database.
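+
+To confirm from psql that a newly created role exists and can log in, you can query the standard `pg_roles` catalog. The role name `db_user` below is just an example:
+
+```sql
+SELECT rolname, rolcanlogin
+FROM pg_roles
+WHERE rolname = 'db_user';
+```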
+
+## How to modify privileges for user role
+
+New user roles are commonly used to provide database access with restricted
+privileges. To modify user privileges, use standard PostgreSQL commands, using
+a tool such as PgAdmin or psql. (See [connecting with
+psql](quickstart-create-portal.md#connect-to-the-database-using-psql)
+in the Hyperscale (Citus) quickstart.)
+
+For example, to allow `db_user` to read `mytable`, grant the permission:
+
+```sql
+GRANT SELECT ON mytable TO db_user;
+```
+
+Hyperscale (Citus) propagates single-table GRANT statements through the entire
+cluster, applying them on all worker nodes. It also propagates GRANTs that are
+system-wide (e.g. for all tables in a schema):
+
+```sql
+-- applies to the coordinator node and propagates to workers
+GRANT SELECT ON ALL TABLES IN SCHEMA public TO db_user;
+```
+
+## How to delete a user role or change their password
+
+To update a user, visit the **Roles** page for your Hyperscale (Citus) server group,
+and click the ellipses **...** next to the user. The ellipses will open a menu
+to delete the user or reset their password.
+
+ :::image type="content" source="../media/howto-hyperscale-create-users/edit-role.png" alt-text="Edit a role":::
+
+The `citus` role is privileged and can't be deleted.
+
+## Next steps
+
+Open the firewall for the IP addresses of the new users' machines to enable
+them to connect: [Create and manage Hyperscale (Citus) firewall rules using
+the Azure portal](howto-manage-firewall-using-portal.md).
+
+For more information about database user account management, see PostgreSQL
+product documentation:
+
+* [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html)
+* [GRANT Syntax](https://www.postgresql.org/docs/current/static/sql-grant.html)
+* [Privileges](https://www.postgresql.org/docs/current/static/ddl-priv.html)
postgresql Howto High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-high-availability.md
+
+ Title: Configure high availability - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: How to enable or disable high availability
+++++ Last updated : 07/27/2020++
+# Configure Hyperscale (Citus) high availability
+
+Azure Database for PostgreSQL - Hyperscale (Citus) provides high availability
+(HA) to avoid database downtime. With HA enabled, every node in a server group
+will get a standby. If the original node becomes unhealthy, its standby will be
+promoted to replace it.
+
+> [!IMPORTANT]
+> Because HA doubles the number of servers in the group, it will also double
+> the cost.
+
+Enabling HA is possible during server group creation, or afterward in the
+**Compute + storage** tab for your server group in the Azure portal. The user
+interface looks similar in either case. Drag the slider for **High
+availability** from NO to YES:
++
+Click the **Save** button to apply your selection. Enabling HA can take some
+time as the server group provisions standbys and streams data to them.
+
+The **Overview** tab for the server group will list all nodes and their
+standbys, along with a **High availability** column indicating whether HA is
+successfully enabled for each node.
++
+### Next steps
+
+Learn more about [high availability](concepts-high-availability.md).
postgresql Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-logging.md
+
+ Title: Logs - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: How to access database logs for Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 9/13/2021++
+# Logs in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+PostgreSQL database server logs are available for every node of a Hyperscale
+(Citus) server group. You can ship logs to a storage server, or to an analytics
+service. The logs can be used to identify, troubleshoot, and repair
+configuration errors and suboptimal performance.
+
+## Capturing logs
+
+To access PostgreSQL logs for a Hyperscale (Citus) coordinator or worker node,
+you have to enable the PostgreSQLLogs diagnostic setting. In the Azure
+portal, open **Diagnostic settings**, and select **+ Add diagnostic setting**.
++
+Pick a name for the new diagnostics settings, check the **PostgreSQLLogs** box,
+and check the **Send to Log Analytics workspace** box. Then select **Save**.
++
+## Viewing logs
+
+To view and filter the logs, we'll use Kusto queries. Open **Logs** in the
+Azure portal for your Hyperscale (Citus) server group. If a query selection
+dialog appears, close it:
++
+You'll then see an input box to enter queries.
++
+Enter the following query and select the **Run** button.
+
+```kusto
+AzureDiagnostics
+| project TimeGenerated, Message, errorLevel_s, LogicalServerName_s
+```
+
+The above query lists log messages from all nodes, along with their severity
+and timestamp. You can add `where` clauses to filter the results. For instance,
+to see errors from the coordinator node only, filter the error level and server
+name like this:
+
+```kusto
+AzureDiagnostics
+| project TimeGenerated, Message, errorLevel_s, LogicalServerName_s
+| where LogicalServerName_s == 'example-server-group-c'
+| where errorLevel_s == 'ERROR'
+```
+
+Replace the server name in the above example with the name of your server. The
+coordinator node name has the suffix `-c` and worker nodes are named
+with a suffix of `-w0`, `-w1`, and so on.
+
+The Azure logs can be filtered in different ways. Here's how to find logs
+within the past day whose messages match a regular expression.
+
+```kusto
+AzureDiagnostics
+| where TimeGenerated > ago(24h)
+| order by TimeGenerated desc
+| where Message matches regex ".*error.*"
+```
+
+## Next steps
+
+- [Get started with log analytics queries](../../azure-monitor/logs/log-analytics-tutorial.md)
+- Learn about [Azure event hubs](../../event-hubs/event-hubs-about.md)
postgresql Howto Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-maintenance.md
+
+ Title: Azure Database for PostgreSQL - Hyperscale (Citus) - Scheduled maintenance - Azure portal
+description: Learn how to configure scheduled maintenance settings for an Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal.
+++++ Last updated : 04/07/2021++
+# Manage scheduled maintenance settings for Azure Database for PostgreSQL – Hyperscale (Citus)
+
+You can specify maintenance options for each Hyperscale (Citus) server group in
+your Azure subscription. Options include the maintenance schedule and
+notification settings for upcoming and finished maintenance events.
+
+## Prerequisites
+
+To complete this how-to guide, you need:
+
+- An [Azure Database for PostgreSQL - Hyperscale (Citus) server
+ group](quickstart-create-portal.md)
+
+## Specify maintenance schedule options
+
+1. On the Hyperscale (Citus) server group page, under the **Settings** heading,
+ choose **Maintenance** to open scheduled maintenance options.
+2. The default (system-managed) schedule is a random day of the week, with a
+   30-minute maintenance start window between 11pm and 7am in the server
+   group's [Azure region time](https://go.microsoft.com/fwlink/?linkid=2143646).
+   If you want to customize this schedule, choose **Custom schedule**. You can
+   then select a preferred day of the week, and a 30-minute window for the
+   maintenance start time.
+
+## Notifications about scheduled maintenance events
+
+You can use Azure Service Health to [view
+notifications](../../service-health/service-notifications.md) about upcoming
+and past scheduled maintenance on your Hyperscale (Citus) server group. You can
+also [set up](../../service-health/resource-health-alert-monitor-guide.md)
+alerts in Azure Service Health to get notifications about maintenance events.
+
+## Next steps
+
+* Learn about [scheduled maintenance in Azure Database for PostgreSQL – Hyperscale (Citus)](concepts-maintenance.md)
+* Learn about [Azure Service Health](../../service-health/overview.md)
postgresql Howto Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-manage-firewall-using-portal.md
+
+ Title: Manage firewall rules - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Create and manage firewall rules for Azure Database for PostgreSQL - Hyperscale (Citus) using the Azure portal
+++++ Last updated : 11/16/2021+
+# Manage public access for Azure Database for PostgreSQL - Hyperscale (Citus)
+
+Server-level firewall rules can be used to manage [public
+access](concepts-firewall-rules.md) to a Hyperscale (Citus)
+coordinator node from a specified IP address (or range of IP addresses) in the
+public internet.
+
+## Prerequisites
+To step through this how-to guide, you need:
+- A server group. See [Create an Azure Database for PostgreSQL – Hyperscale (Citus) server group](quickstart-create-portal.md).
+
+## Create a server-level firewall rule in the Azure portal
+
+> [!NOTE]
+> These settings are also accessible during the creation of an Azure Database for PostgreSQL - Hyperscale (Citus) server group. Under the **Networking** tab, select **Public access (allowed IP address)**.
+>
+> :::image type="content" source="../media/howto-hyperscale-manage-firewall-using-portal/0-create-public-access.png" alt-text="Azure portal - networking tab":::
+
+1. On the PostgreSQL server group page, under the Security heading, click **Networking** to open the Firewall rules.
+
+ :::image type="content" source="../media/howto-hyperscale-manage-firewall-using-portal/1-connection-security.png" alt-text="Azure portal - click Networking":::
+
+2. Select **Allow public access from Azure services and resources within Azure to this server group**.
+
+3. If desired, select **Enable access to the worker nodes**. With this option, the firewall rules will allow access to all worker nodes as well as the coordinator node.
+
+4. Click **Add current client IP address** to create a firewall rule with the public IP address of your computer, as perceived by the Azure system.
+
+Alternatively, clicking **+Add 0.0.0.0 - 255.255.255.255** (to the right of option B) allows not just your IP, but the whole internet, to access the coordinator node's port 5432 (and 6432 for connection pooling). In this situation, clients still must sign in with the correct username and password to use the cluster. Nevertheless, we recommend allowing worldwide access only for short periods of time and only for non-production databases.
+
+5. Verify your IP address before saving the configuration. In some situations, the IP address observed by Azure portal differs from the IP address used when accessing the internet and Azure servers. Thus, you may need to change the Start IP and End IP to make the rule function as expected.
+ Use a search engine or other online tool to check your own IP address. For example, search for "what is my IP."
+
+ :::image type="content" source="../media/howto-hyperscale-manage-firewall-using-portal/3-what-is-my-ip.png" alt-text="Bing search for What is my IP":::
+
+6. Add more address ranges. In the firewall rules, you can specify a single IP address or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for Start IP and End IP. Opening the firewall enables administrators, users, and applications to access the coordinator node on ports 5432 and 6432.
+
+7. Click **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules was successful.
+
+## Connecting from Azure
+
+There is an easy way to grant Hyperscale (Citus) database access to applications hosted on Azure (such as an Azure Web Apps application, or those running in an Azure VM). Select the checkbox **Allow Azure services and resources to access this server group** in the portal from the **Networking** pane and hit **Save**.
+
+> [!IMPORTANT]
+> This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
+
+## Manage existing server-level firewall rules through the Azure portal
+Repeat the steps to manage the firewall rules.
+* To add the current computer, click **+ Add current client IP address**. Click **Save** to save the changes.
+* To add more IP addresses, type in the Rule Name, Start IP Address, and End IP Address. Click **Save** to save the changes.
+* To modify an existing rule, click any of the fields in the rule and modify. Click **Save** to save the changes.
+* To delete an existing rule, click the ellipsis […] and click **Delete** to remove the rule. Click **Save** to save the changes.
+
+## Next steps
+- Learn more about [Concept of firewall rules](concepts-firewall-rules.md), including how to troubleshoot connection problems.
postgresql Howto Modify Distributed Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-modify-distributed-tables.md
+
+ Title: Modify distributed tables - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: SQL commands to create and modify distributed tables - Hyperscale (Citus) using the Azure portal
+++++ Last updated : 8/10/2020++
+# Distribute and modify tables
+
+## Distributing tables
+
+To create a distributed table, you need to first define the table schema. To do
+so, you can define a table using the [CREATE
+TABLE](http://www.postgresql.org/docs/current/static/sql-createtable.html)
+statement in the same way as you would do with a regular PostgreSQL table.
+
+```sql
+CREATE TABLE github_events
+(
+ event_id bigint,
+ event_type text,
+ event_public boolean,
+ repo_id bigint,
+ payload jsonb,
+ repo jsonb,
+ actor jsonb,
+ org jsonb,
+ created_at timestamp
+);
+```
+
+Next, you can use the create\_distributed\_table() function to specify
+the table distribution column and create the worker shards.
+
+```sql
+SELECT create_distributed_table('github_events', 'repo_id');
+```
+
+The function call informs Hyperscale (Citus) that the github\_events table
+should be distributed on the repo\_id column (by hashing the column value). The
+function also creates shards on the worker nodes using the citus.shard\_count
+and citus.shard\_replication\_factor configuration values.
+
+It creates a total of citus.shard\_count number of shards, where each shard
+owns a portion of a hash space and gets replicated based on the default
+citus.shard\_replication\_factor configuration value. The shard replicas
+created on the worker have the same table schema, index, and constraint
+definitions as the table on the coordinator. Once the replicas are created, the
+function saves all distributed metadata on the coordinator.
+
+Each created shard is assigned a unique shard ID and all its replicas have the
+same shard ID. Shards are represented on the worker node as regular PostgreSQL
+tables named 'tablename_shardid', where tablename is the name of the
+distributed table and shardid is the unique ID assigned. You can connect to
+the worker postgres instances to view or run commands on individual shards.
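+
+For example, to see the shards that were created for the `github_events` table, you can query the Citus metadata from the coordinator. Here's a sketch using the `pg_dist_shard` metadata table:
+
+```sql
+-- list the shards of the distributed table and their hash ranges
+SELECT shardid, shardminvalue, shardmaxvalue
+FROM pg_dist_shard
+WHERE logicalrelid = 'github_events'::regclass
+ORDER BY shardid;
+```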
+
+You're now ready to insert data into the distributed table and run queries on
+it. You can also learn more about the UDF used in this section in the [table
+and shard DDL](reference-functions.md#table-and-shard-ddl)
+reference.
+
+### Reference Tables
+
+The above method distributes tables into multiple horizontal shards. Another
+possibility is distributing tables into a single shard and replicating the
+shard to every worker node. Tables distributed this way are called *reference
+tables.* They are used to store data that needs to be frequently accessed by
+multiple nodes in a cluster.
+
+Common candidates for reference tables include:
+
+- Smaller tables that need to join with larger distributed tables.
+- Tables in multi-tenant apps that lack a tenant ID column or which aren't
+ associated with a tenant. (Or, during migration, even for some tables
+ associated with a tenant.)
+- Tables that need unique constraints across multiple columns and are
+ small enough.
+
+For instance, suppose a multi-tenant eCommerce site needs to calculate sales
+tax for transactions in any of its stores. Tax information isn\'t specific to
+any tenant. It makes sense to put it in a shared table. A US-centric reference
+table might look like this:
+
+```postgresql
+-- a reference table
+
+CREATE TABLE states (
+ code char(2) PRIMARY KEY,
+ full_name text NOT NULL,
+ general_sales_tax numeric(4,3)
+);
+
+-- distribute it to all workers
+
+SELECT create_reference_table('states');
+```
+
+Now queries such as one calculating tax for a shopping cart can join on the
+`states` table with no network overhead, and can add a foreign key to the state
+code for better validation.
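+
+As an illustration, such a query might look like the following; the `orders` table and its columns here are hypothetical, not part of the schema above. Because a copy of `states` exists on every worker, the join happens locally with no network overhead:
+
+```postgresql
+-- "orders" is a hypothetical distributed table with a state_code column
+SELECT o.order_id,
+       o.subtotal * (1 + s.general_sales_tax) AS total_with_tax
+FROM orders o
+JOIN states s ON s.code = o.state_code
+WHERE o.order_id = 12345;
+```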
+
+In addition to distributing a table as a single replicated shard, the
+`create_reference_table` UDF marks it as a reference table in the Hyperscale
+(Citus) metadata tables. Hyperscale (Citus) automatically performs two-phase
+commits ([2PC](https://en.wikipedia.org/wiki/Two-phase_commit_protocol)) for
+modifications to tables marked this way, which provides strong consistency
+guarantees.
+
+If you have a distributed table with a shard count of one, you can upgrade it
+to be a recognized reference table like this:
+
+```postgresql
+SELECT upgrade_to_reference_table('table_name');
+```
+
+For another example of using reference tables, see the [multi-tenant database
+tutorial](tutorial-design-database-multi-tenant.md).
+
+### Distributing Coordinator Data
+
+If an existing PostgreSQL database is converted into the coordinator node for a
+Hyperscale (Citus) cluster, the data in its tables can be distributed
+efficiently and with minimal interruption to an application.
+
+The `create_distributed_table` function described earlier works on both empty
+and non-empty tables, and for the latter it automatically distributes table
+rows throughout the cluster. You will know if it copies data by the presence of
+the message, \"NOTICE: Copying data from local table\...\" For example:
+
+```postgresql
+CREATE TABLE series AS SELECT i FROM generate_series(1,1000000) i;
+SELECT create_distributed_table('series', 'i');
+NOTICE: Copying data from local table...
+ create_distributed_table
+--------------------------
+
+(1 row)
+```
+
+Writes on the table are blocked while the data is migrated, and pending writes
+are handled as distributed queries once the function commits. (If the function
+fails then the queries become local again.) Reads can continue as normal and
+will become distributed queries once the function commits.
+
+When distributing tables A and B, where A has a foreign key to B, distribute
+the key destination table B first. Doing it in the wrong order will cause an
+error:
+
+```
+ERROR: cannot create foreign key constraint
+DETAIL: Referenced table must be a distributed table or a reference table.
+```
+
+If it's not possible to distribute in the correct order, then drop the foreign
+keys, distribute the tables, and recreate the foreign keys.
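+
+A sketch of that workaround, using hypothetical `stores` and `orders` tables where `orders` references `stores`, `stores` has a primary key on `store_id`, and both tables are distributed on `store_id`:
+
+```postgresql
+-- drop the foreign key that blocks distribution
+ALTER TABLE orders DROP CONSTRAINT orders_store_fk;
+
+-- distribute the referenced table first, then the referencing table
+SELECT create_distributed_table('stores', 'store_id');
+SELECT create_distributed_table('orders', 'store_id');
+
+-- recreate the foreign key between the now-colocated tables
+ALTER TABLE orders ADD CONSTRAINT orders_store_fk
+  FOREIGN KEY (store_id) REFERENCES stores (store_id);
+```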
+
+When migrating data from an external database, such as from Amazon RDS to
+Hyperscale (Citus), first create the Hyperscale (Citus) distributed
+tables via `create_distributed_table`, then copy the data into the table.
+Copying into distributed tables avoids running out of space on the coordinator
+node.
+
+## Colocating tables
+
+Colocation means keeping related information on the same machines. It
+enables efficient queries, while taking advantage of the horizontal scalability
+for the whole dataset. For more information, see
+[colocation](concepts-colocation.md).
+
+Tables are colocated in groups. To manually control a table's colocation group
+assignment, use the optional `colocate_with` parameter of
+`create_distributed_table`. If you don't care about a table's colocation, then
+omit this parameter. It defaults to the value `'default'`, which groups the
+table with any other default colocation table having the same distribution
+column type, shard count, and replication factor. If you want to break or
+update this implicit colocation, you can use
+`update_distributed_table_colocation()`.
+
+```postgresql
+-- these tables are implicitly co-located by using the same
+-- distribution column type and shard count with the default
+-- co-location group
+
+SELECT create_distributed_table('A', 'some_int_col');
+SELECT create_distributed_table('B', 'other_int_col');
+```
+
+When a new table is not related to others in its would-be implicit
+colocation group, specify `colocate_with => 'none'`.
+
+```postgresql
+-- not co-located with other tables
+
+SELECT create_distributed_table('A', 'foo', colocate_with => 'none');
+```
+
+Splitting unrelated tables into their own colocation groups will improve [shard
+rebalancing](howto-scale-rebalance.md) performance, because
+shards in the same group have to be moved together.
+
+When tables are indeed related (for instance when they will be joined), it can
+make sense to explicitly colocate them. The gains of appropriate colocation are
+more important than any rebalancing overhead.
+
+To explicitly colocate multiple tables, distribute one and then put the others
+into its colocation group. For example:
+
+```postgresql
+-- distribute stores
+SELECT create_distributed_table('stores', 'store_id');
+
+-- add to the same group as stores
+SELECT create_distributed_table('orders', 'store_id', colocate_with => 'stores');
+SELECT create_distributed_table('products', 'store_id', colocate_with => 'stores');
+```
+
+Information about colocation groups is stored in the
+[pg_dist_colocation](reference-metadata.md#colocation-group-table)
+table, while
+[pg_dist_partition](reference-metadata.md#partition-table) reveals
+which tables are assigned to which groups.
+
+## Dropping tables
+
+You can use the standard PostgreSQL DROP TABLE command to remove your
+distributed tables. As with regular tables, DROP TABLE removes any indexes,
+rules, triggers, and constraints that exist for the target table. In addition,
+it also drops the shards on the worker nodes and cleans up their metadata.
+
+```sql
+DROP TABLE github_events;
+```
+
+## Modifying tables
+
+Hyperscale (Citus) automatically propagates many kinds of DDL statements.
+Modifying a distributed table on the coordinator node will update shards on the
+workers too. Other DDL statements require manual propagation, and certain
+others are prohibited, such as those that would modify a distribution column.
+Attempting to run DDL that is ineligible for automatic propagation will raise
+an error and leave tables on the coordinator node unchanged.
+
+Here is a reference of the categories of DDL statements that propagate.
+Automatic propagation can be enabled or disabled with a [configuration
+parameter](reference-parameters.md#citusenable_ddl_propagation-boolean).
+
+### Adding/Modifying Columns
+
+Hyperscale (Citus) propagates most [ALTER
+TABLE](https://www.postgresql.org/docs/current/static/ddl-alter.html) commands
+automatically. Adding columns or changing their default values work as they
+would in a single-machine PostgreSQL database:
+
+```postgresql
+-- Adding a column
+
+ALTER TABLE products ADD COLUMN description text;
+
+-- Changing default value
+
+ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;
+```
+
+Significant changes to an existing column like renaming it or changing its data
+type are fine too. However, the data type of the [distribution
+column](concepts-nodes.md#distribution-column) cannot be altered.
+This column determines how table data distributes through the Hyperscale
+(Citus) cluster, and modifying its data type would require moving the data.
+
+Attempting to do so causes an error:
+
+```postgres
+-- assuming store_id is the distribution column
+-- for products, and that it has type integer
+
+ALTER TABLE products
+ALTER COLUMN store_id TYPE text;
+
+/*
+ERROR: XX000: cannot execute ALTER TABLE command involving partition column
+LOCATION: ErrorIfUnsupportedAlterTableStmt, multi_utility.c:2150
+*/
+```
+
+### Adding/Removing Constraints
+
+Using Hyperscale (Citus) allows you to continue to enjoy the safety of a
+relational database, including database constraints (see the PostgreSQL
+[docs](https://www.postgresql.org/docs/current/static/ddl-constraints.html)).
+Due to the nature of distributed systems, Hyperscale (Citus) will not
+cross-reference uniqueness constraints or referential integrity between worker
+nodes.
+
+To set up a foreign key between colocated distributed tables, always include
+the distribution column in the key. Including the distribution column may
+involve making the key compound.
+
+Foreign keys may be created in these situations:
+
+- between two local (non-distributed) tables,
+- between two reference tables,
+- between two [colocated](concepts-colocation.md) distributed
+ tables when the key includes the distribution column, or
+- as a distributed table referencing a [reference
+ table](concepts-nodes.md#type-2-reference-tables)
+
+Foreign keys from reference tables to distributed tables are not
+supported.
+
+> [!NOTE]
+>
+> Primary keys and uniqueness constraints must include the distribution
+> column. Adding them to a non-distribution column will generate an error.
+
+This example shows how to create primary and foreign keys on distributed
+tables:
+
+```postgresql
+--
+-- Adding a primary key
+-- --
+
+-- We'll distribute these tables on the account_id. The ads and clicks
+-- tables must use compound keys that include account_id.
+
+ALTER TABLE accounts ADD PRIMARY KEY (id);
+ALTER TABLE ads ADD PRIMARY KEY (account_id, id);
+ALTER TABLE clicks ADD PRIMARY KEY (account_id, id);
+
+-- Next distribute the tables
+
+SELECT create_distributed_table('accounts', 'id');
+SELECT create_distributed_table('ads', 'account_id');
+SELECT create_distributed_table('clicks', 'account_id');
+
+--
+-- Adding foreign keys
+-- -
+
+-- Note that this can happen before or after distribution, as long as
+-- there exists a uniqueness constraint on the target column(s) which
+-- can only be enforced before distribution.
+
+ALTER TABLE ads ADD CONSTRAINT ads_account_fk
+ FOREIGN KEY (account_id) REFERENCES accounts (id);
+ALTER TABLE clicks ADD CONSTRAINT clicks_ad_fk
+ FOREIGN KEY (account_id, ad_id) REFERENCES ads (account_id, id);
+```
+
+Similarly, include the distribution column in uniqueness constraints:
+
+```postgresql
+-- Suppose we want every ad to use a unique image. Notice we can
+-- enforce it only per account when we distribute by account id.
+
+ALTER TABLE ads ADD CONSTRAINT ads_unique_image
+ UNIQUE (account_id, image_url);
+```
+
+Not-null constraints can be applied to any column (distribution or not)
+because they require no lookups between workers.
+
+```postgresql
+ALTER TABLE ads ALTER COLUMN image_url SET NOT NULL;
+```
+
+### Using NOT VALID Constraints
+
+In some situations it can be useful to enforce constraints for new rows, while
+allowing existing non-conforming rows to remain unchanged. Hyperscale (Citus)
+supports this feature for CHECK constraints and foreign keys, using
+PostgreSQL\'s \"NOT VALID\" constraint designation.
+
+For example, consider an application that stores user profiles in a
+[reference table](concepts-nodes.md#type-2-reference-tables).
+
+```postgres
+-- we're using the "text" column type here, but a real application
+-- might use "citext" which is available in a postgres contrib module
+
+CREATE TABLE users ( email text PRIMARY KEY );
+SELECT create_reference_table('users');
+```
+
+Over time, imagine that a few non-addresses get into the
+table.
+
+```postgres
+INSERT INTO users VALUES
+ ('foo@example.com'), ('hacker12@aol.com'), ('lol');
+```
+
+We would like to validate the addresses, but PostgreSQL does not
+ordinarily allow us to add a CHECK constraint that fails for existing
+rows. However it *does* allow a constraint marked not valid:
+
+```postgres
+ALTER TABLE users
+ADD CONSTRAINT syntactic_email
+CHECK (email ~
+ '^[a-zA-Z0-9.!#$%&''*+/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$'
+) NOT VALID;
+```
+
+New rows are now protected.
+
+```postgres
+INSERT INTO users VALUES ('fake');
+
+/*
+ERROR: new row for relation "users_102010" violates
+ check constraint "syntactic_email_102010"
+DETAIL: Failing row contains (fake).
+*/
+```
+
+Later, during non-peak hours, a database administrator can attempt to
+fix the bad rows and revalidate the constraint.
+
+```postgres
+-- later, attempt to validate all rows
+ALTER TABLE users
+VALIDATE CONSTRAINT syntactic_email;
+```
+
+The PostgreSQL documentation has more information about NOT VALID and
+VALIDATE CONSTRAINT in the [ALTER
+TABLE](https://www.postgresql.org/docs/current/sql-altertable.html)
+section.
+
+### Adding/Removing Indices
+
+Hyperscale (Citus) supports adding and removing
+[indices](https://www.postgresql.org/docs/current/static/sql-createindex.html):
+
+```postgresql
+-- Adding an index
+
+CREATE INDEX clicked_at_idx ON clicks USING BRIN (clicked_at);
+
+-- Removing an index
+
+DROP INDEX clicked_at_idx;
+```
+
+Adding an index takes a write lock, which can be undesirable in a
+multi-tenant "system-of-record." To minimize application downtime,
+create the index
+[concurrently](https://www.postgresql.org/docs/current/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY)
+instead. This method requires more total work than a standard index
+build and takes longer to complete. However, since it
+allows normal operations to continue while the index is built, this
+method is useful for adding new indexes in a production environment.
+
+```postgresql
+-- Adding an index without locking table writes
+
+CREATE INDEX CONCURRENTLY clicked_at_idx ON clicks USING BRIN (clicked_at);
+```
postgresql Howto Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-monitoring.md
+
+ Title: How to view metrics - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: How to access database metrics for Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 10/05/2021++
+# How to view metrics in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+Resource metrics are available for every node of a Hyperscale (Citus) server
+group, and in aggregate across the nodes.
+
+## View metrics
+
+To access metrics for a Hyperscale (Citus) server group, open **Metrics**
+under **Monitoring** in the Azure portal.
++
+Choose a metric and an aggregation, for instance **CPU percent** and
+**Max**, to view the metric aggregated across all nodes. For an explanation of
+each metric, see the [list of metrics](concepts-monitoring.md#list-of-metrics).
++
+### View metrics per node
+
+Viewing each node's metrics separately on the same graph is called "splitting."
+To enable it, select **Apply splitting**:
++
+Select the value by which to split. For Hyperscale (Citus) nodes, choose **Server name**.
++
+The metrics will now be plotted in one color-coded line per node.
++
+## Next steps
+
+* Review Hyperscale (Citus) [monitoring concepts](concepts-monitoring.md)
postgresql Howto Private Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-private-access.md
+
+ Title: Enable private access (preview) - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: How to set up private link in a server group for Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 11/16/2021++
+# Private access (preview) in Azure Database for PostgreSQL Hyperscale (Citus)
+
+[Private access](concepts-private-access.md) (preview) allows
+resources in an Azure virtual network to connect securely and privately to
+nodes in a Hyperscale (Citus) server group. This how-to assumes you've already
+created a virtual network and subnet. For an example of setting up
+prerequisites, see the [private access
+tutorial](tutorial-private-access.md).
+
+## Create a server group with a private endpoint
+
+1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
+
+2. Select **Databases** from the **New** page, and select **Azure Database for
+ PostgreSQL** from the **Databases** page.
+
+3. For the deployment option, select the **Create** button under **Hyperscale
+ (Citus) server group**.
+
+4. Fill out the new server details form with your resource group, desired
+ server group name, location, and database user password.
+
+5. Select **Configure server group**, choose the desired plan, and select
+ **Save**.
+
+6. Select **Next: Networking** at the bottom of the page.
+
+7. Select **Private access (preview)**.
+
+ > [!NOTE]
+ >
+ > Private access is available for preview in only [certain
+ > regions](concepts-limits.md#regions).
+ >
+ > If the private access option is not selectable for your server group
+ > even though your server group is within an allowed region,
+ > please open an Azure [support
+ > request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest),
+ > and include your Azure subscription ID, to get access.
+
+8. A screen appears called **Create private endpoint**. Choose appropriate values
+ for your existing resources, and click **OK**:
+
+ - **Resource group**
+ - **Location**
+ - **Name**
+ - **Target sub-resource**
+ - **Virtual network**
+ - **Subnet**
+ - **Integrate with private DNS zone**
+
+9. After creating the private endpoint, select **Review + create** to create
+ your Hyperscale (Citus) server group.
+
+## Enable private access on an existing server group
+
+To create a private endpoint to a node in an existing server group, open the
+**Networking** page for the server group.
+
+1. Select **+ Add private endpoint**.
+
+ :::image type="content" source="../media/howto-hyperscale-private-access/networking.png" alt-text="Networking screen":::
+
+2. In the **Basics** tab, confirm the **Subscription**, **Resource group**, and
+ **Region**. Enter a **Name** for the endpoint, such as `my-server-group-eq`.
+
+ > [!NOTE]
+ >
+ > Unless you have a good reason to choose otherwise, we recommend picking a
+ > subscription and region that match those of your server group. The
+ > default values for the form fields may not be correct; check them and
+ > update if necessary.
+
+3. Select **Next: Resource >**. In the **Target sub-resource** choose the target
+ node of the server group. Generally `coordinator` is the desired node.
+
+4. Select **Next: Configuration >**. Choose the desired **Virtual network** and
+ **Subnet**. Customize the **Private DNS integration** or accept its default
+ settings.
+
+5. Select **Next: Tags >** and add any desired tags.
+
+6. Finally, select **Review + create >**. Review the settings and select
+ **Create** when satisfied.
+
+## Next steps
+
+* Learn more about [private access](concepts-private-access.md)
+ (preview).
+* Follow a [tutorial](tutorial-private-access.md) to see private
+ access (preview) in action.
postgresql Howto Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-read-replicas-portal.md
+
+ Title: Manage read replicas - Azure portal - Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Learn how to manage read replicas Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal.
+++++ Last updated : 08/03/2021++
+# Create and manage read replicas in Azure Database for PostgreSQL - Hyperscale (Citus) from the Azure portal
+
+In this article, you learn how to create and manage read replicas in Hyperscale
+(Citus) from the Azure portal. To learn more about read replicas, see the
+[overview](concepts-read-replicas.md).
++
+## Prerequisites
+
+A [Hyperscale (Citus) server group](quickstart-create-portal.md) to
+be the primary.
+
+## Create a read replica
+
+To create a read replica, follow these steps:
+
+1. Select an existing Azure Database for PostgreSQL server group to use as the
+ primary.
+
+2. On the server group sidebar, under **Server group management**, select
+ **Replication**.
+
+3. Select **Add Replica**.
+
+4. Enter a name for the read replica.
+
+5. Select **OK** to confirm the creation of the replica.
+
+After the read replica is created, it can be viewed from the **Replication** window.
+
+> [!IMPORTANT]
+>
+> Review the [considerations section of the Read Replica
+> overview](concepts-read-replicas.md#considerations).
+>
+> Before a primary server group setting is updated to a new value, update the
+> replica setting to an equal or greater value. This action helps the replica
+> keep up with any changes made to the primary.
+
+## Delete a primary server group
+
+To delete a primary server group, you use the same steps as to delete a
+standalone Hyperscale (Citus) server group. From the Azure portal, follow these
+steps:
+
+1. In the Azure portal, select your primary Azure Database for PostgreSQL
+ server group.
+
+2. Open the **Overview** page for the server group. Select **Delete**.
+
+3. Enter the name of the primary server group to delete. Select **Delete** to
+ confirm deletion of the primary server group.
+
+
+## Delete a replica
+
+You can delete a read replica similarly to how you delete a primary server
+group.
+
+- In the Azure portal, open the **Overview** page for the read replica. Select
+ **Delete**.
+
+You can also delete the read replica from the **Replication** window by
+following these steps:
+
+1. In the Azure portal, select your primary Hyperscale (Citus) server group.
+
+2. On the server group menu, under **Server group management**, select
+ **Replication**.
+
+3. Select the read replica to delete.
+
+4. Select **Delete replica**.
+
+5. Enter the name of the replica to delete. Select **Delete** to confirm
+ deletion of the replica.
+
+## Next steps
+
+* Learn more about [read replicas in Azure Database for
+ PostgreSQL - Hyperscale (Citus)](concepts-read-replicas.md).
postgresql Howto Restart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-restart.md
+
+ Title: Restart server - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: How to restart the database in Azure Database for PostgreSQL - Hyperscale (Citus)
+++++ Last updated : 7/9/2021++
+# Restart Azure Database for PostgreSQL - Hyperscale (Citus)
+
+If you'd like to restart your Hyperscale (Citus) server group, you can do it
+from the group's **Overview** page in the Azure portal. Select the **Restart**
+button on the top bar. A confirmation dialog will appear. Select **Restart
+all** to continue.
+
+> [!NOTE]
+> If the Restart button is not yet present for your server group, please open
+> an Azure support request to restart the server group.
+
+Restarting the server group applies to all nodes; you can't selectively restart
+individual nodes. The restart applies to the nodes' entire virtual machines,
+not just the PostgreSQL server instances. Any applications attempting to use
+the database will experience connectivity downtime while the restart happens.
+
+## Next steps
+
+- Changing some server parameters requires a restart. See the list of [all
+ server parameters](reference-parameters.md) configurable on
+ Hyperscale (Citus).
postgresql Howto Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-restore-portal.md
+
+ Title: Restore - Hyperscale (Citus) - Azure Database for PostgreSQL - Azure portal
+description: This article describes how to perform restore operations in Azure Database for PostgreSQL - Hyperscale (Citus) through the Azure portal.
+++++ Last updated : 07/09/2021++
+# Point-in-time restore of a Hyperscale (Citus) server group
+
+This article provides step-by-step procedures to perform [point-in-time
+recoveries](concepts-backup.md#restore) for a Hyperscale (Citus)
+server group using backups. You can restore either to the earliest backup or to
+a custom restore point within your retention period.
+
+## Restoring to the earliest restore point
+
+Follow these steps to restore your Hyperscale (Citus) server group to its
+earliest existing backup.
+
+1. In the [Azure portal](https://portal.azure.com/), choose the server group
+ that you want to restore.
+
+2. Click **Overview** from the left panel and click **Restore**.
+
+ > [!IMPORTANT]
+ > If the **Restore** button is not yet present for your server group,
+ > please open an Azure support request to restore your server group.
+
+3. The restore page will ask you to choose between the **Earliest** and a
+ **Custom** restore point, and will display the earliest date.
+
+4. Select **Earliest restore point**.
+
+5. Provide a new server group name in the **Restore to new server** field. The
+ other fields (subscription, resource group, and location) are displayed but
+ not editable.
+
+6. Click **OK**.
+
+7. A notification will be shown that the restore operation has been initiated.
+
+Finally, follow the [post-restore tasks](#post-restore-tasks).
+
+## Restoring to a custom restore point
+
+Follow these steps to restore your Hyperscale (Citus) server group to a date
+and time of your choosing.
+
+1. In the [Azure portal](https://portal.azure.com/), choose the server group
+ that you want to restore.
+
+2. Click **Overview** from the left panel and click **Restore**.
+
+ > [!IMPORTANT]
+ > If the **Restore** button is not yet present for your server group,
+ > please open an Azure support request to restore your server group.
+
+3. The restore page will ask you to choose between the **Earliest** and a
+ **Custom** restore point, and will display the earliest date.
+
+4. Choose **Custom restore point**.
+
+5. Select date and time for **Restore point (UTC)**, and provide a new server
+ group name in the **Restore to new server** field. The other fields
+ (subscription, resource group, and location) are displayed but not editable.
+
+6. Click **OK**.
+
+7. A notification will be shown that the restore operation has been
+ initiated.
+
+Finally, follow the [post-restore tasks](#post-restore-tasks).
+
+## Post-restore tasks
+
+After a restore, you should do the following to get your users and applications
+back up and running:
+
+* If the new server is meant to replace the original server, redirect clients
+  and client applications to the new server.
+* Ensure an appropriate server-level firewall is in place for
+ users to connect. These rules aren't copied from the original server group.
+* Adjust PostgreSQL server parameters as needed. The parameters aren't copied
+ from the original server group.
+* Ensure appropriate logins and database level permissions are in place.
+* Configure alerts, as appropriate.
+
+## Next steps
+
+* Learn more about [backup and restore](concepts-backup.md) in
+ Hyperscale (Citus).
+* Set [suggested
+ alerts](./howto-alert-on-metric.md#suggested-alerts) on Hyperscale
+ (Citus) server groups.
postgresql Howto Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-scale-grow.md
+
+ Title: Scale server group - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Adjust server group memory, disk, and CPU resources to deal with increased load
+++++ Last updated : 12/10/2021++
+# Scale a Hyperscale (Citus) server group
+
+Azure Database for PostgreSQL - Hyperscale (Citus) provides self-service
+scaling to deal with increased load. The Azure portal makes it easy to add new
+worker nodes, and to increase the vCores of existing nodes. Adding nodes causes
+no downtime, and even moving shards to the new nodes (called [shard
+rebalancing](howto-scale-rebalance.md)) happens without interrupting
+queries.
+
+## Add worker nodes
+
+To add nodes, go to the **Compute + storage** tab in your Hyperscale (Citus) server
+group. Dragging the slider for **Worker node count** changes the value.
+
+> [!NOTE]
+>
+> A Hyperscale (Citus) server group created with the [basic
+> tier](concepts-tiers.md) has no workers. Increasing the worker
+> count automatically graduates the server group to the standard tier. After
+> graduating a server group to the standard tier, you can't downgrade it back
+> to the basic tier.
++
+Click the **Save** button to make the changed value take effect.
+
+> [!NOTE]
+> Once increased and saved, the number of worker nodes cannot be decreased
+> using the slider.
+
+> [!NOTE]
+> To take advantage of newly added nodes you must [rebalance distributed table
+> shards](howto-scale-rebalance.md), which means moving some
+> [shards](concepts-distributed-data.md#shards) from existing nodes
+> to the new ones. Rebalancing can work in the background, and requires no
+> downtime.
+
+## Increase or decrease vCores on nodes
+
+In addition to adding new nodes, you can increase the capabilities of existing
+nodes. Adjusting compute capacity up and down can be useful for performance
+experiments, and short- or long-term changes to traffic demands.
+
+To change the vCores for all worker nodes, adjust the **vCores** slider under
+**Configuration (per worker node)**. The coordinator node's vCores can be
+adjusted independently. Adjust the **vCores** slider under **Configuration
+(coordinator node)**.
+
+> [!NOTE]
+> There is a vCore quota per Azure subscription per region. The default quota
+> should be more than enough to experiment with Hyperscale (Citus). If you
+> need more vCores for a region in your subscription, see how to [adjust
+> compute quotas](howto-compute-quota.md).
+
+## Increase storage on nodes
+
+In addition to adding new nodes, you can increase the disk space of existing
+nodes. Increasing disk space can allow you to do more with existing worker
+nodes before needing to add more worker nodes.
+
+To change the storage for all worker nodes, adjust the **storage** slider under
+**Configuration (per worker node)**. The coordinator node's storage can be
+adjusted independently. Adjust the **storage** slider under **Configuration
+(coordinator node)**.
+
+> [!NOTE]
+> Once increased and saved, the storage per node cannot be decreased using the
+> slider.
+
+## Next steps
+
+- Learn more about server group [performance
+ options](concepts-configuration-options.md).
+- [Rebalance distributed table shards](howto-scale-rebalance.md)
+  so that all worker nodes can participate in parallel queries.
+- See the sizes of distributed tables, and other [useful diagnostic
+ queries](howto-useful-diagnostic-queries.md).
postgresql Howto Scale Initial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-scale-initial.md
+
+ Title: Initial server group size - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Pick the right initial size for your use case
+++++ Last updated : 08/03/2021++
+# Pick initial size for Hyperscale (Citus) server group
+
+The size of a server group, both number of nodes and their hardware capacity,
+is [easy to change](howto-scale-grow.md). However, you still need to
+choose an initial size for a new server group. Here are some tips for a
+reasonable choice.
+
+## Use-cases
+
+Hyperscale (Citus) is frequently used in the following ways.
+
+### Multi-tenant SaaS
+
+When migrating to Hyperscale (Citus) from an existing single-node PostgreSQL
+database instance, choose a cluster where the number of worker vCores and RAM
+in total equals that of the original instance. In such scenarios, we have seen
+2-3x performance improvements, because sharding improves resource utilization
+and allows for smaller indices.
+
+The vCore count is the only decision you need to make. RAM allocation is
+currently determined by the vCore count, as described in the [Hyperscale (Citus)
+configuration options](concepts-configuration-options.md) page.
+The coordinator node doesn't require as much RAM as workers, but there's
+no way to choose RAM and vCores independently.
+
+### Real-time analytics
+
+Total vCores: when working data fits in RAM, you can expect a linear
+performance improvement on Hyperscale (Citus) proportional to the number of
+worker cores. To determine the right number of vCores for your needs, consider
+the current latency for queries in your single-node database and the required
+latency in Hyperscale (Citus). Divide current latency by desired latency, and
+round the result. For example, if queries currently take 300 ms and you need
+them to finish in 100 ms, provision roughly three times the current vCores.
+
+Worker RAM: the best case would be to provide enough memory that most of
+the working set fits in memory. The type of queries your application uses
+affects memory requirements. You can run EXPLAIN ANALYZE on a query to determine
+how much memory it requires. Remember that vCores and RAM are scaled together
+as described in the [Hyperscale (Citus) configuration
+options](concepts-configuration-options.md) article.
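+
+For instance, here's a sketch of checking a query's memory use with EXPLAIN
+ANALYZE. The `events` table and the filter are purely illustrative; in the
+output, look for details such as the sort method and the memory used:
+
+```postgresql
+-- the events table and columns below are hypothetical
+EXPLAIN (ANALYZE, BUFFERS)
+  SELECT tenant_id, count(*)
+    FROM events
+   WHERE created_at > now() - interval '7 days'
+   GROUP BY tenant_id
+   ORDER BY count(*) DESC;
+```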
+
+## Choosing a Hyperscale (Citus) tier
+
+The sections above give an idea how many vCores and how much RAM are needed for
+each use case. You can meet these demands through a choice between two
+Hyperscale (Citus) tiers: the basic tier and the standard tier.
+
+The basic tier uses a single database node to perform processing, while the
+standard tier allows more nodes. The tiers are otherwise identical, offering
+the same features. In some cases, scaling a single node's vCores and disk
+space is enough; in other cases, the workload requires the cooperation of
+multiple nodes.
+
+For a comparison of the tiers, see the [basic
+tier](concepts-tiers.md) concepts page.
+
+## Next steps
+
+- [Scale a server group](howto-scale-grow.md)
+- Learn more about server group [performance
+ options](concepts-configuration-options.md).
postgresql Howto Scale Rebalance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-scale-rebalance.md
+
+ Title: Rebalance shards - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Distribute shards evenly across servers for better performance
+++++ Last updated : 07/20/2021++
+# Rebalance shards in Hyperscale (Citus) server group
+
+To take advantage of newly added nodes you must rebalance distributed table
+[shards](concepts-distributed-data.md#shards), which means moving
+some shards from existing nodes to the new ones. Hyperscale (Citus) offers
+zero-downtime rebalancing, meaning queries can run without interruption during
+shard rebalancing.
+
+## Determine if the server group needs a rebalance
+
+The Azure portal can show you whether data is distributed equally between
+worker nodes in a server group. To see it, go to the **Shard rebalancer** page
+in the **Server group management** menu. If data is skewed between workers,
+you'll see the message **Rebalancing is recommended**, along with a list of the
+size of each node.
+
+If data is already balanced, you'll see the message **Rebalancing is not
+recommended at this time**.
+
+## Run the shard rebalancer
+
+To start the shard rebalancer, you need to connect to the coordinator node of
+the server group and run the
+[rebalance_table_shards](reference-functions.md#rebalance_table_shards)
+SQL function on distributed tables. The function rebalances all tables in the
+[colocation](concepts-colocation.md) group of the table named in its
+argument. Thus, you don't have to call the function for every distributed
+table; just call it on a representative table from each colocation group.
+
+```sql
+SELECT rebalance_table_shards('distributed_table_name');
+```
+
+## Monitor rebalance progress
+
+To watch the rebalancer after you start it, go back to the Azure portal. Open
+the **Shard rebalancer** page in **Server group management**. It will show the
+message **Rebalancing is underway** along with two tables.
+
+The first table shows the number of shards moving into or out of a node, for
+example, "6 of 24 moved in." The second table shows progress per database
+table: name, shard count affected, data size affected, and rebalancing status.
+
+Select the **Refresh** button to update the page. When rebalancing is complete,
+it will again say **Rebalancing is not recommended at this time**.
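+
+If you prefer SQL to the portal, you can also check progress from the
+coordinator node with the `get_rebalance_progress()` function while a
+rebalance is underway (the exact columns returned can vary by Citus version);
+a minimal sketch:
+
+```sql
+-- shows one row per shard move; progress advances as shards are moved
+SELECT table_name, shardid, progress
+  FROM get_rebalance_progress();
+```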
+
+## Next steps
+
+- Learn more about server group [performance
+ options](concepts-configuration-options.md).
+- [Scale a server group](howto-scale-grow.md) up or out
+- See the
+ [rebalance_table_shards](reference-functions.md#rebalance_table_shards)
+ reference material
postgresql Howto Table Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-table-size.md
+
+ Title: Determine table size - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: How to find the true size of distributed tables in a Hyperscale (Citus) server group
+++++ Last updated : 12/06/2021++
+# Determine table and relation size
+
+The usual way to find table sizes in PostgreSQL, `pg_total_relation_size`,
+drastically under-reports the size of distributed tables on Hyperscale (Citus).
+All this function does on a Hyperscale (Citus) server group is to reveal the size
+of tables on the coordinator node. In reality, the data in distributed tables
+lives on the worker nodes (in shards), not on the coordinator. A true measure
+of distributed table size is obtained as a sum of shard sizes. Hyperscale
+(Citus) provides helper functions to query this information.
+
+| Function | Returns |
+| --- | --- |
+| citus_relation_size(relation_name) | Size of the actual data in the table (the "[main fork](https://www.postgresql.org/docs/current/static/storage-file-layout.html)"). A relation can be the name of a table or an index. |
+| citus_table_size(relation_name) | citus_relation_size plus the size of the [free space map](https://www.postgresql.org/docs/current/static/storage-fsm.html) and the [visibility map](https://www.postgresql.org/docs/current/static/storage-vm.html). |
+| citus_total_relation_size(relation_name) | citus_table_size plus the size of indices. |
+
+These functions are analogous to three of the standard PostgreSQL [object size
+functions](https://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE),
+except that they error out if they can't connect to a node.
+
+## Example
+
+Here's how to list the sizes of all distributed tables:
+
+``` postgresql
+SELECT logicalrelid AS name,
+ pg_size_pretty(citus_table_size(logicalrelid)) AS size
+ FROM pg_dist_partition;
+```
+
+Output:
+
+```
+┌───────────────┬───────┐
+│ name          │ size  │
+├───────────────┼───────┤
+│ github_users  │ 39 MB │
+│ github_events │ 37 MB │
+└───────────────┴───────┘
+```
+
+## Next steps
+
+* Learn to [scale a server group](howto-scale-grow.md) to hold more data.
+* Distinguish [table types](concepts-nodes.md) in a Hyperscale (Citus) server group.
+* See other [useful diagnostic queries](howto-useful-diagnostic-queries.md).
postgresql Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-troubleshoot-common-connection-issues.md
+
+ Title: Troubleshoot connections - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn how to troubleshoot connection issues to Azure Database for PostgreSQL - Hyperscale (Citus)
+keywords: postgresql connection,connection string,connectivity issues,transient error,connection error
+++++ Last updated : 12/17/2021++
+# Troubleshoot connection issues to Azure Database for PostgreSQL - Hyperscale (Citus)
+
+Connection problems may be caused by several things, such as:
+
+* Firewall settings
+* Connection time-out
+* Incorrect sign in information
+* Connection limit reached for server group
+* Issues with the infrastructure of the service
+* Service maintenance
+* The coordinator node failing over to new hardware
+
+Generally, connection issues to Hyperscale (Citus) can be classified as follows:
+
+* Transient errors (short-lived or intermittent)
+* Persistent or non-transient errors (errors that regularly recur)
+
+## Troubleshoot transient errors
+
+Transient errors occur for a number of reasons. The most common include system
+maintenance, hardware or software errors, and coordinator node vCore
+upgrades.
+
+Enabling high availability for Hyperscale (Citus) server group nodes can mitigate these
+types of problems automatically. However, your application should still be
+prepared to lose its connection briefly. Other events can take longer to
+mitigate, such as when a large transaction causes a long-running recovery.
+
+### Steps to resolve transient connectivity issues
+
+1. Check the [Microsoft Azure Service
+ Dashboard](https://azure.microsoft.com/status) for any known outages that
+ occurred during the time in which the application was reporting errors.
+2. Applications that connect to a cloud service such as Hyperscale (Citus)
+ should expect transient errors and react gracefully. For instance,
+ applications should implement retry logic to handle these errors instead of
+ surfacing them as application errors to users.
+3. As the server group approaches its resource limits, errors can seem like
+ transient connectivity issues. Increasing node RAM, or adding worker nodes
+ and rebalancing data may help.
+4. If connectivity problems continue, or last longer than 60 seconds, or happen
+ more than once per day, file an Azure support request by
+ selecting **Get Support** on the [Azure
+ Support](https://azure.microsoft.com/support/options) site.
+
+## Troubleshoot persistent errors
+
+If the application persistently fails to connect to Hyperscale (Citus), the
+most common causes are firewall misconfiguration or user error.
+
+* Coordinator node firewall configuration: Make sure that the Hyperscale (Citus) server
+ firewall is configured to allow connections from your client, including proxy
+ servers and gateways.
+* Client firewall configuration: The firewall on your client must allow
+  connections to your database server. Some firewalls require allowing not only
+  the application by name, but also the IP addresses and ports of the server.
+* User error: Double-check the connection string. You might have mistyped
+ parameters like the server name. You can find connection strings for various
+ language frameworks and psql in the Azure portal. Go to the **Connection
+ strings** page in your Hyperscale (Citus) server group. Also keep in mind that
+ Hyperscale (Citus) clusters have only one database and its predefined name is
+ **citus**.
+
+### Steps to resolve persistent connectivity issues
+
+1. Set up [firewall rules](howto-manage-firewall-using-portal.md) to
+ allow the client IP address. For temporary testing purposes only, set up a
+ firewall rule using 0.0.0.0 as the starting IP address and using
+ 255.255.255.255 as the ending IP address. That rule opens the server to all IP
+ addresses. If the rule resolves your connectivity issue, remove it and
+ create a firewall rule for an appropriately limited IP address or address
+ range.
+2. On all firewalls between the client and the internet, make sure that port
+ 5432 is open for outbound connections (and 6432 if using [connection
+ pooling](concepts-connection-pool.md)).
+3. Verify your connection string and other connection settings.
+4. Check the service health in the dashboard.
+
+## Next steps
+
+* Learn the concepts of [Firewall rules in Azure Database for PostgreSQL - Hyperscale (Citus)](concepts-firewall-rules.md)
+* See how to [Manage firewall rules for Azure Database for PostgreSQL - Hyperscale (Citus)](howto-manage-firewall-using-portal.md)
postgresql Howto Troubleshoot Read Only https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-troubleshoot-read-only.md
+
+ Title: Troubleshoot read-only access - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Learn why a Hyperscale (Citus) server group can become read-only, and what to do
+keywords: postgresql connection,read only
+++++ Last updated : 08/03/2021++
+# Troubleshoot read-only access to Azure Database for PostgreSQL - Hyperscale (Citus)
+
+PostgreSQL can't run on a machine without some free disk space. To maintain
+access to PostgreSQL servers, it's necessary to prevent the disk space from
+running out.
+
+In Hyperscale (Citus), nodes are set to a read-only (RO) state when the disk is
+almost full. Preventing writes stops the disk from continuing to fill, and
+keeps the node available for reads. During the read-only state, you can take
+measures to free more disk space.
+
+Specifically, a Hyperscale (Citus) node becomes read-only when it has less than
+5 GiB of free storage left. When the server becomes read-only, all existing
+sessions are disconnected, and uncommitted transactions are rolled back. Any
+write operations and transaction commits will fail, while read queries will
+continue to work.
+
+## Ways to recover write-access
+
+### On the coordinator node
+
+* [Increase storage
+ size](howto-scale-grow.md#increase-storage-on-nodes)
+ on the coordinator node, and/or
+* Distribute local tables to worker nodes, or drop data. You'll need to run
+  `SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE` after you've
+  connected to the database and before you execute other commands (see the
+  sketch after this list).
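+
+Here's a minimal sketch of that sequence. The `audit_log` table and its
+`tenant_id` column are hypothetical stand-ins for a large local table that
+you want to move off the coordinator:
+
+```sql
+-- connect to the coordinator node, then re-enable writes for this session
+SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;
+
+-- free coordinator disk space, for example by distributing a large
+-- local table to the worker nodes (or by dropping unneeded data)
+SELECT create_distributed_table('audit_log', 'tenant_id');
+```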
+
+### On a worker node
+
+* [Increase storage
+ size](howto-scale-grow.md#increase-storage-on-nodes)
+ on the worker nodes, and/or
+* [Rebalance data](howto-scale-rebalance.md) to other nodes, or drop
+ some data.
+ * You'll need to set the worker node as read-write temporarily. You can
+ connect directly to worker nodes and use `SET SESSION CHARACTERISTICS` as
+ described above for the coordinator node.
+
+## Prevention
+
+We recommend that you set up an alert to notify you when server storage is
+approaching the threshold. That way you can act early to avoid getting into the
+read-only state. For more information, see the documentation about [recommended
+alerts](howto-alert-on-metric.md#suggested-alerts).
+
+## Next steps
+
+* [Set up Azure
+ alerts](howto-alert-on-metric.md#suggested-alerts)
+ for advance notice so you can take action before reaching the read-only state.
+* Learn about [disk
+ usage](https://www.postgresql.org/docs/current/diskusage.html) in PostgreSQL
+ documentation.
+* Learn about [session
+ characteristics](https://www.postgresql.org/docs/13/sql-set-transaction.html)
+ in PostgreSQL documentation.
postgresql Howto Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-upgrade.md
+
+ Title: Upgrade server group - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: This article describes how you can upgrade PostgreSQL and Citus in Azure Database for PostgreSQL - Hyperscale (Citus).
+++++ Last updated : 4/5/2021++
+# Upgrade Hyperscale (Citus) server group
+
+These instructions describe how to upgrade to a new major version of PostgreSQL
+on all server group nodes.
+
+## Test the upgrade first
+
+Upgrading PostgreSQL causes more changes than you might imagine, because
+Hyperscale (Citus) will also upgrade the [database
+extensions](concepts-extensions.md), including the Citus extension.
+We strongly recommend that you test your application with the new PostgreSQL and
+Citus versions before you upgrade your production environment.
+
+A convenient way to test is to make a copy of your server group using
+[point-in-time restore](concepts-backup.md#restore). Upgrade the
+copy and test your application against it. Once you've verified everything
+works properly, upgrade the original server group.
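+
+For example, after connecting to the upgraded copy, you can confirm the
+versions with a quick query. This is just a convenience check, not a required
+step:
+
+```sql
+SELECT version();         -- PostgreSQL server version
+SELECT citus_version();   -- Citus extension version
+```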
+
+## Upgrade a server group in the Azure portal
+
+1. In the **Overview** section of a Hyperscale (Citus) server group, select the
+ **Upgrade** button.
+1. A dialog appears, showing the current version of PostgreSQL and Citus.
+ Choose a new PostgreSQL version in the **Upgrade to** list.
+1. Verify the value in **Citus version after upgrade** is what you expect.
+ This value changes based on the PostgreSQL version you selected.
+1. Select the **Upgrade** button to continue.
+
+## Next steps
+
+* Learn about [supported PostgreSQL versions](concepts-versions.md).
+* See [which extensions](concepts-extensions.md) are packaged with
+ each PostgreSQL version in a Hyperscale (Citus) server group.
postgresql Howto Useful Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/howto-useful-diagnostic-queries.md
+
+ Title: Useful diagnostic queries - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Queries to learn about distributed data and more
+++++ Last updated : 8/23/2021++
+# Useful Diagnostic Queries
+
+## Finding which node contains data for a specific tenant
+
+In the multi-tenant use case, we can determine which worker node contains the
+rows for a specific tenant. Hyperscale (Citus) groups the rows of distributed
+tables into shards, and places each shard on a worker node in the server group.
+
+Suppose our application's tenants are stores, and we want to find which worker
+node holds the data for store ID=4. In other words, we want to find the
+placement for the shard containing rows whose distribution column has value 4:
+
+``` postgresql
+SELECT shardid, shardstate, shardlength, nodename, nodeport, placementid
+ FROM pg_dist_placement AS placement,
+ pg_dist_node AS node
+ WHERE placement.groupid = node.groupid
+ AND node.noderole = 'primary'
+ AND shardid = (
+ SELECT get_shard_id_for_distribution_column('stores', 4)
+ );
+```
+
+The output contains the host and port of the worker database.
+
+```
+┌─────────┬────────────┬─────────────┬───────────┬──────────┬─────────────┐
+│ shardid │ shardstate │ shardlength │ nodename  │ nodeport │ placementid │
+├─────────┼────────────┼─────────────┼───────────┼──────────┼─────────────┤
+│  102009 │          1 │           0 │ 10.0.0.16 │     5432 │           2 │
+└─────────┴────────────┴─────────────┴───────────┴──────────┴─────────────┘
+```
+
+## Finding the distribution column for a table
+
+Each distributed table in Hyperscale (Citus) has a "distribution column." (For
+more information, see [Distributed Data
+Modeling](concepts-choose-distribution-column.md).) It can be
+important to know which column it is. For instance, when joining or filtering
+tables, you may see error messages with hints like, "add a filter to the
+distribution column."
+
+The `pg_dist_*` tables on the coordinator node contain diverse metadata about
+the distributed database. In particular `pg_dist_partition` holds information
+about the distribution column for each table. You can use a convenient utility
+function to look up the distribution column name from the low-level details in
+the metadata. Here's an example and its output:
+
+``` postgresql
+-- create example table
+
+CREATE TABLE products (
+ store_id bigint,
+ product_id bigint,
+ name text,
+ price money,
+
+ CONSTRAINT products_pkey PRIMARY KEY (store_id, product_id)
+);
+
+-- pick store_id as distribution column
+
+SELECT create_distributed_table('products', 'store_id');
+
+-- get distribution column name for products table
+
+SELECT column_to_column_name(logicalrelid, partkey) AS dist_col_name
+ FROM pg_dist_partition
+ WHERE logicalrelid='products'::regclass;
+```
+
+Example output:
+
+```
+┌───────────────┐
+│ dist_col_name │
+├───────────────┤
+│ store_id      │
+└───────────────┘
+```
+
+## Detecting locks
+
+This query will run across all worker nodes and identify locks, how long
+they've been open, and the offending queries:
+
+``` postgresql
+SELECT run_command_on_workers($cmd$
+ SELECT array_agg(
+ blocked_statement || ' $ ' || cur_stmt_blocking_proc
+ || ' $ ' || cnt::text || ' $ ' || age
+ )
+ FROM (
+ SELECT blocked_activity.query AS blocked_statement,
+ blocking_activity.query AS cur_stmt_blocking_proc,
+ count(*) AS cnt,
+ age(now(), min(blocked_activity.query_start)) AS "age"
+ FROM pg_catalog.pg_locks blocked_locks
+ JOIN pg_catalog.pg_stat_activity blocked_activity
+ ON blocked_activity.pid = blocked_locks.pid
+ JOIN pg_catalog.pg_locks blocking_locks
+ ON blocking_locks.locktype = blocked_locks.locktype
+ AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
+ AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
+ AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
+ AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
+ AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
+ AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
+ AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
+ AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
+ AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
+ AND blocking_locks.pid != blocked_locks.pid
+ JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
+ WHERE NOT blocked_locks.GRANTED
+ AND blocking_locks.GRANTED
+ GROUP BY blocked_activity.query,
+ blocking_activity.query
+ ORDER BY 4
+ ) a
+$cmd$);
+```
+
+Example output:
+
+```
+┌─────────────────────────────────────────────────────────────────────────────────────┐
+│                                run_command_on_workers                                │
+├─────────────────────────────────────────────────────────────────────────────────────┤
+│ (10.0.0.16,5432,t,"")                                                                │
+│ (10.0.0.20,5432,t,"{""update ads_102277 set name = 'new name' where id = 1; $ sel…  │
+│ …ect * from ads_102277 where id = 1 for update; $ 1 $ 00:00:03.729519""}")           │
+└─────────────────────────────────────────────────────────────────────────────────────┘
+```
+
+## Querying the size of your shards
+
+This query will provide you with the size of every shard of a given
+distributed table, called `my_distributed_table`:
+
+``` postgresql
+SELECT *
+FROM run_command_on_shards('my_distributed_table', $cmd$
+ SELECT json_build_object(
+ 'shard_name', '%1$s',
+ 'size', pg_size_pretty(pg_table_size('%1$s'))
+ );
+$cmd$);
+```
+
+Example output:
+
+```
+┌─────────┬─────────┬─────────────────────────────────────────────────────────────────────┐
+│ shardid │ success │ result                                                              │
+├─────────┼─────────┼─────────────────────────────────────────────────────────────────────┤
+│  102008 │ t       │ {"shard_name" : "my_distributed_table_102008", "size" : "2416 kB"} │
+│  102009 │ t       │ {"shard_name" : "my_distributed_table_102009", "size" : "3960 kB"} │
+│  102010 │ t       │ {"shard_name" : "my_distributed_table_102010", "size" : "1624 kB"} │
+│  102011 │ t       │ {"shard_name" : "my_distributed_table_102011", "size" : "4792 kB"} │
+└─────────┴─────────┴─────────────────────────────────────────────────────────────────────┘
+```
+
+## Querying the size of all distributed tables
+
+This query gets the size of each distributed table, plus the
+size of its indices.
+
+``` postgresql
+SELECT
+ tablename,
+ pg_size_pretty(
+ citus_total_relation_size(tablename::text)
+ ) AS total_size
+FROM pg_tables pt
+JOIN pg_dist_partition pp
+ ON pt.tablename = pp.logicalrelid::text
+WHERE schemaname = 'public';
+```
+
+Example output:
+
+```
+┌───────────────┬────────────┐
+│ tablename     │ total_size │
+├───────────────┼────────────┤
+│ github_users  │ 39 MB      │
+│ github_events │ 98 MB      │
+└───────────────┴────────────┘
+```
+
+Note that there are other Hyperscale (Citus) functions for querying distributed
+table size; see [determining table size](howto-table-size.md).
+
+## Identifying unused indices
+
+The following query will identify unused indexes on worker nodes for a given
+distributed table (`my_distributed_table`):
+
+``` postgresql
+SELECT *
+FROM run_command_on_shards('my_distributed_table', $cmd$
+ SELECT array_agg(a) as infos
+ FROM (
+ SELECT (
+ schemaname || '.' || relname || '##' || indexrelname || '##'
+ || pg_size_pretty(pg_relation_size(i.indexrelid))::text
+ || '##' || idx_scan::text
+ ) AS a
+ FROM pg_stat_user_indexes ui
+ JOIN pg_index i
+ ON ui.indexrelid = i.indexrelid
+ WHERE NOT indisunique
+ AND idx_scan < 50
+ AND pg_relation_size(relid) > 5 * 8192
+ AND (schemaname || '.' || relname)::regclass = '%s'::regclass
+ ORDER BY
+ pg_relation_size(i.indexrelid) / NULLIF(idx_scan, 0) DESC nulls first,
+ pg_relation_size(i.indexrelid) DESC
+ ) sub
+$cmd$);
+```
+
+Example output:
+
+```
+┌─────────┬─────────┬──────────────────────────────────────────────────────────────────────┐
+│ shardid │ success │ result                                                               │
+├─────────┼─────────┼──────────────────────────────────────────────────────────────────────┤
+│  102008 │ t       │                                                                      │
+│  102009 │ t       │ {"public.my_distributed_table_102009##some_index_102009##28 MB##0"}  │
+│  102010 │ t       │                                                                      │
+│  102011 │ t       │                                                                      │
+└─────────┴─────────┴──────────────────────────────────────────────────────────────────────┘
+```
+
+## Monitoring client connection count
+
+The following query counts the connections open on the coordinator, and groups
+them by state.
+
+``` sql
+SELECT state, count(*)
+FROM pg_stat_activity
+GROUP BY state;
+```
+
+Example output:
+
+```
+┌────────┬───────┐
+│ state  │ count │
+├────────┼───────┤
+│ active │     3 │
+│ idle   │     3 │
+│ ∅      │     6 │
+└────────┴───────┘
+```
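+
+To get a quick look at the workers as well, one option is to wrap a similar
+query in `run_command_on_workers`, the helper used elsewhere in this article.
+A minimal sketch that counts active backends on each worker:
+
+```sql
+-- count active backends on every worker node
+SELECT nodename, result AS active_connections
+  FROM run_command_on_workers($cmd$
+    SELECT count(*) FROM pg_stat_activity WHERE state = 'active';
+  $cmd$);
+```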
+
+## Viewing system queries
+
+### Active queries
+
+The `pg_stat_activity` view shows which queries are currently executing. You
+can filter to find the actively executing ones, along with the process ID of
+their backend:
+
+```sql
+SELECT pid, query, state
+ FROM pg_stat_activity
+ WHERE state != 'idle';
+```
+
+### Why are queries waiting
+
+We can also query to see the most common reasons why non-idle queries are
+waiting. For an explanation of the reasons, check the [PostgreSQL
+documentation](https://www.postgresql.org/docs/current/monitoring-stats.html#WAIT-EVENT-TABLE).
+
+```sql
+SELECT wait_event || ':' || wait_event_type AS type, count(*) AS number_of_occurences
+ FROM pg_stat_activity
+ WHERE state != 'idle'
+GROUP BY wait_event, wait_event_type
+ORDER BY number_of_occurences DESC;
+```
+
+Example output when running `pg_sleep` in a separate query concurrently:
+
+```
+┌─────────────────┬──────────────────────┐
+│ type            │ number_of_occurences │
+├─────────────────┼──────────────────────┤
+│ ∅               │                    1 │
+│ PgSleep:Timeout │                    1 │
+└─────────────────┴──────────────────────┘
+```
+
+## Index hit rate
+
+This query will provide you with your index hit rate across all nodes. Index
+hit rate is useful in determining how often indices are used when querying.
+A value of 95% or higher is ideal.
+
+``` postgresql
+-- on coordinator
+SELECT 100 * (sum(idx_blks_hit) - sum(idx_blks_read)) / sum(idx_blks_hit) AS index_hit_rate
+ FROM pg_statio_user_indexes;
+
+-- on workers
+SELECT nodename, result as index_hit_rate
+FROM run_command_on_workers($cmd$
+ SELECT 100 * (sum(idx_blks_hit) - sum(idx_blks_read)) / sum(idx_blks_hit) AS index_hit_rate
+ FROM pg_statio_user_indexes;
+$cmd$);
+```
+
+Example output:
+
+```
+┌───────────┬────────────────┐
+│ nodename  │ index_hit_rate │
+├───────────┼────────────────┤
+│ 10.0.0.16 │           96.0 │
+│ 10.0.0.20 │           98.0 │
+└───────────┴────────────────┘
+```
+
+## Cache hit rate
+
+Most applications typically access a small fraction of their total data at
+once. PostgreSQL keeps frequently accessed data in memory to avoid slow reads
+from disk. You can see statistics about it in the
+[pg_statio_user_tables](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STATIO-ALL-TABLES-VIEW)
+view.
+
+An important measurement is what percentage of data comes from the memory cache
+vs the disk in your workload:
+
+``` postgresql
+-- on coordinator
+SELECT
+ sum(heap_blks_read) AS heap_read,
+ sum(heap_blks_hit) AS heap_hit,
+ 100 * sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS cache_hit_rate
+FROM
+ pg_statio_user_tables;
+
+-- on workers
+SELECT nodename, result as cache_hit_rate
+FROM run_command_on_workers($cmd$
+ SELECT
+ 100 * sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS cache_hit_rate
+ FROM
+ pg_statio_user_tables;
+$cmd$);
+```
+
+Example output:
+
+```
+┌───────────┬──────────┬─────────────────────┐
+│ heap_read │ heap_hit │ cache_hit_rate      │
+├───────────┼──────────┼─────────────────────┤
+│         1 │      132 │ 99.2481203007518796 │
+└───────────┴──────────┴─────────────────────┘
+```
+
+If the ratio is significantly lower than 99%, consider increasing the cache
+available to your database.
+
+## Next steps
+
+* Learn about other [system tables](reference-metadata.md)
+ that are useful for diagnostics
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/overview.md
+
+ Title: Overview of Azure Database for PostgreSQL - Hyperscale (Citus)
+description: Provides an overview of the Hyperscale (Citus) deployment option
++++++ Last updated : 09/01/2020++
+# What is Azure Database for PostgreSQL - Hyperscale (Citus)?
+
+Azure Database for PostgreSQL is a relational database service in the Microsoft
+cloud built for developers. It's based on the community version of open-source
+[PostgreSQL](https://www.postgresql.org/) database engine.
+
+Hyperscale (Citus) is a deployment option that horizontally scales queries
+across multiple machines using sharding. Its query engine parallelizes incoming
+SQL queries across these servers for faster responses on large datasets. It
+serves applications that require greater scale and performance than other
+deployment options: generally workloads that are approaching--or already
+exceed--100 GB of data.
+
+Hyperscale (Citus) delivers:
+
+- Horizontal scaling across multiple machines using sharding
+- Query parallelization across these servers for faster responses on large
+ datasets
+- Excellent support for multi-tenant applications, real-time operational
+ analytics, and high throughput transactional workloads
+
+Applications built for PostgreSQL can run distributed queries on Hyperscale
+(Citus) with standard [connection
+libraries](../concepts-connection-libraries.md) and minimal changes.
+
+## Next steps
+
+- Get started by [creating your
+ first](./quickstart-create-portal.md) Azure Database for
+PostgreSQL - Hyperscale (Citus) server group.
+- See the [pricing
+ page](https://azure.microsoft.com/pricing/details/postgresql/) for cost
+comparisons and calculators. Hyperscale (Citus) also offers prepaid Reserved
+Instance discounts; see the [Hyperscale (Citus) RI
+pricing](concepts-reserved-pricing.md) page for details.
+- Determine the best [initial
+ size](howto-scale-initial.md) for your server group
postgresql Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/product-updates.md
+
+ Title: Product updates for Azure Database for PostgreSQL - Hyperscale (Citus)
+description: New features and features in preview
++++++ Last updated : 10/15/2021++
+# Product updates for PostgreSQL - Hyperscale (Citus)
+
+## Updates feed
+
+The Microsoft Azure website lists newly available features per product, plus
+features in preview and development. Check the [Hyperscale (Citus)
+updates](https://azure.microsoft.com/updates/?category=databases&query=citus)
+section for the latest. An RSS feed is also available on that page.
+
+## Features in preview
+
+Azure Database for PostgreSQL - Hyperscale (Citus) offers
+previews for unreleased features. Preview versions are provided
+without a service level agreement, and aren't recommended for
+production workloads. Certain features might not be supported or
+might have constrained capabilities. For more information, see
+[Supplemental Terms of Use for Microsoft Azure
+Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Here are the features currently available for preview:
+
+* **[pgAudit](concepts-audit.md)**. Provides detailed
+ session and object audit logging via the standard PostgreSQL
+ logging facility. It produces audit logs required to pass
+ certain government, financial, or ISO certification audits.
+* **[Private access](concepts-private-access.md)**.
+ Allow hosts on a virtual network (VNet) to securely access a
+ Hyperscale (Citus) server group over a private endpoint.
+
+> [!NOTE]
+>
+> Private access is available for preview in only [certain
+> regions](concepts-limits.md#regions).
+
+## Contact us
+
+Let us know about your experience using preview features, by emailing [Ask
+Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com).
+(This email address isn't a technical support channel. For technical problems,
+open a [support
+request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).)
postgresql Quickstart Create Basic Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/quickstart-create-basic-tier.md
+
+ Title: 'Quickstart: create a basic tier server group - Hyperscale (Citus) - Azure Database for PostgreSQL'
+description: Get started with the Azure Database for PostgreSQL Hyperscale (Citus) basic tier.
++++++ Last updated : 11/16/2021
+#Customer intent: As a developer, I want to provision a hyperscale server group so that I can run queries quickly on large datasets.
++
+# Create a Hyperscale (Citus) basic tier server group in the Azure portal
+
+Azure Database for PostgreSQL - Hyperscale (Citus) is a managed service that
+you use to run, manage, and scale highly available PostgreSQL databases in the
+cloud. Its [basic tier](concepts-tiers.md) is a convenient
+deployment option for initial development and testing.
+
+This quickstart shows you how to create a Hyperscale (Citus) basic tier
+server group using the Azure portal. You'll provision the server group
+and verify that you can connect to it to run queries.
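+
+For example, once the server group is running you can confirm connectivity by
+opening a psql session and running a trivial query. This is only a sketch: use
+the actual connection string shown for your server group in the Azure portal.
+
+```sql
+-- confirms the connection and shows the PostgreSQL version in use
+SELECT version();
+```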
++
+## Next steps
+
+In this quickstart, you learned how to provision a Hyperscale (Citus) server group. You connected to it with psql, created a schema, and distributed data.
+
+- Follow a tutorial to [build scalable multi-tenant
+ applications](./tutorial-design-database-multi-tenant.md)
+- Determine the best [initial
+ size](howto-scale-initial.md) for your server group
postgresql Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/quickstart-create-portal.md
+
+ Title: 'Quickstart: create a server group - Hyperscale (Citus) - Azure Database for PostgreSQL'
+description: Quickstart to create and query distributed tables on Azure Database for PostgreSQL Hyperscale (Citus).
++++++ Last updated : 11/16/2021
+#Customer intent: As a developer, I want to provision a hyperscale server group so that I can run queries quickly on large datasets.
++
+# Quickstart: create a Hyperscale (Citus) server group in the Azure portal
+
+Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This Quickstart shows you how to create an Azure Database for PostgreSQL - Hyperscale (Citus) server group using the Azure portal. You'll explore distributed data: sharding tables across nodes, ingesting sample data, and running queries that execute on multiple nodes.
++
+## Create and distribute tables
+
+Once connected to the hyperscale coordinator node using psql, you can complete some basic tasks.
+
+Within Hyperscale (Citus) servers there are three types of tables:
+
+- Distributed or sharded tables (spread out to help scaling for performance and parallelization)
+- Reference tables (multiple copies maintained)
+- Local tables (often used for internal admin tables)
+
+In this quickstart, we'll primarily focus on distributed tables and getting familiar with them.
+
+The data model we're going to work with is simple: user and event data from GitHub. Events include fork creation, git commits related to an organization, and more.
+
+Once you've connected via psql, let's create our tables. In the psql console run:
+
+```sql
+CREATE TABLE github_events
+(
+ event_id bigint,
+ event_type text,
+ event_public boolean,
+ repo_id bigint,
+ payload jsonb,
+ repo jsonb,
+ user_id bigint,
+ org jsonb,
+ created_at timestamp
+);
+
+CREATE TABLE github_users
+(
+ user_id bigint,
+ url text,
+ login text,
+ avatar_url text,
+ gravatar_id text,
+ display_login text
+);
+```
+
+The `payload` field of `github_events` has a JSONB datatype. JSONB is the JSON datatype in binary form in Postgres. The datatype makes it easy to store a flexible schema in a single column.
+
+Postgres can create a `GIN` index on this type, which will index every key and value within it. With an index, it becomes fast and easy to query the payload with various conditions. Let's go ahead and create a couple of indexes before we load our data. In psql:
+
+```sql
+CREATE INDEX event_type_index ON github_events (event_type);
+CREATE INDEX payload_index ON github_events USING GIN (payload jsonb_path_ops);
+```
+
+Next we'll take those Postgres tables on the coordinator node and tell Hyperscale (Citus) to shard them across the workers. To do so, we'll run a query for each table specifying the key to shard it on. In the current example we'll shard both the events and users table on `user_id`:
+
+```sql
+SELECT create_distributed_table('github_events', 'user_id');
+SELECT create_distributed_table('github_users', 'user_id');
+```
++
+We're ready to load data. In psql still, shell out to download the files:
+
+```sql
+\! curl -O https://examples.citusdata.com/users.csv
+\! curl -O https://examples.citusdata.com/events.csv
+```
+
+Next, load the data from the files into the distributed tables:
+
+```sql
+SET CLIENT_ENCODING TO 'utf8';
+
+\copy github_events from 'events.csv' WITH CSV
+\copy github_users from 'users.csv' WITH CSV
+```
+
+## Run queries
+
+Now it's time for the fun part, actually running some queries. Let's start with a simple `count(*)` to see how much data we loaded:
+
+```sql
+SELECT count(*) from github_events;
+```
+
+That worked nicely. We'll come back to that sort of aggregation in a bit, but for now let's look at a few other queries. Within the JSONB `payload` column there's a good bit of data, but it varies based on event type. `PushEvent` events contain a size that includes the number of distinct commits for the push. We can use it to find the total number of commits per hour:
+
+```sql
+SELECT date_trunc('hour', created_at) AS hour,
+ sum((payload->>'distinct_size')::int) AS num_commits
+FROM github_events
+WHERE event_type = 'PushEvent'
+GROUP BY hour
+ORDER BY hour;
+```
+
+So far the queries have involved the github\_events exclusively, but we can combine this information with github\_users. Since we sharded both users and events on the same identifier (`user_id`), the rows of both tables with matching user IDs will be [colocated](concepts-colocation.md) on the same database nodes and can easily be joined.
+
+If we join on `user_id`, Hyperscale (Citus) can push the join execution down into shards for execution in parallel on worker nodes. For example, let's find the users who created the greatest number of repositories:
+
+```sql
+SELECT gu.login, count(*)
+ FROM github_events ge
+ JOIN github_users gu
+ ON ge.user_id = gu.user_id
+ WHERE ge.event_type = 'CreateEvent'
+ AND ge.payload @> '{"ref_type": "repository"}'
+ GROUP BY gu.login
+ ORDER BY count(*) DESC;
+```
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a server group. If you don't expect to need these resources in the future, delete the server group. Press the **Delete** button in the **Overview** page for your server group. When prompted on a pop-up page, confirm the name of the server group and click the final **Delete** button.
+
+## Next steps
+
+In this quickstart, you learned how to provision a Hyperscale (Citus) server group. You connected to it with psql, created a schema, and distributed data.
+
+- Follow a tutorial to [build scalable multi-tenant
+ applications](./tutorial-design-database-multi-tenant.md)
+- Determine the best [initial
+ size](howto-scale-initial.md) for your server group
postgresql Reference Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/reference-functions.md
+
+ Title: SQL functions – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Functions in the Hyperscale (Citus) SQL API
+++++ Last updated : 04/07/2021++
+# Functions in the Hyperscale (Citus) SQL API
+
+This section contains reference information for the user-defined functions
+provided by Hyperscale (Citus). These functions provide the distributed
+functionality of Hyperscale (Citus).
+
+> [!NOTE]
+>
+> Hyperscale (Citus) server groups running older versions of the Citus Engine may not
+> offer all the functions listed below.
+
+## Table and Shard DDL
+
+### create\_distributed\_table
+
+The create\_distributed\_table() function is used to define a distributed table
+and create its shards if it's a hash-distributed table. This function takes in
+a table name, the distribution column, and an optional distribution method and
+inserts appropriate metadata to mark the table as distributed. The function
+defaults to 'hash' distribution if no distribution method is specified. If the
+table is hash-distributed, the function also creates worker shards based on the
+shard count and shard replication factor configuration values. If the table
+contains any rows, they are automatically distributed to worker nodes.
+
+This function replaces usage of master\_create\_distributed\_table() followed
+by master\_create\_worker\_shards().
+
+#### Arguments
+
+**table\_name:** Name of the table that needs to be distributed.
+
+**distribution\_column:** The column on which the table is to be
+distributed.
+
+**distribution\_type:** (Optional) The method according to which the
+table is to be distributed. Permissible values are append or hash, with
+a default value of 'hash'.
+
+**colocate\_with:** (Optional) include current table in the colocation group
+of another table. By default tables are colocated when they are distributed by
+columns of the same type, have the same shard count, and have the same
+replication factor. Possible values for `colocate_with` are `default`, `none`
+to start a new colocation group, or the name of another table to colocate
+with that table. (See [table colocation](concepts-colocation.md).)
+
+Keep in mind that the default value of `colocate_with` does implicit
+colocation. [Colocation](concepts-colocation.md)
+can be a great thing when tables are related or will be joined. However when
+two tables are unrelated but happen to use the same datatype for their
+distribution columns, accidentally colocating them can decrease performance
+during [shard rebalancing](howto-scale-rebalance.md). The
+table shards will be moved together unnecessarily in a "cascade."
+
+If a new distributed table is not related to other tables, it's best to
+specify `colocate_with => 'none'`.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+This example informs the database that the github\_events table should
+be distributed by hash on the repo\_id column.
+
+```postgresql
+SELECT create_distributed_table('github_events', 'repo_id');
+
+-- alternatively, to be more explicit:
+SELECT create_distributed_table('github_events', 'repo_id',
+ colocate_with => 'github_repo');
+```
+
+### create\_reference\_table
+
+The create\_reference\_table() function is used to define a small
+reference or dimension table. This function takes in a table name, and
+creates a distributed table with just one shard, replicated to every
+worker node.
+
+#### Arguments
+
+**table\_name:** Name of the small dimension or reference table that
+needs to be distributed.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+This example informs the database that the nation table should be
+defined as a reference table
+
+```postgresql
+SELECT create_reference_table('nation');
+```
+
+### upgrade\_to\_reference\_table
+
+The upgrade\_to\_reference\_table() function takes an existing distributed
+table that has a shard count of one, and upgrades it to be a recognized
+reference table. After calling this function, the table will be as if it had
+been created with [create_reference_table](#create_reference_table).
+
+#### Arguments
+
+**table\_name:** Name of the distributed table (having shard count = 1)
+which will be distributed as a reference table.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+This example informs the database that the nation table should be
+defined as a reference table
+
+```postgresql
+SELECT upgrade_to_reference_table('nation');
+```
+
+### mark\_tables\_colocated
+
+The mark\_tables\_colocated() function takes a distributed table (the
+source), and a list of others (the targets), and puts the targets into
+the same colocation group as the source. If the source is not yet in a
+group, this function creates one, and assigns the source and targets to
+it.
+
+Colocating tables ought to be done at table distribution time via the
+`colocate_with` parameter of
+[create_distributed_table](#create_distributed_table), but
+`mark_tables_colocated` can take care of it later if necessary.
+
+#### Arguments
+
+**source\_table\_name:** Name of the distributed table whose colocation
+group the targets will be assigned to match.
+
+**target\_table\_names:** Array of names of the distributed target
+tables, must be non-empty. These distributed tables must match the
+source table in:
+
+> - distribution method
+> - distribution column type
+> - replication type
+> - shard count
+
+If the target tables don't match the source on these properties, Hyperscale
+(Citus) raises an error. For instance, attempting to colocate tables `apples`
+and `oranges` whose distribution column types differ results in:
+
+```
+ERROR: XX000: cannot colocate tables apples and oranges
+DETAIL: Distribution column types don't match for apples and oranges.
+```
+
+#### Return Value
+
+N/A
+
+#### Example
+
+This example puts `products` and `line_items` in the same colocation
+group as `stores`. The example assumes that these tables are all
+distributed on a column with matching type, most likely a "store id."
+
+```postgresql
+SELECT mark_tables_colocated('stores', ARRAY['products', 'line_items']);
+```
+
+### create\_distributed\_function
+
+Propagates a function from the coordinator node to workers, and marks it for
+distributed execution. When a distributed function is called on the
+coordinator, Hyperscale (Citus) uses the value of the "distribution argument"
+to pick a worker node to run the function. Executing the function on workers
+increases parallelism, and can bring the code closer to data in shards for
+lower latency.
+
+The Postgres search path is not propagated from the coordinator to workers
+during distributed function execution, so distributed function code should
+fully qualify the names of database objects. Also, notices emitted by the
+functions aren't displayed to the user.
+
+#### Arguments
+
+**function\_name:** Name of the function to be distributed. The name
+must include the function's parameter types in parentheses, because
+multiple functions can have the same name in PostgreSQL. For instance,
+`'foo(int)'` is different from `'foo(int, text)'`.
+
+**distribution\_arg\_name:** (Optional) The argument name by which to
+distribute. For convenience (or if the function arguments do not have
+names), a positional placeholder is allowed, such as `'$1'`. If this
+parameter is not specified, then the function named by `function_name`
+is merely created on the workers. If worker nodes are added in the
+future, the function will automatically be created there too.
+
+**colocate\_with:** (Optional) When the distributed function reads or writes to
+a distributed table (or, more generally, colocation group), be sure to name
+that table using the `colocate_with` parameter. Then each invocation of the
+function will run on the worker node containing relevant shards.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+-- an example function which updates a hypothetical
+-- event_responses table which itself is distributed by event_id
+CREATE OR REPLACE FUNCTION
+ register_for_event(p_event_id int, p_user_id int)
+RETURNS void LANGUAGE plpgsql AS $fn$
+BEGIN
+ INSERT INTO event_responses VALUES ($1, $2, 'yes')
+ ON CONFLICT (event_id, user_id)
+ DO UPDATE SET response = EXCLUDED.response;
+END;
+$fn$;
+
+-- distribute the function to workers, using the p_event_id argument
+-- to determine which shard each invocation affects, and explicitly
+-- colocating with event_responses which the function updates
+SELECT create_distributed_function(
+ 'register_for_event(int, int)', 'p_event_id',
+ colocate_with := 'event_responses'
+);
+```
+
+### alter_columnar_table_set
+
+The alter_columnar_table_set() function changes settings on a [columnar
+table](concepts-columnar.md). Calling this function on a
+non-columnar table gives an error. All arguments except the table name are
+optional.
+
+To view current options for all columnar tables, consult this table:
+
+```postgresql
+SELECT * FROM columnar.options;
+```
+
+The default values for columnar settings for newly created tables can be
+overridden with these GUCs:
+
+* columnar.compression
+* columnar.compression_level
+* columnar.stripe_row_count
+* columnar.chunk_row_count
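+
+For example, a session-level override so that columnar tables created later in
+the session default to a different compression method (a sketch; as noted
+above, these settings apply only to newly created tables):
+
+```postgresql
+-- newly created columnar tables in this session will default to lz4
+SET columnar.compression TO 'lz4';
+```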
+
+#### Arguments
+
+**table_name:** Name of the columnar table.
+
+**chunk_row_count:** (Optional) The maximum number of rows per chunk for
+newly inserted data. Existing chunks of data will not be changed and may have
+more rows than this maximum value. The default value is 10000.
+
+**stripe_row_count:** (Optional) The maximum number of rows per stripe for
+newly inserted data. Existing stripes of data will not be changed and may have
+more rows than this maximum value. The default value is 150000.
+
+**compression:** (Optional) `[none|pglz|zstd|lz4|lz4hc]` The compression type
+for newly inserted data. Existing data will not be recompressed or
+decompressed. The default and suggested value is zstd (if support has
+been compiled in).
+
+**compression_level:** (Optional) Valid settings are from 1 through 19. If the
+compression method does not support the level chosen, the closest level will be
+selected instead.
+
+#### Return value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT alter_columnar_table_set(
+ 'my_columnar_table',
+ compression => 'none',
+ stripe_row_count => 10000);
+```
+
+## Metadata / Configuration Information
+
+### master\_get\_table\_metadata
+
+The master\_get\_table\_metadata() function can be used to return
+distribution-related metadata for a distributed table. This metadata includes
+the relation ID, storage type, distribution method, distribution column,
+replication count, maximum shard size, and shard placement policy for the
+table. Behind the covers, this function queries Hyperscale (Citus) metadata
+tables to get the required information and concatenates it into a tuple before
+returning it to the user.
+
+#### Arguments
+
+**table\_name:** Name of the distributed table for which you want to
+fetch metadata.
+
+#### Return Value
+
+A tuple containing the following information:
+
+**logical\_relid:** Oid of the distributed table. It references
+the relfilenode column in the pg\_class system catalog table.
+
+**part\_storage\_type:** Type of storage used for the table. May be
+'t' (standard table), 'f' (foreign table) or 'c' (columnar table).
+
+**part\_method:** Distribution method used for the table. May be 'a'
+(append), or 'h' (hash).
+
+**part\_key:** Distribution column for the table.
+
+**part\_replica\_count:** Current shard replication count.
+
+**part\_max\_size:** Current maximum shard size in bytes.
+
+**part\_placement\_policy:** Shard placement policy used for placing the
+table's shards. May be 1 (local-node-first) or 2 (round-robin).
+
+#### Example
+
+The example below fetches and displays the table metadata for the
+github\_events table.
+
+```postgresql
+SELECT * from master_get_table_metadata('github_events');
+ logical_relid | part_storage_type | part_method | part_key | part_replica_count | part_max_size | part_placement_policy
+---------------+-------------------+-------------+----------+--------------------+---------------+-----------------------
+ 24180 | t | h | repo_id | 2 | 1073741824 | 2
+(1 row)
+```
+
+### get\_shard\_id\_for\_distribution\_column
+
+Hyperscale (Citus) assigns every row of a distributed table to a shard based on
+the value of the row's distribution column and the table's method of
+distribution. In most cases, the precise mapping is a low-level detail that the
+database administrator can ignore. However it can be useful to determine a
+row's shard, either for manual database maintenance tasks or just to satisfy
+curiosity. The `get_shard_id_for_distribution_column` function provides this
+info for hash-distributed, range-distributed, and reference tables. It
+does not work for the append distribution.
+
+#### Arguments
+
+**table\_name:** The distributed table.
+
+**distribution\_value:** The value of the distribution column.
+
+#### Return Value
+
+The shard ID Hyperscale (Citus) associates with the distribution column value
+for the given table.
+
+#### Example
+
+```postgresql
+SELECT get_shard_id_for_distribution_column('my_table', 4);
+
+ get_shard_id_for_distribution_column
+---------------------------------------
+ 540007
+(1 row)
+```
+
+### column\_to\_column\_name
+
+Translates the `partkey` column of `pg_dist_partition` into a textual column
+name. The translation is useful to determine the distribution column of a
+distributed table.
+
+For a more detailed discussion, see [choosing a distribution
+column](concepts-choose-distribution-column.md).
+
+#### Arguments
+
+**table\_name:** The distributed table.
+
+**column\_var\_text:** The value of `partkey` in the `pg_dist_partition`
+table.
+
+#### Return Value
+
+The name of `table_name`'s distribution column.
+
+#### Example
+
+```postgresql
+-- get distribution column name for products table
+
+SELECT column_to_column_name(logicalrelid, partkey) AS dist_col_name
+ FROM pg_dist_partition
+ WHERE logicalrelid='products'::regclass;
+```
+
+Output:
+
+```
+┌───────────────┐
+│ dist_col_name │
+├───────────────┤
+│ company_id    │
+└───────────────┘
+```
+
+### citus\_relation\_size
+
+Get the disk space used by all the shards of the specified distributed table.
+The disk space includes the size of the "main fork," but excludes the
+visibility map and free space map for the shards.
+
+#### Arguments
+
+**logicalrelid:** the name of a distributed table.
+
+#### Return Value
+
+Size in bytes as a bigint.
+
+#### Example
+
+```postgresql
+SELECT pg_size_pretty(citus_relation_size('github_events'));
+```
+
+```
+pg_size_pretty
+----------------
+23 MB
+```
+
+### citus\_table\_size
+
+Get the disk space used by all the shards of the specified distributed table,
+excluding indexes (but including TOAST, free space map, and visibility map).
+
+#### Arguments
+
+**logicalrelid:** the name of a distributed table.
+
+#### Return Value
+
+Size in bytes as a bigint.
+
+#### Example
+
+```postgresql
+SELECT pg_size_pretty(citus_table_size('github_events'));
+```
+
+```
+pg_size_pretty
+----------------
+37 MB
+```
+
+### citus\_total\_relation\_size
+
+Get the total disk space used by all the shards of the specified
+distributed table, including all indexes and TOAST data.
+
+#### Arguments
+
+**logicalrelid:** the name of a distributed table.
+
+#### Return Value
+
+Size in bytes as a bigint.
+
+#### Example
+
+```postgresql
+SELECT pg_size_pretty(citus_total_relation_size('github_events'));
+```
+
+```
+pg_size_pretty
+----------------
+73 MB
+```
+
+### citus\_stat\_statements\_reset
+
+Removes all rows from
+[citus_stat_statements](reference-metadata.md#query-statistics-table).
+This function works independently from `pg_stat_statements_reset()`. To reset
+all stats, call both functions.
+
+#### Arguments
+
+N/A
+
+#### Return Value
+
+None
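+
+#### Example
+
+A minimal sketch that clears both the Citus-level and the PostgreSQL-level
+statement statistics:
+
+```postgresql
+SELECT citus_stat_statements_reset();
+SELECT pg_stat_statements_reset();
+```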
+
+## Server group management and repair
+
+### master\_copy\_shard\_placement
+
+If a shard placement fails to be updated during a modification command or a DDL
+operation, then it gets marked as inactive. The master\_copy\_shard\_placement
+function can then be called to repair an inactive shard placement using data
+from a healthy placement.
+
+To repair a shard, the function first drops the unhealthy shard placement and
+recreates it using the schema on the coordinator. Once the shard placement is
+created, the function copies data from the healthy placement and updates the
+metadata to mark the new shard placement as healthy. This function ensures that
+the shard will be protected from any concurrent modifications during the
+repair.
+
+#### Arguments
+
+**shard\_id:** ID of the shard to be repaired.
+
+**source\_node\_name:** DNS name of the node on which the healthy shard
+placement is present (\"source\" node).
+
+**source\_node\_port:** The port on the source worker node on which the
+database server is listening.
+
+**target\_node\_name:** DNS name of the node on which the invalid shard
+placement is present (\"target\" node).
+
+**target\_node\_port:** The port on the target worker node on which the
+database server is listening.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+The example below will repair an inactive shard placement of shard
+12345, which is present on the database server running on 'bad\_host'
+on port 5432. To repair it, it will use data from a healthy shard
+placement present on the server running on 'good\_host' on port
+5432.
+
+```postgresql
+SELECT master_copy_shard_placement(12345, 'good_host', 5432, 'bad_host', 5432);
+```
+
+### master\_move\_shard\_placement
+
+This function moves a given shard (and shards colocated with it) from one node
+to another. It is typically used indirectly during shard rebalancing rather
+than being called directly by a database administrator.
+
+There are two ways to move the data: blocking or nonblocking. The blocking
+approach means that during the move all modifications to the shard are paused.
+The second way, which avoids blocking shard writes, relies on Postgres 10
+logical replication.
+
+After a successful move operation, shards in the source node get deleted. If
+the move fails at any point, this function throws an error and leaves the
+source and target nodes unchanged.
+
+#### Arguments
+
+**shard\_id:** ID of the shard to be moved.
+
+**source\_node\_name:** DNS name of the node on which the healthy shard
+placement is present (\"source\" node).
+
+**source\_node\_port:** The port on the source worker node on which the
+database server is listening.
+
+**target\_node\_name:** DNS name of the node to which the shard placement
+will be moved ("target" node).
+
+**target\_node\_port:** The port on the target worker node on which the
+database server is listening.
+
+**shard\_transfer\_mode:** (Optional) Specify the method of replication,
+whether to use PostgreSQL logical replication or a cross-worker COPY
+command. The possible values are:
+
+> - `auto`: Require replica identity if logical replication is
+> possible, otherwise use legacy behaviour (e.g. for shard repair,
+> PostgreSQL 9.6). This is the default value.
+> - `force_logical`: Use logical replication even if the table
+> doesn't have a replica identity. Any concurrent update/delete
+> statements to the table will fail during replication.
+> - `block_writes`: Use COPY (blocking writes) for tables lacking
+> primary key or replica identity.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT master_move_shard_placement(12345, 'from_host', 5432, 'to_host', 5432);
+```
+
+### rebalance\_table\_shards
+
+The rebalance\_table\_shards() function moves shards of the given table to make
+them evenly distributed among the workers. The function first calculates the
+list of moves it needs to make in order to ensure that the server group is
+balanced within the given threshold. Then, it moves shard placements one by one
+from the source node to the destination node and updates the corresponding
+shard metadata to reflect the move.
+
+Every shard is assigned a cost when determining whether shards are "evenly
+distributed." By default each shard has the same cost (a value of 1), so
+distributing to equalize the cost across workers is the same as equalizing the
+number of shards on each. The constant cost strategy is called
+"by\_shard\_count" and is the default rebalancing strategy.
+
+The default strategy is appropriate under these circumstances:
+
+* The shards are roughly the same size
+* The shards get roughly the same amount of traffic
+* Worker nodes are all the same size/type
+* Shards haven't been pinned to particular workers
+
+If any of these assumptions don't hold, then the default rebalancing
+can result in a bad plan. In this case you may customize the strategy,
+using the `rebalance_strategy` parameter.
+
+It's advisable to call
+[get_rebalance_table_shards_plan](#get_rebalance_table_shards_plan) before
+running rebalance\_table\_shards, to see and verify the actions to be
+performed.
+
+#### Arguments
+
+**table\_name:** (Optional) The name of the table whose shards need to
+be rebalanced. If NULL, then rebalance all existing colocation groups.
+
+**threshold:** (Optional) A float number between 0.0 and 1.0 that
+indicates the maximum difference ratio of node utilization from average
+utilization. For example, specifying 0.1 will cause the shard rebalancer
+to attempt to balance all nodes to hold the same number of shards ±10%.
+Specifically, the shard rebalancer will try to converge utilization of
+all worker nodes to the (1 - threshold) \* average\_utilization ... (1
++ threshold) \* average\_utilization range.
+
+**max\_shard\_moves:** (Optional) The maximum number of shards to move.
+
+**excluded\_shard\_list:** (Optional) Identifiers of shards that
+shouldn't be moved during the rebalance operation.
+
+**shard\_transfer\_mode:** (Optional) Specify the method of replication,
+whether to use PostgreSQL logical replication or a cross-worker COPY
+command. The possible values are:
+
+> - `auto`: Require replica identity if logical replication is
+> possible, otherwise use legacy behaviour (e.g. for shard repair,
+> PostgreSQL 9.6). This is the default value.
+> - `force_logical`: Use logical replication even if the table
+> doesn't have a replica identity. Any concurrent update/delete
+> statements to the table will fail during replication.
+> - `block_writes`: Use COPY (blocking writes) for tables lacking
+> primary key or replica identity.
+
+**drain\_only:** (Optional) When true, move shards off worker nodes that have
+`shouldhaveshards` set to false in
+[pg_dist_node](reference-metadata.md#worker-node-table); move no
+other shards.
+
+**rebalance\_strategy:** (Optional) the name of a strategy in
+[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table).
+If this argument is omitted, the function chooses the default strategy, as
+indicated in the table.
+
+#### Return Value
+
+N/A
+
+#### Example
+
+The example below will attempt to rebalance the shards of the
+github\_events table within the default threshold.
+
+```postgresql
+SELECT rebalance_table_shards('github_events');
+```
+
+This example usage will attempt to rebalance the github\_events table
+without moving shards with ID 1 and 2.
+
+```postgresql
+SELECT rebalance_table_shards('github_events', excluded_shard_list:='{1,2}');
+```
+
+### get\_rebalance\_table\_shards\_plan
+
+Output the planned shard movements of
+[rebalance_table_shards](#rebalance_table_shards) without performing them.
+While it's unlikely, get\_rebalance\_table\_shards\_plan can output a slightly
+different plan than what a rebalance\_table\_shards call with the same
+arguments will do. They are not executed at the same time, so facts about the
+server group -- for example, disk space -- might differ between the calls.
+
+#### Arguments
+
+The same arguments as rebalance\_table\_shards: relation, threshold,
+max\_shard\_moves, excluded\_shard\_list, and drain\_only. See
+documentation of that function for the arguments' meaning.
+
+#### Return Value
+
+Tuples containing these columns:
+
+- **table\_name**: The table whose shards would move
+- **shardid**: The shard in question
+- **shard\_size**: Size in bytes
+- **sourcename**: Hostname of the source node
+- **sourceport**: Port of the source node
+- **targetname**: Hostname of the destination node
+- **targetport**: Port of the destination node
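+
+#### Example
+
+For instance, to preview the moves a default rebalance of the github\_events
+table would make, without actually moving any shards (a sketch reusing the
+table from earlier examples):
+
+```postgresql
+SELECT * FROM get_rebalance_table_shards_plan('github_events');
+```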
+
+### get\_rebalance\_progress
+
+Once a shard rebalance begins, the `get_rebalance_progress()` function lists
+the progress of every shard involved. It monitors the moves planned and
+executed by `rebalance_table_shards()`.
+
+#### Arguments
+
+N/A
+
+#### Return Value
+
+Tuples containing these columns:
+
+- **sessionid**: Postgres PID of the rebalance monitor
+- **table\_name**: The table whose shards are moving
+- **shardid**: The shard in question
+- **shard\_size**: Size in bytes
+- **sourcename**: Hostname of the source node
+- **sourceport**: Port of the source node
+- **targetname**: Hostname of the destination node
+- **targetport**: Port of the destination node
+- **progress**: 0 = waiting to be moved; 1 = moving; 2 = complete
+
+#### Example
+
+```sql
+SELECT * FROM get_rebalance_progress();
+```
+
+```
+┌───────────┬────────────┬─────────┬────────────┬───────────────┬────────────┬───────────────┬────────────┬──────────┐
+│ sessionid │ table_name │ shardid │ shard_size │  sourcename   │ sourceport │  targetname   │ targetport │ progress │
+├───────────┼────────────┼─────────┼────────────┼───────────────┼────────────┼───────────────┼────────────┼──────────┤
+│      7083 │ foo        │  102008 │    1204224 │ n1.foobar.com │       5432 │ n4.foobar.com │       5432 │        0 │
+│      7083 │ foo        │  102009 │    1802240 │ n1.foobar.com │       5432 │ n4.foobar.com │       5432 │        0 │
+│      7083 │ foo        │  102018 │     614400 │ n2.foobar.com │       5432 │ n4.foobar.com │       5432 │        1 │
+│      7083 │ foo        │  102019 │       8192 │ n3.foobar.com │       5432 │ n4.foobar.com │       5432 │        2 │
+└───────────┴────────────┴─────────┴────────────┴───────────────┴────────────┴───────────────┴────────────┴──────────┘
+```
+
+### citus\_add\_rebalance\_strategy
+
+Append a row to
+[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table).
+
+#### Arguments
+
+For more about these arguments, see the corresponding column values in
+`pg_dist_rebalance_strategy`.
+
+**name:** identifier for the new strategy
+
+**shard\_cost\_function:** identifies the function used to determine the
+"cost" of each shard
+
+**node\_capacity\_function:** identifies the function to measure node
+capacity
+
+**shard\_allowed\_on\_node\_function:** identifies the function that
+determines which shards can be placed on which nodes
+
+**default\_threshold:** a floating point threshold that tunes how
+precisely the cumulative shard cost should be balanced between nodes
+
+**minimum\_threshold:** (Optional) a safeguard column that holds the
+minimum value allowed for the threshold argument of
+rebalance\_table\_shards(). Its default value is 0
+
+#### Return Value
+
+N/A
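+
+#### Example
+
+A sketch that registers a new strategy by reusing the built-in cost, capacity,
+and placement functions listed in the
+[rebalancer strategy table](reference-metadata.md#rebalancer-strategy-table);
+the strategy name and threshold values here are illustrative only:
+
+```postgresql
+SELECT citus_add_rebalance_strategy(
+    'custom_by_shard_count',
+    'citus_shard_cost_1',
+    'citus_node_capacity_1',
+    'citus_shard_allowed_on_node_true',
+    0.2,   -- default_threshold
+    0.1    -- minimum_threshold
+);
+```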
+
+### citus\_set\_default\_rebalance\_strategy
+
+Update the
+[pg_dist_rebalance_strategy](reference-metadata.md#rebalancer-strategy-table)
+table, changing the strategy named by its argument to be the default chosen
+when rebalancing shards.
+
+#### Arguments
+
+**name:** the name of the strategy in pg\_dist\_rebalance\_strategy
+
+#### Return Value
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT citus_set_default_rebalance_strategy('by_disk_size');
+```
+
+### citus\_remote\_connection\_stats
+
+The citus\_remote\_connection\_stats() function shows the number of
+active connections to each remote node.
+
+#### Arguments
+
+N/A
+
+#### Example
+
+```postgresql
+SELECT * from citus_remote_connection_stats();
+```
+
+```
+ hostname | port | database_name | connection_count_to_node
+----------------+------+---------------+--------------------------
+ citus_worker_1 | 5432 | postgres | 3
+(1 row)
+```
+
+### replicate\_table\_shards
+
+The replicate\_table\_shards() function replicates the under-replicated shards
+of the given table. The function first calculates the list of under-replicated
+shards and locations from which they can be fetched for replication. The
+function then copies over those shards and updates the corresponding shard
+metadata to reflect the copy.
+
+#### Arguments
+
+**table\_name:** The name of the table whose shards need to be
+replicated.
+
+**shard\_replication\_factor:** (Optional) The desired replication
+factor to achieve for each shard.
+
+**max\_shard\_copies:** (Optional) Maximum number of shards to copy to
+reach the desired replication factor.
+
+**excluded\_shard\_list:** (Optional) Identifiers of shards that
+shouldn't be copied during the replication operation.
+
+#### Return Value
+
+N/A
+
+#### Examples
+
+The example below will attempt to replicate the shards of the
+github\_events table to shard\_replication\_factor.
+
+```postgresql
+SELECT replicate_table_shards('github_events');
+```
+
+This example will attempt to bring the shards of the github\_events table to
+the desired replication factor with a maximum of 10 shard copies. The
+rebalancer will copy a maximum of 10 shards in its attempt to reach the desired
+replication factor.
+
+```postgresql
+SELECT replicate_table_shards('github_events', max_shard_copies:=10);
+```
+
+### isolate\_tenant\_to\_new\_shard
+
+This function creates a new shard to hold rows with a specific single value in
+the distribution column. It is especially handy for the multi-tenant Hyperscale
+(Citus) use case, where a large tenant can be placed alone on its own shard and
+ultimately its own physical node.
+
+#### Arguments
+
+**table\_name:** The name of the table to get a new shard.
+
+**tenant\_id:** The value of the distribution column that will be
+assigned to the new shard.
+
+**cascade\_option:** (Optional) When set to "CASCADE," also isolates a shard
+from all tables in the current table's [colocation
+group](concepts-colocation.md).
+
+#### Return Value
+
+**shard\_id:** The function returns the unique ID assigned to the newly
+created shard.
+
+#### Examples
+
+Create a new shard to hold the lineitems for tenant 135:
+
+```postgresql
+SELECT isolate_tenant_to_new_shard('lineitem', 135);
+```
+
+```
+┌──────────────────────────────┐
+│ isolate_tenant_to_new_shard  │
+├──────────────────────────────┤
+│                       102240 │
+└──────────────────────────────┘
+```
+
+## Next steps
+
+* Many of the functions in this article modify system [metadata tables](reference-metadata.md)
+* [Server parameters](reference-parameters.md) customize the behavior of some functions
postgresql Reference Metadata https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/reference-metadata.md
+
+ Title: System tables – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Metadata for distributed query execution
+++++ Last updated : 08/10/2020++
+# System tables and views
+
+Hyperscale (Citus) creates and maintains special tables that contain
+information about distributed data in the server group. The coordinator node
+consults these tables when planning how to run queries across the worker nodes.
+
+## Coordinator Metadata
+
+Hyperscale (Citus) divides each distributed table into multiple logical shards
+based on the distribution column. The coordinator then maintains metadata
+tables to track statistics and information about the health and location of
+these shards.
+
+In this section, we describe each of these metadata tables and their schema.
+You can view and query these tables using SQL after logging into the
+coordinator node.
+
+> [!NOTE]
+>
+> Hyperscale (Citus) server groups running older versions of the Citus Engine may not
+> offer all the tables listed below.
+
+### Partition table
+
+The pg\_dist\_partition table stores metadata about which tables in the
+database are distributed. For each distributed table, it also stores
+information about the distribution method and detailed information about
+the distribution column.
+
+| Name | Type | Description |
+|--|-|-|
+| logicalrelid | regclass | Distributed table to which this row corresponds. This value references the relfilenode column in the pg_class system catalog table. |
+| partmethod | char | The method used for partitioning / distribution. The values of this column corresponding to different distribution methods are append: 'a', hash: 'h', reference table: 'n' |
+| partkey | text | Detailed information about the distribution column including column number, type and other relevant information. |
+| colocationid | integer | Colocation group to which this table belongs. Tables in the same group allow colocated joins and distributed rollups among other optimizations. This value references the colocationid column in the pg_dist_colocation table. |
+| repmodel | char | The method used for data replication. The values of this column corresponding to different replication methods are: Citus statement-based replication: 'c', postgresql streaming replication: 's', two-phase commit (for reference tables): 't' |
+
+```
+SELECT * from pg_dist_partition;
+ logicalrelid | partmethod | partkey | colocationid | repmodel
+---------------+------------+--------------------------------------------------------------------------------------------------------------------------+--------------+----------
+ github_events | h | {VAR :varno 1 :varattno 4 :vartype 20 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 4 :location -1} | 2 | c
+ (1 row)
+```
+
+### Shard table
+
+The pg\_dist\_shard table stores metadata about individual shards of a
+table. It records which distributed table each shard belongs to, along with
+statistics about the shard's distribution column values.
+For append distributed tables, these statistics correspond to min / max
+values of the distribution column. For hash distributed tables,
+they are hash token ranges assigned to that shard. These statistics are
+used for pruning away unrelated shards during SELECT queries.
+
+| Name | Type | Description |
+||-|-|
+| logicalrelid | regclass | Distributed table to which this row corresponds. This value references the relfilenode column in the pg_class system catalog table. |
+| shardid | bigint | Globally unique identifier assigned to this shard. |
+| shardstorage | char | Type of storage used for this shard. Different storage types are discussed in the table below. |
+| shardminvalue | text | For append distributed tables, minimum value of the distribution column in this shard (inclusive). For hash distributed tables, minimum hash token value assigned to that shard (inclusive). |
+| shardmaxvalue | text | For append distributed tables, maximum value of the distribution column in this shard (inclusive). For hash distributed tables, maximum hash token value assigned to that shard (inclusive). |
+
+```
+SELECT * from pg_dist_shard;
+ logicalrelid | shardid | shardstorage | shardminvalue | shardmaxvalue
+---------------+---------+--------------+---------------+---------------
+ github_events | 102026 | t | 268435456 | 402653183
+ github_events | 102027 | t | 402653184 | 536870911
+ github_events | 102028 | t | 536870912 | 671088639
+ github_events | 102029 | t | 671088640 | 805306367
+ (4 rows)
+```
+
+#### Shard Storage Types
+
+The shardstorage column in pg\_dist\_shard indicates the type of storage
+used for the shard. A brief overview of different shard storage types
+and their representation is below.
+
+| Storage Type | Shardstorage value | Description |
+|--|--||
+| TABLE | 't' | Indicates that shard stores data belonging to a regular distributed table. |
+| COLUMNAR | 'c' | Indicates that shard stores columnar data. (Used by distributed cstore_fdw tables) |
+| FOREIGN | 'f' | Indicates that shard stores foreign data. (Used by distributed file_fdw tables) |
+
+### Shard placement table
+
+The pg\_dist\_placement table tracks the location of shard replicas on
+worker nodes. Each replica of a shard assigned to a specific node is
+called a shard placement. This table stores information about the health
+and location of each shard placement.
+
+| Name | Type | Description |
+|-|--|-|
+| shardid | bigint | Shard identifier associated with this placement. This value references the shardid column in the pg_dist_shard catalog table. |
+| shardstate | int | Describes the state of this placement. Different shard states are discussed in the section below. |
+| shardlength | bigint | For append distributed tables, the size of the shard placement on the worker node in bytes. For hash distributed tables, zero. |
+| placementid | bigint | Unique autogenerated identifier for each individual placement. |
+| groupid | int | Denotes a group of one primary server and zero or more secondary servers when the streaming replication model is used. |
+
+```
+SELECT * from pg_dist_placement;
+ shardid | shardstate | shardlength | placementid | groupid
+---------+------------+-------------+-------------+---------
+ 102008 | 1 | 0 | 1 | 1
+ 102008 | 1 | 0 | 2 | 2
+ 102009 | 1 | 0 | 3 | 2
+ 102009 | 1 | 0 | 4 | 3
+ 102010 | 1 | 0 | 5 | 3
+ 102010 | 1 | 0 | 6 | 4
+ 102011 | 1 | 0 | 7 | 4
+```
+
+#### Shard Placement States
+
+Hyperscale (Citus) manages shard health on a per-placement basis. If a placement
+puts the system in an inconsistent state, Citus automatically marks it as unavailable. Placement state is recorded in the pg_dist_shard_placement table,
+within the shardstate column. Here's a brief overview of different shard placement states:
+
+| State name | Shardstate value | Description |
+|||-|
+| FINALIZED | 1 | The state new shards are created in. Shard placements in this state are considered up to date and are used in query planning and execution. |
+| INACTIVE | 3 | Shard placements in this state are considered inactive due to being out-of-sync with other replicas of the same shard. The state can occur when an append, modification (INSERT, UPDATE, DELETE), or a DDL operation fails for this placement. The query planner will ignore placements in this state during planning and execution. Users can synchronize the data in these shards with a finalized replica as a background activity. |
+| TO_DELETE | 4 | If Citus attempts to drop a shard placement in response to a master_apply_delete_command call and fails, the placement is moved to this state. Users can then delete these shards as a subsequent background activity. |
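+
+For example, a quick way to spot placements that may need attention is to look
+for any placement that isn't in the FINALIZED state (a sketch using the columns
+described above):
+
+```postgresql
+SELECT shardid, shardstate, placementid, groupid
+  FROM pg_dist_placement
+ WHERE shardstate <> 1;
+```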
+
+### Worker node table
+
+The pg\_dist\_node table contains information about the worker nodes in
+the cluster.
+
+| Name | Type | Description |
+|||--|
+| nodeid | int | Autogenerated identifier for an individual node. |
+| groupid | int | Identifier used to denote a group of one primary server and zero or more secondary servers, when the streaming replication model is used. By default it is the same as the nodeid. |
+| nodename | text | Host Name or IP Address of the PostgreSQL worker node. |
+| nodeport | int | Port number on which the PostgreSQL worker node is listening. |
+| noderack | text | (Optional) Rack placement information for the worker node. |
+| hasmetadata | boolean | Reserved for internal use. |
+| isactive | boolean | Whether the node is active and accepting shard placements. |
+| noderole | text | Whether the node is a primary or secondary |
+| nodecluster | text | The name of the cluster containing this node |
+| shouldhaveshards | boolean | If false, shards are moved off the node (drained) when rebalancing, and shards from new distributed tables aren't placed on the node unless they're colocated with shards already there |
+
+```
+SELECT * from pg_dist_node;
+ nodeid | groupid | nodename | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | shouldhaveshards
+--------+---------+-----------+----------+----------+-------------+----------+----------+-------------+------------------
+ 1 | 1 | localhost | 12345 | default | f | t | primary | default | t
+ 2 | 2 | localhost | 12346 | default | f | t | primary | default | t
+ 3 | 3 | localhost | 12347 | default | f | t | primary | default | t
+(3 rows)
+```
+
+### Distributed object table
+
+The citus.pg\_dist\_object table contains a list of objects such as
+types and functions that have been created on the coordinator node and
+propagated to worker nodes. When an administrator adds new worker nodes
+to the cluster, Hyperscale (Citus) automatically creates copies of the distributed
+objects on the new nodes (in the correct order to satisfy object
+dependencies).
+
+| Name | Type | Description |
+|--|||
+| classid | oid | Class of the distributed object |
+| objid | oid | Object ID of the distributed object |
+| objsubid | integer | Object sub ID of the distributed object, for example, attnum |
+| type | text | Part of the stable address used during pg upgrades |
+| object_names | text[] | Part of the stable address used during pg upgrades |
+| object_args | text[] | Part of the stable address used during pg upgrades |
+| distribution_argument_index | integer | Only valid for distributed functions/procedures |
+| colocationid | integer | Only valid for distributed functions/procedures |
+
+\"Stable addresses\" uniquely identify objects independently of a
+specific server. Hyperscale (Citus) tracks objects during a PostgreSQL upgrade using
+stable addresses created with the
+[pg\_identify\_object\_as\_address()](https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-OBJECT-TABLE)
+function.
+
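+To see those stable addresses for the rows already in the table, you can call
+the same PostgreSQL function yourself (a sketch):
+
+```postgresql
+SELECT (pg_identify_object_as_address(classid, objid, objsubid)).*
+  FROM citus.pg_dist_object;
+```
+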
+Here's an example of how `create_distributed_function()` adds entries
+to the `citus.pg_dist_object` table:
+
+```psql
+CREATE TYPE stoplight AS enum ('green', 'yellow', 'red');
+
+CREATE OR REPLACE FUNCTION intersection()
+RETURNS stoplight AS $$
+DECLARE
+ color stoplight;
+BEGIN
+ SELECT *
+ FROM unnest(enum_range(NULL::stoplight)) INTO color
+ ORDER BY random() LIMIT 1;
+ RETURN color;
+END;
+$$ LANGUAGE plpgsql VOLATILE;
+
+SELECT create_distributed_function('intersection()');
+
+-- will have two rows, one for the TYPE and one for the FUNCTION
+TABLE citus.pg_dist_object;
+```
+
+```
+-[ RECORD 1 ]+
+classid | 1247
+objid | 16780
+objsubid | 0
+type |
+object_names |
+object_args |
+distribution_argument_index |
+colocationid |
+-[ RECORD 2 ]+
+classid | 1255
+objid | 16788
+objsubid | 0
+type |
+object_names |
+object_args |
+distribution_argument_index |
+colocationid |
+```
+
+### Colocation group table
+
+The pg\_dist\_colocation table contains information about which tables' shards
+should be placed together, or [colocated](concepts-colocation.md).
+When two tables are in the same colocation group, Hyperscale (Citus) ensures
+shards with the same partition values will be placed on the same worker nodes.
+Colocation enables join optimizations, certain distributed rollups, and foreign key
+support. Shard colocation is inferred when the shard counts, replication
+factors, and partition column types all match between two tables; however, a
+custom colocation group may be specified when creating a distributed table, if
+so desired.
+
+| Name | Type | Description |
+|||-|
+| colocationid | int | Unique identifier for the colocation group this row corresponds to. |
+| shardcount | int | Shard count for all tables in this colocation group |
+| replicationfactor | int | Replication factor for all tables in this colocation group. |
+| distributioncolumntype | oid | The type of the distribution column for all tables in this colocation group. |
+
+```
+SELECT * from pg_dist_colocation;
+ colocationid | shardcount | replicationfactor | distributioncolumntype
+--------------+------------+-------------------+------------------------
+ 2 | 32 | 2 | 20
+ (1 row)
+```
+
+### Rebalancer strategy table
+
+This table defines strategies that
+[rebalance_table_shards](reference-functions.md#rebalance_table_shards)
+can use to determine where to move shards.
+
+| Name | Type | Description |
+|--|||
+| default_strategy | boolean | Whether rebalance_table_shards should choose this strategy by default. Use citus_set_default_rebalance_strategy to update this column |
+| shard_cost_function | regproc | Identifier for a cost function, which must take a shardid as bigint, and return its notion of a cost, as type real |
+| node_capacity_function | regproc | Identifier for a capacity function, which must take a nodeid as int, and return its notion of node capacity as type real |
+| shard_allowed_on_node_function | regproc | Identifier for a function that given shardid bigint, and nodeidarg int, returns boolean for whether Citus may store the shard on the node |
+| default_threshold | float4 | Threshold for deeming a node too full or too empty, which determines when the rebalance_table_shards should try to move shards |
+| minimum_threshold | float4 | A safeguard to prevent the threshold argument of rebalance_table_shards() from being set too low |
+
+A Hyperscale (Citus) installation ships with these strategies in the table:
+
+```postgresql
+SELECT * FROM pg_dist_rebalance_strategy;
+```
+
+```
+-[ RECORD 1 ]-+--
+Name | by_shard_count
+default_strategy | true
+shard_cost_function | citus_shard_cost_1
+node_capacity_function | citus_node_capacity_1
+shard_allowed_on_node_function | citus_shard_allowed_on_node_true
+default_threshold | 0
+minimum_threshold | 0
+-[ RECORD 2 ]-+--
+Name | by_disk_size
+default_strategy | false
+shard_cost_function | citus_shard_cost_by_disk_size
+node_capacity_function | citus_node_capacity_1
+shard_allowed_on_node_function | citus_shard_allowed_on_node_true
+default_threshold | 0.1
+minimum_threshold | 0.01
+```
+
+The default strategy, `by_shard_count`, assigns every shard the same
+cost. Its effect is to equalize the shard count across nodes. The other
+predefined strategy, `by_disk_size`, assigns a cost to each shard
+matching its disk size in bytes plus that of the shards that are
+colocated with it. The disk size is calculated using
+`pg_total_relation_size`, so it includes indices. This strategy attempts
+to achieve the same disk space on every node. Note the threshold of 0.1--it prevents unnecessary shard movement caused by insignificant
+differences in disk space.
+
+#### Creating custom rebalancer strategies
+
+Here are examples of functions that can be used within new shard rebalancer
+strategies, and registered in the
+[pg_dist_rebalance_strategy](reference-metadata.md?#rebalancer-strategy-table)
+with the
+[citus_add_rebalance_strategy](reference-functions.md#citus_add_rebalance_strategy)
+function.
+
+- Setting a node capacity exception by hostname pattern:
+
+ ```postgresql
+ CREATE FUNCTION v2_node_double_capacity(nodeidarg int)
+ RETURNS boolean AS $$
+ SELECT
+ (CASE WHEN nodename LIKE '%.v2.worker.citusdata.com' THEN 2 ELSE 1 END)
+ FROM pg_dist_node where nodeid = nodeidarg
+ $$ LANGUAGE sql;
+ ```
+
+- Rebalancing by number of queries that go to a shard, as measured by the
+ [citus_stat_statements](reference-metadata.md#query-statistics-table):
+
+ ```postgresql
+ -- example of shard_cost_function
+
+ CREATE FUNCTION cost_of_shard_by_number_of_queries(shardid bigint)
+ RETURNS real AS $$
+ SELECT coalesce(sum(calls)::real, 0.001) as shard_total_queries
+ FROM citus_stat_statements
+ WHERE partition_key is not null
+ AND get_shard_id_for_distribution_column('tab', partition_key) = shardid;
+ $$ LANGUAGE sql;
+ ```
+
+- Isolating a specific shard (10000) on a node (address \'10.0.0.1\'):
+
+ ```postgresql
+ -- example of shard_allowed_on_node_function
+
+ CREATE FUNCTION isolate_shard_10000_on_10_0_0_1(shardid bigint, nodeidarg int)
+ RETURNS boolean AS $$
+ SELECT
+ (CASE WHEN nodename = '10.0.0.1' THEN shardid = 10000 ELSE shardid != 10000 END)
+ FROM pg_dist_node where nodeid = nodeidarg
+ $$ LANGUAGE sql;
+
+ -- The next two definitions are recommended in combination with the above function.
+ -- This way the average utilization of nodes is not impacted by the isolated shard.
+ CREATE FUNCTION no_capacity_for_10_0_0_1(nodeidarg int)
+ RETURNS real AS $$
+ SELECT
+ (CASE WHEN nodename = '10.0.0.1' THEN 0 ELSE 1 END)::real
+ FROM pg_dist_node where nodeid = nodeidarg
+ $$ LANGUAGE sql;
+ CREATE FUNCTION no_cost_for_10000(shardid bigint)
+ RETURNS real AS $$
+ SELECT
+ (CASE WHEN shardid = 10000 THEN 0 ELSE 1 END)::real
+ $$ LANGUAGE sql;
+ ```
+
+### Query statistics table
+
+Hyperscale (Citus) provides `citus_stat_statements` for stats about how queries are
+being executed, and for whom. It's analogous to (and can be joined
+with) the
+[pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html)
+view in PostgreSQL, which tracks statistics about query speed.
+
+This view can trace queries to originating tenants in a multi-tenant
+application, which helps when deciding whether to do tenant isolation.
+
+| Name | Type | Description |
+||--|-|
+| queryid | bigint | identifier (good for pg_stat_statements joins) |
+| userid | oid | user who ran the query |
+| dbid | oid | database instance of coordinator |
+| query | text | anonymized query string |
+| executor | text | Citus executor used: adaptive, real-time, task-tracker, router, or insert-select |
+| partition_key | text | value of distribution column in router-executed queries, else NULL |
+| calls | bigint | number of times the query was run |
+
+```sql
+-- create and populate distributed table
+create table foo ( id int );
+select create_distributed_table('foo', 'id');
+insert into foo select generate_series(1,100);
+
+-- enable stats
+-- pg_stat_statements must be in shared_preload_libraries
+create extension pg_stat_statements;
+
+select count(*) from foo;
+select * from foo where id = 42;
+
+select * from citus_stat_statements;
+```
+
+Results:
+
+```
+-[ RECORD 1 ]-+-
+queryid | -909556869173432820
+userid | 10
+dbid | 13340
+query | insert into foo select generate_series($1,$2)
+executor | insert-select
+partition_key |
+calls | 1
+-[ RECORD 2 ]-+-
+queryid | 3919808845681956665
+userid | 10
+dbid | 13340
+query | select count(*) from foo;
+executor | adaptive
+partition_key |
+calls | 1
+-[ RECORD 3 ]-+-
+queryid | 5351346905785208738
+userid | 10
+dbid | 13340
+query | select * from foo where id = $1
+executor | adaptive
+partition_key | 42
+calls | 1
+```
+
+Caveats:
+
+- The stats data is not replicated, and won't survive database
+ crashes or failover
+- Tracks a limited number of queries, set by the
+ `pg_stat_statements.max` GUC (default 5000)
+- To truncate the table, use the `citus_stat_statements_reset()`
+ function
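+
+For example, a query along these lines (a sketch only; the columns come from the table above) surfaces the busiest tenants, which are the usual candidates for tenant isolation:
+
+```sql
+SELECT partition_key AS tenant_id,
+       sum(calls) AS total_calls
+FROM citus_stat_statements
+WHERE partition_key IS NOT NULL
+GROUP BY partition_key
+ORDER BY total_calls DESC
+LIMIT 10;
+```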
+
+### Distributed Query Activity
+
+Hyperscale (Citus) provides special views to watch queries and locks throughout the
+cluster, including shard-specific queries used internally to build
+results for distributed queries.
+
+- **citus\_dist\_stat\_activity**: shows the distributed queries that
+ are executing on all nodes. A superset of `pg_stat_activity`, usable
+ wherever the latter is.
+- **citus\_worker\_stat\_activity**: shows queries on workers,
+ including fragment queries against individual shards.
+- **citus\_lock\_waits**: Blocked queries throughout the cluster.
+
+The first two views include all columns of
+[pg\_stat\_activity](https://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW)
+plus the host/port of the worker that initiated the query and the
+host/port of the coordinator node of the cluster.
+
+For example, consider counting the rows in a distributed table:
+
+```postgresql
+-- run from worker on localhost:9701
+
+SELECT count(*) FROM users_table;
+```
+
+We can see the query appear in `citus_dist_stat_activity`:
+
+```postgresql
+SELECT * FROM citus_dist_stat_activity;
+
+-[ RECORD 1 ]-+-
+query_hostname | localhost
+query_hostport | 9701
+master_query_host_name | localhost
+master_query_host_port | 9701
+transaction_number | 1
+transaction_stamp | 2018-10-05 13:27:20.691907+03
+datid | 12630
+datname | postgres
+pid | 23723
+usesysid | 10
+usename | citus
+application_name | psql
+client_addr |
+client_hostname |
+client_port | -1
+backend_start | 2018-10-05 13:27:14.419905+03
+xact_start | 2018-10-05 13:27:16.362887+03
+query_start | 2018-10-05 13:27:20.682452+03
+state_change | 2018-10-05 13:27:20.896546+03
+wait_event_type | Client
+wait_event | ClientRead
+state | idle in transaction
+backend_xid |
+backend_xmin |
+query | SELECT count(*) FROM users_table;
+backend_type | client backend
+```
+
+This query requires information from all shards. Some of the information is in
+shard `users_table_102038`, which happens to be stored in `localhost:9700`. We can
+see a query accessing the shard by looking at the `citus_worker_stat_activity`
+view:
+
+```postgresql
+SELECT * FROM citus_worker_stat_activity;
+
+-[ RECORD 1 ]-+--
+query_hostname | localhost
+query_hostport | 9700
+master_query_host_name | localhost
+master_query_host_port | 9701
+transaction_number | 1
+transaction_stamp | 2018-10-05 13:27:20.691907+03
+datid | 12630
+datname | postgres
+pid | 23781
+usesysid | 10
+usename | citus
+application_name | citus
+client_addr | ::1
+client_hostname |
+client_port | 51773
+backend_start | 2018-10-05 13:27:20.75839+03
+xact_start | 2018-10-05 13:27:20.84112+03
+query_start | 2018-10-05 13:27:20.867446+03
+state_change | 2018-10-05 13:27:20.869889+03
+wait_event_type | Client
+wait_event | ClientRead
+state | idle in transaction
+backend_xid |
+backend_xmin |
+query | COPY (SELECT count(*) AS count FROM users_table_102038 users_table WHERE true) TO STDOUT
+backend_type | client backend
+```
+
+The `query` field shows data being copied out of the shard to be
+counted.
+
+> [!NOTE]
+> If a router query (e.g. single-tenant in a multi-tenant application, `SELECT
+> * FROM table WHERE tenant_id = X`) is executed without a transaction block,
+> then master\_query\_host\_name and master\_query\_host\_port columns will be
+> NULL in citus\_worker\_stat\_activity.
+
+Here are examples of useful queries you can build using
+`citus_worker_stat_activity`:
+
+```postgresql
+-- active queries' wait events on a certain node
+
+SELECT query, wait_event_type, wait_event
+ FROM citus_worker_stat_activity
+ WHERE query_hostname = 'xxxx' and state='active';
+
+-- active queries' top wait events
+
+SELECT wait_event, wait_event_type, count(*)
+ FROM citus_worker_stat_activity
+ WHERE state='active'
+ GROUP BY wait_event, wait_event_type
+ ORDER BY count(*) desc;
+
+-- total internal connections generated per node by Citus
+
+SELECT query_hostname, count(*)
+ FROM citus_worker_stat_activity
+ GROUP BY query_hostname;
+
+-- total internal active connections generated per node by Citus
+
+SELECT query_hostname, count(*)
+ FROM citus_worker_stat_activity
+ WHERE state='active'
+ GROUP BY query_hostname;
+```
+
+The next view is `citus_lock_waits`. To see how it works, we can generate a
+locking situation manually. First we'll set up a test table from the
+coordinator:
+
+```postgresql
+CREATE TABLE numbers AS
+ SELECT i, 0 AS j FROM generate_series(1,10) AS i;
+SELECT create_distributed_table('numbers', 'i');
+```
+
+Then, using two sessions on the coordinator, we can run this sequence of
+statements:
+
+```postgresql
+-- session 1                              -- session 2
+----------------------------------------  ----------------------------------------
+BEGIN;
+UPDATE numbers SET j = 2 WHERE i = 1;
+                                           BEGIN;
+                                           UPDATE numbers SET j = 3 WHERE i = 1;
+                                           -- (this blocks)
+```
+
+The `citus_lock_waits` view shows the situation.
+
+```postgresql
+SELECT * FROM citus_lock_waits;
+
+-[ RECORD 1 ]-+-
+waiting_pid | 88624
+blocking_pid | 88615
+blocked_statement | UPDATE numbers SET j = 3 WHERE i = 1;
+current_statement_in_blocking_process | UPDATE numbers SET j = 2 WHERE i = 1;
+waiting_node_id | 0
+blocking_node_id | 0
+waiting_node_name | coordinator_host
+blocking_node_name | coordinator_host
+waiting_node_port | 5432
+blocking_node_port | 5432
+```
+
+In this example the queries originated on the coordinator, but the view
+can also list locks between queries originating on workers (executed
+with Hyperscale (Citus) MX for instance).
+
+## Next steps
+
+* Learn how some [Hyperscale (Citus) functions](reference-functions.md) alter system tables
+* Review the concepts of [nodes and tables](concepts-nodes.md)
postgresql Reference Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/reference-parameters.md
+
+ Title: Server parameters - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Parameters in the Hyperscale (Citus) SQL API
+++++ Last updated : 08/10/2020++
+# Server parameters
+
+There are various server parameters that affect the behavior of Hyperscale
+(Citus), both from standard PostgreSQL, and specific to Hyperscale (Citus).
+These parameters can be set in the Azure portal for a Hyperscale (Citus) server
+group. Under the **Settings** category, choose **Worker node parameters** or
+**Coordinator node parameters**. These pages allow you to set parameters for
+all worker nodes, or just for the coordinator node.
+
+## Hyperscale (Citus) parameters
+
+> [!NOTE]
+>
+> Hyperscale (Citus) server groups running older versions of the Citus Engine may not
+> offer all the parameters listed below.
+
+### General configuration
+
+#### citus.use\_secondary\_nodes (enum)
+
+Sets the policy to use when choosing nodes for SELECT queries. If it
+is set to 'always', then the planner will query only nodes that are
+marked as 'secondary' noderole in
+[pg_dist_node](reference-metadata.md#worker-node-table).
+
+The supported values for this enum are:
+
+- **never:** (default) All reads happen on primary nodes.
+- **always:** Reads run against secondary nodes instead, and
+ insert/update statements are disabled.
+
+#### citus.cluster\_name (text)
+
+Informs the coordinator node planner which cluster it coordinates. Once
+cluster\_name is set, the planner will query worker nodes in that
+cluster alone.
+
+#### citus.enable\_version\_checks (boolean)
+
+Upgrading the Hyperscale (Citus) version requires a server restart (to pick up the
+new shared library), followed by the ALTER EXTENSION UPDATE command. Failure to
+execute both steps could cause errors or crashes. Hyperscale (Citus) therefore
+validates that the version of the code and that of the extension match, and
+errors out if they don't.
+
+This value defaults to true, and is effective on the coordinator. In
+rare cases, complex upgrade processes may require setting this parameter
+to false, thus disabling the check.
+
+#### citus.log\_distributed\_deadlock\_detection (boolean)
+
+Whether to log distributed deadlock detection-related processing in the
+server log. It defaults to false.
+
+#### citus.distributed\_deadlock\_detection\_factor (floating point)
+
+Sets the time to wait before checking for distributed deadlocks. In particular,
+the time to wait will be this value multiplied by PostgreSQL's
+[deadlock\_timeout](https://www.postgresql.org/docs/current/static/runtime-config-locks.html)
+setting. The default value is `2`. A value of `-1` disables distributed
+deadlock detection.
+
+#### citus.node\_connection\_timeout (integer)
+
+The `citus.node_connection_timeout` GUC sets the maximum duration (in
+milliseconds) to wait for connection establishment. Hyperscale (Citus) raises
+an error if the timeout elapses before at least one worker connection is
+established. This GUC affects connections from the coordinator to workers, and
+workers to each other.
+
+- Default: five seconds
+- Minimum: 10 milliseconds
+- Maximum: one hour
+
+```postgresql
+-- set to 30 seconds
+ALTER DATABASE foo
+SET citus.node_connection_timeout = 30000;
+```
+
+### Query Statistics
+
+#### citus.stat\_statements\_purge\_interval (integer)
+
+Sets the frequency at which the maintenance daemon removes records from
+[citus_stat_statements](reference-metadata.md#query-statistics-table)
+that are unmatched in `pg_stat_statements`. This configuration value sets the
+time interval between purges in seconds, with a default value of 10. A value of
+0 disables the purges.
+
+```psql
+SET citus.stat_statements_purge_interval TO 5;
+```
+
+This parameter is effective on the coordinator and can be changed at
+runtime.
+
+### Data Loading
+
+#### citus.multi\_shard\_commit\_protocol (enum)
+
+Sets the commit protocol to use when performing COPY on a hash distributed
+table. On each individual shard placement, the COPY is performed in a
+transaction block to ensure that no data is ingested if an error occurs during
+the COPY. However, there is a particular failure case in which the COPY
+succeeds on all placements, but a (hardware) failure occurs before all
+transactions commit. This parameter can be used to prevent data loss in that
+case by choosing between the following commit protocols:
+
+- **2pc:** (default) The transactions in which COPY is performed on
+ the shard placements are first prepared using PostgreSQL's
+ [two-phase
+ commit](http://www.postgresql.org/docs/current/static/sql-prepare-transaction.html)
+ and then committed. Failed commits can be manually recovered or
+ aborted using COMMIT PREPARED or ROLLBACK PREPARED, respectively.
+ When using 2pc,
+ [max\_prepared\_transactions](http://www.postgresql.org/docs/current/static/runtime-config-resource.html)
+ should be increased on all the workers, typically to the same value
+ as max\_connections.
+- **1pc:** The transactions in which COPY is performed on the shard
+ placements are committed in a single round. Data may be lost if a
+ commit fails after COPY succeeds on all placements (rare).
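+
+As a sketch, and assuming the parameter is settable in your session, a bulk load that can tolerate the (rare) failure case could switch to single-phase commit temporarily:
+
+```postgresql
+SET citus.multi_shard_commit_protocol TO '1pc';
+-- run the COPY here
+SET citus.multi_shard_commit_protocol TO '2pc';
+```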
+
+#### citus.shard\_replication\_factor (integer)
+
+Sets the replication factor for shards, that is, the number of nodes on which
+shards will be placed, and defaults to 1. This parameter can be set at run-time
+and is effective on the coordinator. The ideal value for this parameter depends
+on the size of the cluster and rate of node failure. For example, you may want
+to increase this replication factor if you run large clusters and observe node
+failures on a more frequent basis.
+
+#### citus.shard\_count (integer)
+
+Sets the shard count for hash-partitioned tables and defaults to 32. This
+value is used by the
+[create_distributed_table](reference-functions.md#create_distributed_table)
+UDF when creating hash-partitioned tables. This parameter can be set at
+run-time and is effective on the coordinator.
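+
+As an illustrative sketch (the `events` table and `tenant_id` column are hypothetical), the new shard count applies to tables distributed after the setting is changed:
+
+```postgresql
+SET citus.shard_count = 64;
+SELECT create_distributed_table('events', 'tenant_id');  -- created with 64 shards
+```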
+
+#### citus.shard\_max\_size (integer)
+
+Sets the maximum size to which a shard will grow before it gets split
+and defaults to 1 GB. When the source file's size (which is used for
+staging) for one shard exceeds this configuration value, the database
+ensures that a new shard gets created. This parameter can be set at
+run-time and is effective on the coordinator.
+
+### Planner Configuration
+
+#### citus.limit\_clause\_row\_fetch\_count (integer)
+
+Sets the number of rows to fetch per task for limit clause optimization.
+In some cases, select queries with limit clauses may need to fetch all
+rows from each task to generate results. In those cases, and where an
+approximation would produce meaningful results, this configuration value
+sets the number of rows to fetch from each shard. Limit approximations
+are disabled by default and this parameter is set to -1. This value can
+be set at run-time and is effective on the coordinator.
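+
+A hedged example, with a hypothetical `events` table, for a case where an approximate top-N answer is acceptable:
+
+```postgresql
+SET citus.limit_clause_row_fetch_count = 10000;
+-- each task returns at most 10000 rows, so the result is approximate
+SELECT user_id, count(*)
+FROM events
+GROUP BY user_id
+ORDER BY count(*) DESC
+LIMIT 10;
+```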
+
+#### citus.count\_distinct\_error\_rate (floating point)
+
+Hyperscale (Citus) can calculate count(distinct) approximates using the
+postgresql-hll extension. This configuration entry sets the desired
+error rate when calculating count(distinct). 0.0, which is the default,
+disables approximations for count(distinct); and 1.0 provides no
+guarantees about the accuracy of results. We recommend setting this
+parameter to 0.005 for best results. This value can be set at run-time
+and is effective on the coordinator.
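+
+A short sketch, assuming the `hll` extension is enabled in the server group and a hypothetical `events` table:
+
+```postgresql
+SET citus.count_distinct_error_rate TO 0.005;
+SELECT count(DISTINCT user_id) FROM events;  -- now computed approximately via HLL
+```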
+
+#### citus.task\_assignment\_policy (enum)
+
+> [!NOTE]
+> This GUC is applicable only when
+> [shard_replication_factor](reference-parameters.md#citusshard_replication_factor-integer)
+> is greater than one, or for queries against
+> [reference_tables](concepts-distributed-data.md#type-2-reference-tables).
+
+Sets the policy to use when assigning tasks to workers. The coordinator
+assigns tasks to workers based on shard locations. This configuration
+value specifies the policy to use when making these assignments.
+Currently, there are three possible task assignment policies that can
+be used.
+
+- **greedy:** The greedy policy is the default and aims to evenly
+ distribute tasks across workers.
+- **round-robin:** The round-robin policy assigns tasks to workers in
+ a round-robin fashion alternating between different replicas. This policy
+ enables better cluster utilization when the shard count for a
+ table is low compared to the number of workers.
+- **first-replica:** The first-replica policy assigns tasks on the
+ basis of the insertion order of placements (replicas) for the
+ shards. In other words, the fragment query for a shard is assigned to the worker that has the first replica of that shard.
+ This method allows you to have strong guarantees about which shards
+ will be used on which nodes (that is, stronger memory residency
+ guarantees).
+
+This parameter can be set at run-time and is effective on the
+coordinator.
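+
+For example, assuming the parameter can be changed in your session:
+
+```postgresql
+-- spread tasks across shard replicas instead of always picking the first placement
+SET citus.task_assignment_policy TO 'round-robin';
+```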
+
+### Intermediate Data Transfer
+
+#### citus.binary\_worker\_copy\_format (boolean)
+
+Use the binary copy format to transfer intermediate data between workers.
+During large table joins, Hyperscale (Citus) may have to dynamically
+repartition and shuffle data between different workers. By default, this data
+is transferred in text format. Enabling this parameter instructs the database
+to use PostgreSQL's binary serialization format to transfer this data. This
+parameter is effective on the workers and needs to be changed in the
+postgresql.conf file. After editing the config file, users can send a SIGHUP
+signal or restart the server for this change to take effect.
+
+#### citus.binary\_master\_copy\_format (boolean)
+
+Use the binary copy format to transfer data between coordinator and the
+workers. When running distributed queries, the workers transfer their
+intermediate results to the coordinator for final aggregation. By default, this
+data is transferred in text format. Enabling this parameter instructs the
+database to use PostgreSQL's binary serialization format to transfer this data.
+This parameter can be set at runtime and is effective on the coordinator.
+
+#### citus.max\_intermediate\_result\_size (integer)
+
+The maximum size in KB of intermediate results for CTEs that are unable
+to be pushed down to worker nodes for execution, and for complex
+subqueries. The default is 1 GB, and a value of -1 means no limit.
+Queries exceeding the limit will be canceled and produce an error
+message.
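+
+For example, to cap intermediate results at 256 MB for the current session (the value is expressed in KB):
+
+```postgresql
+SET citus.max_intermediate_result_size = 262144;  -- 256 MB
+```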
+
+### DDL
+
+#### citus.enable\_ddl\_propagation (boolean)
+
+Specifies whether to automatically propagate DDL changes from the coordinator
+to all workers. The default value is true. Because some schema changes require
+an access exclusive lock on tables, and because the automatic propagation
+applies to all workers sequentially, it can make a Hyperscale (Citus) cluster
+temporarily less responsive. You may choose to disable this setting and
+propagate changes manually.
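+
+A minimal sketch of turning the behavior off for one session:
+
+```postgresql
+-- disable automatic propagation for this session; subsequent DDL affects
+-- only the coordinator and must be applied to the workers separately
+SET citus.enable_ddl_propagation TO off;
+```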
+
+### Executor Configuration
+
+#### General
+
+##### citus.all\_modifications\_commutative
+
+Hyperscale (Citus) enforces commutativity rules and acquires appropriate locks
+for modify operations in order to guarantee correctness of behavior. For
+example, it assumes that an INSERT statement commutes with another INSERT
+statement, but not with an UPDATE or DELETE statement. Similarly, it assumes
+that an UPDATE or DELETE statement does not commute with another UPDATE or
+DELETE statement. This precaution means that UPDATEs and DELETEs require
+Hyperscale (Citus) to acquire stronger locks.
+
+If you have UPDATE statements that are commutative with your INSERTs or
+other UPDATEs, then you can relax these commutativity assumptions by
+setting this parameter to true. When this parameter is set to true, all
+commands are considered commutative and claim a shared lock, which can
+improve overall throughput. This parameter can be set at runtime and is
+effective on the coordinator.
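+
+For example, if the workload's updates are known to commute with each other:
+
+```postgresql
+-- treat all modifications as commutative so they take a shared lock
+SET citus.all_modifications_commutative TO true;
+```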
+
+##### citus.remote\_task\_check\_interval (integer)
+
+Sets the frequency at which Hyperscale (Citus) checks for statuses of jobs
+managed by the task tracker executor. It defaults to 10 ms. The coordinator
+assigns tasks to workers, and then regularly checks with them about each
+task's progress. This configuration value sets the time interval between two
+consecutive checks. This parameter is effective on the coordinator and can be
+set at runtime.
+
+##### citus.task\_executor\_type (enum)
+
+Hyperscale (Citus) has three executor types for running distributed SELECT
+queries. The desired executor can be selected by setting this configuration
+parameter. The accepted values for this parameter are:
+
+- **adaptive:** The default. It is optimal for fast responses to
+ queries that involve aggregations and colocated joins spanning
+ across multiple shards.
+- **task-tracker:** The task-tracker executor is well suited for long
+ running, complex queries that require shuffling of data across
+ worker nodes and efficient resource management.
+- **real-time:** (deprecated) Serves a similar purpose as the adaptive
+ executor, but is less flexible and can cause more connection
+ pressure on worker nodes.
+
+This parameter can be set at run-time and is effective on the coordinator.
+
+##### citus.multi\_task\_query\_log\_level (enum) {#multi_task_logging}
+
+Sets a log-level for any query that generates more than one task (that is,
+which hits more than one shard). Logging is useful during a multi-tenant
+application migration, as you can choose to error or warn for such queries, to
+find them and add a tenant\_id filter to them. This parameter can be set at
+runtime and is effective on the coordinator. The default value for this
+parameter is 'off'.
+
+The supported values for this enum are:
+
+- **off:** Turn off logging any queries that generate multiple tasks
+ (that is, span multiple shards)
+- **debug:** Logs statement at DEBUG severity level.
+- **log:** Logs statement at LOG severity level. The log line will
+ include the SQL query that was run.
+- **notice:** Logs statement at NOTICE severity level.
+- **warning:** Logs statement at WARNING severity level.
+- **error:** Logs statement at ERROR severity level.
+
+It may be useful to use `error` during development testing,
+and a lower log-level like `log` during actual production deployment.
+Choosing `log` will cause multi-task queries to appear in the database
+logs with the query itself shown after "STATEMENT".
+
+```
+LOG: multi-task query about to be executed
+HINT: Queries are split to multiple tasks if they have to be split into several queries on the workers.
+STATEMENT: select * from foo;
+```
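+
+As a sketch, assuming a distributed table named `foo` like the one in the statistics example earlier:
+
+```postgresql
+SET citus.multi_task_query_log_level = 'log';
+SELECT * FROM foo;  -- spans multiple shards, so the statement is logged
+```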
+
+##### citus.enable\_repartition\_joins (boolean)
+
+Ordinarily, attempting to perform repartition joins with the adaptive executor
+will fail with an error message. However, setting
+`citus.enable_repartition_joins` to true allows Hyperscale (Citus) to
+temporarily switch into the task-tracker executor to perform the join. The
+default value is false.
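+
+A hedged example with hypothetical tables that are not colocated on the join key:
+
+```postgresql
+SET citus.enable_repartition_joins TO true;
+-- the join below is not on the distribution column, so Citus repartitions the data
+SELECT count(*)
+FROM events e
+JOIN sessions s ON e.session_id = s.session_id;
+```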
+
+#### Task tracker executor configuration
+
+##### citus.task\_tracker\_delay (integer)
+
+This parameter sets the task tracker sleep time between task management rounds
+and defaults to 200 ms. The task tracker process wakes up regularly, walks over
+all tasks assigned to it, and schedules and executes these tasks. Then, the
+task tracker sleeps for a time period before walking over these tasks again.
+This configuration value determines the length of that sleeping period. This
+parameter is effective on the workers and needs to be changed in the
+postgresql.conf file. After editing the config file, users can send a SIGHUP
+signal or restart the server for the change to take effect.
+
+This parameter can be decreased to trim the delay caused by the task
+tracker executor by reducing the time gap between the management rounds.
+Decreasing the delay is useful in cases when the shard queries are short and
+hence update their status regularly.
+
+##### citus.max\_assign\_task\_batch\_size (integer)
+
+The task tracker executor on the coordinator synchronously assigns tasks in
+batches to the daemon on the workers. This parameter sets the maximum number of
+tasks to assign in a single batch. Choosing a larger batch size allows for
+faster task assignment. However, if the number of workers is large, then it may
+take longer for all workers to get tasks. This parameter can be set at runtime
+and is effective on the coordinator.
+
+##### citus.max\_running\_tasks\_per\_node (integer)
+
+The task tracker process schedules and executes the tasks assigned to it as
+appropriate. This configuration value sets the maximum number of tasks to
+execute concurrently on one node at any given time and defaults to 8.
+
+The limit ensures that you don't have many tasks hitting disk at the same
+time, and helps in avoiding disk I/O contention. If your queries are served
+from memory or SSDs, you can increase max\_running\_tasks\_per\_node without
+much concern.
+
+##### citus.partition\_buffer\_size (integer)
+
+Sets the buffer size to use for partition operations and defaults to 8 MB.
+Hyperscale (Citus) allows for table data to be repartitioned into multiple
+files when two large tables are being joined. After this partition buffer fills
+up, the repartitioned data is flushed into files on disk. This configuration
+entry can be set at run-time and is effective on the workers.
+
+#### Explain output
+
+##### citus.explain\_all\_tasks (boolean)
+
+By default, Hyperscale (Citus) shows the output of a single, arbitrary task
+when running
+[EXPLAIN](http://www.postgresql.org/docs/current/static/sql-explain.html) on a
+distributed query. In most cases, the explain output will be similar across
+tasks. Occasionally, some of the tasks will be planned differently or have much
+higher execution times. In those cases, it can be useful to enable this
+parameter, after which the EXPLAIN output will include all tasks. Explaining
+all tasks may cause the EXPLAIN to take longer.
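+
+For example, assuming a distributed table such as the `users_table` from the monitoring examples earlier:
+
+```postgresql
+SET citus.explain_all_tasks = true;
+EXPLAIN SELECT count(*) FROM users_table;  -- output now includes every task's plan
+```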
+
+## PostgreSQL parameters
+
+* [DateStyle](https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-OUTPUT) - Sets the display format for date and time values
+* [IntervalStyle](https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-OUTPUT) - Sets the display format for interval values
+* [TimeZone](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-TIMEZONE) - Sets the time zone for displaying and interpreting time stamps
+* [application_name](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-APPLICATION-NAME) - Sets the application name to be reported in statistics and logs
+* [array_nulls](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-ARRAY-NULLS) - Enables input of NULL elements in arrays
+* [autovacuum](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM) - Starts the autovacuum subprocess
+* [autovacuum_analyze_scale_factor](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-ANALYZE-SCALE-FACTOR) - Number of tuple inserts, updates, or deletes prior to analyze as a fraction of reltuples
+* [autovacuum_analyze_threshold](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-ANALYZE-THRESHOLD) - Minimum number of tuple inserts, updates, or deletes prior to analyze
+* [autovacuum_naptime](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-NAPTIME) - Time to sleep between autovacuum runs
+* [autovacuum_vacuum_cost_delay](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-COST-DELAY) - Vacuum cost delay in milliseconds, for autovacuum
+* [autovacuum_vacuum_cost_limit](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-COST-LIMIT) - Vacuum cost amount available before napping, for autovacuum
+* [autovacuum_vacuum_scale_factor](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-SCALE-FACTOR) - Number of tuple updates or deletes prior to vacuum as a fraction of reltuples
+* [autovacuum_vacuum_threshold](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-THRESHOLD) - Minimum number of tuple updates or deletes prior to vacuum
+* [autovacuum_work_mem](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-AUTOVACUUM-WORK-MEM) - Sets the maximum memory to be used by each autovacuum worker process
+* [backend_flush_after](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BACKEND-FLUSH-AFTER) - Number of pages after which previously performed writes are flushed to disk
+* [backslash_quote](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-BACKSLASH-QUOTE) - Sets whether "\'" is allowed in string literals
+* [bgwriter_delay](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-DELAY) - Background writer sleep time between rounds
+* [bgwriter_flush_after](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-FLUSH-AFTER) - Number of pages after which previously performed writes are flushed to disk
+* [bgwriter_lru_maxpages](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-LRU-MAXPAGES) - Background writer maximum number of LRU pages to flush per round
+* [bgwriter_lru_multiplier](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-BGWRITER-LRU-MULTIPLIER) - Multiple of the average buffer usage to free per round
+* [bytea_output](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-BYTEA-OUTPUT) - Sets the output format for bytea
+* [check_function_bodies](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-CHECK-FUNCTION-BODIES) - Checks function bodies during CREATE FUNCTION
+* [checkpoint_completion_target](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-CHECKPOINT-COMPLETION-TARGET) - Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval
+* [checkpoint_timeout](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-CHECKPOINT-TIMEOUT) - Sets the maximum time between automatic WAL checkpoints
+* [checkpoint_warning](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-CHECKPOINT-WARNING) - Enables warnings if checkpoint segments are filled more frequently than this
+* [client_encoding](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-CLIENT-ENCODING) - Sets the client's character set encoding
+* [client_min_messages](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-CLIENT-MIN-MESSAGES) - Sets the message levels that are sent to the client
+* [commit_delay](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-COMMIT-DELAY) - Sets the delay in microseconds between transaction commit and flushing WAL to disk
+* [commit_siblings](https://www.postgresql.org/docs/12/runtime-config-wal.html#GUC-COMMIT-SIBLINGS) - Sets the minimum concurrent open transactions before performing commit_delay
+* [constraint_exclusion](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION) - Enables the planner to use constraints to optimize queries
+* [cpu_index_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CPU-INDEX-TUPLE-COST) - Sets the planner's estimate of the cost of processing each index entry during an index scan
+* [cpu_operator_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CPU-OPERATOR-COST) - Sets the planner's estimate of the cost of processing each operator or function call
+* [cpu_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CPU-TUPLE-COST) - Sets the planner's estimate of the cost of processing each tuple (row)
+* [cursor_tuple_fraction](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-CURSOR-TUPLE-FRACTION) - Sets the planner's estimate of the fraction of a cursor's rows that will be retrieved
+* [deadlock_timeout](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-DEADLOCK-TIMEOUT) - Sets the amount of time, in milliseconds, to wait on a lock before checking for deadlock
+* [debug_pretty_print](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.3.1.3) - Indents parse and plan tree displays
+* [debug_print_parse](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.2.1.3) - Logs each query's parse tree
+* [debug_print_plan](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.2.1.3) - Logs each query's execution plan
+* [debug_print_rewritten](https://www.postgresql.org/docs/current/runtime-config-logging.html#id-1.6.6.11.5.2.2.1.3) - Logs each query's rewritten parse tree
+* [default_statistics_target](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET) - Sets the default statistics target
+* [default_tablespace](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TABLESPACE) - Sets the default tablespace to create tables and indexes in
+* [default_text_search_config](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TEXT-SEARCH-CONFIG) - Sets default text search configuration
+* [default_transaction_deferrable](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-DEFERRABLE) - Sets the default deferrable status of new transactions
+* [default_transaction_isolation](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-ISOLATION) - Sets the transaction isolation level of each new transaction
+* [default_transaction_read_only](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-READ-ONLY) - Sets the default read-only status of new transactions
+* default_with_oids - Creates new tables with OIDs by default
+* [effective_cache_size](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE) - Sets the planner's assumption about the size of the disk cache
+* [enable_bitmapscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-BITMAPSCAN) - Enables the planner's use of bitmap-scan plans
+* [enable_gathermerge](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-GATHERMERGE) - Enables the planner's use of gather merge plans
+* [enable_hashagg](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-HASHAGG) - Enables the planner's use of hashed aggregation plans
+* [enable_hashjoin](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-HASHJOIN) - Enables the planner's use of hash join plans
+* [enable_indexonlyscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-INDEXONLYSCAN) - Enables the planner's use of index-only-scan plans
+* [enable_indexscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-INDEXSCAN) - Enables the planner's use of index-scan plans
+* [enable_material](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-MATERIAL) - Enables the planner's use of materialization
+* [enable_mergejoin](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-MERGEJOIN) - Enables the planner's use of merge join plans
+* [enable_nestloop](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-NESTLOOP) - Enables the planner's use of nested loop join plans
+* [enable_seqscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-SEQSCAN) - Enables the planner's use of sequential-scan plans
+* [enable_sort](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-SORT) - Enables the planner's use of explicit sort steps
+* [enable_tidscan](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-ENABLE-TIDSCAN) - Enables the planner's use of TID scan plans
+* [escape_string_warning](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-ESCAPE-STRING-WARNING) - Warns about backslash escapes in ordinary string literals
+* [exit_on_error](https://www.postgresql.org/docs/current/runtime-config-error-handling.html#GUC-EXIT-ON-ERROR) - Terminates session on any error
+* [extra_float_digits](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-EXTRA-FLOAT-DIGITS) - Sets the number of digits displayed for floating-point values
+* [force_parallel_mode](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-FORCE-PARALLEL-MODE) - Forces use of parallel query facilities
+* [from_collapse_limit](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-FROM-COLLAPSE-LIMIT) - Sets the FROM-list size beyond which subqueries are not collapsed
+* [geqo](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO) - Enables genetic query optimization
+* [geqo_effort](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-EFFORT) - GEQO: effort is used to set the default for other GEQO parameters
+* [geqo_generations](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-GENERATIONS) - GEQO: number of iterations of the algorithm
+* [geqo_pool_size](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-POOL-SIZE) - GEQO: number of individuals in the population
+* [geqo_seed](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-SEED) - GEQO: seed for random path selection
+* [geqo_selection_bias](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-SELECTION-BIAS) - GEQO: selective pressure within the population
+* [geqo_threshold](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-GEQO-THRESHOLD) - Sets the threshold of FROM items beyond which GEQO is used
+* [gin_fuzzy_search_limit](https://www.postgresql.org/docs/current/runtime-config-client.html#id-1.6.6.14.5.2.2.1.3) - Sets the maximum allowed result for exact search by GIN
+* [gin_pending_list_limit](https://www.postgresql.org/docs/current/runtime-config-client.html#id-1.6.6.14.2.2.23.1.3) - Sets the maximum size of the pending list for GIN index
+* [idle_in_transaction_session_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT) - Sets the maximum allowed duration of any idling transaction
+* [join_collapse_limit](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-JOIN-COLLAPSE-LIMIT) - Sets the FROM-list size beyond which JOIN constructs are not flattened
+* [lc_monetary](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LC-MONETARY) - Sets the locale for formatting monetary amounts
+* [lc_numeric](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LC-NUMERIC) - Sets the locale for formatting numbers
+* [lo_compat_privileges](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-LO-COMPAT-PRIVILEGES) - Enables backward compatibility mode for privilege checks on large objects
+* [lock_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-LOCK-TIMEOUT) - Sets the maximum allowed duration (in milliseconds) of any wait for a lock. 0 turns this off
+* [log_autovacuum_min_duration](https://www.postgresql.org/docs/current/runtime-config-autovacuum.html#) - Sets the minimum execution time above which autovacuum actions will be logged
+* [log_checkpoints](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-CHECKPOINTS) - Logs each checkpoint
+* [log_connections](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-CONNECTIONS) - Logs each successful connection
+* [log_destination](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-DESTINATION) - Sets the destination for server log output
+* [log_disconnections](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-DISCONNECTIONS) - Logs end of a session, including duration
+* [log_duration](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-DURATION) - Logs the duration of each completed SQL statement
+* [log_error_verbosity](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-ERROR-VERBOSITY) - Sets the verbosity of logged messages
+* [log_lock_waits](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-LOCK-WAITS) - Logs long lock waits
+* [log_min_duration_statement](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-DURATION-STATEMENT) - Sets the minimum execution time (in milliseconds) above which statements will be logged. -1 disables logging statement durations
+* [log_min_error_statement](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-ERROR-STATEMENT) - Causes all statements generating error at or above this level to be logged
+* [log_min_messages](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-MIN-MESSAGES) - Sets the message levels that are logged
+* [log_replication_commands](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-REPLICATION-COMMANDS) - Logs each replication command
+* [log_statement](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-STATEMENT) - Sets the type of statements logged
+* [log_statement_stats](https://www.postgresql.org/docs/current/runtime-config-statistics.html#id-1.6.6.12.3.2.1.1.3) - For each query, writes cumulative performance statistics to the server log
+* [log_temp_files](https://www.postgresql.org/docs/current/runtime-config-logging.html#GUC-LOG-TEMP-FILES) - Logs the use of temporary files larger than this number of kilobytes
+* [maintenance_work_mem](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM) - Sets the maximum memory to be used for maintenance operations
+* [max_parallel_workers](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS) - Sets the maximum number of parallel workers than can be active at one time
+* [max_parallel_workers_per_gather](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS-PER-GATHER) - Sets the maximum number of parallel processes per executor node
+* [max_pred_locks_per_page](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-PAGE) - Sets the maximum number of predicate-locked tuples per page
+* [max_pred_locks_per_relation](https://www.postgresql.org/docs/current/runtime-config-locks.html#GUC-MAX-PRED-LOCKS-PER-RELATION) - Sets the maximum number of predicate-locked pages and tuples per relation
+* [max_standby_archive_delay](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-STANDBY-ARCHIVE-DELAY) - Sets the maximum delay before canceling queries when a hot standby server is processing archived WAL data
+* [max_standby_streaming_delay](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-STANDBY-STREAMING-DELAY) - Sets the maximum delay before canceling queries when a hot standby server is processing streamed WAL data
+* [max_sync_workers_per_subscription](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-SYNC-WORKERS-PER-SUBSCRIPTION) - Maximum number of table synchronization workers per subscription
+* [max_wal_size](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-MAX-WAL-SIZE) - Sets the WAL size that triggers a checkpoint
+* [min_parallel_index_scan_size](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-MIN-PARALLEL-INDEX-SCAN-SIZE) - Sets the minimum amount of index data for a parallel scan
+* [min_wal_size](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-MIN-WAL-SIZE) - Sets the minimum size to shrink the WAL to
+* [operator_precedence_warning](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-OPERATOR-PRECEDENCE-WARNING) - Emits a warning for constructs that changed meaning since PostgreSQL 9.4
+* [parallel_setup_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-SETUP-COST) - Sets the planner's estimate of the cost of starting up worker processes for parallel query
+* [parallel_tuple_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-PARALLEL-TUPLE-COST) - Sets the planner's estimate of the cost of passing each tuple (row) from worker to master backend
+* [pg_stat_statements.save](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Saves pg_stat_statements statistics across server shutdowns
+* [pg_stat_statements.track](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Selects which statements are tracked by pg_stat_statements
+* [pg_stat_statements.track_utility](https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.38.8) - Selects whether utility commands are tracked by pg_stat_statements
+* [quote_all_identifiers](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-QUOTE-ALL-IDENTIFIERS) - When generating SQL fragments, quotes all identifiers
+* [random_page_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-RANDOM-PAGE-COST) - Sets the planner's estimate of the cost of a nonsequentially fetched disk page
+* [row_security](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-ROW-SECURITY) - Enables row security
+* [search_path](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SEARCH-PATH) - Sets the schema search order for names that are not schema-qualified
+* [seq_page_cost](https://www.postgresql.org/docs/current/runtime-config-query.html#GUC-SEQ-PAGE-COST) - Sets the planner's estimate of the cost of a sequentially fetched disk page
+* [session_replication_role](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SESSION-REPLICATION-ROLE) - Sets the session's behavior for triggers and rewrite rules
+* [standard_conforming_strings](https://www.postgresql.org/docs/current/runtime-config-compatible.html#id-1.6.6.16.2.2.7.1.3) - Causes '...' strings to treat backslashes literally
+* [statement_timeout](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-STATEMENT-TIMEOUT) - Sets the maximum allowed duration (in milliseconds) of any statement. 0 turns this off
+* [synchronize_seqscans](https://www.postgresql.org/docs/current/runtime-config-compatible.html#id-1.6.6.16.2.2.8.1.3) - Enables synchronized sequential scans
+* [synchronous_commit](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT) - Sets the current transaction's synchronization level
+* [tcp_keepalives_count](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-TCP-KEEPALIVES-COUNT) - Maximum number of TCP keepalive retransmits
+* [tcp_keepalives_idle](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-TCP-KEEPALIVES-IDLE) - Time between issuing TCP keepalives
+* [tcp_keepalives_interval](https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-TCP-KEEPALIVES-INTERVAL) - Time between TCP keepalive retransmits
+* [temp_buffers](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-TEMP-BUFFERS) - Sets the maximum number of temporary buffers used by each database session
+* [temp_tablespaces](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-TEMP-TABLESPACES) - Sets the tablespace(s) to use for temporary tables and sort files
+* [track_activities](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-ACTIVITIES) - Collects information about executing commands
+* [track_counts](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-COUNTS) - Collects statistics on database activity
+* [track_functions](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-FUNCTIONS) - Collects function-level statistics on database activity
+* [track_io_timing](https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-IO-TIMING) - Collects timing statistics for database I/O activity
+* [transform_null_equals](https://www.postgresql.org/docs/current/runtime-config-compatible.html#GUC-TRANSFORM-NULL-EQUALS) - Treats "expr=NULL" as "expr IS NULL"
+* [vacuum_cost_delay](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-DELAY) - Vacuum cost delay in milliseconds
+* [vacuum_cost_limit](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-LIMIT) - Vacuum cost amount available before napping
+* [vacuum_cost_page_dirty](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-PAGE-DIRTY) - Vacuum cost for a page dirtied by vacuum
+* [vacuum_cost_page_hit](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-PAGE-HIT) - Vacuum cost for a page found in the buffer cache
+* [vacuum_cost_page_miss](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-COST-PAGE-MISS) - Vacuum cost for a page not found in the buffer cache
+* [vacuum_defer_cleanup_age](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-VACUUM-DEFER-CLEANUP-AGE) - Number of transactions by which VACUUM and HOT cleanup should be deferred, if any
+* [vacuum_freeze_min_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-FREEZE-MIN-AGE) - Minimum age at which VACUUM should freeze a table row
+* [vacuum_freeze_table_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-FREEZE-TABLE-AGE) - Age at which VACUUM should scan whole table to freeze tuples
+* [vacuum_multixact_freeze_min_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-MULTIXACT-FREEZE-MIN-AGE) - Minimum age at which VACUUM should freeze a MultiXactId in a table row
+* [vacuum_multixact_freeze_table_age](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-VACUUM-MULTIXACT-FREEZE-TABLE-AGE) - Multixact age at which VACUUM should scan whole table to freeze tuples
+* [wal_receiver_status_interval](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-WAL-RECEIVER-STATUS-INTERVAL) - Sets the maximum interval between WAL receiver status reports to the primary
+* [wal_writer_delay](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-WRITER-DELAY) - Time between WAL flushes performed in the WAL writer
+* [wal_writer_flush_after](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-WRITER-FLUSH-AFTER) - Amount of WAL written out by WAL writer that triggers a flush
+* [work_mem](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-WORK-MEM) - Sets the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files
+* [xmlbinary](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-XMLBINARY) - Sets how binary values are to be encoded in XML
+* [xmloption](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-XMLOPTION) - Sets whether XML data in implicit parsing and serialization operations is to be considered as documents or content fragments
+
+## Next steps
+
+* Another form of configuration, besides server parameters, are the resource [configuration options](concepts-configuration-options.md) in a Hyperscale (Citus) server group.
+* The underlying PostgreSQL data base also has [configuration parameters](http://www.postgresql.org/docs/current/static/runtime-config.html).
postgresql Tutorial Design Database Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/tutorial-design-database-multi-tenant.md
+
+ Title: 'Tutorial: Design a multi-tenant database - Hyperscale (Citus) - Azure Database for PostgreSQL'
+description: This tutorial shows how to power a scalable multi-tenant application with Azure Database for PostgreSQL Hyperscale (Citus).
+++++
+ms.devlang: azurecli
+ Last updated : 05/14/2019
+#Customer intent: As a developer, I want to design a hyperscale database so that my multi-tenant application runs efficiently for all tenants.
++
+# Tutorial: design a multi-tenant database by using Azure Database for PostgreSQL - Hyperscale (Citus)
+
+In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to:
+
+> [!div class="checklist"]
+> * Create a Hyperscale (Citus) server group
+> * Use psql utility to create a schema
+> * Shard tables across nodes
+> * Ingest sample data
+> * Query tenant data
+> * Share data between tenants
+> * Customize the schema per-tenant
+
+## Prerequisites
++
+## Use psql utility to create a schema
+
+Once connected to Azure Database for PostgreSQL - Hyperscale (Citus) using psql, you can complete some basic tasks. This tutorial walks you through creating a web app that allows advertisers to track their campaigns.
+
+Multiple companies can use the app, so let's create a table to hold companies and another for their campaigns. In the psql console, run these commands:
+
+```sql
+CREATE TABLE companies (
+ id bigserial PRIMARY KEY,
+ name text NOT NULL,
+ image_url text,
+ created_at timestamp without time zone NOT NULL,
+ updated_at timestamp without time zone NOT NULL
+);
+
+CREATE TABLE campaigns (
+ id bigserial,
+ company_id bigint REFERENCES companies (id),
+ name text NOT NULL,
+ cost_model text NOT NULL,
+ state text NOT NULL,
+ monthly_budget bigint,
+ blacklisted_site_urls text[],
+ created_at timestamp without time zone NOT NULL,
+ updated_at timestamp without time zone NOT NULL,
+
+ PRIMARY KEY (company_id, id)
+);
+```
+
+>[!NOTE]
+> This article contains references to the term *blacklisted*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
+Each campaign will pay to run ads. Add a table for ads too, by running the following code in psql after the code above:
+
+```sql
+CREATE TABLE ads (
+ id bigserial,
+ company_id bigint,
+ campaign_id bigint,
+ name text NOT NULL,
+ image_url text,
+ target_url text,
+ impressions_count bigint DEFAULT 0,
+ clicks_count bigint DEFAULT 0,
+ created_at timestamp without time zone NOT NULL,
+ updated_at timestamp without time zone NOT NULL,
+
+ PRIMARY KEY (company_id, id),
+ FOREIGN KEY (company_id, campaign_id)
+ REFERENCES campaigns (company_id, id)
+);
+```
+
+Finally, we'll track statistics about clicks and impressions for each ad:
+
+```sql
+CREATE TABLE clicks (
+ id bigserial,
+ company_id bigint,
+ ad_id bigint,
+ clicked_at timestamp without time zone NOT NULL,
+ site_url text NOT NULL,
+ cost_per_click_usd numeric(20,10),
+ user_ip inet NOT NULL,
+ user_data jsonb NOT NULL,
+
+ PRIMARY KEY (company_id, id),
+ FOREIGN KEY (company_id, ad_id)
+ REFERENCES ads (company_id, id)
+);
+
+CREATE TABLE impressions (
+ id bigserial,
+ company_id bigint,
+ ad_id bigint,
+ seen_at timestamp without time zone NOT NULL,
+ site_url text NOT NULL,
+ cost_per_impression_usd numeric(20,10),
+ user_ip inet NOT NULL,
+ user_data jsonb NOT NULL,
+
+ PRIMARY KEY (company_id, id),
+ FOREIGN KEY (company_id, ad_id)
+ REFERENCES ads (company_id, id)
+);
+```
+
+You can see the newly created tables in the list of tables now in psql by running:
+
+```postgres
+\dt
+```
+
+Multi-tenant applications can enforce uniqueness only per tenant,
+which is why all primary and foreign keys include the company ID.
+
+## Shard tables across nodes
+
+A hyperscale deployment stores table rows on different nodes based on the value of a user-designated column. This "distribution column" marks which tenant owns which rows.
+
+Let's set the distribution column to be company\_id, the tenant
+identifier. In psql, run these functions:
+
+```sql
+SELECT create_distributed_table('companies', 'id');
+SELECT create_distributed_table('campaigns', 'company_id');
+SELECT create_distributed_table('ads', 'company_id');
+SELECT create_distributed_table('clicks', 'company_id');
+SELECT create_distributed_table('impressions', 'company_id');
+```
++
+## Ingest sample data
+
+Outside of psql now, in the normal command line, download sample data sets:
+
+```bash
+for dataset in companies campaigns ads clicks impressions geo_ips; do
+ curl -O https://examples.citusdata.com/mt_ref_arch/${dataset}.csv
+done
+```
+
+Back inside psql, bulk load the data. Be sure to run psql in the same directory where you downloaded the data files.
+
+```sql
+SET CLIENT_ENCODING TO 'utf8';
+
+\copy companies from 'companies.csv' with csv
+\copy campaigns from 'campaigns.csv' with csv
+\copy ads from 'ads.csv' with csv
+\copy clicks from 'clicks.csv' with csv
+\copy impressions from 'impressions.csv' with csv
+```
+
+This data will now be spread across worker nodes.
+
+## Query tenant data
+
+When the application requests data for a single tenant, the database
+can execute the query on a single worker node. Single-tenant queries
+filter by a single tenant ID. For example, the following query
+filters `company_id = 5` for ads and impressions. Try running it in
+psql to see the results.
+
+```sql
+SELECT a.campaign_id,
+ RANK() OVER (
+ PARTITION BY a.campaign_id
+ ORDER BY a.campaign_id, count(*) desc
+ ), count(*) as n_impressions, a.id
+ FROM ads as a
+ JOIN impressions as i
+ ON i.company_id = a.company_id
+ AND i.ad_id = a.id
+ WHERE a.company_id = 5
+GROUP BY a.campaign_id, a.id
+ORDER BY a.campaign_id, n_impressions desc;
+```
+
+## Share data between tenants
+
+Until now all tables have been distributed by `company_id`, but
+some data doesn't naturally "belong" to any tenant in particular,
+and can be shared. For instance, all companies in the example ad
+platform might want to get geographical information for their
+audience based on IP addresses.
+
+Create a table to hold shared geographic information. Run the following commands in psql:
+
+```sql
+CREATE TABLE geo_ips (
+ addrs cidr NOT NULL PRIMARY KEY,
+ latlon point NOT NULL
+ CHECK (-90 <= latlon[0] AND latlon[0] <= 90 AND
+ -180 <= latlon[1] AND latlon[1] <= 180)
+);
+CREATE INDEX ON geo_ips USING gist (addrs inet_ops);
+```
+
+Next make `geo_ips` a "reference table" to store a copy of the
+table on every worker node.
+
+```sql
+SELECT create_reference_table('geo_ips');
+```
+
+Load it with example data. Remember to run this command in psql from inside the directory where you downloaded the dataset.
+
+```sql
+\copy geo_ips from 'geo_ips.csv' with csv
+```
+
+Joining the clicks table with geo\_ips is efficient on all nodes.
+Here is a join to find the locations of everyone who clicked on ad
+290. Try running the query in psql.
+
+```sql
+SELECT c.id, clicked_at, latlon
+ FROM geo_ips, clicks c
+ WHERE addrs >> c.user_ip
+ AND c.company_id = 5
+ AND c.ad_id = 290;
+```
+
+## Customize the schema per-tenant
+
+Each tenant may need to store special information not needed by
+others. However, all tenants share a common infrastructure with
+an identical database schema. Where can the extra data go?
+
+One trick is to use an open-ended column type like PostgreSQL's
+JSONB. Our schema has a JSONB field in `clicks` called `user_data`.
+A company (say company five) can use the column to track whether
+the user is on a mobile device.
+
+Here's a query to find who clicks more: mobile, or traditional
+visitors.
+
+```sql
+SELECT
+ user_data->>'is_mobile' AS is_mobile,
+ count(*) AS count
+FROM clicks
+WHERE company_id = 5
+GROUP BY user_data->>'is_mobile'
+ORDER BY count DESC;
+```
+
+We can optimize this query for a single company by creating a
+[partial
+index](https://www.postgresql.org/docs/current/static/indexes-partial.html).
+
+```sql
+CREATE INDEX click_user_data_is_mobile
+ON clicks ((user_data->>'is_mobile'))
+WHERE company_id = 5;
+```
+
+More generally, we can create a [GIN
+index](https://www.postgresql.org/docs/current/static/gin-intro.html) on
+every key and value within the column.
+
+```sql
+CREATE INDEX click_user_data
+ON clicks USING gin (user_data);
+
+-- this speeds up queries like, "which clicks have
+-- the is_mobile key present in user_data?"
+
+SELECT id
+ FROM clicks
+ WHERE user_data ? 'is_mobile'
+ AND company_id = 5;
+```
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a server group. If you don't expect to need these resources in the future, delete the server group. Press the *Delete* button in the *Overview* page for your server group. When prompted on a pop-up page, confirm the name of the server group and click the final *Delete* button.
+
+## Next steps
+
+In this tutorial, you learned how to provision a Hyperscale (Citus) server group. You connected to it with psql, created a schema, and distributed data. You learned to query data both within and between tenants, and to customize the schema per tenant.
+
+- Learn about server group [node types](./concepts-nodes.md)
+- Determine the best [initial
+ size](howto-scale-initial.md) for your server group
postgresql Tutorial Design Database Realtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/tutorial-design-database-realtime.md
+
+ Title: 'Tutorial: Design a real-time dashboard - Hyperscale (Citus) - Azure Database for PostgreSQL'
+description: This tutorial shows how to parallelize real-time dashboard queries with Azure Database for PostgreSQL Hyperscale (Citus).
++++++ Last updated : 05/14/2019
+#Customer intent: As a developer, I want to parallelize queries so that I can make a real-time dashboard application.
++
+# Tutorial: Design a real-time analytics dashboard by using Azure Database for PostgreSQL – Hyperscale (Citus)
+
+In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to:
+
+> [!div class="checklist"]
+> * Create a Hyperscale (Citus) server group
+> * Use psql utility to create a schema
+> * Shard tables across nodes
+> * Generate sample data
+> * Perform rollups
+> * Query raw and aggregated data
+> * Expire data
+
+## Prerequisites
++
+## Use psql utility to create a schema
+
+Once connected to the Azure Database for PostgreSQL - Hyperscale (Citus) using psql, you can complete some basic tasks. This tutorial walks you through ingesting traffic data from web analytics, then rolling up the data to provide real-time dashboards based on that data.
+
+Let's create a table that will consume all of our raw web traffic data. Run the following commands in the psql terminal:
+
+```sql
+CREATE TABLE http_request (
+ site_id INT,
+ ingest_time TIMESTAMPTZ DEFAULT now(),
+
+ url TEXT,
+ request_country TEXT,
+ ip_address TEXT,
+
+ status_code INT,
+ response_time_msec INT
+);
+```
+
+We're also going to create a table that will hold our per-minute aggregates, and a table that maintains the position of our last rollup. Run the following commands in psql as well:
+
+```sql
+CREATE TABLE http_request_1min (
+ site_id INT,
+ ingest_time TIMESTAMPTZ, -- which minute this row represents
+
+ error_count INT,
+ success_count INT,
+ request_count INT,
+ average_response_time_msec INT,
+ CHECK (request_count = error_count + success_count),
+ CHECK (ingest_time = date_trunc('minute', ingest_time))
+);
+
+CREATE INDEX http_request_1min_idx ON http_request_1min (site_id, ingest_time);
+
+CREATE TABLE latest_rollup (
+ minute timestamptz PRIMARY KEY,
+
+ CHECK (minute = date_trunc('minute', minute))
+);
+```
+
+You can see the newly created tables in the list of tables now with this psql command:
+
+```postgres
+\dt
+```
+
+## Shard tables across nodes
+
+A hyperscale deployment stores table rows on different nodes based on the value of a user-designated column. This "distribution column" marks how data is sharded across nodes.
+
+Let's set the distribution column to be site\_id, the shard
+key. In psql, run these functions:
+
+ ```sql
+SELECT create_distributed_table('http_request', 'site_id');
+SELECT create_distributed_table('http_request_1min', 'site_id');
+```
++
+## Generate sample data
+
+Now our server group should be ready to ingest some data. We can run the
+following locally from our `psql` connection to continuously insert data.
+
+```sql
+DO $$
+ BEGIN LOOP
+ INSERT INTO http_request (
+ site_id, ingest_time, url, request_country,
+ ip_address, status_code, response_time_msec
+ ) VALUES (
+ trunc(random()*32), clock_timestamp(),
+ concat('http://example.com/', md5(random()::text)),
+ ('{China,India,USA,Indonesia}'::text[])[ceil(random()*4)],
+ concat(
+ trunc(random()*250 + 2), '.',
+ trunc(random()*250 + 2), '.',
+ trunc(random()*250 + 2), '.',
+ trunc(random()*250 + 2)
+ )::inet,
+ ('{200,404}'::int[])[ceil(random()*2)],
+ 5+trunc(random()*150)
+ );
+ COMMIT;
+ PERFORM pg_sleep(random() * 0.25);
+ END LOOP;
+END $$;
+```
+
+The query inserts approximately eight rows every second. The rows are stored on different worker nodes as directed by the distribution column, `site_id`.
+
+ > [!NOTE]
+ > Leave the data generation query running, and open a second psql
+ > connection for the remaining commands in this tutorial.
+ >
+
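+Once the second connection is open, you can optionally confirm that new rows are arriving:
+
+```sql
+-- Optional check from the second psql connection
+SELECT count(*) FROM http_request
+WHERE ingest_time > now() - interval '1 minute';
+```
+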
+## Query
+
+The hyperscale hosting option allows multiple nodes to process queries in
+parallel for speed. For instance, the database calculates aggregates like SUM
+and COUNT on worker nodes, and combines the results into a final answer.
+
+Here's a query to count web requests per minute along with a few statistics.
+Try running it in psql and observe the results.
+
+```sql
+SELECT
+ site_id,
+ date_trunc('minute', ingest_time) as minute,
+ COUNT(1) AS request_count,
+ SUM(CASE WHEN (status_code between 200 and 299) THEN 1 ELSE 0 END) as success_count,
+ SUM(CASE WHEN (status_code between 200 and 299) THEN 0 ELSE 1 END) as error_count,
+ SUM(response_time_msec) / COUNT(1) AS average_response_time_msec
+FROM http_request
+WHERE date_trunc('minute', ingest_time) > now() - '5 minutes'::interval
+GROUP BY site_id, minute
+ORDER BY minute ASC;
+```
+
+## Rolling up data
+
+The previous query works fine in the early stages, but its performance
+degrades as your data scales. Even with distributed processing, it's faster to pre-compute the data than to recalculate it repeatedly.
+
+We can ensure our dashboard stays fast by regularly rolling up the
+raw data into an aggregate table. You can experiment with the aggregation duration. We used a per-minute aggregation table, but you could break data into 5, 15, or 60 minutes instead.
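+
+For example, an hourly rollup table would follow the same pattern. The sketch below is for illustration only and isn't used in the rest of the tutorial:
+
+```sql
+-- Illustration only: the same rollup idea at hourly granularity
+CREATE TABLE http_request_1hour (
+    site_id INT,
+    ingest_time TIMESTAMPTZ,  -- which hour this row represents
+
+    error_count INT,
+    success_count INT,
+    request_count INT,
+    average_response_time_msec INT,
+    CHECK (request_count = error_count + success_count),
+    CHECK (ingest_time = date_trunc('hour', ingest_time))
+);
+
+SELECT create_distributed_table('http_request_1hour', 'site_id');
+```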
+
+To run this roll-up more easily, we're going to put it into a plpgsql function. Run these commands in psql to create the `rollup_http_request` function.
+
+```sql
+-- initialize to a time long ago
+INSERT INTO latest_rollup VALUES ('10-10-1901');
+
+-- function to do the rollup
+CREATE OR REPLACE FUNCTION rollup_http_request() RETURNS void AS $$
+DECLARE
+ curr_rollup_time timestamptz := date_trunc('minute', now());
+ last_rollup_time timestamptz := minute from latest_rollup;
+BEGIN
+ INSERT INTO http_request_1min (
+ site_id, ingest_time, request_count,
+ success_count, error_count, average_response_time_msec
+ ) SELECT
+ site_id,
+ date_trunc('minute', ingest_time),
+ COUNT(1) as request_count,
+ SUM(CASE WHEN (status_code between 200 and 299) THEN 1 ELSE 0 END) as success_count,
+ SUM(CASE WHEN (status_code between 200 and 299) THEN 0 ELSE 1 END) as error_count,
+ SUM(response_time_msec) / COUNT(1) AS average_response_time_msec
+ FROM http_request
+ -- roll up only data new since last_rollup_time
+ WHERE date_trunc('minute', ingest_time) <@
+ tstzrange(last_rollup_time, curr_rollup_time, '(]')
+ GROUP BY 1, 2;
+
+ -- update the value in latest_rollup so that next time we run the
+ -- rollup it will operate on data newer than curr_rollup_time
+ UPDATE latest_rollup SET minute = curr_rollup_time;
+END;
+$$ LANGUAGE plpgsql;
+```
+
+With our function in place, execute it to roll up the data:
+
+```sql
+SELECT rollup_http_request();
+```
+
+And with our data in a pre-aggregated form we can query the rollup
+table to get the same report as earlier. Run the following query:
+
+```sql
+SELECT site_id, ingest_time as minute, request_count,
+ success_count, error_count, average_response_time_msec
+ FROM http_request_1min
+ WHERE ingest_time > date_trunc('minute', now()) - '5 minutes'::interval;
+ ```
+
+## Expiring old data
+
+The rollups make queries faster, but we still need to expire old data to avoid unbounded storage costs. Decide how long you'd like to keep data for each granularity, and use standard queries to delete expired data. In the following example, we decided to keep raw data for one day, and per-minute aggregations for one month:
+
+```sql
+DELETE FROM http_request WHERE ingest_time < now() - interval '1 day';
+DELETE FROM http_request_1min WHERE ingest_time < now() - interval '1 month';
+```
+
+In production, you could wrap these queries in a function and call it every minute in a cron job.
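+
+As a sketch, the two deletes could be wrapped in a single function (the function name here is hypothetical) and scheduled with a job runner such as the pg_cron extension, if it's available in your environment:
+
+```sql
+-- Sketch: wrap the deletes in one function
+CREATE OR REPLACE FUNCTION expire_old_request_data() RETURNS void AS $$
+BEGIN
+  DELETE FROM http_request WHERE ingest_time < now() - interval '1 day';
+  DELETE FROM http_request_1min WHERE ingest_time < now() - interval '1 month';
+END;
+$$ LANGUAGE plpgsql;
+
+-- If the pg_cron extension is available, run it once a minute:
+-- SELECT cron.schedule('* * * * *', 'SELECT expire_old_request_data()');
+```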
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a server group. If you don't expect to need these resources in the future, delete the server group. Press the *Delete* button in the *Overview* page for your server group. When prompted on a pop-up page, confirm the name of the server group and click the final *Delete* button.
+
+## Next steps
+
+In this tutorial, you learned how to provision a Hyperscale (Citus) server group. You connected to it with psql, created a schema, and distributed data. You learned to query data in the raw form, regularly aggregate that data, query the aggregated tables, and expire old data.
+
+- Learn about server group [node types](./concepts-nodes.md)
+- Determine the best [initial
+ size](howto-scale-initial.md) for your server group
postgresql Tutorial Private Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/tutorial-private-access.md
+
+ Title: Create server group with private access (preview) - Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Connect a VM to a server group private endpoint
+++++ Last updated : 10/15/2021++
+# Create server group with private access (preview) in Azure Database for PostgreSQL - Hyperscale (Citus)
+
+This tutorial creates a virtual machine and a Hyperscale (Citus) server group,
+and establishes [private access](concepts-private-access.md) between
+them.
+
+## Create a virtual network
+
+First, we'll set up a resource group and virtual network. It will hold our
+server group and virtual machine.
+
+```sh
+az group create \
+ --name link-demo \
+ --location eastus
+
+az network vnet create \
+ --resource-group link-demo \
+ --name link-demo-net \
+ --address-prefix 10.0.0.0/16
+
+az network nsg create \
+ --resource-group link-demo \
+ --name link-demo-nsg
+
+az network vnet subnet create \
+ --resource-group link-demo \
+ --vnet-name link-demo-net \
+ --name link-demo-subnet \
+ --address-prefixes 10.0.1.0/24 \
+ --network-security-group link-demo-nsg
+```
+
+## Create a virtual machine
+
+For demonstration, we'll use a virtual machine running Debian Linux, and the
+`psql` PostgreSQL client.
+
+```sh
+# provision the VM
+az vm create \
+ --resource-group link-demo \
+ --name link-demo-vm \
+ --vnet-name link-demo-net \
+ --subnet link-demo-subnet \
+ --nsg link-demo-nsg \
+ --public-ip-address link-demo-net-ip \
+ --image debian \
+ --admin-username azureuser \
+ --generate-ssh-keys
+
+# install psql database client
+az vm run-command invoke \
+ --resource-group link-demo \
+ --name link-demo-vm \
+ --command-id RunShellScript \
+ --scripts \
+ "sudo touch /home/azureuser/.hushlogin" \
+ "sudo DEBIAN_FRONTEND=noninteractive apt-get update" \
+ "sudo DEBIAN_FRONTEND=noninteractive apt-get install -q -y postgresql-client"
+```
+
+## Create a server group with a private link
+
+1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
+
+2. Select **Databases** from the **New** page, and select **Azure Database for
+ PostgreSQL** from the **Databases** page.
+
+3. For the deployment option, select the **Create** button under **Hyperscale
+ (Citus) server group**.
+
+4. Fill out the new server details form with the following information:
+
+ - **Resource group**: `link-demo`
+ - **Server group name**: `link-demo-sg`
+ - **Location**: `East US`
+ - **Password**: (your choice)
+
+ > [!NOTE]
+ >
+ > The server group name must be globally unique across Azure because it
+ > creates a DNS entry. If `link-demo-sg` is unavailable, please choose
+ > another name and adjust the steps below accordingly.
+
+5. Select **Configure server group**, choose the **Basic** plan, and select
+ **Save**.
+
+6. Select **Next: Networking** at the bottom of the page.
+
+7. Select **Private access (preview)**.
+
+ > [!NOTE]
+ >
+ > Private access is available for preview in only [certain
+ > regions](concepts-limits.md#regions).
+ >
+ > If the private access option is not selectable for your server group
+ > even though your server group is within an allowed region,
+ > please open an Azure [support
+ > request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest),
+ > and include your Azure subscription ID, to get access.
+
+8. A screen appears called **Create private endpoint**. Enter these values and
+ select **OK**:
+
+ - **Resource group**: `link-demo`
+ - **Location**: `(US) East US`
+ - **Name**: `link-demo-sg-c-pe1`
+ - **Target sub-resource**: `coordinator`
+ - **Virtual network**: `link-demo-net`
+ - **Subnet**: `link-demo-subnet`
+ - **Integrate with private DNS zone**: Yes
+
+9. After creating the private endpoint, select **Review + create** to create
+ your Hyperscale (Citus) server group.
+
+## Access the server group privately from the virtual machine
+
+The private link allows our virtual machine to connect to our server group,
+and prevents external hosts from doing so. In this step, we'll check that
+the `psql` database client on our virtual machine can communicate with the
+coordinator node of the server group.
+
+```sh
+# save db URI
+#
+# obtained from Settings -> Connection Strings in the Azure portal
+#
+# replace {your_password} in the string with your actual password
+PG_URI='host=c.link-demo-sg.postgres.database.azure.com port=5432 dbname=citus user=citus password={your_password} sslmode=require'
+
+# attempt to connect to server group with psql in the virtual machine
+az vm run-command invoke \
+ --resource-group link-demo \
+ --name link-demo-vm \
+ --command-id RunShellScript \
+ --scripts "psql '$PG_URI' -c 'SHOW citus.version;'" \
+ --query 'value[0].message' \
+ | xargs printf
+```
+
+You should see a version number for Citus in the output. If you do, then psql
+was able to execute the command, and the private link worked.
+
+## Clean up resources
+
+We've seen how to create a private link between a virtual machine and a
+Hyperscale (Citus) server group. Now we can deprovision the resources.
+
+Delete the resource group, and the resources inside will be deprovisioned:
+
+```sh
+az group delete --resource-group link-demo
+
+# press y to confirm
+```
+
+## Next steps
+
+* Learn more about [private access](concepts-private-access.md)
+ (preview)
+* Learn about [private
+ endpoints](../../private-link/private-endpoint-overview.md)
+* Learn about [virtual
+ networks](../../virtual-network/concepts-and-best-practices.md)
+* Learn about [private DNS zones](../../dns/private-dns-overview.md)
postgresql Tutorial Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/tutorial-server-group.md
+
+ Title: 'Tutorial: create server group - Hyperscale (Citus) - Azure Database for PostgreSQL'
+description: How to create an Azure Database for PostgreSQL Hyperscale (Citus) server group.
+++++
+ms.devlang: azurecli
+ Last updated : 11/16/2021++
+# Tutorial: create server group
+
+In this tutorial, you create a server group in Azure Database for PostgreSQL - Hyperscale (Citus). You'll do these steps:
+
+> [!div class="checklist"]
+> * Provision the nodes
+> * Allow network access
+> * Connect to the coordinator node
++
+## Next steps
+
+With a server group provisioned, it's time to go on to the next tutorial:
+
+* [Work with distributed data](tutorial-shard.md)
postgresql Tutorial Shard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/hyperscale/tutorial-shard.md
+
+ Title: 'Tutorial: Shard data on worker nodes - Hyperscale (Citus) - Azure Database for PostgreSQL'
+description: This tutorial shows how to create distributed tables and visualize their data distribution with Azure Database for PostgreSQL Hyperscale (Citus).
+++++
+ms.devlang: azurecli
+ Last updated : 12/16/2020++
+# Tutorial: Shard data on worker nodes in Azure Database for PostgreSQL – Hyperscale (Citus)
+
+In this tutorial, you use Azure Database for PostgreSQL - Hyperscale (Citus) to learn how to:
+
+> [!div class="checklist"]
+> * Create hash-distributed shards
+> * See where table shards are placed
+> * Identify skewed distribution
+> * Create constraints on distributed tables
+> * Run queries on distributed data
+
+## Prerequisites
+
+This tutorial requires a running Hyperscale (Citus) server group with two
+worker nodes. If you don't have a running server group, follow the [create
+server group](tutorial-server-group.md) tutorial and then come back
+to this one.
+
+## Hash-distributed data
+
+Distributing table rows across multiple PostgreSQL servers is a key technique
+for scalable queries in Hyperscale (Citus). Together, multiple nodes can hold
+more data than a traditional database, and in many cases can use worker CPUs in
+parallel to execute queries.
+
+In the prerequisites section, we created a Hyperscale (Citus) server group with
+two worker nodes.
+
+![coordinator and two workers](../tutorial-hyperscale-shard/nodes.png)
+
+The coordinator node's metadata tables track workers and distributed data. We
+can check the active workers in the
+[pg_dist_node](reference-metadata.md#worker-node-table) table.
+
+```sql
+select nodeid, nodename from pg_dist_node where isactive;
+```
+```
+ nodeid | nodename
+--------+------------
+      1 | 10.0.0.21
+      2 | 10.0.0.23
+```
+
+> [!NOTE]
+> Nodenames on Hyperscale (Citus) are internal IP addresses in a virtual
+> network, and the actual addresses you see may differ.
+
+### Rows, shards, and placements
+
+To use the CPU and storage resources of worker nodes, we have to distribute
+table data throughout the server group. Distributing a table assigns each row
+to a logical group called a *shard.* Let's create a table and distribute it:
+
+```sql
+-- create a table on the coordinator
+create table users ( email text primary key, bday date not null );
+
+-- distribute it into shards on workers
+select create_distributed_table('users', 'email');
+```
+
+Hyperscale (Citus) assigns each row to a shard based on the value of the
+*distribution column*, which, in our case, we specified to be `email`. Every
+row will be in exactly one shard, and every shard can contain multiple rows.
+
+![users table with rows pointing to shards](../tutorial-hyperscale-shard/table.png)
+
+By default `create_distributed_table()` makes 32 shards, as we can see by
+counting in the metadata table
+[pg_dist_shard](reference-metadata.md#shard-table):
+
+```sql
+select logicalrelid, count(shardid)
+ from pg_dist_shard
+ group by logicalrelid;
+```
+```
+ logicalrelid | count
+--------------+-------
+ users        |    32
+```
+
+Hyperscale (Citus) uses the `pg_dist_shard` table to assign rows to shards,
+based on a hash of the value in the distribution column. The hashing details
+are unimportant for this tutorial. What matters is that we can query to see
+which values map to which shard IDs:
+
+```sql
+-- Where would a row containing hi@test.com be stored?
+-- (The value doesn't have to actually be present in users, the mapping
+-- is a mathematical operation consulting pg_dist_shard.)
+select get_shard_id_for_distribution_column('users', 'hi@test.com');
+```
+```
+ get_shard_id_for_distribution_column
+---------------------------------------
+                                102008
+```
+
+The mapping of rows to shards is purely logical. Shards must be assigned to
+specific worker nodes for storage, in what Hyperscale (Citus) calls *shard
+placement*.
+
+![shards assigned to workers](../tutorial-hyperscale-shard/shard-placement.png)
+
+We can look at the shard placements in
+[pg_dist_placement](reference-metadata.md#shard-placement-table).
+Joining it with the other metadata tables we've seen shows where each shard
+lives.
+
+```sql
+-- limit the output to the first five placements
+
+select
+ shard.logicalrelid as table,
+ placement.shardid as shard,
+ node.nodename as host
+from
+ pg_dist_placement placement,
+ pg_dist_node node,
+ pg_dist_shard shard
+where placement.groupid = node.groupid
+ and shard.shardid = placement.shardid
+order by shard
+limit 5;
+```
+```
+ table | shard  | host
+-------+--------+------------
+ users | 102008 | 10.0.0.21
+ users | 102009 | 10.0.0.23
+ users | 102010 | 10.0.0.21
+ users | 102011 | 10.0.0.23
+ users | 102012 | 10.0.0.21
+```
+
+### Data skew
+
+A server group runs most efficiently when you place data evenly on worker
+nodes, and when you place related data together on the same workers. In this
+section we'll focus on the first part, the uniformity of placement.
+
+To demonstrate, let's create sample data for our `users` table:
+
+```sql
+-- load sample data
+insert into users
+select
+ md5(random()::text) || '@test.com',
+ date_trunc('day', now() - random()*'100 years'::interval)
+from generate_series(1, 1000);
+```
+
+To see shard sizes, we can run [table size
+functions](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE)
+on the shards.
+
+```sql
+-- sizes of the first five shards
+select *
+from
+ run_command_on_shards('users', $cmd$
+ select pg_size_pretty(pg_table_size('%1$s'));
+ $cmd$)
+order by shardid
+limit 5;
+```
+```
+ shardid | success | result
+---------+---------+--------
+  102008 | t       | 16 kB
+  102009 | t       | 16 kB
+  102010 | t       | 16 kB
+  102011 | t       | 16 kB
+  102012 | t       | 16 kB
+```
+
+We can see the shards are of equal size. We already saw that placements are
+evenly distributed among workers, so we can infer that the worker nodes hold
+roughly equal numbers of rows.
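+
+If you'd like to double-check, an optional query against the metadata counts shard placements per worker across all distributed tables:
+
+```sql
+-- Optional: shard placements per worker node
+select node.nodename, count(*) as placements
+from pg_dist_placement placement
+join pg_dist_node node on placement.groupid = node.groupid
+group by node.nodename
+order by node.nodename;
+```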
+
+The rows in our `users` example were distributed evenly because of two properties of
+the distribution column, `email`:
+
+1. The number of email addresses was greater than or equal to the number of shards.
+2. The number of rows per email address was similar (in our case, exactly one
+ row per address because we declared email a key).
+
+Any choice of table and distribution column where either property fails will
+end up with uneven data size on workers, that is, *data skew*.
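+
+One quick way to spot skew is to compare row counts across shards. The following sketch reuses `run_command_on_shards`, and assumes all shard commands succeed so the result casts cleanly to a number:
+
+```sql
+-- count rows in each shard of the users table; large outliers suggest skew
+select *
+from
+  run_command_on_shards('users', $cmd$
+    select count(*) from %1$s;
+  $cmd$)
+order by result::bigint desc
+limit 5;
+```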
+
+### Add constraints to distributed data
+
+Using Hyperscale (Citus) allows you to continue to enjoy the safety of a
+relational database, including [database
+constraints](https://www.postgresql.org/docs/current/ddl-constraints.html).
+However, there's a limitation. Because of the nature of distributed systems,
+Hyperscale (Citus) won't cross-reference uniqueness constraints or referential
+integrity between worker nodes.
+
+Let's consider our `users` table example with a related table.
+
+```sql
+-- books that users own
+create table books (
+ owner_email text references users (email),
+ isbn text not null,
+ title text not null
+);
+
+-- distribute it
+select create_distributed_table('books', 'owner_email');
+```
+
+For efficiency, we distribute `books` the same way as `users`: by the owner's
+email address. Distributing by similar column values is called
+[colocation](concepts-colocation.md).
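+
+Because `users` and `books` are colocated, a join on the distribution column can run locally on each worker. Here's an illustrative query (the `books` table is still empty at this point, so the counts will be zero):
+
+```sql
+-- colocated join: matching rows live on the same worker
+select u.email, count(b.isbn) as owned_books
+from users u
+left join books b on b.owner_email = u.email
+group by u.email
+order by owned_books desc
+limit 5;
+```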
+
+We had no problem distributing books with a foreign key to users, because the
+key was on a distribution column. However, we would have trouble making `isbn`
+a key:
+
+```sql
+-- will not work
+alter table books add constraint books_isbn unique (isbn);
+```
+```
+ERROR: cannot create constraint on "books"
+DETAIL: Distributed relations cannot have UNIQUE, EXCLUDE, or
+ PRIMARY KEY constraints that do not include the partition column
+ (with an equality operator if EXCLUDE).
+```
+
+In a distributed table the best we can do is make columns unique modulo
+the distribution column:
+
+```sql
+-- a weaker constraint is allowed
+alter table books add constraint books_isbn unique (owner_email, isbn);
+```
+
+The above constraint merely makes isbn unique per user. Another option is to
+make books a [reference
+table](howto-modify-distributed-tables.md#reference-tables) rather
+than a distributed table, and create a separate distributed table associating
+books with users.
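+
+A sketch of that alternative might look like the following (the table names here are hypothetical and not part of the tutorial):
+
+```sql
+-- book details shared across all nodes
+create table book_catalog (
+  isbn text primary key,
+  title text not null
+);
+select create_reference_table('book_catalog');
+
+-- ownership distributed by the owner's email, colocated with users
+create table user_books (
+  owner_email text references users (email),
+  isbn text not null,
+  primary key (owner_email, isbn)
+);
+select create_distributed_table('user_books', 'owner_email');
+```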
+
+## Query distributed tables
+
+In the previous sections, we saw how distributed table rows are placed in shards
+on worker nodes. Most of the time you don't need to know how or where data is
+stored in a server group. Hyperscale (Citus) has a distributed query executor
+that automatically splits up regular SQL queries. It runs them in parallel on
+worker nodes close to the data.
+
+For instance, we can run a query to find the average age of users, treating the
+distributed `users` table like it's a normal table on the coordinator.
+
+```sql
+select avg(current_date - bday) as avg_days_old from users;
+```
+```
+    avg_days_old
+--------------------
+ 17926.348000000000
+```
+
+![query going to shards via coordinator](../tutorial-hyperscale-shard/query-fragments.png)
+
+Behind the scenes, the Hyperscale (Citus) executor creates a separate query for
+each shard, runs them on the workers, and combines the result. You can see it
+if you use the PostgreSQL EXPLAIN command:
+
+```sql
+explain select avg(current_date - bday) from users;
+```
+```
+ QUERY PLAN
+------------------------------------------------------------------------------
+ Aggregate (cost=500.00..500.02 rows=1 width=32)
+ -> Custom Scan (Citus Adaptive) (cost=0.00..0.00 rows=100000 width=16)
+ Task Count: 32
+ Tasks Shown: One of 32
+ -> Task
+ Node: host=10.0.0.21 port=5432 dbname=citus
+ -> Aggregate (cost=41.75..41.76 rows=1 width=16)
+ -> Seq Scan on users_102040 users (cost=0.00..22.70 rows=1270 width=4)
+```
+
+The output shows an example of an execution plan for a *query fragment* running
+on shard 102040 (the table `users_102040` on worker 10.0.0.21). The other
+fragments aren't shown because they're similar. We can see that the worker node
+scans the shard tables and applies the aggregate. The coordinator node combines
+aggregates for the final result.
+
+## Next steps
+
+In this tutorial, we created a distributed table, and learned about its shards
+and placements. We saw a challenge of using uniqueness and foreign key
+constraints, and finally saw how distributed queries work at a high level.
+
+* Read more about Hyperscale (Citus) [table types](concepts-nodes.md)
+* Get more tips on [choosing a distribution column](concepts-choose-distribution-column.md)
+* Learn the benefits of [table colocation](concepts-colocation.md)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/overview.md
Learn more about the three deployment modes for Azure Database for PostgreSQL an
- [Single Server](./overview-single-server.md) - [Flexible Server](./flexible-server/overview.md)-- [Hyperscale (Citus)](./hyperscale-overview.md)
+- [Hyperscale (Citus)](hyperscale/overview.md)
purview Concept Best Practices Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-best-practices-classification.md
Last updated 11/18/2021
Data classification, in the context of Azure Purview, is a way of categorizing data assets by assigning unique logical labels or classes to the data assets. Classification is based on the business context of the data. For example, you might classify assets by *Passport Number*, *Driver's License Number*, *Credit Card Number*, *SWIFT Code*, *Person's Name*, and so on.
-When you classify data assets, you make them easier to understand, search, and govern. Classifying data assets also helps you understand the risks associated with them. This in turn can help you implement measures to protect sensitive or important data from ungoverned proliferation and unauthorized access across the data estate.
-
-Azure Purview provides an automated classification capability while you scan your data sources. You get more than 200 system built-in classifications and the ability to create custom classifications for your data. You can classify assets automatically when they're configured as part of a scan, or you can edit them manually in Azure Purview Studio after they're scanned and ingested.
-
-## Why classification is a good idea
-
-Classification is the process of organizing data into *logical categories* that make the data easy to retrieve, sort, and identify for future use. This can be particularly important for data governance. Among other reasons, classifying data assets is important because it helps you:
-* Narrow down the search for data assets that you're interested in.
-* Organize and understand the variety of data classes that are important in your organization and where they're stored.
-* Understand the risks associated with your most important data assets and then take appropriate measures to mitigate them.
-
-As shown in the following image, it's possible to apply classifications at both the asset level and the schema level for the *Customers* table in Azure SQL Database.
--
-Azure Purview supports both system and custom classifications.
-
-* **System classifications**: Azure Purview supports a large set of system classifications by default. For the entire list of available system classifications, see [Supported classifications in Azure Purview](./supported-classifications.md).
-
- In the example in the preceding image, *PersonΓÇÖs Name* is a system classification.
-
-* **Custom classifications**: You can create custom classifications when you want to classify assets based on a pattern or a specific column name that's unavailable as a default system classification.
-Custom classification rules can be based on a *regular expression* pattern or *dictionary*.
-
- Let's say that the *Employee ID* column follows the EMPLOYEE{GUID} pattern (for example, EMPLOYEE9c55c474-9996-420c-a285-0d0fc23f1f55). You can create your own custom classification by using a regular expression, such as `\^Employee\[A-Za-z0-9\]{8}-\[A-Za-z0-9\]{4}-\[A-Za-z0-9\]{4}-\[A-Za-z0-9\]{4}-\[A-Za-z0-9\]{12}\$`.
--
-> [!NOTE]
-> Sensitivity labels are different from classifications. Sensitivity labels categorize assets in the context of data security and privacy, such as *Highly Confidential*, *Restricted*, *Public*, and so on. To use sensitivity labels in Azure Purview, you'll need at least one Microsoft 365 license or account within the same Azure Active Directory (Azure AD) tenant as your Azure Purview account. For more information about the differences between sensitivity labels and classifications, see [Sensitivity labels in Azure Purview FAQ](sensitivity-labels-frequently-asked-questions.yml#what-is-the-difference-between-classifications-and-sensitivity-labels-in-azure-purview).
+To learn more about classification, see [Classification](concept-classification.md).
## Classification best practices This section describes best practices to adopt when you're classifying data assets.+ ### Scan rule set By using a *scan rule set*, you can configure the relevant classifications that should be applied to the particular scan for the data source. Select the relevant system classifications, or select custom classifications if you've created one for the data you're scanning.
If there are multiple column patterns to be classified for the same classificati
For more information, see [regex alternation construct](/dotnet/standard/base-types/regular-expression-language-quick-reference#alternation-constructs).
-### Manually apply and edit classifications in Purview Studio
-
-You can manually edit and update classification labels at both the asset and schema levels in Purview Studio.
-
-> [!NOTE]
-> Applying classifications manually at the schema level will prevent updates on future scans.
-
-
-
-With Azure Purview, you can delete custom classification rules. You also have options for removing the classifications applied on the data assets, as shown in the following image:
-
-
-You can also edit classifications in bulk through Purview Studio. For more information, see [Bulk edit assets to annotate classifications and glossary terms and to modify contacts](how-to-bulk-edit-assets.md).
- ## Classification considerations Here are some considerations to bear in mind as you're defining classifications:
Here are some considerations to bear in mind as you're defining classifications:
* For automatic assignment, see [Supported data stores in Azure Purview](./purview-connector-overview.md). * Before you scan your data sources in Azure Purview, it is important to understand your data and configure the appropriate scan rule set for it (for example, by selecting relevant system classification, custom classifications, or a combination of both), because it could affect your scan performance. For more information, see [Supported classifications in Azure Purview](./supported-classifications.md). * The Azure Purview scanner applies data sampling rules for deep scans (subject to classification) for both system and custom classifications. The sampling rule is based on the type of data sources. For more information, see the "Sampling within a file" section in [Supported data sources and file types in Azure Purview](./sources-and-scans.md#sampling-within-a-file). +
+ > [!Note]
+ > **Distinct data threshold**: This is the total number of distinct data values that need to be found in a column before the scanner runs the data pattern on it. Distinct data threshold has nothing to do with pattern matching but it is a pre-requisite for pattern matching. System classification rules require there to be at least 8 distinct values in each column to subject them to classification. The system requires this value to make sure that the column contains enough data for the scanner to accurately classify it. For example, a column that contains multiple rows that all contain the value 1 won't be classified. Columns that contain one row with a value and the rest of the rows have null values also won't get classified. If you specify multiple patterns, this value applies to each of them.
+ * The sampling rules apply to resource sets as well. For more information, see the "Resource set file sampling" section in [Supported data sources and file types in Azure Purview](./sources-and-scans.md#resource-set-file-sampling). * Custom classifications can't be applied on document type assets using custom classification rules. Classifications for such types can be applied manually only. * Custom classifications aren't included in any default scan rules. Therefore, if automatic assignment of custom classifications is expected, you must deploy and use a custom scan rule that includes the custom classification to run the scan.
purview Concept Classification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/concept-classification.md
+
+ Title: Understand data classification feature in Azure Purview
+description: This article explains the concept of data classification in Azure Purview.
+++++ Last updated : 01/04/2022++
+# Data Classification in Azure Purview
+
+Data classification, in the context of Azure Purview, is a way of categorizing data assets by assigning unique logical tags or classes to the data assets. Classification is based on the business context of the data. For example, you might classify assets by *Passport Number*, *Driver's License Number*, *Credit Card Number*, *SWIFT Code*, *Person's Name*, and so on.
+
+When you classify data assets, you make them easier to understand, search, and govern. Classifying data assets also helps you understand the risks associated with them. This in turn can help you implement measures to protect sensitive or important data from ungoverned proliferation and unauthorized access across the data estate.
+
+Azure Purview provides an automated classification capability while you scan your data sources. You get more than 200 built-in system classifications and the ability to create custom classifications for your data. You can classify assets automatically when they're configured as part of a scan, or you can edit them manually in Azure Purview Studio after they're scanned and ingested.
+
+## Use of classification
+
+Classification is the process of organizing data into *logical categories* that make the data easy to retrieve, sort, and identify for future use. This can be particularly important for data governance. Among other reasons, classifying data assets is important because it helps you:
+
+* Narrow down the search for data assets that you're interested in.
+* Organize and understand the variety of data classes that are important in your organization and where they're stored.
+* Understand the risks associated with your most important data assets and then take appropriate measures to mitigate them.
+
+As shown in the following image, it's possible to apply classifications at both the asset level and the schema level for the *Customers* table in Azure SQL Database.
++
+## Types of classification
+
+Azure Purview supports both system and custom classifications.
+
+* **System classifications**: Azure Purview supports 200+ system classifications out of the box. For the entire list of available system classifications, see [Supported classifications in Azure Purview](./supported-classifications.md).
+
+ In the example in the preceding image, *Person's Name* is a system classification.
+
+* **Custom classifications**: You can create custom classifications when you want to classify assets based on a pattern or a specific column name that's unavailable as a system classification.
+Custom classification rules can be based on a *regular expression* pattern or *dictionary*.
+
+ Let's say that the *Employee ID* column follows the EMPLOYEE{GUID} pattern (for example, EMPLOYEE9c55c474-9996-420c-a285-0d0fc23f1f55). You can create your own custom classification by using a regular expression, such as `\^Employee\[A-Za-z0-9\]{8}-\[A-Za-z0-9\]{4}-\[A-Za-z0-9\]{4}-\[A-Za-z0-9\]{4}-\[A-Za-z0-9\]{12}\$`.
+
+> [!NOTE]
+> Sensitivity labels are different from classifications. Sensitivity labels categorize assets in the context of data security and privacy, such as *Highly Confidential*, *Restricted*, *Public*, and so on. To use sensitivity labels in Azure Purview, you'll need at least one Microsoft 365 license or account within the same Azure Active Directory (Azure AD) tenant as your Azure Purview account. For more information about the differences between sensitivity labels and classifications, see [Sensitivity labels in Azure Purview FAQ](sensitivity-labels-frequently-asked-questions.yml#what-is-the-difference-between-classifications-and-sensitivity-labels-in-azure-purview).
+
+## Next steps
+
+* [Read about classification best practices](concept-best-practices-classification.md)
+* [Create custom classifications](create-a-custom-classification-and-classification-rule.md)
+* [Apply classifications](apply-classifications.md)
+* [Use the Purview Studio](use-purview-studio.md)
purview Create Catalog Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-catalog-powershell.md
For more information about Purview, [see our overview page](overview.md). For mo
Use the [New-AzPurviewAccount](/powershell/module/az.purview/new-azpurviewaccount) cmdlet to create the Purview account: ```azurepowershell
- New-AzPurviewAccount -Name yourPurviewAccountName -ResourceGroupName myResourceGroup -Location eastus -IdentityType SystemAssigned -SkuCapacity 4 -SkuName Standard -PublicNetworkAccess
+ New-AzPurviewAccount -Name yourPurviewAccountName -ResourceGroupName myResourceGroup -Location eastus -IdentityType SystemAssigned -SkuCapacity 4 -SkuName Standard -PublicNetworkAccess Enabled
``` # [Azure CLI](#tab/azure-cli)
purview How To Access Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-access-policies-storage.md
Previously updated : 12/15/2021 Last updated : 1/5/2022
Register and scan each data source with Purview to later define access policies.
If you would like to use a data source to create access policies in Purview, enable it for access policy through the **Data use governance** toggle, as shown in the picture.
+![Image shows how to register a data source for policy.](./media/how-to-access-policies-storage/register-data-source-for-policy-storage.png)
>[!Note] > - To disable a source for *Data use Governance*, remove it first from being bound (i.e. published) in any policy.
If you would like to use a data source to create access policies in Purview, ena
- **Case 2** shows a valid configuration where a Storage account is registered in a Purview account in a different subscription. - **Case 3** shows an invalid configuration arising because Storage accounts S3SA1 and S3SA2 both belong to Subscription 3, but are registered to different Purview accounts. In that case, the *Data use governance* toggle will only work in the Purview account that wins and registers a data source in that subscription first. The toggle will then be greyed out for the other data source.
+![Diagram shows valid and invalid configurations when using multiple Purview accounts to manage policies.](./media/how-to-access-policies-storage/valid-and-invalid-configurations.png)
## Policy authoring
This section describes the steps to create a new policy in Azure Purview.
1. Select the **New Policy** button in the policy page.
- :::image type="content" source="./media/how-to-access-policies-storage/policy-onboard-guide-1.png" alt-text="Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to create policies.":::
+ ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to create policies.](./media/how-to-access-policies-storage/policy-onboard-guide-1.png)
1. The new policy page will appear. Enter the policy **Name** and **Description**. 1. To add policy statements to the new policy, select the **New policy statement** button. This will bring up the policy statement builder.
- :::image type="content" source="./media/how-to-access-policies-storage/create-new-policy.png" alt-text="Image shows how a data owner can create a new policy statement.":::
+ ![Image shows how a data owner can create a new policy statement.](./media/how-to-access-policies-storage/create-new-policy.png)
1. Select the **Effect** button and choose *Allow* from the drop-down list.
This section describes the steps to create a new policy in Azure Purview.
1. Use the **Assets** box if you scanned the data source, otherwise use the **Data sources** box above. Assuming the first, in the **Assets** box, enter the **Data Source Type** and select the **Name** of a previously registered data source.
- :::image type="content" source="./media/how-to-access-policies-storage/select-data-source-type.png" alt-text="Image shows how a data owner can select a Data Resource when editing a policy statement.":::
+ ![Image shows how a data owner can select a Data Resource when editing a policy statement.](./media/how-to-access-policies-storage/select-data-source-type.png)
1. Select the **Continue** button and traverse the hierarchy to select the folder or file. Then select the **Add** button. This will take you back to the policy editor.
- :::image type="content" source="./media/how-to-access-policies-storage/select-asset.png" alt-text="Image shows how a data owner can select the asset when creating or editing a policy statement.":::
+ ![Image shows how a data owner can select the asset when creating or editing a policy statement.](./media/how-to-access-policies-storage/select-asset.png)
1. Select the **Subjects** button and enter the subject identity as a principal, group, or MSI. Then select the **OK** button. This will take you back to the policy editor
- :::image type="content" source="./media/how-to-access-policies-storage/select-subject.png" alt-text="Image shows how a data owner can select the subject when creating or editing a policy statement.":::
+ ![Image shows how a data owner can select the subject when creating or editing a policy statement.](./media/how-to-access-policies-storage/select-subject.png)
1. Repeat the steps #5 to #11 to enter any more policy statements.
Steps to create a new policy in Purview are as follows.
1. Navigate to Purview policy app using the left side panel.
- :::image type="content" source="./media/how-to-access-policies-storage/policy-onboard-guide-2.png" alt-text="Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to update a policy.":::
+ ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to update a policy.](./media/how-to-access-policies-storage/policy-onboard-guide-2.png)
1. The Policy portal will present the list of existing policies in Purview. Select the policy that needs to be updated. 1. The policy details page will appear, including Edit and Delete options. Select the **Edit** button, which brings up the policy statement builder for the statements in this policy. Now, any parts of the statements in this policy can be updated. To delete the policy, use the **Delete** button.
- :::image type="content" source="./media/how-to-access-policies-storage/edit-policy.png" alt-text="Image shows how a data owner can edit or delete a policy statement.":::
+ ![Image shows how a data owner can edit or delete a policy statement.](./media/how-to-access-policies-storage/edit-policy.png)
### Publish the policy
The steps to publish a policy are as follows
1. Navigate to the Purview Policy app using the left side panel.
- :::image type="content" source="./media/how-to-access-policies-storage/policy-onboard-guide-2.png" alt-text="Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to publish a policy.":::
+ ![Image shows how a data owner can access the Policy functionality in Azure Purview when it wants to publish a policy.](./media/how-to-access-policies-storage/policy-onboard-guide-2.png)
1. The Policy portal will present the list of existing policies in Purview. Locate the policy that needs to be published. Select the **Publish** button on the right top corner of the page.
- :::image type="content" source="./media/how-to-access-policies-storage/publish-policy.png" alt-text="Image shows how a data owner can publish a policy.":::
+ ![Image shows how a data owner can publish a policy.](./media/how-to-access-policies-storage/publish-policy.png)
1. A list of data sources is displayed. You can enter a name to filter the list. Then, select each data source where this policy is to be published and then select the **Publish** button.
- :::image type="content" source="./media/how-to-access-policies-storage/select-data-sources-publish-policy.png" alt-text="Image shows how a data owner can select the data source where the policy will be published.":::
+ ![Image shows how a data owner can select the data source where the policy will be published.](./media/how-to-access-policies-storage/select-data-sources-publish-policy.png)
>[!Important] > - Publish is a background operation. It can take up to **2 hours** for the changes to be reflected in the data source.
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-hive-metastore-source.md
When scanning Hive metastore source, Purview supports:
* You must have an active [Azure Purview resource](create-catalog-portal.md).
-* You need Data Source Administrator or Data Reader permissions to register a source and manage it in Azure Purview Studio. For more information about permissions, see [Access control in Azure Purview](catalog-permissions.md).
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in Azure Purview Studio. For more information about permissions, see [Access control in Azure Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [Create and configure a self-hosted integration runtime](manage-integration-runtimes.md).
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-sap-hana.md
When scanning SAP HANA source, Purview supports extracting technical metadata in
- Views including the columns - Stored procedures including the parameter dataset and result set - Functions including the parameter dataset
+- Sequences
- Synonyms ## Prerequisites
When scanning SAP HANA source, Purview supports extracting technical metadata in
* You must have an active [Azure Purview resource](create-catalog-portal.md).
-* You need Data Source Administrator or Data Reader permissions to register a source and manage it in Azure Purview Studio. For more information about permissions, see [Access control in Azure Purview](catalog-permissions.md).
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in Azure Purview Studio. For more information about permissions, see [Access control in Azure Purview](catalog-permissions.md).
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [Create and configure a self-hosted integration runtime](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.13.8013.1.
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-snowflake.md
This article outlines how to register Snowflake, and how to authenticate and int
When scanning Snowflake source, Purview supports: -- Extract technical metadata including:
+- Extracting technical metadata including:
- Server - Databases
When scanning Snowflake source, Purview supports:
- Tasks - Sequences -- Fetch static lineage on assets relationships among tables, views, and streams.
+- Fetching static lineage on assets relationships among tables, views, and streams.
## Prerequisites
purview Supported Classifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/supported-classifications.md
Last updated 09/27/2021
#Customer intent: As a data steward or catalog administrator, I need to understand what's supported under classifications.
-# Supported classifications in Azure Purview
+# System classifications in Azure Purview
-This article lists the supported and defined system classifications in Azure Purview.
+This article lists the supported system classifications in Azure Purview. To learn more about classification, see [Classification](concept-classification.md).
-
-- **Distinct data threshold**: The total number of distinct data values that need to be found in a column before the scanner runs the data pattern on it. Distinct data threshold has nothing to do with pattern matching but it is a pre-requisite for pattern matching. Our system classification rules require there to be at least 8 distinct values in each column to subject them to classification. The system requires this value to make sure that the column contains enough data for the scanner to accurately classify it. For example, a column that contains multiple rows that all contain the value 1 won't be classified. Columns that contain one row with a value and the rest of the rows have null values also won't get classified. If you specify multiple patterns, this value applies to each of them.
-
-- **Minimum match threshold**: It is the minimum percentage of data value matches in a column that must be found by the scanner for the classification to be applied. The system classification value is set at 60%.
-
-## Defined system classifications
-
-Azure Purview classifies data by [RegEx](https://wikipedia.org/wiki/Regular_expression) and [Bloom Filter](https://wikipedia.org/wiki/Bloom_filter). The following lists describe the format, pattern, and keywords for the Azure Purview defined system classifications.
-
-Each classification name is prefixed by MICROSOFT.
+Azure Purview classifies data by [RegEx](https://wikipedia.org/wiki/Regular_expression) and [Bloom Filter](https://wikipedia.org/wiki/Bloom_filter). The following lists describe the format, pattern, and keywords for the Azure Purview defined system classifications. Each classification name is prefixed by *MICROSOFT*.
> [!Note] > Azure Purview can classify both structured (CSV, TSV, JSON, SQL Table etc.) as well as unstructured data (DOC, PDF, TXT etc.). However, there are certain classifications that are only applicable to structured data. Here is the list of classifications that Purview doesn't apply on unstructured data - City Name, Country Name, Date Of Birth, Email, Ethnic Group, GeoLocation, Person Name, U.S. Phone Number, U.S. States, U.S. ZipCode -
-## Bloom Filter Classifications
+## Bloom Filter based classifications
## City, Country, and Place
remote-rendering Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/reference/network-requirements.md
We recommend running the test multiple times and taking the worst results.
While low latency is not a guarantee that Azure Remote Rendering will work well on your network, we have usually seen it perform fine in situations where these tests passed successfully. If you are encountering artifacts such as unstable, jittery, or jumping holograms when running Azure Remote Rendering, refer to the [troubleshooting guide](../resources/troubleshoot.md).
+### How to 'ping' a rendering session
+
+It might be useful to measure latencies against a specific session VM, as this value may differ from values reported by www.azurespeed.com. The hostname of a session is logged by the [powershell script to create a new session](../samples/powershell-example-scripts.md#create-a-rendering-session). Similarly, there is a hostname property in the REST call response and also in the C++/C# runtime API (`RenderingSessionProperties.Hostname`). You also need the handshake port, which can be retrieved in the same way.
+
+Here is some sample output from running the ```RenderingSession.ps1``` script:
+
+![Retrieve hostname from powershell output](./media/session-hostname-powershell.png)
+
+ARR session VMs don't work with the built-in command-line `ping` tool. Instead, use a ping tool that works over TCP/UDP, such as PsPing [(download link)](https://docs.microsoft.com/sysinternals/downloads/psping).
+The calling syntax is:
+
+```PowerShell
+psping.exe <hostname>:<handshakeport>
+```
+
+Example output from running PsPing:
+
+![PsPing an ARR session](./media/psping-arr-session.png)
+
+
## Next steps * [Quickstart: Render a model with Unity](../quickstarts/render-model.md)
search Cognitive Search Common Errors Warnings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-common-errors-warnings.md
Incremental progress during indexing ensures that if indexer execution is interr
The ability to resume an unfinished indexing job is predicated on having documents ordered by the `_ts` column. The indexer uses the timestamp to determine which document to pick up next. If the `_ts` column is missing, or if the indexer can't determine whether a custom query is ordered by it, the indexer starts at the beginning and you'll see this warning.
-It is possible to override this behavior, enabling incremental progress and suppressing this warning by using the `assumeOrderByHighWatermarkColumn` configuration property.
+It is possible to override this behavior, enabling incremental progress and suppressing this warning by using the `assumeOrderByHighWaterMarkColumn` configuration property.
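A minimal sketch of where this configuration property goes in a Cosmos DB indexer definition; the indexer, data source, and index names are placeholders:

```http
PUT /indexers/[indexer name]?api-version=2020-06-30
{
  "dataSourceName": "[cosmos db data source name]",
  "targetIndexName": "[index name]",
  "parameters": {
    "configuration": {
      "assumeOrderByHighWaterMarkColumn": true
    }
  }
}
```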
For more information, see [Incremental progress and custom queries](search-howto-index-cosmosdb.md#IncrementalProgress).
Collections with [Lazy](../cosmos-db/index-policy.md#indexing-mode) indexing pol
## Warning: The document contains very long words (longer than 64 characters). These words may result in truncated and/or unreliable model predictions.
-This warning is passed from the Language service of Azure Cognitive Services. In some cases, it is safe to ignore this warning, such as when your document contains a long URL (which likely isn't a key phrase or driving sentiment, etc.). Be aware that when a word is longer than 64 characters, it will be truncated to 64 characters which can affect model predictions.
+This warning is passed from the Language service of Azure Cognitive Services. In some cases, it is safe to ignore this warning, such as when your document contains a long URL (which likely isn't a key phrase or driving sentiment, etc.). Be aware that when a word is longer than 64 characters, it will be truncated to 64 characters which can affect model predictions.
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-overview.md
Previously updated : 12/17/2021 Last updated : 01/05/2022 # Indexers in Azure Cognitive Search An *indexer* in Azure Cognitive Search is a crawler that extracts searchable content from cloud data sources and populates a search index using field-to-field mappings between source data and a search index. This approach is sometimes referred to as a 'pull model' because the search service pulls data in without you having to write any code that adds data to an index. Indexers also drive the [AI enrichment](cognitive-search-concept-intro.md) capabilities of Cognitive Search, integrating external processing of content en route to an index.
-Indexers are cloud-only, with individual indexers for supported data sources. When configuring an indexer, you'll specify a data source (origin) and a search index (destination). Several sources, such as Azure Blob storage, have additional configuration properties specific to that content type.
+Indexers are cloud-only, with individual indexers for [supported data sources](#supported-data-sources). When configuring an indexer, you'll specify a data source (origin) and a search index (destination). Several sources, such as Azure Blob Storage, have additional configuration properties specific to that content type.
You can run indexers on demand or on a recurring data refresh schedule that runs as often as every five minutes. More frequent updates require a ['push model'](search-what-is-data-import.md) that simultaneously updates data in both Azure Cognitive Search and your external data source.
For each document it receives, an indexer implements or coordinates multiple ste
### Stage 1: Document cracking
-Document cracking is the process of opening files and extracting content. Text-based content can be extracted from files on a service, rows in a table, or items in container or collection. If you add a skillset and [image skills](cognitive-search-concept-image-scenarios.md) to an indexer, document cracking can also extract images and queue them for processing.
+Document cracking is the process of opening files and extracting content. Text-based content can be extracted from files on a service, rows in a table, or items in a container or collection. If you add a skillset and [image skills](cognitive-search-concept-image-scenarios.md), document cracking can also extract images and queue them for image processing.
Depending on the data source, the indexer will try different operations to extract potentially indexable content:
Depending on the data source, the indexer will try different operations to extra
### Stage 2: Field mappings
-An indexer extracts text from a source field and sends it to a destination field in an index or knowledge store. When field names and types coincide, the path is clear. However, you might want different names or types in the output, in which case you need to tell the indexer how to map the field.
+An indexer extracts text from a source field and sends it to a destination field in an index or knowledge store. When field names and data types coincide, the path is clear. However, you might want different names or types in the output, in which case you need to tell the indexer how to map the field.
-This step occurs after document cracking, but before transformations, when the indexer is reading from the source documents. When you define a [field mapping](search-indexer-field-mappings.md), the value of the source field is sent as-is to the destination field with no modifications.
+To [specify field mappings](search-indexer-field-mappings.md), enter the source and destination fields in the indexer definition.
+
+Field mapping occurs after document cracking, but before transformations, when the indexer is reading from the source documents. When you define a field mapping, the value of the source field is sent as-is to the destination field with no modifications.
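A minimal sketch of what field mappings look like in an indexer definition; the service object and field names below are placeholders, and each mapping simply renames a source field to a differently named index field:

```http
PUT /indexers/[indexer name]?api-version=2020-06-30
{
  "dataSourceName": "[data source name]",
  "targetIndexName": "[index name]",
  "fieldMappings": [
    { "sourceFieldName": "_id", "targetFieldName": "id" },
    { "sourceFieldName": "doc_title", "targetFieldName": "title" }
  ]
}
```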
### Stage 3: Skillset execution
-Skillset execution is an optional step that invokes built-in or custom AI processing. You might need it for optical character recognition (OCR) in the form of image analysis if the source data is a binary image, or you might need language translation if content is in different languages.
+Skillset execution is an optional step that invokes built-in or custom AI processing. You might need it for optical character recognition (OCR) in the form of image analysis if the source data is a binary image, or you might need text translation if content is in different languages.
Whatever the transformation, skillset execution is where enrichment occurs. If an indexer is a pipeline, you can think of a [skillset](cognitive-search-defining-skillset.md) as a "pipeline within the pipeline". ### Stage 4: Output field mappings
-If you include a skillset, you will most likely need to include output field mappings. The output of a skillset is really a tree of information called the *enriched document*. Output field mappings allow you to select which parts of this tree to map into fields in your index. Learn how to [define output field mappings](cognitive-search-output-field-mapping.md).
+If you include a skillset, you will need to [specify output field mappings](cognitive-search-output-field-mapping.md) in the indexer definition. The output of a skillset is manifested internally as a tree structure referred to as an *enriched document*. Output field mappings allow you to select which parts of this tree to map into fields in your index.
-Whereas field mappings associate verbatim values from the data source to destination fields, output field mappings tell the indexer how to associate the transformed values in the enriched document to destination fields in the index. Unlike field mappings, which are considered optional, you will always need to define an output field mapping for any transformed content that needs to reside in an index.
+Despite the similarity in names, output field mappings and field mappings build associations from different sources. Field mappings associate the content of source field to a destination field in a search index. Output field mappings associate the content of an internal enriched document (skill outputs) to destination fields in the index. Unlike field mappings, which are considered optional, you will always need to define an output field mapping for any transformed content that needs to reside in an index.
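A minimal sketch of an output field mapping, assuming a skillset whose output lands in the enriched document at `/document/content/keyphrases` (all names are placeholders):

```http
PUT /indexers/[indexer name]?api-version=2020-06-30
{
  "dataSourceName": "[data source name]",
  "targetIndexName": "[index name]",
  "skillsetName": "[skillset name]",
  "outputFieldMappings": [
    { "sourceFieldName": "/document/content/keyphrases", "targetFieldName": "keyphrases" }
  ]
}
```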
The next image shows a sample indexer [debug session](cognitive-search-debug-session.md) representation of the indexer stages: document cracking, field mappings, skillset execution, and output field mappings.
An indexer will automate some tasks related to data ingestion, but creating an i
### Step 3: Create and run (or schedule) the indexer
-An indexer runs when you first [create an indexer](/rest/api/searchservice/Create-Indexer) on the search service. It's only when you create or run the indexer that you'll find out if the data source is accessible or the skillset is valid. After the first run, you can re-run it on demand using [Run Indexer](/rest/api/searchservice/run-indexer), or you can [define a recurring schedule](search-howto-schedule-indexers.md).
+By default, the first indexer execution occurs when you [create an indexer](/rest/api/searchservice/Create-Indexer) on the search service. You can set the "disabled" property in an indexer to create it without running it.
+
+Indexer execution is when you'll find out whether the data source is accessible and whether the skillset is valid. Until indexer execution starts, dependent objects such as data sources and skillsets are inactive on the search service.
+
+After the first indexer run, you can re-run it on demand using [Run Indexer](/rest/api/searchservice/run-indexer), or you can [define a recurring schedule](search-howto-schedule-indexers.md).
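For example, a minimal sketch of a Create Indexer request that creates the indexer without running it immediately and attaches a two-hour schedule for later (all names are placeholders):

```http
POST /indexers?api-version=2020-06-30
{
  "name": "[indexer name]",
  "dataSourceName": "[data source name]",
  "targetIndexName": "[index name]",
  "disabled": true,
  "schedule": { "interval": "PT2H" }
}
```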
-You can monitor the indexer status in the portal or through [Get Indexer Status API](/rest/api/searchservice/get-indexer-status). You should also [run queries on the index](search-query-create.md) to verify the result is what you expected.
+You can monitor [indexer status in the portal](search-howto-monitor-indexers.md) or through [Get Indexer Status API](/rest/api/searchservice/get-indexer-status). You should also [run queries on the index](search-query-create.md) to verify the result is what you expected.
## Next steps
-Now that you've been introduced, a next step is to review indexer properties and parameters, scheduling, and indexer monitoring. Alternatively, you could return to the list of [supported data sources](#supported-data-sources) for more information about a specific source.
+Now that you've been introduced to indexers, a next step is to review indexer properties and parameters, scheduling, and indexer monitoring. Alternatively, you could return to the list of [supported data sources](#supported-data-sources) for more information about a specific source.
+ [Create indexers](search-howto-create-indexers.md)
+ [Reset and run indexers](search-howto-run-reset-indexers.md)
search Search Pagination Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-pagination-page-layout.md
Previously updated : 11/29/2021 Last updated : 01/04/2022 # How to work with search results in Azure Cognitive Search This article explains how to work with a query response in Azure Cognitive Search.
-The structure of a response is determined by parameters in the query itself: [Search Documents (REST)](/rest/api/searchservice/Search-Documents) or [SearchResults Class (Azure for .NET)](/dotnet/api/azure.search.documents.models.searchresults-1). Parameters on the query determine:
+The structure of a response is determined by parameters in the query itself, as described in [Search Documents (REST)](/rest/api/searchservice/Search-Documents) or [SearchResults Class (Azure for .NET)](/dotnet/api/azure.search.documents.models.searchresults-1). Parameters on the query determine:
+ Number of results in the response (up to 50, by default) + Fields in each result
The structure of a response is determined by parameters in the query itself: [Se
While a search document might consist of a large number of fields, typically only a few are needed to represent each document in the result set. On a query request, append `$select=<field list>` to specify which fields show up in the response. A field must be attributed as **Retrievable** in the index to be included in a result.
-Fields that work best include those that contrast and differentiate among documents, providing sufficient information to invite a click-through response on the part of the user. On an e-commerce site, it might be a product name, description, brand, color, size, price, and rating. For the hotels-sample-index built-in sample, it might be fields in the following example:
+Fields that work best include those that contrast and differentiate among documents, providing sufficient information to invite a click-through response on the part of the user. On an e-commerce site, it might be a product name, description, brand, color, size, price, and rating. For the built-in hotels-sample index, it might be the "select" fields in the following example:
```http POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
### Tips for unexpected results
-Occasionally, the substance and not the structure of results are unexpected. When query outcomes are unexpected, you can try these query modifications to see if results improve:
+Occasionally, the substance and not the structure of results are unexpected. For example, you might find that some results appear to be duplicates, or a result that *should* appear near the top is positioned lower in the results. When query outcomes are unexpected, you can try these query modifications to see if results improve:
+ Change **`searchMode=any`** (default) to **`searchMode=all`** to require matches on all criteria instead of any of the criteria. This is especially true when boolean operators are included in the query.
Occasionally, the substance and not the structure of results are unexpected. Whe
## Paging results
-By default, the search engine returns up to the first 50 matches. The top 50 is determined by search score, assuming the query is full text search or semantic search, or in an arbitrary order for exact match queries (where "@searchScore=1.0").
+By default, the search engine returns up to the first 50 matches. The top 50 are determined by search score, assuming the query is full text search or semantic search, or in an arbitrary order for exact match queries (where "@searchScore=1.0").
To control the paging of all documents returned in a result set, add `$top` and `$skip` parameters to the query request. The following list explains the logic.
To control the paging of all documents returned in a result set, add `$top` and
+ Return the second set, skipping the first 15 to get the next 15: `$top=15&$skip=15`. Repeat for the third set of 15: `$top=15&$skip=30` The results of paginated queries are not guaranteed to be stable if the underlying index is changing. Paging changes the value of `$skip` for each page, but each query is independent and operates on the current view of the data as it exists in the index at query time (in other words, there is no caching or snapshot of results, such as those found in a general purpose database).
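As a concrete sketch of the second-page request in POST syntax (the hotels-sample index and its `Rating` field are used for illustration; substitute your own index and sort field):

```http
POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
{
  "search": "*",
  "top": 15,
  "skip": 15,
  "orderby": "Rating desc",
  "count": true
}
```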
- 
+ Following is an example of how you might get duplicates. Assume an index with four documents: ```text
Following is an example of how you might get duplicates. Assume an index with fo
{ "id": "3", "rating": 2 } { "id": "4", "rating": 1 } ```
- 
+ Now assume you want results returned two at a time, ordered by rating. You would execute this query to get the first page of results: `$top=2&$skip=0&$orderby=rating desc`, producing the following results: ```text { "id": "1", "rating": 5 } { "id": "2", "rating": 3 } ```
- 
+ On the service, assume a fifth document is added to the index in between query calls: `{ "id": "5", "rating": 4 }`. Shortly thereafter, you execute a query to fetch the second page: `$top=2&$skip=2&$orderby=rating desc`, and get these results: ```text { "id": "2", "rating": 3 } { "id": "3", "rating": 2 } ```
- 
+ Notice that document 2 is fetched twice. This is because the new document 5 has a greater value for rating, so it sorts before document 2 and lands on the first page. While this behavior might be unexpected, it's typical of how a search engine behaves. ## Ordering results
Another approach that promotes order consistency is using a [custom scoring prof
## Hit highlighting
-Hit highlighting refers to text formatting (such as bold or yellow highlights) applied to matching terms in a result, making it easy to spot the match. Hit highlighting instructions are provided on the [query request](/rest/api/searchservice/search-documents). Queries that trigger query expansion in the engine, such as fuzzy and wildcard search, have limited support for hit highlighting.
+Hit highlighting refers to text formatting (such as bold or yellow highlights) applied to matching terms in a result, making it easy to spot the match. Highlighting is useful for longer content fields, such as a description field, where the match is not immediately obvious.
+
+Hit highlighting instructions are provided on the [query request](/rest/api/searchservice/search-documents). Queries that trigger query expansion in the engine, such as fuzzy and wildcard search, have limited support for hit highlighting.
+
+### Requirements for hit highlighting
-To enable hit highlighting, add `highlight=[comma-delimited list of string fields]` to specify which fields will use highlighting. Highlighting is useful for longer content fields, such as a description field, where the match is not immediately obvious. Only field definitions that are attributed as **searchable** qualify for hit highlighting.
++ Fields must be Edm.String or Collection(Edm.String)
++ Fields must be attributed as **searchable**
-By default, Azure Cognitive Search returns up to five highlights per field. You can adjust this number by appending to the field a dash followed by an integer. For example, `highlight=Description-10` returns up to 10 highlights on matching content in the Description field.
+### Specify highlighting in the request
-Formatting is applied to whole term queries. The type of formatting is determined by tags, `highlightPreTag` and `highlightPostTag`, and your code handles the response (for example, applying a bold font or a yellow background).
+To return highlighted terms, include the "highlight" parameter in the query request. The parameter is set to a comma-delimited list of fields.
-In the following query request example, the terms "divine", "secrets", and "secret" found within the Description field are tagged for highlighting.
+By default, the formatting markup is `<em>`, but you can override the tag using the `highlightPreTag` and `highlightPostTag` parameters. Your client code handles the response (for example, applying a bold font or a yellow background).
```http POST /indexes/good-books/docs/search?api-version=2020-06-30 { "search": "divine secrets",
- "highlight": "Description"
+ "highlight": "title, original_title",
+ "highlightPreTag": "<b>",
+ "highlightPostTag": "</b>"
} ```
-The following portal screenshot illustrates the results of phrase query highlighting. Results are returned in the "@search.highlights" field. Individual terms, single or consecutive, are marked up in the result.
+By default, Azure Cognitive Search returns up to five highlights per field. You can adjust this number by appending a dash and an integer to the field name. For example, `"highlight": "description-10"` returns up to 10 highlighted terms on matching content in the "description" field.
+
+### Highlighted results
+
+When highlighting is added to the query, the response includes an "@search.highlights" property for each result so that your application code can target that structure. The fields listed in the "highlight" parameter are the ones included in it.
+
+In a keyword search, each term is scanned for independently. A query for "divine secrets" will return matches on any document containing either term.
:::image type="content" source="media/search-pagination-page-layout/highlighting-example.png" alt-text="Screenshot of highlighting over a phrase query." border="true":::
-### Highlighting behavior on older search services
+### Keyword search highlighting
+
+Within a highlighted field, formatting is applied to whole terms. For example, on a match against "The Divine Secrets of the Ya-Ya Sisterhood", formatting is applied to each term separately, even though they are consecutive.
+
+```json
+"@odata.count": 39,
+"value": [
+ {
+ "@search.score": 19.593246,
+ "@search.highlights": {
+ "original_title": [
+ "<em>Divine</em> <em>Secrets</em> of the Ya-Ya Sisterhood"
+ ],
+ "title": [
+ "<em>Divine</em> <em>Secrets</em> of the Ya-Ya Sisterhood"
+ ]
+ },
+ "original_title": "Divine Secrets of the Ya-Ya Sisterhood",
+ "title": "Divine Secrets of the Ya-Ya Sisterhood"
+ },
+ {
+ "@search.score": 12.779835,
+ "@search.highlights": {
+ "original_title": [
+ "<em>Divine</em> Madness"
+ ],
+ "title": [
+ "<em>Divine</em> Madness (Cherub, #5)"
+ ]
+ },
+ "original_title": "Divine Madness",
+ "title": "Divine Madness (Cherub, #5)"
+ },
+ {
+ "@search.score": 12.62534,
+ "@search.highlights": {
+ "original_title": [
+ "Grave <em>Secrets</em>"
+ ],
+ "title": [
+ "Grave <em>Secrets</em> (Temperance Brennan, #5)"
+ ]
+ },
+ "original_title": "Grave Secrets",
+ "title": "Grave Secrets (Temperance Brennan, #5)"
+ }
+ ]
+}
+```
+
+### Phrase search highlighting
+
+Whole-term formatting applies even on a phrase search, where multiple terms are enclosed in double quotation marks. The following example is the same query, except that "divine secrets" is submitted as a quotation-enclosed phrase (some clients, such as Postman, require that you escape the interior quotation marks with a backslash `\"`):
+
+```http
+POST /indexes/good-books/docs/search?api-version=2020-06-30
+ {
+ "search": "\"divine secrets\"",,
+ "select": "title,original_title",
+ "highlight": "title",
+ "highlightPreTag": "<b>",
+ "highlightPostTag": "</b>",
+ "count": true
+ }
+```
+
+Because the criteria now specify both terms as a single phrase, only one match is found in the search index. The response to the above query looks like this:
+
+```json
+{
+ "@odata.count": 1,
+ "value": [
+ {
+ "@search.score": 19.593246,
+ "@search.highlights": {
+ "title": [
+ "<b>Divine</b> <b>Secrets</b> of the Ya-Ya Sisterhood"
+ ]
+ },
+ "original_title": "Divine Secrets of the Ya-Ya Sisterhood",
+ "title": "Divine Secrets of the Ya-Ya Sisterhood"
+ }
+ ]
+}
+```
+
+#### Phrase highlighting on older services
Search services that were created before July 15, 2020 implement a different highlighting experience for phrase queries.
-Before July 2020, any term in the phrase is highlighted:
+For the following examples, assume a query string that includes the quote-enclosed phrase "super bowl". On services created before July 2020, any term in the phrase is highlighted:
```json "@search.highlights": {
Before July 2020, any term in the phrase is highlighted:
] ```
-After July 2020, only phrases that match the full phrase query will be returned in "@search.highlights":
+For search services created after July 2020, only phrases that match the full phrase query will be returned in "@search.highlights":
```json "@search.highlights": {
security Operational Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/operational-security.md
For customers interested in storing their [audit events](../../active-directory/
## Summary
-This article summaries protecting your privacy and securing your data, while delivering software and services that help you manage the IT infrastructure of your organization. Microsoft recognizes that when they entrust their data to others, that trust requires rigorous security. Microsoft adheres to strict compliance and security guidelinesΓÇöfrom coding to operating a service. Securing and protecting data is a top priority at Microsoft.
+This article summarizes how Microsoft protects your privacy and secures your data while delivering software and services that help you manage the IT infrastructure of your organization. Microsoft recognizes that when customers entrust their data to others, that trust requires rigorous security. Microsoft adheres to strict compliance and security guidelines, from coding to operating a service. Securing and protecting data is a top priority at Microsoft.
This article explains
sentinel Connect Azure Windows Microsoft Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-windows-microsoft-services.md
You can find and query the data for each resource type using the table name that
> [!IMPORTANT] >
-> - Some connectors based on the Azure Monitor Agent (AMA) are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Some connectors based on the Azure Monitor Agent (AMA) are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> The Azure Monitor Agent is currently supported only for Windows Security Events and Windows Forwarded Events.
The [Azure Monitor agent](../azure-monitor/agents/azure-monitor-agent-overview.md) uses **Data collection rules (DCRs)** to define the data to collect from each agent. Data collection rules offer you two distinct advantages:
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-syslog.md
Title: Connect Syslog data to Microsoft Sentinel | Microsoft Docs
description: Connect any machine or appliance that supports Syslog to Microsoft Sentinel by using an agent on a Linux machine between the appliance and Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 01/05/2022
Many device types have their own data connectors appearing in the **Data connect
All connectors listed in the gallery will display any specific instructions on their respective connector pages in the portal, as well as in their sections of the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) page.
+If the instructions on your data connector's page in Microsoft Sentinel indicate that the Kusto functions are deployed as [Advanced SIEM Information Model (ASIM)](normalization.md) parsers, make sure that you have the ASIM parsers deployed to your workspace.
+
+Use the link in the data connector page to deploy your parsers, or follow the instructions from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Parsers/ASim).
+
+For more information, see [Advanced SIEM Information Model (ASIM) parsers](normalization-about-parsers.md).
## Configure the Log Analytics agent
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/data-connectors-reference.md
Title: Find your Microsoft Sentinel data connector | Microsoft Docs
description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 12/23/2021 Last updated : 01/04/2022
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Supported by** | Microsoft | | | |
+## Microsoft Sysmon for Linux (Preview)
+
+| Connector attribute | Description |
+| | |
+| **Data ingestion method** | [**Syslog**](connect-syslog.md), with [ASIM parsers](normalization-about-parsers.md) based on Kusto functions |
+| **Log Analytics table(s)** | Syslog |
+| **Supported by** | Microsoft |
+| | |
## Morphisec UTPP (Preview)
If a longer timeout duration is required, consider upgrading to an [App Service
| Connector attribute | Description | | | |
-| **Data ingestion method** | **Azure service-to-service integration: <br>[Log Analytics agent-based connections](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections)** |
+| **Data ingestion method** | **Azure service-to-service integration: <br>[Log Analytics agent-based connections](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections) (Legacy)** |
| **Log Analytics table(s)** | SecurityEvents | | **Supported by** | Microsoft | | | |
Follow the instructions to obtain the credentials.
| Connector attribute | Description | | | |
-| **Data ingestion method** | **Azure service-to-service integration: <br>[Log Analytics agent-based connections](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections)** |
+| **Data ingestion method** | **Azure service-to-service integration: <br>[Log Analytics agent-based connections](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections) (Legacy)** |
| **Log Analytics table(s)** | DnsEvents<br>DnsInventory | | **Supported by** | Microsoft | | | |
+### Troubleshooting your Windows DNS Server data connector
+
+If your DNS events don't show up in Microsoft Sentinel:
+
+1. Make sure that DNS analytics logs on your servers are [enabled](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn800669(v=ws.11)#to-enable-dns-diagnostic-logging).
+1. Go to Azure DNS Analytics.
+1. In the **Configuration** area, change any of the settings and save your changes. Change your settings back if you need to, and then save your changes again.
+1. Check your Azure DNS Analytics to make sure that your events and queries display properly.
+
+For more information, see [Gather insights about your DNS infrastructure with the DNS Analytics Preview solution](/azure/azure-monitor/insights/dns-analytics).
+ ## Windows Forwarded Events (Preview) | Connector attribute | Description |
We recommend installing the [Advanced SIEM Information Model (ASIM)](normalizati
| Connector attribute | Description | | | |
-| **Data ingestion method** | **Azure service-to-service integration: <br>[Log Analytics agent-based connections](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections)** |
+| **Data ingestion method** | **Azure service-to-service integration: <br>[Log Analytics agent-based connections](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections) (Legacy)** |
| **Log Analytics table(s)** | WindowsFirewall | | **Supported by** | Microsoft | | | |
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sentinel-solutions-catalog.md
Title: Microsoft Sentinel content hub catalog | Microsoft Docs
description: This article displays and details the currently available Microsoft Sentinel content hub packages. Previously updated : 12/20/2021 Last updated : 01/04/2022
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Microsoft Sentinel 4 Microsoft Dynamics 365** | [Data connector](data-connectors-reference.md#dynamics-365), workbooks, analytics rules, and hunting queries | Application |Microsoft | |**Microsoft Sentinel for Teams** | Analytics rules, playbooks, hunting queries | Application | Microsoft | | **IoT OT Threat Monitoring with Defender for IoT** | [Analytics rules, playbooks, workbook](iot-solution.md) | Internet of Things (IoT), Security - Threat Protection | Microsoft |
+| **Microsoft Sysmon for Linux** | [Data connector](data-connectors-reference.md#microsoft-sysmon-for-linux-preview) | Platform | Microsoft |
| | | | |
service-bus-messaging Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/private-link-service.md
Title: Integrate Azure Service Bus with Azure Private Link Service
description: Learn how to integrate Azure Service Bus with Azure Private Link Service Previously updated : 03/29/2021 Last updated : 01/04/2022
To integrate a Service Bus namespace with Azure Private Link, you'll need the fo
Your private endpoint and virtual network must be in the same region. When you select a region for the private endpoint using the portal, it automatically filters to only the virtual networks that are in that region. Your Service Bus namespace can be in a different region, and your private endpoint uses a private IP address in your virtual network.
-### steps
+### Steps
If you already have an existing namespace, you can create a private endpoint by following these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the search bar, type in **Service Bus**. 3. Select the **namespace** from the list to which you want to add a private endpoint.
-2. On the left menu, select **Networking** option under **Settings**. By default, the **Selected networks** option is selected.
-
+2. On the left menu, select **Networking** option under **Settings**.
+ > [!NOTE] > You see the **Networking** tab only for **premium** namespaces.
-
- :::image type="content" source="./media/service-bus-ip-filtering/default-networking-page.png" alt-text="Networking page - default" lightbox="./media/service-bus-ip-filtering/default-networking-page.png":::
-
- > [!WARNING]
- > If you don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key).
-
- If you select the **All networks** option, your Service Bus namespace accepts connections from any IP address (using the access key). This default setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+1. On the **Networking** page, for **Public network access**, you can set one of the three following options. Select **Disabled** if you want the namespace to be accessed only via private endpoints.
+ - **Disabled**. This option disables any public access to the namespace. The namespace will be accessible only through [private endpoints](private-link-service.md).
+
+ :::image type="content" source="./media/service-bus-ip-filtering/public-access-disabled.png" alt-text="Networking page - public access tab - public network access is disabled.":::
+ - **Selected networks**. This option enables public access to the namespace using an access key from selected networks.
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
+
+ :::image type="content" source="./media/service-bus-ip-filtering/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/service-bus-ip-filtering/selected-networks.png":::
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the Service Bus namespace accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
- ![Firewall - All networks option selected](./media/service-bus-ip-filtering/firewall-all-networks-selected.png)
+ :::image type="content" source="./media/service-bus-ip-filtering/firewall-all-networks-selected.png" alt-text="Screenshot of the Azure portal Networking page. The option to allow access from All networks is selected on the Firewalls and virtual networks tab.":::
5. To allow access to the namespace via private endpoints, select the **Private endpoint connections** tab at the top of the page 6. Select the **+ Private Endpoint** button at the top of the page.
Aliases: <service-bus-namespace-name>.servicebus.windows.net
For more, see [Azure Private Link service: Limitations](../private-link/private-link-service-overview.md#limitations)
-## Next Steps
+## Next steps
- Learn more about [Azure Private Link](../private-link/private-link-service-overview.md) - Learn more about [Azure Service Bus](service-bus-messaging-overview.md)
service-bus-messaging Service Bus Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-ip-filtering.md
Title: Configure IP firewall rules for Azure Service Bus description: How to use Firewall Rules to allow connections from specific IP addresses to Azure Service Bus. Previously updated : 03/29/2021 Last updated : 01/04/2022 # Allow access to Azure Service Bus namespace from specific IP addresses or ranges
This section shows you how to use the Azure portal to create IP firewall rules f
> [!NOTE] > You see the **Networking** tab only for **premium** namespaces.
+1. On the **Networking** page, for **Public network access**, you can set one of the three following options. Choose **Selected networks** option to allow access from only specified IP addresses.
+ - **Disabled**. This option disables any public access to the namespace. The namespace will be accessible only through [private endpoints](private-link-service.md).
+
+ :::image type="content" source="./media/service-bus-ip-filtering/public-access-disabled.png" alt-text="Networking page - public access tab - public network access is disabled.":::
+ - **Selected networks**. This option enables public access to the namespace using an access key from selected networks.
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
- :::image type="content" source="./media/service-bus-ip-filtering/default-networking-page.png" alt-text="Networking page - default" lightbox="./media/service-bus-ip-filtering/default-networking-page.png":::
-
- If you select the **All networks** option, your Service Bus namespace accepts connections from any IP address. This default setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+ :::image type="content" source="./media/service-bus-ip-filtering/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/service-bus-ip-filtering/selected-networks.png":::
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the Service Bus namespace accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
- ![Screenshot of the Azure portal Networking page. The option to allow access from All networks is selected on the Firewalls and virtual networks tab.](./media/service-bus-ip-filtering/firewall-all-networks-selected.png)
+ :::image type="content" source="./media/service-bus-ip-filtering/firewall-all-networks-selected.png" alt-text="Screenshot of the Azure portal Networking page. The option to allow access from All networks is selected on the Firewalls and virtual networks tab.":::
1. To allow access from only specified IP address, select the **Selected networks** option if it isn't already selected. In the **Firewall** section, follow these steps: 1. Select **Add your client IP address** option to give your current client IP the access to the namespace. 2. For **address range**, enter a specific IPv4 address or a range of IPv4 address in CIDR notation.
This section shows you how to use the Azure portal to create IP firewall rules f
>[!WARNING] > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key).
- ![Screenshot of the Azure portal Networking page. The option to allow access from Selected networks is selected and the Firewall section is highlighted.](./media/service-bus-ip-filtering/firewall-selected-networks-trusted-access-disabled.png)
+ :::image type="content" source="./media/service-bus-ip-filtering/firewall-selected-networks-trusted-access-disabled.png" lightbox="./media/service-bus-ip-filtering/firewall-selected-networks-trusted-access-disabled.png" alt-text="Screenshot of the Azure portal Networking page. The option to allow access from Selected networks is selected and the Firewall section is highlighted.":::
3. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up on the portal notifications. > [!NOTE]
service-bus-messaging Service Bus Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-service-endpoints.md
Title: Configure virtual network service endpoints for Azure Service Bus description: This article provides information on how to add a Microsoft.ServiceBus service endpoint to a virtual network. Previously updated : 03/29/2021 Last updated : 01/04/2022
This section shows you how to use Azure portal to add a virtual network service
> [!NOTE] > You see the **Networking** tab only for **premium** namespaces.
+1. On the **Networking** page, for **Public network access**, you can set one of the three following options. Choose **Selected networks** option to allow access from only specified IP addresses.
+ - **Disabled**. This option disables any public access to the namespace. The namespace will be accessible only through [private endpoints](private-link-service.md).
+
+ :::image type="content" source="./media/service-bus-ip-filtering/public-access-disabled.png" alt-text="Networking page - public access tab - public network access is disabled.":::
+ - **Selected networks**. This option enables public access to the namespace using an access key from selected networks.
+
+ > [!IMPORTANT]
+ > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
- :::image type="content" source="./media/service-bus-ip-filtering/default-networking-page.png" alt-text="Networking page - default" lightbox="./media/service-bus-ip-filtering/default-networking-page.png":::
-
- If you select the **All networks** option, your Service Bus namespace accepts connections from any IP address. This default setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+ :::image type="content" source="./media/service-bus-ip-filtering/selected-networks.png" alt-text="Networking page with the selected networks option selected." lightbox="./media/service-bus-ip-filtering/selected-networks.png":::
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the Service Bus namespace accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
- ![Firewall - All networks option selected](./media/service-bus-ip-filtering/firewall-all-networks-selected.png)
+ :::image type="content" source="./media/service-bus-ip-filtering/firewall-all-networks-selected.png" alt-text="Screenshot of the Azure portal Networking page. The option to allow access from All networks is selected on the Firewalls and virtual networks tab.":::
2. To restrict access to specific virtual networks, select the **Selected networks** option if it isn't already selected.
-1. In the **Virtual Network** section of the page, select **+Add existing virtual network**.
+1. In the **Virtual Network** section of the page, select **+Add existing virtual network**. Select **+ Create new virtual network** if you want to create a new VNet.
- ![add existing virtual network](./media/service-endpoints/add-vnet-menu.png)
+ :::image type="content" source="./media/service-endpoints/add-vnet-menu.png" lightbox="./media/service-endpoints/add-vnet-menu.png" alt-text="Image showing the selection of Add existing virtual network button on the toolbar.":::
>[!WARNING] > If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key). 3. Select the virtual network from the list of virtual networks, and then pick the **subnet**. You have to enable the service endpoint before adding the virtual network to the list. If the service endpoint isn't enabled, the portal will prompt you to enable it.
- ![select subnet](./media/service-endpoints/select-subnet.png)
-
+ :::image type="content" source="./media/service-endpoints/select-subnet.png" alt-text="Image showing the selection of VNet and subnet.":::
4. You should see the following successful message after the service endpoint for the subnet is enabled for **Microsoft.ServiceBus**. Select **Add** at the bottom of the page to add the network.
- ![select subnet and enable endpoint](./media/service-endpoints/subnet-service-endpoint-enabled.png)
+ :::image type="content" source="./media/service-endpoints/subnet-service-endpoint-enabled.png" alt-text="Image showing the success message of enabling the service endpoint.":::
> [!NOTE] > If you are unable to enable the service endpoint, you may ignore the missing virtual network service endpoint using the Resource Manager template. This functionality is not available on the portal. 6. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up in the portal notifications. The **Save** button should be disabled.
- ![Save network](./media/service-endpoints/save-vnet.png)
+ :::image type="content" source="./media/service-endpoints/save-vnet.png" lightbox="./media/service-endpoints/save-vnet.png" alt-text="Image showing the network service endpoint saved.":::
> [!NOTE] > For instructions on allowing access from specific IP addresses or ranges, see [Allow access from specific IP addresses or ranges](service-bus-ip-filtering.md).
service-connector Concept Region Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/concept-region-support.md
If your compute service instance is located in one of the regions that Service C
- North Europe - East US - West US 2
+- Australia East
+- UK South
+- Japan East
+- Southeast Asia
## Supported regions with geographical endpoint
-Your compute service instance might be created in the region that Service Connector has geographical region support. It means that your service connection will be created in a different region from your compute instance. You will see a banner about this information when you create a service connection. The region difference may impact your compliance, data residency, and data latency.
+Your compute service instance might be created in a region for which Service Connector provides geographical region support. This means that your service connection will be created in a different region from your compute instance. In this case, you will see an information banner with the region details when you create a service connection. The region difference may impact your compliance, data residency, and data latency.
- East US 2 - West US 3 - South Central US
+- Australia Central
+- Australia Southeast
+- UK West
+- Japan West
+- West US
+- North Central US
## Not supported regions in public preview
site-recovery Site Recovery Test Failover To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-test-failover-to-azure.md
This procedure describes how to run a test failover for a recovery plan. If you
- If same IP address isn't available in the subnet, then the VM receives another available IP address in the subnet. [Learn more](#create-a-network-for-test-failover). 4. If you're failing over to Azure and data encryption is enabled, in **Encryption Key**, select the certificate that was issued when you enabled encryption during Provider installation. You can ignore this step if encryption isn't enabled. 5. Track failover progress on the **Jobs** tab. You should be able to see the test replica machine in the Azure portal.
-6. To initiate an RDP connection to the Azure VM, you need to [add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) on the network interface of the failed over VM.
+6. To initiate an RDP connection to the Azure VM, you need to [add a public IP address](/archive/blogs/srinathv/how-to-add-a-public-ip-address-to-azure-vm-for-vm-failed-over-using-asr) on the network interface of the failed over VM.
+ If you don't want to add a public IP address to the virtual machine, check the [recommended alternatives](https://docs.microsoft.com/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-networking#best-practice-control-public-ip-addresses). If you do add one, a CLI sketch for attaching a public IP address appears after these steps.
7. When everything is working as expected, click **Cleanup test failover**. This deletes the VMs that were created during test failover. 8. In **Notes**, record and save any observations associated with the test failover.
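One possible way to attach a temporary public IP address to the failed-over VM's network interface is the Azure CLI sketch below. The resource group, NIC, IP configuration, and public IP names are assumptions for illustration; substitute the names from your test failover environment, and remove the public IP again once testing is complete.

```azurecli
# Assumed names: resource group "TestFailoverRG", NIC "myVM-nic", IP configuration "ipconfig1".
az network public-ip create --resource-group TestFailoverRG --name testFailoverPip --sku Standard

# Attach the new public IP address to the NIC of the failed-over VM.
az network nic ip-config update --resource-group TestFailoverRG --nic-name myVM-nic \
    --name ipconfig1 --public-ip-address testFailoverPip
```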
spring-cloud How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-github-actions.md
az spring-cloud create -n <service instance name> -g <resource group name>
az spring-cloud config-server git set -n <service instance name> --uri https://github.com/xxx/piggymetrics --label config ```
-## Build the workflow
+## End-to-end sample workflows
-The workflow is defined using the following options.
+The following examples demonstrate common usage scenarios.
-### Prepare for deployment with Azure CLI
-The command `az spring-cloud app create` is currently not idempotent. We recommend this workflow on existing Azure Spring Cloud apps and instances.
-
-Use the following Azure CLI commands for preparation:
+### Deploying
-```azurecli
-az config set defaults.group=<service group name>
-az config set defaults.spring-cloud=<service instance name>
-az spring-cloud app create --name gateway
-az spring-cloud app create --name auth-service
-az spring-cloud app create --name account-service
-```
+The following sections show you various options for deploying your app.
-### Deploy with Azure CLI directly
+#### To production
-Create the *.github/workflow/main.yml* file in the repository:
+Azure Spring Cloud supports deploying either built artifacts (for example, a JAR file or a .NET Core ZIP) or a source code archive to a deployment.
+The following example deploys to the default production deployment in Azure Spring Cloud using a JAR file built by Maven. This is the only possible deployment scenario when you use the Basic SKU:
-```yaml
+```yml
name: AzureSpringCloud on: push- env:
- GROUP: <resource group name>
- SERVICE_NAME: <service instance name>
+ ASC_PACKAGE_PATH: ${{ github.workspace }}
+ AZURE_SUBSCRIPTION: <azure subscription name>
jobs:
- build-and-deploy:
+ deploy_to_production:
runs-on: ubuntu-latest
+ name: deploy to production with artifact
steps:
+ - name: Checkout Github Action
+ uses: actions/checkout@v2
+
+ - name: Set up JDK 1.8
+ uses: actions/setup-java@v1
+ with:
+ java-version: 1.8
- - uses: actions/checkout@main
+ - name: maven build, clean
+ run: |
+ mvn clean package
- - name: Set up JDK 1.8
- uses: actions/setup-java@v1
- with:
- java-version: 1.8
+ - name: Login via Azure CLI
+ uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
- - name: maven build, clean
- run: |
- mvn clean package -DskipTests
+ - name: deploy to production with artifact
+ uses: azure/spring-cloud-deploy@v1
+ with:
+ azure-subscription: ${{ env.AZURE_SUBSCRIPTION }}
+ action: Deploy
+ service-name: <service instance name>
+ app-name: <app name>
+ use-staging-deployment: false
+ package: ${{ env.ASC_PACKAGE_PATH }}/**/*.jar
+```
- - name: Azure Login
- uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
+The following example deploys to the default production deployment in Azure Spring Cloud using source code.
- - name: Install ASC AZ extension
- run: az extension add --name spring-cloud
+```yml
+name: AzureSpringCloud
+on: push
+env:
+ ASC_PACKAGE_PATH: ${{ github.workspace }}
+ AZURE_SUBSCRIPTION: <azure subscription name>
- - name: Deploy with AZ CLI commands
- run: |
- az config set defaults.group=$GROUP
- az config set defaults.spring-cloud=$SERVICE_NAME
- az spring-cloud app deploy -n gateway --jar-path ${{ github.workspace }}/gateway/target/gateway.jar
- az spring-cloud app deploy -n account-service --jar-path ${{ github.workspace }}/account-service/target/account-service.jar
- az spring-cloud app deploy -n auth-service --jar-path ${{ github.workspace }}/auth-service/target/auth-service.jar
-```
+jobs:
+ deploy_to_production:
+ runs-on: ubuntu-latest
+ name: deploy to production with source code
+ steps:
+ - name: Checkout Github Action
+ uses: actions/checkout@v2
-### Deploy with Azure CLI action
+ - name: Login via Azure CLI
+ uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
-The az `run` command will use the latest version of Azure CLI. If there are breaking changes, you can also use a specific version of Azure CLI with azure/CLI `action`.
+ - name: deploy to production step with source code
+ uses: azure/spring-cloud-deploy@v1
+ with:
+ azure-subscription: ${{ env.AZURE_SUBSCRIPTION }}
+ action: deploy
+ service-name: <service instance name>
+ app-name: <app name>
+ use-staging-deployment: false
+ package: ${{ env.ASC_PACKAGE_PATH }}
+```
-> [!Note]
-> This command will run in a new container, so `env` will not work, and cross action file access may have extra restrictions.
+#### Blue-green
-Create the *.github/workflow/main.yml* file in the repository:
+The following examples deploy to an existing staging deployment. This deployment will not receive production traffic until it is set as a production deployment. You can set `use-staging-deployment` to `true` to find the staging deployment automatically, or set `deployment-name` to a specific deployment name. The rest of this article focuses only on the `spring-cloud-deploy` action and leaves out the preparatory jobs.
-```yaml
-name: AzureSpringCloud
-on: push
+```yml
+# environment preparation configurations omitted
+ steps:
+ - name: blue green deploy step use-staging-deployment
+ uses: azure/spring-cloud-deploy@v1
+ with:
+ azure-subscription: ${{ env.AZURE_SUBSCRIPTION }}
+ action: deploy
+ service-name: <service instance name>
+ app-name: <app name>
+ use-staging-deployment: true
+ package: ${{ env.ASC_PACKAGE_PATH }}/**/*.jar
+```
-jobs:
- build-and-deploy:
- runs-on: ubuntu-latest
+```yml
+# environment preparation configurations omitted
steps:
+ - name: blue green deploy step with deployment-name
+ uses: azure/spring-cloud-deploy@v1
+ with:
+ azure-subscription: ${{ env.AZURE_SUBSCRIPTION }}
+ action: deploy
+ service-name: <service instance name>
+ app-name: <app name>
+ deployment-name: staging
+ package: ${{ env.ASC_PACKAGE_PATH }}/**/*.jar
+```
- - uses: actions/checkout@main
+For more information on blue-green deployments, including an alternative approach, see [Blue-green deployment strategies](./concepts-blue-green-deployment-strategies.md).
- - name: Set up JDK 1.8
- uses: actions/setup-java@v1
- with:
- java-version: 1.8
+### Setting production deployment
- - name: maven build, clean
- run: |
- mvn clean package -DskipTests
+The following example will set the current staging deployment as production, effectively swapping which deployment will receive production traffic.
- - name: Azure Login
- uses: azure/login@v1
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
+```yml
+# environment preparation configurations omitted
+ steps:
+ - name: set production deployment step
+ uses: azure/spring-cloud-deploy@v1
+ with:
+ azure-subscription: ${{ env.AZURE_SUBSCRIPTION }}
+ action: set-production
+ service-name: <service instance name>
+ app-name: <app name>
+ use-staging-deployment: true
+```
+### Deleting a staging deployment
- - name: Azure CLI script
- uses: azure/CLI@v1
- with:
- azcliversion: 2.0.75
- inlineScript: |
- az extension add --name spring-cloud
- az config set defaults.group=<service group name>
- az config set defaults.spring-cloud=<service instance name>
- az spring-cloud app deploy -n gateway --jar-path $GITHUB_WORKSPACE/gateway/target/gateway.jar
- az spring-cloud app deploy -n account-service --jar-path $GITHUB_WORKSPACE/account-service/target/account-service.jar
- az spring-cloud app deploy -n auth-service --jar-path $GITHUB_WORKSPACE/auth-service/target/auth-service.jar
+The "Delete Staging Deployment" action allows you to delete the deployment not receiving production traffic. This frees up resources used by that deployment and makes room for a new staging deployment:
+
+```yml
+# environment preparation configurations omitted
+ steps:
+ - name: Delete staging deployment step
+ uses: azure/spring-cloud-deploy@v1
+ with:
+ azure-subscription: ${{ env.AZURE_SUBSCRIPTION }}
+ action: delete-staging-deployment
+ service-name: <service instance name>
+ app-name: <app name>
``` ## Deploy with Maven Plugin
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Before you can enable SFTP support, you must register the SFTP feature with your
> [!div class="mx-imgBorder"] > ![Preview setting](./media/secure-file-transfer-protocol-support-how-to/preview-features-setting.png)
-4. In the **Preview features** page, select the **AllowSFTP** feature, and then select **Register**.
+4. In the **Preview features** page, select the **SFTP support for Azure Blob Storage** feature, and then select **Register**.
### Verify feature registration
Verify that the feature is registered before continuing with the other steps in
1. Open the **Preview features** page of your subscription.
-2. Locate the **AllowSFTP** feature and make sure that **Registered** appears in the **State** column.
+2. Locate the **SFTP support for Azure Blob Storage** feature and make sure that **Registered** appears in the **State** column.
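If you prefer to script the registration and verification, here is a minimal sketch (not from the article) using the `azure-identity` and `azure-mgmt-resource` Python packages. It assumes the preview flag is still exposed under the Microsoft.Storage provider with the feature name `AllowSFTP`; substitute your own subscription ID.

```python
# Minimal sketch: register the SFTP preview feature and check its registration state.
# Assumes the flag name "AllowSFTP" under Microsoft.Storage; adjust if the flag name differs.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import FeatureClient

subscription_id = "<subscription-id>"  # placeholder: use your subscription ID
client = FeatureClient(DefaultAzureCredential(), subscription_id)

# Request registration of the preview feature.
client.features.register("Microsoft.Storage", "AllowSFTP")

# Check the state; keep polling until it reports "Registered".
feature = client.features.get("Microsoft.Storage", "AllowSFTP")
print(feature.properties.state)
```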
## Enable SFTP support
stream-analytics Postgresql Database Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/postgresql-database-output.md
For more information about Azure Database for PostgreSQL please visit the: [What
To learn more about how to create an Azure Database for PostgreSQL server by using the Azure portal please visit: * [Quick start for Azure Database for PostgreSQL – Single server](../postgresql/quickstart-create-server-database-portal.md) * [Quick start for Azure Database for PostgreSQL – Flexible server](../postgresql/flexible-server/quickstart-create-server-portal.md)
-* [Quick start for Azure Database for PostgreSQL – Hyperscale (Citus)](../postgresql/quickstart-create-hyperscale-portal.md)
+* [Quick start for Azure Database for PostgreSQL – Hyperscale (Citus)](../postgresql/hyperscale/quickstart-create-portal.md)
> [!NOTE]
Partitioning needs to enabled and is based on the PARTITION BY clause in the que
## Next steps
-* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
+* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
synapse-analytics Apache Spark External Metastore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-external-metastore.md
Title: Use external Hive Metastore for Azure Synapse Spark Pool description: Learn how to set up external Hive Metastore for Azure Synapse Spark Pool.
-keywords: external Hive metastore,share,Synapse
+keywords: external Hive Metastore,share,Synapse
Last updated 09/08/2021
-# Use external Hive Metastore for Synapse Spark Pool (Preview)
+# Use external Hive Metastore for Synapse Spark Pool
-Azure Synapse Analytics allows Apache Spark pools in the same workspace to share a managed HMS (Hive Metastore Service) compatible metastore as their catalog. When customers want to persist the Hive catalog outside of the workspace, and share catalog objects with other computational engines outside of the workspace, such as HDInsight and Azure Databricks, they can connect to an external Hive Metastore. In this article, you learn how to connect Synapse Spark to an external Apache Hive Metastore.
+Azure Synapse Analytics allows Apache Spark pools in the same workspace to share a managed HMS (Hive Metastore) compatible metastore as their catalog. When customers want to persist the Hive catalog metadata outside of the workspace, and share catalog objects with other computational engines outside of the workspace, such as HDInsight and Azure Databricks, they can connect to an external Hive Metastore. In this article, you can learn how to connect Synapse Spark to an external Apache Hive Metastore.
-## Supported Hive metastore versions
-
-The feature works with both Spark 2.4 and Spark 3.0. The following table shows the supported Hive metastore service (HMS) versions for each Spark version.
+## Supported Hive Metastore versions
+The feature works with both Spark 2.4 and Spark 3.1. The following table shows the supported Hive Metastore versions for each Spark version.
|Spark Version|HMS 0.13.X|HMS 1.2.X|HMS 2.1.X|HMS 2.3.x|HMS 3.1.X| |--|--|--|--|--|--| |2.4|Yes|Yes|Yes|Yes|No|
-|3|Yes|Yes|Yes|Yes|Yes|
+|3.1|Yes|Yes|Yes|Yes|Yes|
-## Set up Hive metastore linked service
+## Set up linked service to Hive Metastore
> [!NOTE]
-> Only Azure SQL Database and Azure Database for MySQL are supported as an external Hive metastore.
+> Only Azure SQL Database and Azure Database for MySQL are supported as an external Hive Metastore, and currently only user-password authentication is supported. If the provided database is blank, provision it via [Hive Schema Tool](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool) to create the database schema.
-Follow below steps to set up a linked service to the external Hive metastore in Synapse workspace.
+Follow the steps below to set up a linked service to the external Hive Metastore in the Synapse workspace.
1. Open Synapse Studio, go to **Manage > Linked services** at left, click **New** to create a new linked service.
Follow below steps to set up a linked service to the external Hive metastore in
3. Provide **Name** of the linked service. Record the name of the linked service; this info will be used to configure Spark shortly.
-4. You can either select **Azure SQL Database**/**Azure Database for MySQL** for the external Hive metastore from Azure subscription list, or enter the info manually.
+4. You can either select **Azure SQL Database**/**Azure Database for MySQL** for the external Hive Metastore from the Azure subscription list, or enter the info manually.
-5. Currently we only support User-Password authentication. Provide **User name** and **Password** to set up the connection.
+5. Provide **User name** and **Password** to set up the connection.
6. **Test connection** to verify the username and password. 7. Click **Create** to create the linked service. ### Test connection and get the metastore version in notebook
-Some network security rule settings may block access from Spark pool to the external Hive metastore DB. Before you configure the Spark pool, run below code in any Spark pool notebook to test connection to the external Hive metastore DB.
+Some network security rule settings may block access from the Spark pool to the external Hive Metastore DB. Before you configure the Spark pool, run the code below in any Spark pool notebook to test the connection to the external Hive Metastore DB.
-You can also get your Hive metastore version from the output results. The Hive metastore version will be used in the Spark configuration.
+You can also get your Hive Metastore version from the output results. The Hive Metastore version will be used in the Spark configuration.
#### Connection testing code for Azure SQL ```scala
try {
val connection = DriverManager.getConnection(url) val result = connection.createStatement().executeQuery("select t.SCHEMA_VERSION from VERSION t") result.next();
- println(s"Successful to test connection. Hive metastore version is ${result.getString(1)}")
+ println(s"Successful to test connection. Hive Metastore version is ${result.getString(1)}")
} catch {
- case ex: Throwable =>println(s"Failed to establish connection:\n $ex")
+ case ex: Throwable => println(s"Failed to establish connection:\n $ex")
} ```
try {
val connection = DriverManager.getConnection(url, "{your_username_here}", "{your_password_here}"); val result = connection.createStatement().executeQuery("select t.SCHEMA_VERSION from VERSION t") result.next();
- println(s"Successful to test connection. Hive metastore version is ${result.getString(1)}")
+ println(s"Successful to test connection. Hive Metastore version is ${result.getString(1)}")
} catch {
- case ex: Throwable =>println(s"Failed to establish connection:\n $ex")
+ case ex: Throwable => println(s"Failed to establish connection:\n $ex")
} ```
-## Configure Spark to use the external Hive metastore
-After creating the linked service to the external Hive metastore successfully, you need to setup a few configurations in the Spark to use the external Hive metastore. You can both set up the configuration at Spark pool level, or at Spark session level.
+## Configure Spark to use the external Hive Metastore
+After creating the linked service to the external Hive Metastore successfully, you need to set up a few Spark configurations to use the external Hive Metastore. You can set up the configuration at either the Spark pool level or the Spark session level.
Here are the configurations and descriptions: > [!NOTE]
-> Synapse aims to works smoothly with computes from HDI. However HMS 3.1 in HDI 4.0 is not full compatible with the OSS HMS 3.1. For OSS HMS 3.1, please check [here](#hms-schema-change-for-oss-hms-31).
+> Synapse aims to work smoothly with computes from HDI. However, HMS 3.1 in HDI 4.0 is not fully compatible with the OSS HMS 3.1. For OSS HMS 3.1, please check [here](#hms-schema-change-for-oss-hms-31).
|Spark config|Description| |--|--|
Here are the configurations and descriptions:
|`spark.sql.hive.metastore.jars`|<ul><li>Version 0.13: `/opt/hive-metastore/lib-0.13/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 1.2: `/opt/hive-metastore/lib-1.2/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 2.1: `/opt/hive-metastore/lib-2.1/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 2.3: `/opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 3.1: `/opt/hive-metastore/lib-3.1/*:/usr/hdp/current/hadoop-client/lib/*`</li></ul>| |`spark.hadoop.hive.synapse.externalmetastore.linkedservice.name`|Name of your linked service|
-### Configure Spark pool
+### Configure at Spark pool level
When creating the Spark pool, under the **Additional Settings** tab, put the below configurations in a text file and upload it in the **Apache Spark configuration** section. You can also use the context menu for an existing Spark pool and choose **Apache Spark configuration** to add these configurations. :::image type="content" source="./media/use-external-metastore/config-spark-pool.png" alt-text="Configure the Spark pool":::
Update metastore version and linked service name, and save below configs in a te
```properties spark.sql.hive.metastore.version <your hms version, Make sure you use the first 2 parts without the 3rd part>
-spark.hadoop.hive.synapse.externalmetastore.linkedservice.name <your linked service name to Azure SQL DB>
+spark.hadoop.hive.synapse.externalmetastore.linkedservice.name <your linked service name>
spark.sql.hive.metastore.jars /opt/hive-metastore/lib-<your hms version, 2 parts>/*:/usr/hdp/current/hadoop-client/lib/* ```
spark.hadoop.hive.synapse.externalmetastore.linkedservice.name HiveCatalog21
spark.sql.hive.metastore.jars /opt/hive-metastore/lib-2.1/*:/usr/hdp/current/hadoop-client/lib/* ```
-### Configure a Spark session
-If you don't want to configure your Spark pool, you can also configure the Spark session in notebook using %%configure magic command. Here is the code. Same configuration can also be applied to a Spark batch job.
+### Configure at Spark session level
+For a notebook session, you can also configure the Spark session in the notebook by using the `%%configure` magic command. Here is the code.
```json %%configure -f
If you don't want to configure your Spark pool, you can also configure the Spark
} ```
+For a batch job, the same configuration can also be applied via `SparkConf`, as shown in the sketch below.
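As a minimal sketch (not from the article), the same three settings could be applied through `SparkConf` before the session is created. It reuses the HMS 2.1 paths and the example linked service name `HiveCatalog21` shown earlier; substitute your own metastore version and linked service name.

```python
# Minimal sketch: apply the external Hive Metastore settings to a Spark batch job
# via SparkConf instead of the %%configure notebook magic.
# Assumes HMS version 2.1 and a linked service named HiveCatalog21 (replace with your values).
from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = (
    SparkConf()
    .set("spark.sql.hive.metastore.version", "2.1")
    .set("spark.sql.hive.metastore.jars",
         "/opt/hive-metastore/lib-2.1/*:/usr/hdp/current/hadoop-client/lib/*")
    .set("spark.hadoop.hive.synapse.externalmetastore.linkedservice.name", "HiveCatalog21")
)

spark = SparkSession.builder.config(conf=conf).enableHiveSupport().getOrCreate()
spark.sql("show databases").show()  # should list databases from the external metastore
```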
+ ### Run queries to verify the connection After all these settings, try listing catalog objects by running below query in Spark notebook to check the connectivity to the external Hive Metastore. ```python
spark.sql("show databases").show()
``` ## Set up storage connection
-The linked service to Hive metastore database just provides access to Hive catalog metadata. To query the existing tables, you need to set up connection to the storage account that stores the underlying data for your Hive tables as well.
+The linked service to the Hive Metastore database only provides access to Hive catalog metadata. To query the existing tables, you also need to set up a connection to the storage account that stores the underlying data for your Hive tables.
-### Set up connection to ADLS Gen 2
+### Set up connection to Azure Data Lake Storage Gen 2
#### Workspace primary storage account If the underlying data of your Hive tables is stored in the workspace primary storage account, you don't need to do extra settings. It will just work as long as you followed storage setting up instructions during workspace creation.
If the underlying data of your Hive tables are stored in Azure Blob storage acco
:::image type="content" source="./media/use-external-metastore/connect-to-storage-account.png" alt-text="Connect to storage account" border="true"::: 2. Choose **Azure Blob Storage** and click **Continue**.
-3. Provide **Name** of the linked service. Record the name of the linked service, this info will be used in Spark session configuration shortly.
+3. Provide **Name** of the linked service. Record the name of the linked service; this info will be used in the Spark configuration shortly.
4. Select the Azure Blob Storage account. Make sure Authentication method is **Account key**. Currently Spark pool can only access Blob Storage account via account key. 5. **Test connection** and click **Create**. 6. After creating the linked service to Blob Storage account, when you run Spark queries, make sure you run the Spark code below in the notebook to get access to the Blob Storage account for the Spark session. Learn more about why you need to do this [here](./apache-spark-secure-credentials-with-tokenlibrary.md).
After setting up storage connections, you can query the existing tables in the H
## Known limitations - Synapse Studio object explorer will continue to show objects in managed Synapse metastore instead of the external HMS, we are improving the experience of this.-- [SQL <-> spark synchronization](../sql/develop-storage-files-spark-tables.md) doesn't work when using external HMS.
+- [SQL <-> Spark synchronization](../sql/develop-storage-files-spark-tables.md) doesn't work when using external HMS.
- Only Azure SQL Database and Azure Database for MySQL are supported as external Hive Metastore database. Only SQL authorization is supported. - Currently Spark only works on external Hive tables and non-transactional/non-ACID managed Hive tables. It doesn't support Hive ACID/transactional tables now. - Apache Ranger integration is not supported as of now.
spark.conf.set('fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name
``` ### See below error when query a table stored in ADLS Gen2 account
-```
+```text
Operation failed: "This request is not authorized to perform this operation using this permission.", 403, HEAD ```
-This could happen because the user who run Spark query doesn't have enough access to the underlying storage account. Make sure the users who run Spark queries have **Storage Blob Data Contributor** role on the ADLS Gen2 storage account. This step can be done later after creating the linked service.
+This could happen because the user who runs the Spark query doesn't have enough access to the underlying storage account. Make sure the user who runs Spark queries has the **Storage Blob Data Contributor** role on the ADLS Gen2 storage account. This step can be done after creating the linked service.
### HMS schema related settings To avoid changing HMS backend schema/version, following hive configs are set by system by default:
spark.hadoop.datanucleus.fixedDatastore true
spark.hadoop.datanucleus.schema.autoCreateAll false ```
-If your HMS version is 1.2.1 or 1.2.2, there's an issue in Hive that claims requiring only 1.2.0 if you turn spark.hadoop.hive.metastore.schema.verification to true. Our suggestion is either you can modify your HMS version to 1.2.0, or overwrite below two configurations to work around:
+If your HMS version is `1.2.1` or `1.2.2`, there's an issue in Hive that claims to require only `1.2.0` if you turn `spark.hadoop.hive.metastore.schema.verification` to `true`. Our suggestion is to either modify your HMS version to `1.2.0`, or overwrite the two configurations below to work around it:
```properties spark.hadoop.hive.metastore.schema.verification false
spark.hadoop.hive.synapse.externalmetastore.schema.usedefault false
If you need to migrate your HMS version, we recommend using [hive schema tool](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool). And if the HMS has been used by HDInsight clusters, we suggest using [HDI provided version](../../hdinsight/interactive-query/apache-hive-migrate-workloads.md). ### HMS schema change for OSS HMS 3.1
-Synapse aims to works smoothly with computes from HDI. However HMS 3.1 in HDI 4.0 is not full compatible with the OSS HMS 3.1. So please apply the following manually to your HMS 3.1 if itΓÇÖs not provisioned by HDI.
+Synapse aims to work smoothly with computes from HDI. However, HMS 3.1 in HDI 4.0 is not fully compatible with the OSS HMS 3.1. So please apply the following manually to your HMS 3.1 if it's not provisioned by HDI.
```sql -- HIVE-19416
ALTER TABLE TBLS ADD WRITE_ID bigint NOT NULL DEFAULT(0);
ALTER TABLE PARTITIONS ADD WRITE_ID bigint NOT NULL DEFAULT(0); ```
-### When sharing the metastore with HDInsight 4.0 Spark clusters, I cannot see the tables
+### When sharing the metastore with HDInsight 4.0 Spark cluster, I cannot see the tables
If you want to share the Hive catalog with a Spark cluster in HDInsight 4.0, please ensure your property `spark.hadoop.metastore.catalog.default` in Synapse Spark aligns with the value in HDInsight Spark. The default value for HDI Spark is `spark` and the default value for Synapse Spark is `hive`. You can check the value currently in effect from a notebook, as shown in the sketch below.
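A minimal sketch (not from the article) for reading the value back inside a running Synapse Spark notebook session, assuming the built-in `spark` session object; the property itself still has to be set through the pool or `%%configure` settings shown earlier.

```python
# Minimal sketch: read back the catalog property for the current Spark session.
# "hive" is the Synapse default; HDInsight Spark defaults to "spark".
current = spark.sparkContext.getConf().get("spark.hadoop.metastore.catalog.default", "hive")
print(f"metastore.catalog.default = {current}")
```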
-### When sharing the Hive metastore with HDInsight 4.0 Hive clusters, I can list the tables successfully, but only get empty result when I query the table
-As mentioned in the limitations, Synapse Spark pool only supports external hive tables and non-transactional/ACID managed tables, it doesn't support Hive ACID/transactional tables currently. By default in HDInsight 4.0 Hive clusters, all managed tables are created as ACID/transactional tables by default, that's why you get empty results when querying those tables.
+### When sharing the Hive Metastore with an HDInsight 4.0 Hive cluster, I can list the tables successfully, but only get an empty result when I query the table
+As mentioned in the limitations, Synapse Spark pool only supports external Hive tables and non-transactional/non-ACID managed tables; it doesn't support Hive ACID/transactional tables currently. In HDInsight 4.0 Hive clusters, all managed tables are created as ACID/transactional tables by default, which is why you get empty results when querying those tables.
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/whats-new-archive.md
The following updates are new to Azure Synapse Analytics this month.
* The Synapse Machine Learning library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--463873803) [article](https://microsoft.github.io/SynapseML/docs/about/) * Getting started with state-of-the-art pre-built intelligent models [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-2023639030) [article](./machine-learning/tutorial-form-recognizer-use-mmlspark.md)
-* Building responsible AI systems with the Synapse ML library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-914346508) [article](https://microsoft.github.io/SynapseML/docs/features/responsible_ai/Model%20Interpretation%20on%20Spark/.md)
+* Building responsible AI systems with the Synapse ML library [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-914346508) [article](https://microsoft.github.io/SynapseML/docs/features/responsible_ai/Model%20Interpretation%20on%20Spark/)
* PREDICT is now GA for Synapse Dedicated SQL pools [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1594404878) [article](./machine-learning/tutorial-sql-pool-model-scoring-wizard.md) * Simple & scalable scoring with PREDICT and MLFlow for Apache Spark for Synapse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--213049585) [article](./machine-learning/tutorial-score-model-predict-spark-pool.md) * Retail AI solutions [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--2020504048) [article](./machine-learning/quickstart-industry-ai-solutions.md)
The following updates are new to Azure Synapse Analytics this month.
### Synapse Link
-* Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synaps)
+* Synapse Link for Dataverse [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId-1397891373) [article](/powerapps/maker/data-platform/azure-synapse-link-synapse)
* Custom partitions for Synapse link for Azure Cosmos DB in preview [blog](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/azure-synapse-analytics-november-2021-update/ba-p/3020740#toc-hId--409563090) [article](../cosmos-db/custom-partitioning-analytical-store.md) ## October 2021 update
The following updates are new to Azure Synapse Analytics this month.
## Next steps [Get started with Azure Synapse Analytics](get-started.md)--
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 11/30/2021 Last updated : 01/05/2022
Here's what's changed in the Azure Virtual Desktop Agent:
Curious about the latest updates for FSLogix? Check out [What's new at FSLogix](/fslogix/whats-new).
+## December 2021
+
+Here's what changed in December 2021:
+
+### Azure portal updates
+
+You can now automatically create trusted launch virtual machines through the host pool creation process instead of having to manually create and add them to a host pool after deployment. To access this feature, select the **Virtual machines** tab while creating a host pool. Learn more at [Trusted launch for Azure virtual machines](../virtual-machines/trusted-launch.md).
+
+### Azure Active Directory Join VMs with FSLogix profiles on Azure Files
+
+Azure Active Directory (Azure AD)-joined session hosts for FSLogix profiles on Azure Files in Windows 10 and 11 multi-session is now in public preview. We've updated Azure Files to use a Kerberos protocol for Azure AD that lets you secure folders in the file share to individual users. This new feature also allows FSLogix to function within your deployment without an Active Directory Domain Controller. For more information, check out [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-public-preview-of-fslogix-profiles-for-azure-ad/ba-p/3019855).
+
+### Azure Virtual Desktop pricing calculator updates
+
+We've made some significant updates to improve the Azure Virtual Desktop pricing experience on the Azure pricing calculator, including the following:
+
+- You can now calculate costs for any number of users greater than zero.
+- The calculator now includes storage and networking or bandwidth costs.
+- We've added new info messages for clarity.
+- We've fixed bugs that affected storage configuration.
+
+For more information, see the [pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+ ## November 2021 Here's what changed in November 2021:
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
The following JSON shows the schema for the Application Health extension. The ex
"location": "<location>", "properties": { "publisher": "Microsoft.ManagedServices",
- "type": "< ApplicationHealthLinux or ApplicationHealthWindows>",
+ "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
"autoUpgradeMinorVersion": true, "typeHandlerVersion": "1.0", "settings": {
PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/
{ "name": "myHealthExtension", "properties": {
- "publisher": " Microsoft.ManagedServices",
- "type": "< ApplicationHealthWindows>",
+ "publisher": "Microsoft.ManagedServices",
+ "type": "ApplicationHealthWindows",
"autoUpgradeMinorVersion": true, "typeHandlerVersion": "1.0", "settings": {
virtual-machines Capacity Reservation Associate Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/capacity-reservation-associate-vm.md
New-AzVm
-SubnetName "mySubnet" -SecurityGroupName "myNetworkSecurityGroup" -PublicIpAddressName "myPublicIpAddress"--OpenPorts 80,3389 -Size "Standard_D2s_v3" -CapacityReservationGroupId "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/capacityReservationGroups/{capacityReservationGroupName}" ```
virtual-machines Dedicated Host Storage Optimized Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dedicated-host-storage-optimized-skus.md
The sizes and hardware types available for dedicated hosts vary by region. Refer
## Lsv2 ### Lsv2-Type1
-The Lsv2-Type1 is a Dedicated Host SKU utilizing the Intel® Cascade Lake (Xeon® Gold 6246R) processor. It offers 64 physical cores, 80 vCPUs, and 640 GiB of RAM. The Lsv2-Type1 runs [Lsv2-series](lsv2-series.md) VMs.
+The Lsv2-Type1 is a Dedicated Host SKU utilizing AMD's 2.55 GHz EPYC™ 7551 processor. It offers 64 physical cores, 80 vCPUs, and 640 GiB of RAM. The Lsv2-Type1 runs [Lsv2-series](lsv2-series.md) VMs.
The following packing configuration outlines the max packing of uniform VMs you can put onto a Lsv2-Type1 host.
virtual-machines Disks Deploy Zrs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-deploy-zrs.md
$vm1 = New-AzVm `
-SubnetName $($vmNamePrefix+"_subnet") ` -SecurityGroupName $($vmNamePrefix+"01_sg") ` -PublicIpAddressName $($vmNamePrefix+"01_ip") `
- -Credential $credential `
- -OpenPorts 80,3389
+ -Credential $credential
$vm1 = Add-AzVMDataDisk -VM $vm1 -Name $sharedDiskName -CreateOption Attach -ManagedDiskId $sharedDisk.Id -Lun 0
virtual-machines Hpccompute Amd Gpu Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/hpccompute-amd-gpu-windows.md
Title: AMD GPU driver extension - Azure Windows VMs
-description: Microsoft Azure extension for installing AMD GPU Drivers on NVv4-series VMs running Windows.
+ Title: AMD GPU Driver Extension - Azure Windows VMs
+description: Microsoft Azure extension for installing AMD GPU drivers on NVv4-series VMs running Windows.
-# AMD GPU driver extension for Windows
+# AMD GPU Driver Extension for Windows
-This article provides an overview of the VM extension to deploy AMD GPU drivers on Windows [NVv4-series](../nvv4-series.md) VMs. When you install AMD drivers using this extension, you are accepting and agreeing to the terms of the [AMD End-User License Agreement](https://amd.com/radeonsoftwarems). During the installation process, the VM may reboot to complete the driver setup.
+This article provides an overview of the virtual machine (VM) extension to deploy AMD GPU drivers on Windows [NVv4-series](../nvv4-series.md) VMs. When you install AMD drivers by using this extension, you're accepting and agreeing to the terms of the [AMD End-User License Agreement](https://amd.com/radeonsoftwarems). During the installation process, the VM might reboot to complete the driver setup.
Instructions on manual installation of the drivers and the current supported versions are available. For more information, see [Azure N-series AMD GPU driver setup for Windows](../windows/n-series-amd-driver-setup.md).
This extension supports the following OSs:
### Internet connectivity
-The Microsoft Azure Extension for AMD GPU Drivers requires that the target VM is connected to the internet and have access.
+The Microsoft Azure Extension for AMD GPU Drivers requires that the target VM is connected to the internet and has access.
## Extension schema
-The following JSON shows the schema for the extension.
+The following JSON shows the schema for the extension:
```json {
The following JSON shows the schema for the extension.
### Properties
-| Name | Value / Example | Data Type |
+| Name | Value/Example | Data type |
| - | - | - | | apiVersion | 2015-06-15 | date | | publisher | Microsoft.HpcCompute | string | | type | AmdGpuDriverWindows | string | | typeHandlerVersion | 1.1 | int | - ## Deployment+ ### Azure portal You can deploy Azure AMD VM extensions in the Azure portal. 1. In a browser, go to the [Azure portal](https://portal.azure.com).
-2. Go to the virtual machine on which you want to install the driver.
+1. Go to the virtual machine on which you want to install the driver.
-3. In the left menu, select **Extensions**.
+1. On the left menu, select **Extensions**.
:::image type="content" source="./medi-ext-portal/extensions-menu.png" alt-text="Screenshot that shows selecting Extensions in the Azure portal menu.":::
-4. Select **Add**.
+1. Select **Add**.
:::image type="content" source="./medi-ext-portal/add-extension.png" alt-text="Screenshot that shows adding a V M extension for the selected V M.":::
-5. Scroll to find and select **AMD GPU Driver Extension**, and then select **Next**.
+1. Scroll to find and select **AMD GPU Driver Extension**, and then select **Next**.
- :::image type="content" source="./medi G P U driver.":::
+ :::image type="content" source="./medi G P U Driver Extension.":::
-6. Select **Review + create** and then click **Create**, wait a few minutes for the driver to be deployed.
+1. Select **Review + create**, and select **Create**. Wait a few minutes for the driver to deploy.
- :::image type="content" source="./medi-extension.png" alt-text="Screenshot that shows selecting the review and create button.":::
+ :::image type="content" source="./medi-extension.png" alt-text="Screenshot that shows selecting the Review + create button.":::
-7. Verify that the extension is added to the list of installed extensions.
+1. Verify that the extension was added to the list of installed extensions.
:::image type="content" source="./medi-ext-portal/verify-extension.png" alt-text="Screenshot that shows the new extension in the list of extensions for the V M.":::
-### Azure Resource Manager Template
+### Azure Resource Manager template
-Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when deploying one or more virtual machines that require post deployment configuration.
+You can use Azure Resource Manager templates to deploy Azure VM extensions. Templates are ideal when you deploy one or more virtual machines that require post-deployment configuration.
-The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
+The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
-The following example assumes the extension is nested inside the virtual machine resource. When nesting the extension resource, the JSON is placed in the `"resources": []` object of the virtual machine.
+The following example assumes the extension is nested inside the virtual machine resource. When the extension resource is nested, the JSON is placed in the `"resources": []` object of the virtual machine.
```json {
az vm extension set `
### Troubleshoot
-Data about the state of extension deployments can be retrieved from the Azure portal, and by using Azure PowerShell and Azure CLI. To see the deployment state of extensions for a given VM, run the following command.
+You can retrieve data about the state of extension deployments from the Azure portal and by using Azure PowerShell and the Azure CLI. To see the deployment state of extensions for a given VM, run the following command:
```powershell Get-AzVMExtension -ResourceGroupName myResourceGroup -VMName myVM -Name myExtensionName
C:\WindowsAzure\Logs\Plugins\Microsoft.HpcCompute.AmdGpuDriverMicrosoft\
### Error codes
-| Error Code | Meaning | Possible Action |
+| Error Code | Meaning | Possible action |
| :: | | |
-| 0 | Operation successful |
+| 0 | Operation successful. |
| 1 | Operation successful. Reboot required. |
-| 100 | Operation not supported or could not be completed. | Possible causes: PowerShell version not supported, VM size is not an N-series VM, Failure downloading data. Check the log files to determine cause of error. |
+| 100 | Operation not supported or couldn't be completed. | Possible causes are that the PowerShell version isn't supported, the VM size isn't an N-series VM, and a failure occurred in downloading data. Check the log files to determine the cause of the error. |
| 240, 840 | Operation timeout. | Retry operation. |
-| -1 | Exception occurred. | Check the log files to determine cause of exception. |
-| -5x | Operation interrupted due to pending reboot. | Reboot VM. Installation will continue after reboot. Uninstall should be invoked manually. |
-
+| -1 | Exception occurred. | Check the log files to determine the cause of the exception. |
+| -5x | Operation interrupted due to pending reboot. | Reboot VM. Installation continues after the reboot. Uninstall should be invoked manually. |
### Support
-If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
+If you need more help at any point in this article, contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an Azure support incident. Go to [Azure support](https://azure.microsoft.com/support/options/) and select **Get support**. For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/faq/).
## Next steps
-For more information about extensions, see [Virtual machine extensions and features for Windows](features-windows.md).
-For more information about N-series VMs, see [GPU optimized virtual machine sizes](../sizes-gpu.md).
+- For more information about extensions, see [Virtual machine extensions and features for Windows](features-windows.md).
+- For more information about N-series VMs, see [GPU optimized virtual machine sizes](../sizes-gpu.md).
virtual-machines Hpccompute Gpu Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/hpccompute-gpu-linux.md
Title: NVIDIA GPU Driver Extension - Azure Linux VMs
-description: Microsoft Azure Extension for installing NVIDIA GPU Drivers on N-series compute VMs running Linux.
+description: Microsoft Azure extension for installing NVIDIA GPU drivers on N-series compute VMs running Linux.
documentationcenter: ''
# NVIDIA GPU Driver Extension for Linux
-## Overview
-
-This extension installs NVIDIA GPU drivers on Linux N-series VMs. Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers using this extension, you are accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://go.microsoft.com/fwlink/?linkid=874330). During the installation process, the VM may reboot to complete the driver setup.
+This extension installs NVIDIA GPU drivers on Linux N-series virtual machines (VMs). Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers by using this extension, you're accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://go.microsoft.com/fwlink/?linkid=874330). During the installation process, the VM might reboot to complete the driver setup.
Instructions on manual installation of the drivers and the current supported versions are available. For more information, see [Azure N-series GPU driver setup for Linux](../linux/n-series-driver-setup.md). An extension is also available to install NVIDIA GPU drivers on [Windows N-series VMs](hpccompute-gpu-windows.md).
An extension is also available to install NVIDIA GPU drivers on [Windows N-serie
### Operating system
-This extension supports the following OS distros, depending on driver support for specific OS version.
+This extension supports the following OS distros, depending on driver support for the specific OS version:
| Distribution | Version | |||
This extension supports the following OS distros, depending on driver support fo
| Linux: CentOS | 7.3, 7.4, 7.5, 7.6, 7.7, 7.8 | > [!NOTE]
-> The latest supported CUDA drivers for NC-series VMs is currently 470.82.01. Later driver versions are not supported on the K80 cards in NC. While the exension is being updated with this end-of-support for NC, please install CUDA drivers manually for K80 cards on the NC-series.
-
+> The latest supported CUDA drivers for NC-series VMs are currently 470.82.01. Later driver versions aren't supported on the K80 cards in NC. While the extension is being updated with this end of support for NC, install CUDA drivers manually for K80 cards on the NC-series.
### Internet connectivity
-The Microsoft Azure Extension for NVIDIA GPU Drivers requires that the target VM is connected to the internet and have access.
+The Microsoft Azure Extension for NVIDIA GPU Drivers requires that the target VM is connected to the internet and has access.
## Extension schema
-The following JSON shows the schema for the extension.
+The following JSON shows the schema for the extension:
```json {
The following JSON shows the schema for the extension.
### Properties
-| Name | Value / Example | Data Type |
+| Name | Value/Example | Data type |
| - | - | - | | apiVersion | 2015-06-15 | date | | publisher | Microsoft.HpcCompute | string |
The following JSON shows the schema for the extension.
### Settings
-All settings are optional. The default behavior is to not update the kernel if not required for driver installation, install the latest supported driver and the CUDA toolkit (as applicable).
+All settings are optional. The default behavior is to not update the kernel unless required for driver installation, and to install the latest supported driver and the CUDA toolkit (as applicable).
-| Name | Description | Default Value | Valid Values | Data Type |
+| Name | Description | Default value | Valid values | Data type |
| - | - | - | - | - |
-| updateOS | Update the kernel even if not required for driver installation | false | true, false | boolean |
-| driverVersion | NV: GRID driver version<br> NC/ND: CUDA toolkit version. The latest drivers for the chosen CUDA are installed automatically. | latest | [List](https://github.com/Azure/azhpc-extensions/blob/master/NvidiaGPU/resources.json) of supported driver versions | string |
+| updateOS | Update the kernel even if not required for driver installation. | false | true, false | boolean |
+| driverVersion | NV: GRID driver version.<br> NC/ND: CUDA toolkit version. The latest drivers for the chosen CUDA are installed automatically. | latest | [List](https://github.com/Azure/azhpc-extensions/blob/master/NvidiaGPU/resources.json) of supported driver versions | string |
| installCUDA | Install CUDA toolkit. Only relevant for NC/ND series VMs. | true | true, false | boolean | - ## Deployment+ ### Azure portal
-You can deploy Azure Nvidia VM extensions in the Azure portal.
+You can deploy Azure NVIDIA VM extensions in the Azure portal.
1. In a browser, go to the [Azure portal](https://portal.azure.com).
-2. Go to the virtual machine on which you want to install the driver.
+1. Go to the virtual machine on which you want to install the driver.
-3. In the left menu, select **Extensions**.
+1. On the left menu, select **Extensions**.
:::image type="content" source="./media/nvidia-ext-portal/extensions-menu-linux.png" alt-text="Screenshot that shows selecting Extensions in the Azure portal menu.":::
-4. Select **Add**.
+1. Select **Add**.
:::image type="content" source="./media/nvidia-ext-portal/add-extension-linux.png" alt-text="Screenshot that shows adding a V M extension for the selected V M.":::
-5. Scroll to find and select **NVIDIA GPU Driver Extension**, and then select **Next**.
+1. Scroll to find and select **NVIDIA GPU Driver Extension**, and then select **Next**.
- :::image type="content" source="./media/nvidia-ext-portal/select-nvidia-extension-linux.png" alt-text="Screenshot that shows selecting NVIDIA G P U driver.":::
+ :::image type="content" source="./media/nvidia-ext-portal/select-nvidia-extension-linux.png" alt-text="Screenshot that shows selecting NVIDIA G P U Driver Extension.":::
-6. Select **Review + create** and then click **Create**, wait a few minutes for the driver to be deployed.
+1. Select **Review + create**, and select **Create**. Wait a few minutes for the driver to deploy.
- :::image type="content" source="./media/nvidia-ext-portal/create-nvidia-extension-linux.png" alt-text="Screenshot that shows selecting the review and create button.":::
+ :::image type="content" source="./media/nvidia-ext-portal/create-nvidia-extension-linux.png" alt-text="Screenshot that shows selecting the Review + create button.":::
-7. Verify that the extension is added to the list of installed extensions.
+1. Verify that the extension was added to the list of installed extensions.
:::image type="content" source="./media/nvidia-ext-portal/verify-extension-linux.png" alt-text="Screenshot that shows the new extension in the list of extensions for the V M.":::
+### Azure Resource Manager template
-### Azure Resource Manager Template
-
-Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when deploying one or more virtual machines that require post deployment configuration.
+You can use Azure Resource Manager templates to deploy Azure VM extensions. Templates are ideal when you deploy one or more virtual machines that require post-deployment configuration.
-The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
+The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
-The following example assumes the extension is nested inside the virtual machine resource. When nesting the extension resource, the JSON is placed in the `"resources": []` object of the virtual machine.
+The following example assumes the extension is nested inside the virtual machine resource. When the extension resource is nested, the JSON is placed in the `"resources": []` object of the virtual machine.
```json {
Set-AzVMExtension
### Azure CLI
-The following example mirrors the above Azure Resource Manager and PowerShell examples.
+The following example mirrors the preceding Resource Manager and PowerShell examples:
```azurecli az vm extension set \
az vm extension set \
--version 1.6 ```
-The following example also adds two optional custom settings as an example for non-default driver installation. Specifically, it updates the OS kernel to the latest and installs a specific CUDA toolkit version driver. Again, note the '--settings' are optional and default. Note that updating the kernel may increase the extension installation times. Also choosing a specific (older) CUDA toolkit version may not always be compatible with newer kernels.
+The following example also adds two optional custom settings as an example for nondefault driver installation. Specifically, it updates the OS kernel to the latest and installs a specific CUDA toolkit version driver. Again, note that the `--settings` values are optional and have defaults. Updating the kernel might increase the extension installation times. Also, choosing a specific (older) CUDA toolkit version might not always be compatible with newer kernels.
```azurecli az vm extension set \
az vm extension set \
### Troubleshoot
-Data about the state of extension deployments can be retrieved from the Azure portal, and by using Azure PowerShell and Azure CLI. To see the deployment state of extensions for a given VM, run the following command.
+You can retrieve data about the state of extension deployments from the Azure portal and by using Azure PowerShell and the Azure CLI. To see the deployment state of extensions for a given VM, run the following command:
```powershell Get-AzVMExtension -ResourceGroupName myResourceGroup -VMName myVM -Name myExtensionName
Get-AzVMExtension -ResourceGroupName myResourceGroup -VMName myVM -Name myExtens
az vm extension list --resource-group myResourceGroup --vm-name myVM -o table ```
-Extension execution output is logged to the following file. Refer to this file to track the status of (any long running) installation as well as for troubleshooting any failures.
+Extension execution output is logged to the following file. Refer to this file to track the status of any long-running installation and for troubleshooting any failures.
```bash /var/log/azure/nvidia-vmext-status
Extension execution output is logged to the following file. Refer to this file t
### Exit codes
-| Exit Code | Meaning | Possible Action |
+| Exit code | Meaning | Possible action |
| :: | | | | 0 | Operation successful |
-| 1 | Incorrect usage of extension | Check execution output log |
-| 10 | Linux Integration Services for Hyper-V and Azure not available or installed | Check output of lspci |
-| 11 | NVIDIA GPU not found on this VM size | Use a [supported VM size and OS](../linux/n-series-driver-setup.md) |
+| 1 | Incorrect usage of extension | Check the execution output log. |
+| 10 | Linux Integration Services for Hyper-V and Azure not available or installed | Check the output of lspci. |
+| 11 | NVIDIA GPU not found on this VM size | Use a [supported VM size and OS](../linux/n-series-driver-setup.md). |
| 12 | Image offer not supported |
-| 13 | VM size not supported | Use an N-series VM to deploy |
-| 14 | Operation unsuccessful | Check execution output log |
-
+| 13 | VM size not supported | Use an N-series VM to deploy. |
+| 14 | Operation unsuccessful | Check the execution output log. |
### Support
-If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
+If you need more help at any point in this article, contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an Azure support incident. Go to [Azure support](https://azure.microsoft.com/support/options/) and select **Get support**. For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/faq/).
## Next steps
-For more information about extensions, see [Virtual machine extensions and features for Linux](features-linux.md).
-For more information about N-series VMs, see [GPU optimized virtual machine sizes](../sizes-gpu.md).
+- For more information about extensions, see [Virtual machine extensions and features for Linux](features-linux.md).
+- For more information about N-series VMs, see [GPU optimized virtual machine sizes](../sizes-gpu.md).
virtual-machines Hpccompute Gpu Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/hpccompute-gpu-windows.md
Title: NVIDIA GPU Driver Extension - Azure Windows VMs
-description: Microsoft Azure extension for installing NVIDIA GPU Drivers on N-series compute VMs running Windows.
+description: Azure extension for installing NVIDIA GPU drivers on N-series compute VMs running Windows.
documentationcenter: ''
# NVIDIA GPU Driver Extension for Windows
-## Overview
-
-This extension installs NVIDIA GPU drivers on Windows N-series VMs. Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers using this extension, you are accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://go.microsoft.com/fwlink/?linkid=874330). During the installation process, the VM may reboot to complete the driver setup.
+This extension installs NVIDIA GPU drivers on Windows N-series virtual machines (VMs). Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers by using this extension, you're accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://go.microsoft.com/fwlink/?linkid=874330). During the installation process, the VM might reboot to complete the driver setup.
Instructions on manual installation of the drivers and the current supported versions are available. For more information, see [Azure N-series NVIDIA GPU driver setup for Windows](../windows/n-series-driver-setup.md). An extension is also available to install NVIDIA GPU drivers on [Linux N-series VMs](hpccompute-gpu-linux.md).
This extension supports the following OSs:
### Internet connectivity
-The Microsoft Azure Extension for NVIDIA GPU Drivers requires that the target VM is connected to the internet and have access.
+The Microsoft Azure Extension for NVIDIA GPU Drivers requires that the target VM is connected to the internet and has access.
## Extension schema
-The following JSON shows the schema for the extension.
+The following JSON shows the schema for the extension:
```json {
The following JSON shows the schema for the extension.
### Properties
-| Name | Value / Example | Data Type |
+| Name | Value/Example | Data type |
| - | - | - | | apiVersion | 2015-06-15 | date | | publisher | Microsoft.HpcCompute | string | | type | NvidiaGpuDriverWindows | string | | typeHandlerVersion | 1.4 | int | - ## Deployment ### Azure portal
-You can deploy Azure Nvidia VM extensions in the Azure portal.
+You can deploy Azure NVIDIA VM extensions in the Azure portal.
1. In a browser, go to the [Azure portal](https://portal.azure.com).
-2. Go to the virtual machine on which you want to install the driver.
+1. Go to the virtual machine on which you want to install the driver.
-3. In the left menu, select **Extensions**.
+1. On the left menu, select **Extensions**.
:::image type="content" source="./media/nvidia-ext-portal/extensions-menu.png" alt-text="Screenshot that shows selecting Extensions in the Azure portal menu.":::
-4. Select **Add**.
+1. Select **Add**.
:::image type="content" source="./media/nvidia-ext-portal/add-extension.png" alt-text="Screenshot that shows adding a V M extension for the selected V M.":::
-5. Scroll to find and select **NVIDIA GPU Driver Extension**, and then select **Next**.
+1. Scroll to find and select **NVIDIA GPU Driver Extension**, and then select **Next**.
- :::image type="content" source="./media/nvidia-ext-portal/select-nvidia-extension.png" alt-text="Screenshot that shows selecting NVIDIA G P U driver.":::
+ :::image type="content" source="./media/nvidia-ext-portal/select-nvidia-extension.png" alt-text="Screenshot that shows selecting NVIDIA G P U Driver Extension.":::
-6. Select **Review + create** and then click **Create**, wait a few minutes for the driver to be deployed.
+1. Select **Review + create**, and select **Create**. Wait a few minutes for the driver to deploy.
- :::image type="content" source="./media/nvidia-ext-portal/create-nvidia-extension.png" alt-text="Screenshot that shows selecting the review and create button.":::
+ :::image type="content" source="./media/nvidia-ext-portal/create-nvidia-extension.png" alt-text="Screenshot that shows selecting the Review + create button.":::
-7. Verify that the extension is added to the list of installed extensions.
+1. Verify that the extension was added to the list of installed extensions.
:::image type="content" source="./media/nvidia-ext-portal/verify-extension.png" alt-text="Screenshot that shows the new extension in the list of extensions for the V M.":::
-### Azure Resource Manager Template
+### Azure Resource Manager template
-Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when deploying one or more virtual machines that require post deployment configuration.
+You can use Azure Resource Manager templates to deploy Azure VM extensions. Templates are ideal when you deploy one or more virtual machines that require post-deployment configuration.
-The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
+The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
-The following example assumes the extension is nested inside the virtual machine resource. When nesting the extension resource, the JSON is placed in the `"resources": []` object of the virtual machine.
+The following example assumes the extension is nested inside the virtual machine resource. When the extension resource is nested, the JSON is placed in the `"resources": []` object of the virtual machine.
```json {
az vm extension set \
### Troubleshoot
-Data about the state of extension deployments can be retrieved from the Azure portal, and by using Azure PowerShell and Azure CLI. To see the deployment state of extensions for a given VM, run the following command.
+You can retrieve data about the state of extension deployments from the Azure portal and by using Azure PowerShell and the Azure CLI. To see the deployment state of extensions for a given VM, run the following command:
```powershell Get-AzVMExtension -ResourceGroupName myResourceGroup -VMName myVM -Name myExtensionName
C:\WindowsAzure\Logs\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\
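In addition to reading the log files, you can pull the extension's detailed status, including sub-status messages, with the `-Status` switch. This is a sketch; the resource group and VM names are placeholders.

```powershell
# Show the instance view of the extension, including sub-status messages (names are placeholders).
Get-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myVM" `
    -Name "NvidiaGpuDriverWindows" -Status
```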
### Error codes
-| Error Code | Meaning | Possible Action |
+| Error Code | Meaning | Possible action |
| :-: | - | - |
-| 0 | Operation successful |
+| 0 | Operation successful. |
| 1 | Operation successful. Reboot required. |
-| 100 | Operation not supported or could not be completed. | Possible causes: PowerShell version not supported, VM size is not an N-series VM, Failure downloading data. Check the log files to determine cause of error. |
+| 100 | Operation not supported or couldn't be completed. | Possible causes are that the PowerShell version isn't supported, the VM size isn't an N-series VM, or a failure occurred in downloading data. Check the log files to determine the cause of the error. |
| 240, 840 | Operation timeout. | Retry operation. |
-| -1 | Exception occurred. | Check the log files to determine cause of exception. |
-| -5x | Operation interrupted due to pending reboot. | Reboot VM. Installation will continue after reboot. Uninstall should be invoked manually. |
-
+| -1 | Exception occurred. | Check the log files to determine the cause of the exception. |
+| -5x | Operation interrupted due to pending reboot. | Reboot VM. Installation continues after the reboot. Uninstall should be invoked manually. |
### Support
-If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
+If you need more help at any point in this article, contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an Azure support incident. Go to [Azure support](https://azure.microsoft.com/support/options/) and select **Get support**. For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/faq/).
## Next steps
-For more information about extensions, see [Virtual machine extensions and features for Windows](features-windows.md).
-For more information about N-series VMs, see [GPU optimized virtual machine sizes](../sizes-gpu.md).
+- For more information about extensions, see [Virtual machine extensions and features for Windows](features-windows.md).
+- For more information about N-series VMs, see [GPU optimized virtual machine sizes](../sizes-gpu.md).
virtual-machines Field Programmable Gate Arrays Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/field-programmable-gate-arrays-attestation.md
The FPGA Attestation service performs a series of validations on a design checkpoint file (called a "netlist") generated by the Xilinx toolset and produces a file that contains the validated image (called a "bitstream") that can be loaded onto the Xilinx U250 FPGA card in an NP series VM.
+## News
+The current attestation service uses Vitis 2020.2 from Xilinx. On January 10th, 2022, we'll move to Vitis 2021.1; the change should be transparent to most users. Once your designs are "attested" using Vitis 2021.1, you should move to XRT 2021.1. Xilinx will publish new marketplace images based on XRT 2021.1.
+Designs already attested on Vitis 2020.2 will continue to work on the current marketplace images as well as on the new images based on XRT 2021.1.
+
+As part of the move to 2021.1, Xilinx introduced a new DRC related to BUFCE_LEAF that might cause some designs that previously worked on Vitis 2020.2 to fail attestation. For more details, see [Xilinx AR 75980 UltraScale/UltraScale+ BRAM: CLOCK_DOMAIN = Common Mode skew checks](https://support.xilinx.com/s/article/75980?language=en_US).
+ ## Prerequisites You will need an Azure subscription and an Azure Storage account. The subscription gives you access to Azure and the storage account is used to hold your netlist and output files of the attestation service.
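As a minimal sketch of that prerequisite, you could create the storage account with Azure PowerShell as follows. The resource group name, storage account name, and region are placeholders.

```powershell
# Create a resource group and a storage account to hold netlist and attestation output files (names are placeholders).
New-AzResourceGroup -Name "myFpgaAttestRG" -Location "eastus"
New-AzStorageAccount -ResourceGroupName "myFpgaAttestRG" -Name "myfpgaatteststorage" `
    -Location "eastus" -SkuName "Standard_LRS" -Kind "StorageV2"
```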
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/image-builder-overview.md
The Azure Image Builder Service is available in the following regions:
- West Central US - West US - West US 2
+- West US 3
- South Central US - North Europe - West Europe
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/image-builder-json.md
The location is the region where the custom image will be created. The following
- West Central US - West US - West US 2
+- West US 3
- South Central US - North Europe - West Europe
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/image-builder-troubleshoot.md
Get-AzImageBuilderTemplate -ImageTemplateName <imageTemplateName> -ResourceGrou
> [!NOTE] > For PowerShell, you will need to install the [Azure Image Builder PowerShell Modules](../windows/image-builder-powershell.md#prerequisites).
+> [!IMPORTANT]
+> Our 2021-10-01 API introduces a change to the error schema that will be part of every future API release. Any customer who has automated our service should expect to receive a new error output when switching to the 2021-10-01 or newer API versions (the new schema is shown below). We recommend that, once customers switch to the new API version (2021-10-01 and beyond), they don't revert to older versions; otherwise, they'll have to change their automation again to expect the older error schema. We don't anticipate changing the error schema again in future releases.
+
+For API versions 2020-02-14 and older, the error output will look like the following:
+```text
+{
+ "code": "ValidationFailed",
+ "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute//images//imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
+}
+```
+
+For API versions 2021-10-01 and newer, the error output will look like the following:
+```text
+{
+ "error": {
+ "code": "ValidationFailed",
+ "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute//images//imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
+ }
+}
+```
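If your automation parses these responses, one way to handle both shapes is a small sketch like the following; `$responseJson` is a placeholder for the raw response body.

```powershell
# Parse an Image Builder error response and read the code/message from either schema.
$parsed = $responseJson | ConvertFrom-Json
# API versions 2021-10-01 and newer nest the details under an 'error' property; older versions return them at the top level.
$details = if ($parsed.error) { $parsed.error } else { $parsed }
Write-Output "$($details.code): $($details.message)"
```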
+ The following sections include problem resolution guidance for common image template submission errors. ### Update/Upgrade of image templates is currently not supported
Support Subtopic: Azure Image Builder
## Next steps
-For more information, see [Azure Image Builder overview](../image-builder-overview.md).
+For more information, see [Azure Image Builder overview](../image-builder-overview.md).
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/trusted-launch-portal.md
Title: "Deploy a trusted launch VM"
+ Title: Deploy a trusted launch VM
description: Deploy a VM that uses trusted launch.
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/trusted-launch.md
Hyper-V Shielded VM is currently available on Hyper-V only. [Hyper-V Shielded VM
### How can I convert existing VMs to trusted launch?
-For Generation 2 VM, migration path to convert to trusted launch is targeted after general availability (GA).
+You can update a Generation 2 VM to use trusted launch. For more information, see [Deploy a VM with trusted launch enabled](trusted-launch-portal.md#verify-or-update-your-settings).
### What is VM Guest State (VMGS)?
virtual-machines Vm Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/vm-generalized-image-version.md
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig `
-SourcePortRange * ` -DestinationAddressPrefix * ` -DestinationPortRange 3389 `
- -Access Allow
+ -Access Deny
$nsg = New-AzNetworkSecurityGroup ` -ResourceGroupName $resourceGroup ` -Location $location `
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
"protocol": "Tcp", "sourceAddressPrefix": "*", "destinationAddressPrefix": "*",
- "access": "Allow",
+ "access": "Deny",
"destinationPortRange": "3389", "sourcePortRange": "*", "priority": 1000,
virtual-machines Vm Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/vm-specialized-image-version.md
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig `
-SourceAddressPrefix * ` -SourcePortRange * ` -DestinationAddressPrefix * `
- -DestinationPortRange 3389 -Access Allow
+ -DestinationPortRange 3389 -Access Deny
$nsg = New-AzNetworkSecurityGroup ` -ResourceGroupName $resourceGroup ` -Location $location `
virtual-machines Change Availability Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/change-availability-set.md
# Change the availability set for a VM using Azure PowerShell
-**Applies to:** :heavy_check_mark: Windows VMs
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
The following steps describe how to change the availability set of a VM using Azure PowerShell. A VM can only be added to an availability set when it is created. To change the availability set, you need to delete and then recreate the virtual machine.
-This article applies to both Linux and Windows VMs.
- This article was last tested on 2/12/2019 using the [Azure Cloud Shell](https://shell.azure.com/powershell) and the [Az PowerShell module](/powershell/azure/install-az-ps) version 1.2.0.
-This example does not check to see if the VM is attached to a load balancer. If your VM is attached to a load balancer, you will need to update the script to handle that case. Some extensions may also need to be reinstalled after you finish this process.
+> [!WARNING]
+> This is just an example; in some cases, you'll need to update it for your specific deployment.
+>
+> If your VM is attached to a load balancer, you will need to update the script to handle that case (see the sketch after this note).
+>
+> Some extensions may also need to be reinstalled after you finish this process.
+>
+> If your VM uses Azure Hybrid Benefit, you'll need to update the example to enable it on the new VM.
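As an illustration of the load balancer case called out in the warning, the following minimal Azure PowerShell sketch checks whether the VM's primary NIC is referenced by a load balancer backend pool. It assumes the NIC is in the same resource group as the VM, and all names are placeholders.

```powershell
# Check whether the VM's primary NIC sits in a load balancer backend pool (names are placeholders).
$vm      = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
$nicName = ($vm.NetworkProfile.NetworkInterfaces[0].Id -split '/')[-1]
$nic     = Get-AzNetworkInterface -ResourceGroupName "myResourceGroup" -Name $nicName
if ($nic.IpConfigurations.LoadBalancerBackendAddressPools.Count -gt 0) {
    Write-Output "This VM is attached to a load balancer. Re-add the new VM to the backend pool after recreation."
}
```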
## Change the availability set
virtual-machines Create Powershell Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/create-powershell-availability-zone.md
$pip = New-AzPublicIpAddress -ResourceGroupName myResourceGroup -Location eastus
The network security group secures the virtual machine using inbound and outbound rules. In this case, an inbound rule is created for port 3389, which allows incoming remote desktop connections. We also want to create an inbound rule for port 80, which allows incoming web traffic. ```powershell
-# Create an inbound network security group rule for port 3389
+# Create an inbound network security group rule for port 3389 - change -Access to "Allow" if you want to allow RDP access
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleRDP -Protocol Tcp ` -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
- -DestinationPortRange 3389 -Access Allow
+ -DestinationPortRange 3389 -Access Deny
-# Create an inbound network security group rule for port 80
+# Create an inbound network security group rule for port 80 - change -Access to "Allow" if you want to allow web traffic
$nsgRuleWeb = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleWWW -Protocol Tcp ` -Direction Inbound -Priority 1001 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
- -DestinationPortRange 80 -Access Allow
+ -DestinationPortRange 80 -Access Deny
# Create a network security group $nsg = New-AzNetworkSecurityGroup -ResourceGroupName myResourceGroup -Location eastus2 `
virtual-machines Create Vm Generalized Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/create-vm-generalized-managed.md
New-AzVm `
-VirtualNetworkName "myImageVnet" ` -SubnetName "myImageSubnet" ` -SecurityGroupName "myImageNSG" `
- -PublicIpAddressName "myImagePIP" `
- -OpenPorts 3389
+ -PublicIpAddressName "myImagePIP"
```
virtual-machines Create Vm Specialized https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/create-vm-specialized.md
Create the [virtual network](../../virtual-network/virtual-networks-overview.md)
### Create the network security group and an RDP rule
-To be able to sign in to your VM with remote desktop protocol (RDP), you'll need to have a security rule that allows RDP access on port 3389. In our example, the VHD for the new VM was created from an existing specialized VM, so you can use an account that existed on the source virtual machine for RDP.
+To sign in to your VM with remote desktop protocol (RDP), you need a security rule that allows RDP access on port 3389. In our example, the VHD for the new VM was created from an existing specialized VM, so you can use an account that existed on the source virtual machine for RDP. To be more secure, this example denies RDP traffic; change `-Access` to `Allow` if you want to allow RDP access.
This example sets the network security group (NSG) name to *myNsg* and the RDP rule name to *myRdpRule*. ```powershell $nsgName = "myNsg"
-$rdpRule = New-AzNetworkSecurityRuleConfig -Name myRdpRule -Description "Allow RDP" `
- -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
+$rdpRule = New-AzNetworkSecurityRuleConfig -Name myRdpRule -Description "Deny RDP" `
+ -Access Deny -Protocol Tcp -Direction Inbound -Priority 110 `
-SourceAddressPrefix Internet -SourcePortRange * ` -DestinationAddressPrefix * -DestinationPortRange 3389 $nsg = New-AzNetworkSecurityGroup `
virtual-machines Image Builder Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/image-builder-gallery.md
$pip = New-AzPublicIpAddress -ResourceGroupName $vmResourceGroup -Location $repl
-Name "mypublicdns$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4 $nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleRDP -Protocol Tcp ` -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
- -DestinationPortRange 3389 -Access Allow
+ -DestinationPortRange 3389 -Access Deny
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $vmResourceGroup -Location $replRegion2 ` -Name myNetworkSecurityGroup -SecurityRules $nsgRuleRDP $nic = New-AzNetworkInterface -Name myNic -ResourceGroupName $vmResourceGroup -Location $replRegion2 `
virtual-machines Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/proximity-placement-groups.md
New-AzVm `
-ResourceGroupName $resourceGroup ` -Name $vmName ` -Location $location `
- -OpenPorts 3389 `
-ProximityPlacementGroup $ppg.Id ```
virtual-machines Spot Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/spot-powershell.md
$pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $locati
-Name "mypublicdns$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4 $nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleRDP -Protocol Tcp ` -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
- -DestinationPortRange 3389 -Access Allow
+ -DestinationPortRange 3389 -Access Deny
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $resourceGroup -Location $location ` -Name myNetworkSecurityGroup -SecurityRules $nsgRuleRDP $nic = New-AzNetworkInterface -Name myNic -ResourceGroupName $resourceGroup -Location $location `
virtual-machines Tutorial Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/tutorial-custom-images.md
$pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $locati
-Name "mypublicdns$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4 $nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleRDP -Protocol Tcp ` -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
- -DestinationPortRange 3389 -Access Allow
+ -DestinationPortRange 3389 -Access Deny
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $resourceGroup -Location $location ` -Name myNetworkSecurityGroup -SecurityRules $nsgRuleRDP $nic = New-AzNetworkInterface -Name $vmName -ResourceGroupName $resourceGroup -Location $location `
virtual-machines Upload Generalized Managed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/upload-generalized-managed.md
New-AzVm `
-VirtualNetworkName "myVnet" ` -SubnetName "mySubnet" ` -SecurityGroupName "myNSG" `
- -PublicIpAddressName "myPIP" `
- -OpenPorts 3389
+ -PublicIpAddressName "myPIP"
```
virtual-machines Automation Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-get-started.md
Import-Module C:\Azure_SAP_Automated_Deployment\sap-automation\deploy\scripts\pw
## Copy the samples
-You can copy the sample configuration files to start testing the deployment automation framework.
+The repository contains a set of sample configuration files that you can use to start testing the deployment automation framework. Copy them by using the following steps.
# [Linux](#tab/linux) ```bash cd ~/Azure_SAP_Automated_Deployment
-cp -R sap-automation/samples/WORKSPACES WORKSPACES
+cp -Rp sap-automation/samples/WORKSPACES WORKSPACES
``` # [Windows](#tab/windows)
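The Windows command isn't shown in this excerpt; a possible PowerShell equivalent, assuming the default install path referenced earlier in this article, is sketched below.

```powershell
# Copy the sample configuration folder into a writable workspace (path assumes the default install location).
cd C:\Azure_SAP_Automated_Deployment
Copy-Item -Path .\sap-automation\samples\WORKSPACES -Destination .\WORKSPACES -Recurse
```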
virtual-machines Automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/automation-plan-deployment.md
# Plan your deployment of SAP automation framework
-There are multiple considerations for running the [SAP deployment automation framework on Azure](automation-deployment-framework.md), this include topics like deployment credentials management, virtual network design.
+There are multiple considerations for planning an SAP deployment and running the [SAP deployment automation framework on Azure](automation-deployment-framework.md). These include topics like deployment credential management and virtual network design.
For generic SAP on Azure design considerations please visit [Introduction to an SAP adoption scenario](/azure/cloud-adoption-framework/scenarios/sap)
For generic SAP on Azure design considerations please visit [Introduction to an
## Credentials management
-The automation framework uses Azure Active Directory (Azure AD) [Service Principals](#service-principal-creation) for deployment. You can use different deployment credentials for each [workload zone](#workload-zone-structure). The framework keeps these credentials in the [deployer's](automation-deployment-framework.md#deployment-components) key vault in Azure Key Vault. Then, the framework retrieves these credentials dynamically during the deployment process.
+The automation framework uses [Service Principals](#service-principal-creation) for infrastructure deployment. You can use different deployment credentials (service principals) for each [workload zone](#workload-zone-structure). The framework stores these credentials in the [deployer's](automation-deployment-framework.md#deployment-components) key vault in Azure Key Vault. Then, the framework retrieves these credentials dynamically during the deployment process.
-The automation framework also uses credentials for the default virtual machine (VM) accounts, as provided at the time of the VM creation. These credentials include:
+The automation framework also defines the credentials for the default virtual machine (VM) accounts, as provided at the time of the VM creation. These credentials include:
-| Credential | Scope | Storage | Identifier | Description |
-| - | -- | - | - | -- |
-| Local user | Deployer | - | Current user | Bootstraps the deployer. |
-| [Service principal](#service-principal-creation) | Environment | Deployer's key vault | Environment identifier | Does deployment activities. |
+| Credential | Scope | Storage | Identifier | Description |
+| - | -- | - | - | -- |
+| Local user | Deployer | - | Current user | Bootstraps the deployer. |
+| [Service principal](#service-principal-creation) | Environment | Deployer's key vault | Environment identifier | Deployment credentials. |
| VM credentials | Environment | Workload's key vault | Environment identifier | Sets the default VM user information. | ### Service principal creation
-Create your service principals:
+Create your service principal:
1. Sign in to the [Azure CLI](/cli/azure/) with an account that has adequate privileges to create a Service Principal. 1. Create a new Service Principal by running the command `az ad sp create-for-rbac`. Make sure to use a description name for `--name`. For example:
Create your service principals:
"tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" } ```
-1. Assign the User Access Administrator role to your service principal. For example:
+1. Optionally assign the User Access Administrator role to your service principal. For example:
```azurecli az role assignment create --assignee <your-application-ID> --role "User Access Administrator" ```
For more information, see [the Azure CLI documentation for creating a service pr
## DevOps structure
-The Terraform automation templates are in the [SAP deployment automation framework repository](https://github.com/Azure/sap-automation/). For most use cases, consider this repository as read-only and don't modify its Terraform templates.
+The Terraform automation templates are in the [SAP deployment automation framework repository](https://github.com/Azure/sap-automation/). For most use cases, consider this repository as read-only and don't modify it.
-For your own parameter files, it's a best practice to keep these files in a source control repository that you manage. Clone the [SAP deployment automation framework repository](https://github.com/Azure/sap-automation/) and your repository into the same root folder. Then, [create an appropriate folder structure](#folder-structure).
+For your own parameter files, it's a best practice to keep these files in a source control repository that you manage. You can clone the [SAP deployment automation framework repository](https://github.com/Azure/sap-automation/) into your source control repository and then [create an appropriate folder structure](#folder-structure) in the repository.
> [!IMPORTANT] > Your parameter file's name becomes the name of the Terraform state file. Make sure to use a unique parameter file name for this reason.
For more information, see [how to configure the SAP system for automation](autom
When planning a deployment, it's important to consider the overall flow. There are three main steps of an SAP deployment on Azure with the automation framework.
-1. Preparing the region. This step deploys components to support the SAP automation framework in a specified Azure region. Some parts of this step are:
+1. Deploy the control plane. This step deploys components to support the SAP automation framework in a specified Azure region. Some parts of this step are:
1. Creating the deployment environment 1. Creating shared storage for Terraform state files 1. Creating shared storage for SAP installation media
virtual-machines Sap Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-deployment-checklist.md
During this phase, you plan the migration of your SAP workload to the Azure plat
6. The number of Azure subscriptions and core quota for the subscriptions. [Open support requests to increase quotas of Azure subscriptions](../../../azure-portal/supportability/regional-quota-requests.md) as needed. 7. Data reduction and data migration plan for migrating SAP data into Azure. For SAP NetWeaver systems, SAP has guidelines on how to limit the volume of large amounts of data. See [this SAP guide](https://wiki.scn.sap.com/wiki/download/attachments/247399467/DVM_%20Guide_7.2.pdf?version=1&modificationDate=1549365516000&api=v2) about data management in SAP ERP systems. Some of the content also applies to NetWeaver and S/4HANA systems in general. 8. An automated deployment approach. The goal of the automation of infrastructure deployments on Azure is to deploy in a deterministic way and get deterministic results. Many customers use PowerShell or CLI-based scripts. But there are various open-source technologies that you can use to deploy Azure infrastructure for SAP and even install SAP software. You can find examples on GitHub:
- - [Automated SAP Deployments in Azure Cloud](https://github.com/Azure/sap-hana)
+ - [Automated SAP Deployments in Azure Cloud](https://github.com/Azure/sap-automation)
- [SAP HANA Installation](https://github.com/AzureCAT-GSI/SAP-HANA-ARM) 9. Define a regular design and deployment review cadence between you as the customer, the system integrator, Microsoft, and other involved parties.
See these articles:
- [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md) - [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)-- [Considerations for Azure Virtual Machines DBMS deployment for SAP workloads](./dbms_guide_general.md)
+- [Considerations for Azure Virtual Machines DBMS deployment for SAP workloads](./dbms_guide_general.md)
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/ip-services/public-ip-addresses.md
The following table shows the property a public IP can be associated to a resour
| | | | | | | | Virtual machine |Network interface |Yes | Yes | Yes | Yes | | Public Load balancer |Front-end configuration |Yes | Yes | Yes |Yes |
-| Virtual Network gateway (VPN) |Gateway IP configuration |Yes (non-AZ only) |Yes (AZ only) | No |No |
+| Virtual Network gateway (VPN) |Gateway IP configuration |Yes (non-AZ only) |Yes | No |No |
| Virtual Network gateway (ER) |Gateway IP configuration |Yes | No | Yes (preview) |No | | NAT gateway |Gateway IP configuration |No |Yes | No |No | | Application gateway |Front-end configuration |Yes (V1 only) |Yes (V2 only) | No | No |
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/nat-gateway/nat-overview.md
NAT is fully scaled out from the start. There's no ramp up or scale-out operatio
* Public IP * Public IP prefix
-* NAT is compatible with Standard SKU public IP address or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. NAT will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as Basic Load Balancer or Basic Public IP aren't compatible with NAT. Basic resources must be placed on a subnet not associated to a NAT Gateway.
+* NAT is compatible with Standard SKU public IP address or public IP prefix resources or a combination of both. You can use a public IP prefix directly or distribute the public IP addresses of the prefix across multiple NAT gateway resources. NAT will groom all traffic to the range of IP addresses of the prefix. Basic resources, such as Basic Load Balancer or Basic Public IP aren't compatible with NAT. Basic resources must be placed on a subnet not associated to a NAT Gateway. Basic Load Balancer and Basic Public IP can be upgraded to standard in order to work with NAT gateway.
+ * To upgrade a basic load balancer to standard, see [Upgrade Azure Public Load Balancer](/azure/load-balancer/upgrade-basic-standard)
+ * To upgrade a basic public IP to standard, see [Upgrade a public IP address](/azure/virtual-network/ip-services/public-ip-upgrade-portal)
* NAT cannot be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. However, it can be associated to a dual stack subnet. * NAT allows flows to be created from the virtual network to the services outside your VNet. Return traffic from the Internet is only allowed in response to an active flow. Services outside your VNet cannot initiate a connection to instances. * NAT can't span multiple virtual networks.
virtual-network Region Move Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/nat-gateway/region-move-nat-gateway.md
+
+ Title: Create and configure NAT gateway after moving resources to another region
+description: Learn how to configure a new NAT gateway for resources moved to another region.
+++++ Last updated : 01/04/2022+++
+# Create and configure NAT gateway after moving resources to another region
+
+In this article, learn how to configure a NAT gateway after moving resources to a different region. You might want to move resources to take advantage of a new Azure region that's better suited to your customers' geographical presence or other needs, to meet internal policy and governance requirements, or to take advantage of your organization's infrastructure.
+
+> [!NOTE]
+> NAT gateway instances can't directly be moved from one region to another. A workaround is to use Azure Resource Mover to move all the resources associated with the existing NAT gateway to the new region. You then create a new instance of NAT gateway in the new region and then associate the moved resources with the new instance. After the new NAT gateway is functional in the new region, you delete the old instance in the previous region.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- **Owner** access to the subscription in which the resources you want to move are located.
+
+- Resources from the previous region moved to the new region. For more information on moving resources to another region, see [Move resources to another region with Azure Resource Mover](../../resource-mover/move-region-within-resource-group.md). Follow the steps in that article to move the resources that are associated with the NAT gateway in your previous region. After the resources are successfully moved, continue with the steps in this article.
+
+## Create a new NAT gateway
+
+After you've moved all the resources associated with the original NAT gateway to the new region and verified them, use the following steps to create a new NAT gateway instance. You can then associate the new NAT gateway with the moved resources.
+
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways**.
+
+2. Select **+ Create**.
+
+3. In **Create network address translation (NAT) gateway**, enter or select the following information in the **Basics** tab.
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. </br> Instead, you can select the existing resource group associated with the moved resources in the subscription. |
+ | **Instance details** | |
+ | Name | Enter **myNATgateway**. |
+ | Region | Select the name of the new region. |
+ | Availability Zone | Select **None**. Instead, you can select the zone of the moved resources if applicable. |
+ | Idle timeout (minutes) | Enter **10**. |
+
+4. Select the **Outbound IP** tab, or select **Next: Outbound IP** at the bottom of the page.
+
+5. In the **Outbound IP** tab, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
+ | Public IP addresses | Select **Create a new public IP address**. </br> Enter **myNATPublicIP** in **Name**. </br> Select **OK**. </br> Instead, you can select an existing public IP in your subscription if applicable. |
+
+6. Select the **Subnet** tab, or select **Next: Subnet** at the bottom of the page.
+
+7. Select the pull-down box under **Virtual network** in the **Subnet** tab. Select the **Virtual Network** that you **moved** using Azure Resource Mover.
+
+8. In **Subnet name**, select the **subnet** that you **moved** using Azure Resource Mover.
+
+9. Select the **Review + create** tab, or select the **Review + create** button at the bottom of the page.
+
+10. Select **Create**.
+
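If you prefer to script this deployment, a minimal Azure PowerShell sketch of the same steps might look like the following. The resource names, region, and address prefix are placeholders; adjust them to match your moved resources.

```powershell
# Create a Standard public IP and a NAT gateway in the new region (names and region are placeholders).
$pip = New-AzPublicIpAddress -ResourceGroupName "myResourceGroup" -Name "myNATPublicIP" `
    -Location "westus3" -Sku Standard -AllocationMethod Static
$nat = New-AzNatGateway -ResourceGroupName "myResourceGroup" -Name "myNATgateway" `
    -Location "westus3" -Sku "Standard" -IdleTimeoutInMinutes 10 -PublicIpAddress $pip

# Associate the NAT gateway with the subnet you moved (virtual network, subnet name, and prefix are placeholders).
$vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myVNet"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnet" `
    -AddressPrefix "10.1.0.0/24" -NatGateway $nat | Set-AzVirtualNetwork
```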
+## Test NAT gateway in new region
+
+For steps on how to test the NAT gateway, see [Tutorial: Create a NAT gateway - Azure portal](tutorial-create-nat-gateway-portal.md#test-nat-gateway).
+
+## Delete old instance of NAT gateway
+
+After you've created the new NAT gateway and tested it, you can delete the source resources from the old region. This step automatically deletes the original NAT gateway.
+
+## Next steps
+
+For more information on moving resources in Azure, see:
+
+- [Move NSGs to another region](../move-across-regions-nsg-portal.md).
+- [Move public IP addresses to another region](../move-across-regions-publicip-portal.md).
+- [Move a storage account to another region](../../storage/common/storage-account-move.md?tabs=azure-portal)
++
vpn-gateway Bgp Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/bgp-howto.md
In this step, you create a VPN gateway with the corresponding BGP parameters.
* The **Azure APIPA BGP IP address** field is optional. If your on-premises VPN devices use APIPA address for BGP, you must select an address from the Azure-reserved APIPA address range for VPN, which is from **169.254.21.0** to **169.254.22.255**. This example uses 169.254.21.11.
- * If you are creating an active-active VPN gateway, the BGP section will show an additional **Second Custom Azure APIPA BGP IP address**. From the allowed APIPA range (**169.254.21.0** to **169.254.22.255**), select another IP address. The second IP address must be different than the first address.
+ * If you are creating an active-active VPN gateway, the BGP section will show an additional **Second Custom Azure APIPA BGP IP address**. Each address you select must be unique and fall within the allowed APIPA range (**169.254.21.0** to **169.254.22.255**). Active-active gateways also support multiple addresses for both **Azure APIPA BGP IP address** and **Second Custom Azure APIPA BGP IP address**. Additional inputs appear only after you enter your first APIPA BGP IP address.
> [!IMPORTANT] >