Updates from: 06/25/2022 01:08:54
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Application Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/application-types.md
In a web application, each execution of a [policy](user-flow-overview.md) takes
Validation of the `id_token` by using a public signing key that is received from Azure AD is sufficient to verify the identity of the user. This process also sets a session cookie that can be used to identify the user on subsequent page requests.
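For instance, the following PowerShell sketch decodes the payload of an `id_token` so you can inspect its claims before relying on it. It assumes you already have a token in a variable, and it doesn't replace signature validation: a full check still needs to verify the token against the public signing keys published in the policy's OpenID Connect metadata endpoint.

```powershell
# Minimal sketch: inspect id_token claims (assumes $idToken holds a token; the signature is NOT validated here)
$idToken = "<your id_token>"
$payload = $idToken.Split('.')[1].Replace('-', '+').Replace('_', '/')
# Restore the Base64 padding that the JWT encoding strips
switch ($payload.Length % 4) {
    2 { $payload += '==' }
    3 { $payload += '=' }
}
[System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json
```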
-To see this scenario in action, try one of the web application sign in code samples in our [Getting started section](overview.md).
+To see this scenario in action, try one of the web application sign-in code samples in our [Getting started section](overview.md).
In addition to facilitating simple sign in, a web server application might also need to access a back-end web service. In this case, the web application can perform a slightly different [OpenID Connect flow](openid-connect.md) and acquire tokens by using authorization codes and refresh tokens. This scenario is depicted in the following [Web APIs section](#web-apis).
In this flow, the application executes [policies](user-flow-overview.md) and rec
Applications that contain long-running processes or that operate without the presence of a user also need a way to access secured resources such as web APIs. These applications can authenticate and get tokens by using their identities (rather than a user's delegated identity) and by using the OAuth 2.0 client credentials flow. Client credentials flow isn't the same as the on-behalf-of flow, and the on-behalf-of flow shouldn't be used for server-to-server authentication.
-The [OAuth 2.0 client credentials flow](./client-credentials-grant-flow.md) is currently in public preview. You can also set up client credential flow using Azure AD and the Microsoft identity platform /token endpoint (`https://login.microsoftonline.com/your-tenant-name.onmicrosoft.com/oauth2/v2.0/token`) for a [Microsoft Graph application](microsoft-graph-get-started.md) or your own application. For more information, check out the [Azure AD token reference](../active-directory/develop/id-tokens.md) article.
+For Azure AD B2C, the [OAuth 2.0 client credentials flow](./client-credentials-grant-flow.md) is currently in public preview. However, you can set up client credential flow using Azure AD and the Microsoft identity platform `/token` endpoint (`https://login.microsoftonline.com/your-tenant-name.onmicrosoft.com/oauth2/v2.0/token`) for a [Microsoft Graph application](microsoft-graph-get-started.md) or your own application. For more information, check out the [Azure AD token reference](../active-directory/develop/id-tokens.md) article.
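As a rough illustration of that flow, the token request against the Microsoft identity platform endpoint can be issued with a few lines of PowerShell. The tenant name, client ID, secret, and scope below are placeholders, not values from this article.

```powershell
# Minimal sketch: request an app-only token with the client credentials flow (placeholder values)
$tenant = "your-tenant-name.onmicrosoft.com"
$body = @{
    grant_type    = "client_credentials"
    client_id     = "<application (client) ID>"
    client_secret = "<client secret>"
    scope         = "https://graph.microsoft.com/.default"   # or "<your API ID URI>/.default"
}
$response = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenant/oauth2/v2.0/token" -Body $body
$response.access_token
```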
## Unsupported application types
active-directory-b2c Client Credentials Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/client-credentials-grant-flow.md
Previously updated : 06/15/2022 Last updated : 06/21/2022
The OAuth 2.0 client credentials grant flow permits an app (confidential client)
In the client credentials flow, permissions are granted directly to the application itself by an administrator. When the app presents a token to a resource, the resource enforces that the app itself has authorization to perform an action since there's no user involved in the authentication. This article covers the steps needed to authorize an application to call an API, and how to get the tokens needed to call that API.
+**This feature is in public preview.**
+ ## App registration overview
To enable your app to sign in with client credentials and call a web API, you register two applications in the Azure AD B2C directory.
can't contain spaces. The following example demonstrates two app roles, read and
## Step 2. Register an application
-To enable your app to sign in with Azure AD B2C using client credentials flow, register your applications (**App 1**). To create the web API app registration, follow these steps:
+To enable your app to sign in with Azure AD B2C using client credentials flow, you can use an existing application or register a new one (**App 1**).
+
+If you're using an existing app, make sure the app's `accessTokenAcceptedVersion` is set to `2`:
+
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select your existing app from the list.
+1. In the left menu, under **Manage**, select **Manifest** to open the manifest editor.
+1. Locate the `accessTokenAcceptedVersion` element, and set its value to `2`.
+1. At the top of the page, select **Save** to save the changes.
+
+To create a new web app registration, follow these steps:
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Select **App registrations**, and then select **New registration**.
$appId = "<client ID>"
$secret = "<client secret>"
$endpoint = "https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy>/oauth2/v2.0/token"
$scope = "<Your API id uri>/.default"
-$body = "granttype=client_credentials&scope=" + $scope + "&client_id=" + $appId + "&client_secret=" + $secret
+$body = "grant_type=client_credentials&scope=" + $scope + "&client_id=" + $appId + "&client_secret=" + $secret
$token = Invoke-RestMethod -Method Post -Uri $endpoint -Body $body
```
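Once a token comes back, the call to the protected web API typically attaches it as a bearer token. The sketch below assumes a hypothetical API URL and reuses the `$token` response from the snippet above.

```powershell
# Minimal sketch: call the protected web API with the acquired token (hypothetical endpoint)
$headers = @{ Authorization = "Bearer $($token.access_token)" }
Invoke-RestMethod -Method Get -Uri "https://your-api.example.com/hello" -Headers $headers
```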
active-directory-b2c Identity Provider Swissid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-swissid.md
To enable sign-in for users with a SwissID account in Azure AD B2C, you need to
|Key |Note |
|---|---|
- | Environment| The SwissID OpenId well-known configuration endpoint. For example, <https://login.sandbox.pre.swissid.ch/idp/oauth2/.well-known/openid-configuration>. |
- | Client ID | The SwissID client ID. For example, 11111111-2222-3333-4444-555555555555. |
+ | Environment| The SwissID OpenId well-known configuration endpoint. For example, `https://login.sandbox.pre.swissid.ch/idp/oauth2/.well-known/openid-configuration`. |
+ | Client ID | The SwissID client ID. For example, `11111111-2222-3333-4444-555555555555`. |
| Password| The SwissID client secret.|
active-directory-b2c Implicit Flow Single Page Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/implicit-flow-single-page-application.md
Title: Single-page application sign-in using the OAuth 2.0 implicit flow in Azure Active Directory B2C
-description: Learn how to add single-page sign in using the OAuth 2.0 implicit flow with Azure Active Directory B2C.
+description: Learn how to add single-page sign-in using the OAuth 2.0 implicit flow with Azure Active Directory B2C.
Some frameworks, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authent
Azure AD B2C extends the standard OAuth 2.0 implicit flow to more than simple authentication and authorization. Azure AD B2C introduces the [policy parameter](user-flow-overview.md). With the policy parameter, you can use OAuth 2.0 to add policies to your app, such as sign-up, sign-in, and profile management user flows. In the example HTTP requests in this article, we use **{tenant}.onmicrosoft.com** for illustration. Replace `{tenant}` with [the name of your tenant](tenant-management.md#get-your-tenant-name) if you have one. Also, you need to have [created a user flow](tutorial-create-user-flows.md?pivots=b2c-user-flow).
-We use the following figure to illustrate implicit sign in flow. Each step is described in detail later in the article.
+We use the following figure to illustrate implicit sign-in flow. Each step is described in detail later in the article.
![Swimlane-style diagram showing the OpenID Connect implicit flow](./media/implicit-flow-single-page-application/convergence_scenarios_implicit.png)
The parameters in the HTTP GET request are explained in the table below.
| scope | Yes | A space-separated list of scopes. A single scope value indicates to Azure AD both of the permissions that are being requested. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web apps. It indicates that your app needs a refresh token for long-lived access to resources. |
| state | No | A value included in the request that also is returned in the token response. It can be a string of any content that you want to use. Usually, a randomly generated, unique value is used, to prevent cross-site request forgery attacks. The state is also used to encode information about the user's state in the app before the authentication request occurred, for example, the page the user was on, or the user flow that was being executed. |
| nonce | Yes | A value included in the request (generated by the app) that is included in the resulting ID token as a claim. The app can then verify this value to mitigate token replay attacks. Usually, the value is a randomized, unique string that can be used to identify the origin of the request. |
-| prompt | No | The type of user interaction that's required. Currently, the only valid value is `login`. This parameter forces the user to enter their credentials on that request. Single sign-on doesn't take effect. |
+| prompt | No | The type of user interaction that's required. Currently, the only valid value is `login`. This parameter forces the user to enter their credentials on that request. Single Sign-On doesn't take effect. |
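To make those parameters concrete, the PowerShell sketch below assembles an implicit-flow authorization URL. The tenant, policy, client ID, and redirect URI are assumed placeholder values, not ones taken from this article.

```powershell
# Minimal sketch: build an implicit-flow /authorize request URL (placeholder values)
$tenant      = "contoso"
$policy      = "b2c_1_signupsignin"
$clientId    = "00000000-0000-0000-0000-000000000000"
$redirectUri = [uri]::EscapeDataString("https://jwt.ms")
$scope       = [uri]::EscapeDataString("openid")
$state       = [guid]::NewGuid().ToString()
$nonce       = [guid]::NewGuid().ToString()

"https://$tenant.b2clogin.com/$tenant.onmicrosoft.com/$policy/oauth2/v2.0/authorize" +
    "?client_id=$clientId&response_type=id_token&redirect_uri=$redirectUri" +
    "&response_mode=fragment&scope=$scope&state=$state&nonce=$nonce"
```

Opening the resulting URL in a browser starts the interactive policy execution described next.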
This is the interactive part of the flow. The user is asked to complete the policy's workflow. The user might have to enter their username and password, sign in with a social identity, sign up for a local account, or any other number of steps. User actions depend on how the user flow is defined.
ID tokens and access tokens both expire after a short period of time. Your app m
## Send a sign-out request
-When you want to sign the user out of the app, redirect the user to Azure AD B2C's sign-out endpoint. You can then clear the user's session in the app. If you don't redirect the user, they might be able to reauthenticate to your app without entering their credentials again because they have a valid single sign-on session with Azure AD B2C.
+When you want to sign the user out of the app, redirect the user to Azure AD B2C's sign-out endpoint. You can then clear the user's session in the app. If you don't redirect the user, they might be able to reauthenticate to your app without entering their credentials again because they have a valid Single Sign-On session with Azure AD B2C.
You can simply redirect the user to the `end_session_endpoint` that is listed in the same OpenID Connect metadata document described in [Validate the ID token](#validate-the-id-token). For example:
GET https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/
> [!NOTE]
-> Directing the user to the `end_session_endpoint` clears some of the user's single sign-on state with Azure AD B2C. However, it doesn't sign the user out of the user's social identity provider session. If the user selects the same identity provider during a subsequent sign in, the user is re-authenticated, without entering their credentials. If a user wants to sign out of your Azure AD B2C application, it doesn't necessarily mean they want to completely sign out of their Facebook account, for example. However, for local accounts, the user's session will be ended properly.
->
+> Directing the user to the `end_session_endpoint` clears some of the user's Single Sign-On state with Azure AD B2C. However, it doesn't sign the user out of the user's social identity provider session. If the user selects the same identity provider during a subsequent sign in, the user is re-authenticated, without entering their credentials. If a user wants to sign out of your Azure AD B2C application, it doesn't necessarily mean they want to completely sign out of their Facebook account, for example. However, for local accounts, the user's session will be ended properly.
+ ## Next steps
active-directory-b2c Microsoft Graph Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-get-started.md
Previously updated : 09/20/2021 Last updated : 06/24/2022
There are two modes of communication you can use when working with the Microsoft
You enable the **Automated** interaction scenario by creating an application registration shown in the following sections.
-Although the OAuth 2.0 client credentials grant flow is not currently directly supported by the Azure AD B2C authentication service, you can set up client credential flow using Azure AD and the Microsoft identity platform /token endpoint for an application in your Azure AD B2C tenant. An Azure AD B2C tenant shares some functionality with Azure AD enterprise tenants.
+Azure AD B2C authentication service directly supports OAuth 2.0 client credentials grant flow (**currently in public preview**), but you can't use it to manage your Azure AD B2C resources via Microsoft Graph API. However, you can set up [client credential flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) using Azure AD and the Microsoft identity platform `/token` endpoint for an application in your Azure AD B2C tenant.
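As a sketch of that automated setup, the script below calls Microsoft Graph with an app-only token. It assumes `$token` was acquired the same way as in the earlier client credentials example, but using the management app's credentials and the `https://graph.microsoft.com/.default` scope, and that the app has been granted (and admin-consented for) a Graph application permission such as `User.Read.All`.

```powershell
# Minimal sketch: list users in the B2C tenant with an app-only Graph token (assumes $token.access_token)
$headers = @{ Authorization = "Bearer $($token.access_token)" }
$result  = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/users" -Headers $headers
$result.value | Select-Object displayName, userPrincipalName
```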
## Register management application
active-directory Application Proxy Connector Installation Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connector-installation-problem.md
When the installation of a connector fails, the root cause is usually one of the
> >
-**Review the pre-requisites required:**
+**Review the prerequisites required:**
-1. Verify the machine supports TLS1.2 – All Windows versions after 2012 R2 should support TLS 1.2. If your connector machine is from a version of 2012 R2 or prior, make sure that the following KBs are installed on the machine: <https://support.microsoft.com/help/2973337/sha512-is-disabled-in-windows-when-you-use-tls-1.2>
+1. Verify the machine supports TLS1.2 – All Windows versions after 2012 R2 should support TLS 1.2. If your connector machine is from a version of 2012 R2 or prior, make sure that the [required updates](https://support.microsoft.com/help/2973337/sha512-is-disabled-in-windows-when-you-use-tls-1.2) are installed (see the registry sketch after this list).
2. Contact your network admin and ask to verify that the backend proxy and firewall do not block SHA512 for outgoing traffic.
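If the prerequisite check shows that TLS 1.2 isn't being used, a commonly applied fix is to enable strong cryptography for the .NET Framework through the registry, as in the hedged PowerShell sketch below. The exact keys shown are an assumption to verify against current Microsoft guidance for your OS version, and the server needs a reboot after the change.

```powershell
# Sketch (assumed keys): enable strong crypto / TLS 1.2 for .NET Framework 4.x apps, 64-bit and 32-bit hives
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' `
    -Name 'SchUseStrongCrypto' -Value 1 -PropertyType DWord -Force
New-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319' `
    -Name 'SchUseStrongCrypto' -Value 1 -PropertyType DWord -Force
# Restart the server so the connector service picks up the change
```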
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Title: Microsoft Entra Authenticator app authentication method - Azure Active Directory
-description: Learn about using the Microsoft Entra Authenticator app in Azure Active Directory to help secure your sign-ins
+ Title: Microsoft Authenticator authentication method - Azure Active Directory
+description: Learn about using the Microsoft Authenticator in Azure Active Directory to help secure your sign-ins
Previously updated : 06/09/2022 Last updated : 06/23/2022
# Customer intent: As an identity administrator, I want to understand how to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# Authentication methods in Azure Active Directory - Microsoft Entra Authenticator app
+# Authentication methods in Azure Active Directory - Microsoft Authenticator app
-The Microsoft Entra Authenticator app provides an additional level of security to your Azure AD work or school account or your Microsoft account and is available for [Android](https://go.microsoft.com/fwlink/?linkid=866594) and [iOS](https://go.microsoft.com/fwlink/?linkid=866594). With the Microsoft Authenticator app, users can authenticate in a passwordless way during sign-in, or as an additional verification option during self-service password reset (SSPR) or multifactor authentication events.
+The Microsoft Authenticator app provides an additional level of security to your Azure AD work or school account or your Microsoft account and is available for [Android](https://go.microsoft.com/fwlink/?linkid=866594) and [iOS](https://go.microsoft.com/fwlink/?linkid=866594). With the Microsoft Authenticator app, users can authenticate in a passwordless way during sign-in, or as an additional verification option during self-service password reset (SSPR) or multifactor authentication events.
Users may receive a notification through the mobile app for them to approve or deny, or use the Authenticator app to generate an OATH verification code that can be entered in a sign-in interface. If you enable both a notification and verification code, users who register the Authenticator app can use either method to verify their identity.
-To use the Authenticator app at a sign-in prompt rather than a username and password combination, see [Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md).
+To use the Authenticator app at a sign-in prompt rather than a username and password combination, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md).
> [!NOTE] > Users don't have the option to register their mobile app when they enable SSPR. Instead, users can register their mobile app at [https://aka.ms/mfasetup](https://aka.ms/mfasetup) or as part of the combined security info registration at [https://aka.ms/setupsecurityinfo](https://aka.ms/setupsecurityinfo).
Instead of seeing a prompt for a password after entering a username, a user that
This authentication method provides a high level of security, and removes the need for the user to provide a password at sign-in.
-To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md).
+To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md).
## Notification through mobile app
Users may have a combination of up to five OATH hardware tokens or authenticator
## Next steps -- To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md).
+- To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md).
- Learn more about configuring authentication methods using the [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview).
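For example, the hedged sketch below reads the tenant's authentication methods policy from the Graph v1.0 endpoint; it assumes `$accessToken` already holds a Graph token for an app or user with a permission such as `Policy.Read.All`.

```powershell
# Sketch: read the authentication methods policy via Microsoft Graph (assumes $accessToken is already acquired)
$headers = @{ Authorization = "Bearer $accessToken" }
$policy  = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy" -Headers $headers
$policy.authenticationMethodConfigurations | Select-Object id, state
```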
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
Title: Azure Active Directory passwordless sign-in
-description: Learn about options for passwordless sign-in to Azure Active Directory using FIDO2 security keys or the Microsoft Entra Authenticator app
+description: Learn about options for passwordless sign-in to Azure Active Directory using FIDO2 security keys or Microsoft Authenticator
Previously updated : 06/09/2022 Last updated : 06/23/2022
Features like multifactor authentication (MFA) are a great way to secure your or
Each organization has different needs when it comes to authentication. Microsoft global Azure and Azure Government offer the following three passwordless authentication options that integrate with Azure Active Directory (Azure AD): - Windows Hello for Business-- Microsoft Entra Authenticator app
+- Microsoft Authenticator
- FIDO2 security keys ![Authentication: Security versus convenience](./media/concept-authentication-passwordless/passwordless-convenience-security.png)
The following steps show how the sign-in process works with Azure AD:
The Windows Hello for Business [planning guide](/windows/security/identity-protection/hello-for-business/hello-planning-guide) can be used to help you make decisions on the type of Windows Hello for Business deployment and the options you'll need to consider.
-## Microsoft Entra Authenticator App
+## Microsoft Authenticator
You can also allow your employee's phone to become a passwordless authentication method. You may already be using the Authenticator app as a convenient multi-factor authentication option in addition to a password. You can also use the Authenticator App as a passwordless option.
-![Sign in to Microsoft Edge with the Microsoft Entra Authenticator app](./media/concept-authentication-passwordless/concept-web-sign-in-microsoft-authenticator-app.png)
+![Sign in to Microsoft Edge with the Microsoft Authenticator](./media/concept-authentication-passwordless/concept-web-sign-in-microsoft-authenticator-app.png)
-The Authenticator App turns any iOS or Android phone into a strong, passwordless credential. Users can sign in to any platform or browser by getting a notification to their phone, matching a number displayed on the screen to the one on their phone, and then using their biometric (touch or face) or PIN to confirm. Refer to [Download and install the Microsoft Entra Authenticator app](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a) for installation details.
+The Authenticator App turns any iOS or Android phone into a strong, passwordless credential. Users can sign in to any platform or browser by getting a notification to their phone, matching a number displayed on the screen to the one on their phone, and then using their biometric (touch or face) or PIN to confirm. Refer to [Download and install the Microsoft Authenticator](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a) for installation details.
Passwordless authentication using the Authenticator app follows the same basic pattern as Windows Hello for Business. It's a little more complicated as the user needs to be identified so that Azure AD can find the Authenticator app version being used:
active-directory How To Migrate Mfa Server To Azure Mfa User Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md
Last updated 04/07/2022
active-directory How To Migrate Mfa Server To Azure Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-with-federation.md
Last updated 04/21/2022
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
Title: Migrate from MFA Server to Azure AD Multi-Factor Authentication - Azure Active Directory
-description: Step-by-step guidance to migrate from Azure MFA Server on-premises to Azure Multi-Factor Authentication
+description: Step-by-step guidance to migrate from MFA Server on-premises to Azure AD Multi-Factor Authentication
Previously updated : 06/09/2022 Last updated : 06/23/2022
-# Migrate from Azure MFA Server to Azure AD Multi-Factor Authentication
+# Migrate from MFA Server to Azure AD Multi-Factor Authentication
-Multifactor authentication (MFA) is important to securing your infrastructure and assets from bad actors. Azure Multi-Factor Authentication Server (MFA Server) isn't available for new deployments and will be deprecated. Customers who are using MFA Server should move to using cloud-based Azure Active Directory (Azure AD) Multi-Factor Authentication.
+Multifactor authentication (MFA) is important to securing your infrastructure and assets from bad actors. Azure AD Multi-Factor Authentication Server (MFA Server) isn't available for new deployments and will be deprecated. Customers who are using MFA Server should move to using cloud-based Azure Active Directory (Azure AD) Multi-Factor Authentication.
In this article, we assume that you have a hybrid environment where:
There are multiple possible end states to your migration, depending on your goal
| <br> | Goal: Decommission MFA Server ONLY | Goal: Decommission MFA Server and move to Azure AD Authentication | Goal: Decommission MFA Server and AD FS |
|---|---|---|---|
|MFA provider | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. |
-|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** seamless single sign-on (SSO).| Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** SSO. |
+|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** Seamless Single Sign-On (SSO).| Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** SSO. |
|Application authentication | Continue to use AD FS authentication for your applications. | Continue to use AD FS authentication for your applications. | Move apps to Azure AD before migrating to Azure AD Multi-Factor Authentication. |

If you can, move both your multifactor authentication and your user authentication to Azure. For step-by-step guidance, see [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md).
While you can migrate users' registered multifactor authentication phone numbe
Users will need to register and add a new account on the Authenticator app and remove the old account. To help users to differentiate the newly added account from the old account linked to the MFA Server, make sure the Account name for the Mobile App on the MFA Server is named in a way to distinguish the two accounts.
-For example, the Account name that appears under Mobile App on the MFA Server has been renamed to OnPrem MFA Server.
+For example, the Account name that appears under Mobile App on the MFA Server has been renamed to On-Premises MFA Server.
The account name on the Authenticator App will change with the next push notification to the user. Migrating phone numbers can also lead to stale numbers being migrated and make users more likely to stay on phone-based MFA instead of setting up more secure methods like Microsoft Authenticator in passwordless mode.
We recommend that you use Password Hash Synchronization (PHS).
### Passwordless authentication
-As part of enrolling users to use Microsoft Authenticator as a second factor, we recommend you enable passwordless phone sign-in as part of their registration. For more information, including other passwordless methods such as FIDO and Windows Hello for Business, visit [Plan a passwordless authentication deployment with Azure AD](howto-authentication-passwordless-deployment.md#plan-for-and-deploy-the-authenticator-app).
+As part of enrolling users to use Microsoft Authenticator as a second factor, we recommend you enable passwordless phone sign-in as part of their registration. For more information, including other passwordless methods such as FIDO and Windows Hello for Business, visit [Plan a passwordless authentication deployment with Azure AD](howto-authentication-passwordless-deployment.md#plan-for-and-deploy-microsoft-authenticator).
### Microsoft Identity Manager self-service password reset
Check with the service provider for supported product versions and their capabil
- The NPS extension doesn't use Azure AD Conditional Access policies. If you stay with RADIUS and use the NPS extension, all authentication requests going to NPS will require the user to perform MFA. - Users must register for Azure AD Multi-Factor Authentication prior to using the NPS extension. Otherwise, the extension fails to authenticate the user, which can generate help desk calls. - When the NPS extension invokes MFA, the MFA request is sent to the user's default MFA method.
- - Because the sign-in happens on non-Microsoft applications, it is unlikely that the user will see visual notification that multifactor authentication is required and that a request has been sent to their device.
- - During the multifactor authentication requirement, the user must have access to their default authentication method to complete the requirement. They cannot choose an alternative method. Their default authentication method will be used even if it is disabled in the tenant authentication methods and multifactor authentication policies.
+ - Because the sign-in happens on non-Microsoft applications, it's unlikely that the user will see visual notification that multifactor authentication is required and that a request has been sent to their device.
+ - During the multifactor authentication requirement, the user must have access to their default authentication method to complete the requirement. They can't choose an alternative method. Their default authentication method will be used even if it's disabled in the tenant authentication methods and multifactor authentication policies.
- Users can change their default multifactor authentication method in the Security Info page (aka.ms/mysecurityinfo). - Available MFA methods for RADIUS clients are controlled by the client systems sending the RADIUS access requests. - MFA methods that require user input after they enter a password can only be used with systems that support access-challenge responses with RADIUS. Input methods might include OTP, hardware OATH tokens or the Microsoft Authenticator application.
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
Previously updated : 06/09/2022 Last updated : 06/23/2022
Passwords are a primary attack vector. Bad actors use social engineering, phishi
Microsoft offers the following [three passwordless authentication options](concept-authentication-passwordless.md) that integrate with Azure Active Directory (Azure AD):
-* [Microsoft Entra Authenticator app](./concept-authentication-passwordless.md#microsoft-entra-authenticator-app) - turns any iOS or Android phone into a strong, passwordless credential by allowing users to sign into any platform or browser.
+* [Microsoft Authenticator](./concept-authentication-passwordless.md#microsoft-authenticator) - turns any iOS or Android phone into a strong, passwordless credential by allowing users to sign into any platform or browser.
* [FIDO2-compliant security keys](./concept-authentication-passwordless.md#fido2-security-keys) - useful for users who sign in to shared machines like kiosks, in situations where use of phones is restricted, and for highly privileged identities.
The following table lists the passwordless authentication methods by device type
| Device types| Passwordless authentication method |
| - | - |
-| Dedicated non-windows devices| <li> **Microsoft Entra Authenticator app** <li> Security keys |
+| Dedicated non-windows devices| <li> **Microsoft Authenticator** <li> Security keys |
| Dedicated Windows 10 computers (version 1703 and later)| <li> **Windows Hello for Business** <li> Security keys |
-| Dedicated Windows 10 computers (before version 1703)| <li> **Windows Hello for Business** <li> Microsoft Entra Authenticator app |
-| Shared devices: tablets, and mobile devices| <li> **Microsoft Entra Authenticator app** <li> One-time password sign-in |
-| Kiosks (Legacy)| **Microsoft Entra Authenticator app** |
-| Kiosks and shared computers (Windows 10)| <li> **Security keys** <li> Microsoft Entra Authenticator app |
+| Dedicated Windows 10 computers (before version 1703)| <li> **Windows Hello for Business** <li> Microsoft Authenticator app |
+| Shared devices: tablets, and mobile devices| <li> **Microsoft Authenticator** <li> One-time password sign-in |
+| Kiosks (Legacy)| **Microsoft Authenticator** |
+| Kiosks and shared computers (Windows 10)| <li> **Security keys** <li> Microsoft Authenticator app |
## Prerequisites
As part of this deployment plan, we recommend that passwordless authentication b
The prerequisites are determined by your selected passwordless authentication methods.
-| Prerequisite| Microsoft Entra Authenticator app| FIDO2 Security Keys|
+| Prerequisite| Microsoft Authenticator| FIDO2 Security Keys|
| - | - | - |
| [Combined registration for Azure AD Multi-Factor Authentication (MFA) and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md) is enabled| √| √|
| [Users can perform Azure AD MFA](howto-mfa-getstarted.md)| √| √|
Your communications to end users should include the following information:
* [Guidance on combined registration for both Azure AD MFA and SSPR](howto-registration-mfa-sspr-combined.md)
-* [Downloading the Microsoft Entra Authenticator app](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a)
+* [Downloading Microsoft Authenticator](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a)
-* [Registering in the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md)
+* [Registering in Microsoft Authenticator](howto-authentication-passwordless-phone.md)
* [Signing in with your phone](https://support.microsoft.com/account-billing/sign-in-to-your-accounts-using-the-microsoft-authenticator-app-582bdc07-4566-4c97-a7aa-56058122714c)
This method can also be used for easy recovery when the user has lost or forgott
>[!NOTE] > If you can't use the security key or the Authenticator app for some scenarios, multifactor authentication with a username and password along with another registered method can be used as a fallback option.
-## Plan for and deploy the Authenticator app
+## Plan for and deploy Microsoft Authenticator
-The [Authenticator app](concept-authentication-passwordless.md) turns any iOS or Android phone into a strong, passwordless credential. It's a free download from Google Play or the Apple App Store. Have users [download the Microsoft Entra Authenticator app](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a) and follow the directions to enable phone sign-in.
+[Microsoft Authenticator](concept-authentication-passwordless.md) turns any iOS or Android phone into a strong, passwordless credential. It's a free download from Google Play or the Apple App Store. Have users [download Microsoft Authenticator](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a) and follow the directions to enable phone sign-in.
### Technical considerations **Active Directory Federation Services (AD FS) Integration** - When a user enables the Authenticator passwordless credential, authentication for that user defaults to sending a notification for approval. Users in a hybrid tenant are prevented from being directed to AD FS for sign-in unless they select "Use your password instead." This process also bypasses any on-premises Conditional Access policies, and pass-through authentication (PTA) flows. However, if a login_hint is specified, the user is forwarded to AD FS and bypasses the option to use the passwordless credential.
-**Azure MFA server** - End users enabled for multi-factor authentication through an organization's on-premises Azure MFA server can create and use a single passwordless phone sign-in credential. If the user attempts to upgrade multiple installations (5 or more) of the Authenticator app with the credential, this change may result in an error.
+**MFA server** - End users enabled for multi-factor authentication through an organization's on-premises MFA server can create and use a single passwordless phone sign-in credential. If the user attempts to upgrade multiple installations (5 or more) of the Authenticator app with the credential, this change may result in an error.
> [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication. Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. We recommend moving from Azure MFA Server to Azure Active Directory MFA.
+> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication. Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. We recommend moving from MFA Server to Azure AD MFA.
**Device registration** - To use the Authenticator app for passwordless authentication, the device must be registered in the Azure AD tenant and can't be a shared device. A device can only be registered in a single tenant. This limit means that only one work or school account is supported for phone sign-in using the Authenticator app. ### Deploy phone sign-in with the Authenticator app
-Follow the steps in the article, [Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md) to enable the Authenticator app as a passwordless authentication method in your organization.
+Follow the steps in the article, [Enable passwordless sign-in with Microsoft Authenticator](howto-authentication-passwordless-phone.md) to enable the Authenticator app as a passwordless authentication method in your organization.
### Testing Authenticator app
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Title: Microsoft identity platform access tokens description: Learn about access tokens emitted by the Azure AD v1.0 and Microsoft identity platform (v2.0) endpoints.
Last updated 12/28/2021
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
Title: Customize Azure AD tenant app claims (PowerShell) description: Learn how to customize claims emitted in tokens for an application in a specific Azure Active Directory tenant.
Last updated 06/16/2021
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
Title: Provide optional claims to Azure AD apps description: How to add custom or additional claims to the SAML 2.0 and JSON Web Tokens (JWT) tokens issued by Microsoft identity platform.
Last updated 04/04/2022
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-saml-claims-customization.md
Title: Customize app SAML token claims description: Learn how to customize the claims issued by Microsoft identity platform in the SAML token for enterprise applications. Last updated 02/07/2022
active-directory Active Directory Schema Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-schema-extensions.md
Title: Use Azure AD schema extension attributes in claims description: Describes how to use directory schema extension attributes for sending user data to applications in token claims.
Last updated 07/29/2020 # Using directory schema extension attributes in claims
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Title: Claims mapping policy description: Learn about the claims mapping policy type, which is used to modify the claims emitted in tokens issued for specific applications.
Last updated 03/04/2022
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
The initialization code is different depending on the platform. For ASP.NET Core
# [ASP.NET Core](#tab/aspnetcore)
-In ASP.NET Core web apps (and web APIs), the application is protected because you have a `[Authorize]` attribute on the controllers or the controller actions. This attribute checks that the user is authenticated. The code that's initializing the application is in the *Startup.cs* file.
+In ASP.NET Core web apps (and web APIs), the application is protected because you have an `[Authorize]` attribute on the controllers or the controller actions. This attribute checks that the user is authenticated. Prior to the release of .NET 6, the code that initializes the application is in the *Startup.cs* file. New ASP.NET Core projects with .NET 6 no longer contain a *Startup.cs* file. Taking its place is the *Program.cs* file. The rest of this tutorial pertains to .NET 5 or lower.
To add authentication with the Microsoft identity platform (formerly Azure AD v2.0), you'll need to add the following code. The comments in the code should be self-explanatory.
In the code above:
- The `AddMicrosoftIdentityUI` extension method is defined in **Microsoft.Identity.Web.UI**. It provides a default controller to handle sign-in and sign-out.
-You can find more details about how Microsoft.Identity.Web enables you to create web apps in <https://aka.ms/ms-id-web/webapp>
+For more information about how Microsoft.Identity.Web enables you to create web apps, see [Web Apps in microsoft-identity-web](https://aka.ms/ms-id-web/webapp).
# [ASP.NET](#tab/aspnet)
active-directory Security Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-tokens.md
Title: Security tokens description: Learn about the basics of security tokens in the Microsoft identity platform.
Last updated 09/27/2021 #Customer intent: As an application developer, I want to understand the basic concepts of security tokens in the Microsoft identity platform.
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Title: Microsoft identity platform and OAuth 2.0 authorization code flow description: Build web applications using the Microsoft identity platform implementation of the OAuth 2.0 authentication protocol.
Last updated 02/02/2022
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
Title: Login in to Linux virtual machine in Azure using Azure Active Directory and openSSH certificate-based authentication
+ Title: Login to Linux virtual machine in Azure using Azure Active Directory and openSSH certificate-based authentication
description: Login with Azure AD using openSSH certificate-based authentication to an Azure VM running Linux
The following Azure regions are currently supported for this feature:
- Azure Global
- Azure Government
- Azure China 21Vianet
-
+ It's not supported to use this extension on Azure Kubernetes Service (AKS) clusters. For more information, see [Support policies for AKS](../../aks/support-policies.md). If you choose to install and use the CLI locally, you must be running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). > [!NOTE]
-> This is functionality is also available for [Azure Arc-enabled servers](../../azure-arc/servers/ssh-arc-overview.md).
+> This functionality is also available for [Azure Arc-enabled servers](../../azure-arc/servers/ssh-arc-overview.md).
## Requirements for login with Azure AD using openSSH certificate-based authentication
Ensure your VM is configured with the following functionality:
Ensure your client meets the following requirements: -- SSH client must support OpenSSH based certificates for authentication. You can use Az CLI (2.21.1 or higher) with OpenSSH (included in Windows 10 version 1803 or higher) or Azure Cloud Shell to meet this requirement. -- SSH extension for Az CLI. You can install this using `az extension add --name ssh`. You don't need to install this extension when using Azure Cloud Shell as it comes pre-installed.-- If you're using any other SSH client other than Az CLI or Azure Cloud Shell that supports OpenSSH certificates, you'll still need to use Az CLI with SSH extension to retrieve ephemeral SSH cert and optionally a config file and then use the config file with your SSH client.
+- SSH client must support OpenSSH based certificates for authentication. You can use Azure CLI (2.21.1 or higher) with OpenSSH (included in Windows 10 version 1803 or higher) or Azure Cloud Shell to meet this requirement.
+- SSH extension for Azure CLI. You can install this using `az extension add --name ssh`. You don't need to install this extension when using Azure Cloud Shell as it comes pre-installed.
+- If you're using any other SSH client other than Azure CLI or Azure Cloud Shell that supports OpenSSH certificates, you'll still need to use Azure CLI with SSH extension to retrieve ephemeral SSH cert and optionally a config file and then use the config file with your SSH client.
- TCP connectivity from the client to either the public or private IP of the VM (ProxyCommand or SSH forwarding to a machine with connectivity also works).

> [!IMPORTANT]
> SSH clients based on PuTTY do not support openSSH certificates and cannot be used to login with Azure AD openSSH certificate-based authentication.
-## Enabling Azure AD login in for Linux VM in Azure
+## Enabling Azure AD login for Linux VM in Azure
-To use Azure AD login in for Linux VM in Azure, you need to first enable Azure AD login option for your Linux VM, configure Azure role assignments for users who are authorized to login in to the VM and then use SSH client that supports OpensSSH such as Az CLI or Az Cloud Shell to SSH to your Linux VM. There are multiple ways you can enable Azure AD login for your Linux VM, as an example you can use:
+To use Azure AD login for Linux VM in Azure, you need to first enable the Azure AD login option for your Linux VM, configure Azure role assignments for users who are authorized to login to the VM, and then use an SSH client that supports OpenSSH, such as Azure CLI or Azure Cloud Shell, to SSH to your Linux VM. There are multiple ways you can enable Azure AD login for your Linux VM, as an example you can use:
- Azure portal experience when creating a Linux VM - Azure Cloud Shell experience when creating a Windows VM or for an existing Linux VM
As an example, to create an Ubuntu Server 18.04 Long Term Support (LTS) VM in Az
1. Check the box to enable **Login with Azure Active Directory (Preview)**.
1. Ensure **System assigned managed identity** is checked.
1. Go through the rest of the experience of creating a virtual machine. During this preview, you'll have to create an administrator account with username and password or SSH public key.
-
+ ### Using the Azure Cloud Shell experience to enable Azure AD login Azure Cloud Shell is a free, interactive shell that you can use to run the steps in this article. Common Azure tools are preinstalled and configured in Cloud Shell for you to use with your account. Just select the Copy button to copy the code, paste it in Cloud Shell, and then press Enter to run it. There are a few ways to open Cloud Shell:
The example can be customized to support your testing requirements as needed.
```azurecli-interactive
az group create --name AzureADLinuxVM --location southcentralus

az vm create \
  --resource-group AzureADLinuxVM \
  --name myVM \
az vm create \
  --assign-identity \
  --admin-username azureuser \
  --generate-ssh-keys

az vm extension set \
  --publisher Microsoft.Azure.ActiveDirectory \
  --name AADSSHLoginForLinux \
There are multiple ways you can configure role assignments for VM, as an example
- Azure AD Portal experience
- Azure Cloud Shell experience
-> [!Note]
+> [!NOTE]
> The Virtual Machine Administrator Login and Virtual Machine User Login roles use dataActions and can be assigned at the management group, subscription, resource group, or resource scope. It is recommended that the roles be assigned at the management group, subscription or resource level and not at the individual VM level to avoid risk of running out of [Azure role assignments limit](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit) per subscription.

### Using Azure AD Portal experience

To configure role assignments for your Azure AD enabled Linux VMs:
To configure role assignments for your Azure AD enabled Linux VMs:
1. Select **Add** > **Add role assignment** to open the Add role assignment page. 1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
+ | Setting | Value |
| --- | --- |
| Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** |
To configure role assignments for your Azure AD enabled Linux VMs:
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) After a few moments, the security principal is assigned the role at the selected scope.
-
+ ### Using the Azure Cloud Shell experience The following example uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. The username of your current Azure account is obtained with [az account show](/cli/azure/account#az-account-show), and the scope is set to the VM created in a previous step with [az vm show](/cli/azure/vm#az-vm-show). The scope could also be assigned at a resource group or subscription level, normal Azure RBAC inheritance permissions apply.
az role assignment create \
> [!NOTE] > If your Azure AD domain and logon username domain do not match, you must specify the object ID of your user account with the `--assignee-object-id`, not just the username for `--assignee`. You can obtain the object ID for your user account with [az ad user list](/cli/azure/ad/user#az-ad-user-list).- For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see the article [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
-## Install SSH extension for Az CLI
+## Install SSH extension for Azure CLI
-If you're using Azure Cloud Shell, then no other setup is needed as both the minimum required version of Az CLI and SSH extension for Az CLI are already included in the Cloud Shell environment.
+If you're using Azure Cloud Shell, then no other setup is needed as both the minimum required version of Azure CLI and SSH extension for Azure CLI are already included in the Cloud Shell environment.
-Run the following command to add SSH extension for Az CLI
+Run the following command to add SSH extension for Azure CLI
```azurecli az extension add --name ssh
az extension show --name ssh
## Using Conditional Access
-You can enforce Conditional Access policies such as require multi-factor authentication, require compliant or hybrid Azure AD joined device for the device running SSH client, and checking for risk before authorizing access to Linux VMs in Azure that are enabled with Azure AD login in. The application that appears in Conditional Access policy is called "Azure Linux VM Sign-In".
+You can enforce Conditional Access policies such as require multi-factor authentication, require compliant or hybrid Azure AD joined device for the device running SSH client, and checking for risk before authorizing access to Linux VMs in Azure that are enabled with Azure AD login. The application that appears in Conditional Access policy is called "Azure Linux VM Sign-In".
> [!NOTE]
-> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join on the client device running SSH client only works with Az CLI running on Windows and macOS. It is not supported when using Az CLI on Linux or Azure Cloud Shell.
+> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join on the client device running SSH client only works with Azure CLI running on Windows and macOS. It is not supported when using Azure CLI on Linux or Azure Cloud Shell.
### Missing application
Another way to verify it is via Graph PowerShell:
## Login using Azure AD user account to SSH into the Linux VM
-### Using Az CLI
+### Using Azure CLI
First run `az login`, and then run `az ssh vm`.
The following example automatically resolves the appropriate IP address for the
az ssh vm -n myVM -g AzureADLinuxVM ```
-If prompted, enter your Azure AD login credentials at the login page, perform an MFA, and/or satisfy device checks. You'll only be prompted if your az CLI session doesn't already meet any required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you'll be automatically connected to the VM.
+If prompted, enter your Azure AD login credentials at the login page, perform an MFA, and/or satisfy device checks. You'll only be prompted if your Azure CLI session doesn't already meet any required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you'll be automatically connected to the VM.
You're now signed in to the Azure Linux virtual machine with the role permissions as assigned, such as VM User or VM Administrator. If your user account is assigned the Virtual Machine Administrator Login role, you can use sudo to run commands that require root privileges.
-### Using Az Cloud Shell
+### Using Azure Cloud Shell
-You can use Az Cloud Shell to connect to VMs without needing to install anything locally to your client machine. Start Cloud Shell by clicking the shell icon in the upper right corner of the Azure portal.
-
-Az Cloud Shell will automatically connect to a session in the context of the signed in user. During the Azure AD Login for Linux Preview, **you must run az login again and go through an interactive sign in flow**.
+You can use Azure Cloud Shell to connect to VMs without needing to install anything locally to your client machine. Start Cloud Shell by clicking the shell icon in the upper right corner of the Azure portal.
+
+Azure Cloud Shell will automatically connect to a session in the context of the signed in user. During the Azure AD Login for Linux Preview, **you must run az login again and go through an interactive sign in flow**.
```azurecli az login
az ssh vm -n myVM -g AzureADLinuxVM
``` > [!NOTE]
-> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join is not supported when using Az Cloud Shell.
+> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join is not supported when using Azure Cloud Shell.
### Login using Azure AD service principal to SSH into the Linux VM
Use the following example to authenticate to Azure CLI using the service princip
az login --service-principal -u <sp-app-id> -p <password-or-cert> --tenant <tenant-id> ```
-Once authentication with a service principal is complete, use the normal Az CLI SSH commands to connect to the VM.
+Once authentication with a service principal is complete, use the normal Azure CLI SSH commands to connect to the VM.
```azurecli az ssh vm -n myVM -g AzureADLinuxVM
az ssh vm --ip 10.11.123.456
For customers who are using previous version of Azure AD login for Linux that was based on device code flow, complete the following steps using Azure CLI. 1. Uninstall the AADLoginForLinux extension on the VM.
-
+ ```azurecli
az vm extension delete -g MyResourceGroup --vm-name MyVm -n AADLoginForLinux
```

> [!NOTE]
> The extension uninstall can fail if there are any Azure AD users currently logged in on the VM. Make sure all users are logged off first.

1. Enable system-assigned managed identity on your VM.

```azurecli
Use Azure Policy to ensure Azure AD login is enabled for your new and existing L
## Troubleshoot sign-in issues
-Some common errors when you try to SSH with Azure AD credentials include no Azure roles assigned, and repeated prompts to sign in. Use the following sections to correct these issues.
+Some common errors when you try to SSH with Azure AD credentials include no Azure roles assigned, and repeated prompts to sign in. Use the following sections to correct these issues.
### Couldn't retrieve token from local cache
-You must run az login again and go through an interactive sign in flow. Review the section [Using Az Cloud Shell](#using-az-cloud-shell).
+You must run `az login` again and go through an interactive sign-in flow. Review the section [Using Azure Cloud Shell](#using-azure-cloud-shell).
### Access denied: Azure role not assigned
active-directory Plan Device Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/plan-device-deployment.md
Last updated 02/15/2022 --++
active-directory Directory Delegated Administration Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delegated-administration-primer.md
Previously updated : 03/24/2022 Last updated : 06/23/2022
# What is delegated administration?
-Managing permissions for external partners is a key part of your security posture. We've added capabilities to the Azure Active Directory (Azure AD) admin portal experience so that an administrator can see the relationships that their Azure AD tenant has with Microsoft Cloud Service Providers (CSP) who can manage the tenant. This permissions model is called delegated administration. This article introduces the Azure AD administrator to the relationship between the old Delegated Admin Permissions (DAP) permission model and the new Granular Delegated Admin Permissions (GDAP) permission model.
+Managing permissions for external partners is a key part of your security posture. We've added capabilities to the administrator portal experience in Azure Active Directory (Azure AD), part of Microsoft Entra, so that an administrator can see the relationships that their Azure AD tenant has with Microsoft Cloud Service Providers (CSP) who can manage the tenant. This permissions model is called delegated administration. This article introduces the Azure AD administrator to the relationship between the old Delegated Admin Permissions (DAP) permission model and the new Granular Delegated Admin Permissions (GDAP) permission model.
## Delegated administration relationships
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
Previously updated : 11/23/2021 Last updated : 06/23/2022
# Delete a tenant in Azure Active Directory
-When an Azure AD organization (tenant) is deleted, all resources that are contained in the organization are also deleted. Prepare your organization by minimizing its associated resources before you delete. Only an Azure Active Directory (Azure AD) global administrator can delete an Azure AD organization from the portal.
+When an organization (tenant) is deleted in Azure Active Directory (Azure AD), part of Microsoft Entra, all resources that are contained in the organization are also deleted. Prepare your organization by minimizing its associated resources before you delete. Only a global administrator in Azure AD can delete an Azure AD organization from the portal.
## Prepare the organization
active-directory Directory Overview User Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-overview-user-model.md
Previously updated : 09/01/2021 Last updated : 06/23/2022
# What is enterprise user management?
-This article introduces the Azure AD administrator to the relationship between top [identity management](../fundamentals/active-directory-whatis.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) tasks for users in terms of their groups, licenses, deployed enterprise apps, and administrator roles. As your organization grows, you can use Azure AD groups and administrator roles to:
+This article introduces an administrator for Azure Active Directory (Azure AD), part of Microsoft Entra, to the relationship between top [identity management](../fundamentals/active-directory-whatis.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) tasks for users in terms of their groups, licenses, deployed enterprise apps, and administrator roles. As your organization grows, you can use Azure AD groups and administrator roles to:
* Assign licenses to groups instead of individually * Delegate permissions to distribute the work of Azure AD management to less-privileged roles
active-directory Directory Self Service Signup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-self-service-signup.md
Previously updated : 09/01/2021 Last updated : 06/23/2022
# What is self-service sign-up for Azure Active Directory?
-This article explains how to use self-service sign-up to populate an organization in Azure Active Directory (Azure AD). If you want to take over a domain name from an unmanaged Azure AD organization, see [Take over an unmanaged tenant as administrator](domains-admin-takeover.md).
+This article explains how to use self-service sign-up to populate an organization in Azure Active Directory (Azure AD), part of Microsoft Entra. If you want to take over a domain name from an unmanaged Azure AD organization, see [Take over an unmanaged tenant as administrator](domains-admin-takeover.md).
## Why use self-service sign-up?
active-directory Directory Service Limits Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-service-limits-restrictions.md
Previously updated : 10/27/2021 Last updated : 06/23/2022
# Azure AD service limits and restrictions
-This article contains the usage constraints and other service limits for the Azure Active Directory (Azure AD) service. If you're looking for the full set of Microsoft Azure service limits, see [Azure Subscription and Service Limits, Quotas, and Constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
+This article contains the usage constraints and other service limits for the Azure Active Directory (Azure AD), part of Microsoft Entra, service. If you're looking for the full set of Microsoft Azure service limits, see [Azure Subscription and Service Limits, Quotas, and Constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
[!INCLUDE [AAD-service-limits](../../../includes/active-directory-service-limits-include.md)]
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
Previously updated : 09/01/2021 Last updated : 06/23/2022
# Take over an unmanaged directory as administrator in Azure Active Directory
-This article describes two ways to take over a DNS domain name in an unmanaged directory in Azure Active Directory (Azure AD). When a self-service user signs up for a cloud service that uses Azure AD, they are added to an unmanaged Azure AD directory based on their email domain. For more about self-service or "viral" sign-up for a service, see [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md)
+This article describes two ways to take over a DNS domain name in an unmanaged directory in Azure Active Directory (Azure AD), part of Microsoft Entra. When a self-service user signs up for a cloud service that uses Azure AD, they are added to an unmanaged Azure AD directory based on their email domain. For more about self-service or "viral" sign-up for a service, see [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md)
> [!VIDEO https://www.youtube.com/embed/GOSpjHtrRsg]
active-directory Domains Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-manage.md
Previously updated : 09/01/2021 Last updated : 06/23/2022
# Managing custom domain names in your Azure Active Directory
-A domain name is an important part of the identifier for many Azure Active Directory (Azure AD) resources: it's part of a user name or email address for a user, part of the address for a group, and is sometimes part of the app ID URI for an application. A resource in Azure AD can include a domain name that's owned by the Azure AD organization (sometimes called a tenant) that contains the resource. Only a Global Administrator can manage domains in Azure AD.
+A domain name is an important part of the identifier for many resources in Azure Active Directory (Azure AD), part of Microsoft Entra: it's part of a user name or email address for a user, part of the address for a group, and is sometimes part of the app ID URI for an application. A resource in Azure AD can include a domain name that's owned by the Azure AD organization (sometimes called a tenant) that contains the resource. Only a Global Administrator can manage domains in Azure AD.
## Set the primary domain name for your Azure AD organization
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Previously updated : 04/05/2022 Last updated : 06/23/2022
# Change subdomain authentication type in Azure Active Directory
-After a root domain is added to Azure Active Directory (Azure AD), all subsequent subdomains added to that root in your Azure AD organization automatically inherit the authentication setting from the root domain. However, if you want to manage domain authentication settings independently from the root domain settings, you can now with the Microsoft Graph API. For example, if you have a federated root domain such as contoso.com, this article can help you verify a subdomain such as child.contoso.com as managed instead of federated.
+After a root domain is added to Azure Active Directory (Azure AD), part of Microsoft Entra, all subsequent subdomains added to that root in your Azure AD organization automatically inherit the authentication setting from the root domain. However, if you want to manage domain authentication settings independently from the root domain settings, you can now do so with the Microsoft Graph API. For example, if you have a federated root domain such as contoso.com, this article can help you verify a subdomain such as child.contoso.com as managed instead of federated.
In the Azure AD portal, when the parent domain is federated and the admin tries to verify a managed subdomain on the **Custom domain names** page, you'll get a 'Failed to add domain' error with the reason "One or more properties contains invalid values." If you try to add this subdomain from the Microsoft 365 admin center, you will receive a similar error. For more information about the error, see [A child domain doesn't inherit parent domain changes in Office 365, Azure, or Intune](/office365/troubleshoot/administration/child-domain-fails-inherit-parent-domain-changes).
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
Previously updated : 04/19/2022 Last updated : 06/23/2022
# Assign sensitivity labels to Microsoft 365 groups in Azure Active Directory
-Azure Active Directory (Azure AD) supports applying sensitivity labels published by the [Microsoft Purview compliance portal](https://compliance.microsoft.com) to Microsoft 365 groups. Sensitivity labels apply to group across services like Outlook, Microsoft Teams, and SharePoint. For more information about Microsoft 365 apps support, see [Microsoft 365 support for sensitivity labels](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#support-for-the-sensitivity-labels).
+Azure Active Directory (Azure AD), part of Microsoft Entra, supports applying sensitivity labels published by the [Microsoft Purview compliance portal](https://compliance.microsoft.com) to Microsoft 365 groups. Sensitivity labels apply to groups across services like Outlook, Microsoft Teams, and SharePoint. For more information about Microsoft 365 apps support, see [Microsoft 365 support for sensitivity labels](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#support-for-the-sensitivity-labels).
> [!IMPORTANT] > To configure this feature, there must be at least one active Azure Active Directory Premium P1 license in your Azure AD organization.
active-directory Groups Bulk Download Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download-members.md
Previously updated : 10/26/2021 Last updated : 06/23/2022 -+ # Bulk download members of a group in Azure Active Directory
-Using Azure Active Directory (Azure AD) portal, you can bulk download the members of a group in your organization to a comma-separated values (CSV) file. All admins and non-admin users can download group membership lists.
+You can bulk download the members of a group in your organization to a comma-separated values (CSV) file in the portal for Azure Active Directory (Azure AD), part of Microsoft Entra. All admins and non-admin users can download group membership lists.
## To bulk download group membership
active-directory Groups Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download.md
Previously updated : 10/26/2021 Last updated : 03/24/2022
# Bulk download a list of groups in Azure Active Directory
-Using Azure Active Directory (Azure AD) portal, you can bulk download the list of all the groups in your organization to a comma-separated values (CSV) file. All admins and non-admin users can download group lists.
+You can download a list of all the groups in your organization to a comma-separated values (CSV) file in the portal for Azure Active Directory (Azure AD), part of Microsoft Entra. All admins and non-admin users can download group lists.
## To download a list of groups
active-directory Groups Bulk Import Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-import-members.md
Previously updated : 09/02/2021 Last updated : 06/24/2022
# Bulk add group members in Azure Active Directory
-Using Azure Active Directory (Azure AD) portal, you can add a large number of members to a group by using a comma-separated values (CSV) file to bulk import group members.
+You can add multiple members to a group by using a comma-separated values (CSV) file to bulk import group members in the portal for Azure Active Directory (Azure AD), part of Microsoft Entra.
## Understand the CSV template
active-directory Groups Bulk Remove Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-remove-members.md
# Bulk remove group members in Azure Active Directory
-Using Azure Active Directory (Azure AD) portal, you can remove a large number of members from a group by using a comma-separated values (CSV) file to bulk remove group members.
+You can remove a large number of members from a group by using a comma-separated values (CSV) file to remove group members in bulk using the portal for Azure Active Directory (Azure AD), part of Microsoft Entra.
## Understand the CSV template
active-directory Groups Change Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-change-type.md
Previously updated : 09/02/2021 Last updated : 06/23/2022
# Change static group membership to dynamic in Azure Active Directory
-You can change a group's membership from static to dynamic (or vice-versa) In Azure Active Directory (Azure AD). Azure AD keeps the same group name and ID in the system, so all existing references to the group are still valid. If you create a new group instead, you would need to update those references. Dynamic group membership eliminates management overhead adding and removing users. This article tells you how to convert existing groups from static to dynamic membership using either Azure AD Admin center or PowerShell cmdlets.
+You can change a group's membership from static to dynamic (or vice-versa) in Azure Active Directory (Azure AD), part of Microsoft Entra. Azure AD keeps the same group name and ID in the system, so all existing references to the group are still valid. If you create a new group instead, you would need to update those references. Dynamic group membership eliminates the management overhead of adding and removing users. This article tells you how to convert existing groups from static to dynamic membership using either the Azure AD Admin center or PowerShell cmdlets.
> [!WARNING] > When changing an existing static group to a dynamic group, all existing members are removed from the group, and then the membership rule is processed to add new members. If the group is used to control access to apps or resources, be aware that the original members might lose access until the membership rule is fully processed.
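For reference, the same conversion can also be scripted outside the portal. The following is a rough sketch, not the article's own procedure: it updates a group through Microsoft Graph with `az rest`, using a placeholder group ID and an illustrative rule. Confirm that updating `groupTypes` this way is supported for your group type before relying on it.
```azurecli
# Switch a security group to dynamic membership (placeholder ID and example rule).
az rest --method PATCH \
  --url "https://graph.microsoft.com/v1.0/groups/{group-id}" \
  --headers "Content-Type=application/json" \
  --body '{
    "groupTypes": ["DynamicMembership"],
    "membershipRule": "user.department -eq \"Sales\"",
    "membershipRuleProcessingState": "On"
  }'
```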
active-directory Groups Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-create-rule.md
Previously updated : 05/05/2022 Last updated : 06/23/2022
# Create or update a dynamic group in Azure Active Directory
-In Azure Active Directory (Azure AD), you can use rules to determine group membership based on user or device properties. This article tells how to set up a rule for a dynamic group in the Azure portal. Dynamic membership is supported for security groups and Microsoft 365 Groups. When a group membership rule is applied, user and device attributes are evaluated for matches with the membership rule. When an attribute changes for a user or device, all dynamic group rules in the organization are processed for membership changes. Users and devices are added or removed if they meet the conditions for a group. Security groups can be used for either devices or users, but Microsoft 365 Groups can be only user groups. Using Dynamic groups requires Azure AD premium P1 license or Intune for Education license. See [Dynamic membership rules for groups](./groups-dynamic-membership.md) for more details.
+You can use rules to determine group membership based on user or device properties in Azure Active Directory (Azure AD), part of Microsoft Entra. This article tells how to set up a rule for a dynamic group in the Azure portal. Dynamic membership is supported for security groups and Microsoft 365 Groups. When a group membership rule is applied, user and device attributes are evaluated for matches with the membership rule. When an attribute changes for a user or device, all dynamic group rules in the organization are processed for membership changes. Users and devices are added or removed if they meet the conditions for a group. Security groups can be used for either devices or users, but Microsoft 365 Groups can be only user groups. Using dynamic groups requires an Azure AD Premium P1 license or an Intune for Education license. See [Dynamic membership rules for groups](./groups-dynamic-membership.md) for more details.
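As a companion to the portal steps, here's a minimal sketch of creating a dynamic security group programmatically by calling Microsoft Graph from Azure CLI. The display name, mail nickname, and membership rule are illustrative values only.
```azurecli
# Create a security group whose membership is evaluated from a rule.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/groups" \
  --headers "Content-Type=application/json" \
  --body '{
    "displayName": "Sales - dynamic",
    "mailEnabled": false,
    "mailNickname": "sales-dynamic",
    "securityEnabled": true,
    "groupTypes": ["DynamicMembership"],
    "membershipRule": "user.department -eq \"Sales\"",
    "membershipRuleProcessingState": "On"
  }'
```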
## Rule builder in the Azure portal
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
Previously updated : 06/22/2022 Last updated : 06/23/2022
# Dynamic membership rules for groups in Azure Active Directory
-In Azure Active Directory (Azure AD), you can create attribute-based rules to enable dynamic membership for a group. Dynamic group membership adds and removes group members automatically using membership rules based on member attributes. This article details the properties and syntax to create dynamic membership rules for users or devices. You can set up a rule for dynamic membership on security groups or Microsoft 365 groups.
+You can create attribute-based rules to enable dynamic membership for a group in Azure Active Directory (Azure AD), part of Microsoft Entra. Dynamic group membership adds and removes group members automatically using membership rules based on member attributes. This article details the properties and syntax to create dynamic membership rules for users or devices. You can set up a rule for dynamic membership on security groups or Microsoft 365 groups.
-When any attributes of a user or device change, the system evaluates all dynamic group rules in a directory to see if the change would trigger any group adds or removes. If a user or device satisfies a rule on a group, they are added as a member of that group. If they no longer satisfy the rule, they are removed. You can't manually add or remove a member of a dynamic group.
+When the attributes of a user or a device change, the system evaluates all dynamic group rules in a directory to see if the change would trigger any group adds or removes. If a user or device satisfies a rule on a group, they're added as a member of that group. If they no longer satisfy the rule, they're removed. You can't manually add or remove a member of a dynamic group.
- You can create a dynamic group for devices or for users, but you can't create a rule that contains both users and devices. - You can't create a device group based on the user attributes of the device owner. Device membership rules can reference only device attributes.
Here are some examples of advanced rules or syntax for which we recommend that y
- Rule with more than five expressions - The Direct reports rule - Setting [operator precedence](#operator-precedence)-- [Rules with complex expressions](#rules-with-complex-expressions); for example `(user.proxyAddresses -any (_ -contains "contoso"))`
+- [Rules with complex expressions](#rules-with-complex-expressions); for example, `(user.proxyAddresses -any (_ -contains "contoso"))`
> [!NOTE] > The rule builder might not be able to display some rules constructed in the text box. You might see a message when the rule builder is not able to display the rule. The rule builder doesn't change the supported syntax, validation, or processing of dynamic group rules in any way.
For more step-by-step instructions, see [Create or update a dynamic group](group
### Rule syntax for a single expression
-A single expression is the simplest form of a membership rule and only has the three parts mentioned above. A rule with a single expression looks similar to this: `Property Operator Value`, where the syntax for the property is the name of object.property.
+A single expression is the simplest form of a membership rule and only has the three parts mentioned above. A rule with a single expression looks similar to this example: `Property Operator Value`, where the syntax for the property is the name of object.property.
-The following is an example of a properly constructed membership rule with a single expression:
+The following example illustrates a properly constructed membership rule with a single expression:
``` user.department -eq "Sales" ```
-Parentheses are optional for a single expression. The total length of the body of your membership rule cannot exceed 3072 characters.
+Parentheses are optional for a single expression. The total length of the body of your membership rule can't exceed 3072 characters.
## Constructing the body of a membership rule
dirSyncEnabled |true false |user.dirSyncEnabled -eq true
| streetAddress |Any string value or *null* | user.streetAddress -eq "value" |
| surname |Any string value or *null* | user.surname -eq "value" |
| telephoneNumber |Any string value or *null* | user.telephoneNumber -eq "value" |
-| usageLocation |Two lettered country/region code | user.usageLocation -eq "US" |
+| usageLocation |Two letter country or region code | user.usageLocation -eq "US" |
| userPrincipalName |Any string value | user.userPrincipalName -eq "alias@domain" |
| userType |member guest *null* | user.userType -eq "Member" |
The following table lists all the supported operators and their syntax for a sin
### Using the -in and -notIn operators
-If you want to compare the value of a user attribute against a number of different values you can use the -in or -notIn operators. Use the bracket symbols "[" and "]" to begin and end the list of values.
+If you want to compare the value of a user attribute against multiple values, you can use the -in or -notIn operators. Use the bracket symbols "[" and "]" to begin and end the list of values.
In the following example, the expression evaluates to true if the value of user.department equals any of the values in the list:
The values used in an expression can consist of several types, including:
- Numbers - Arrays – number array, string array
-When specifying a value within an expression it is important to use the correct syntax to avoid errors. Some syntax tips are:
+When specifying a value within an expression, it's important to use the correct syntax to avoid errors. Some syntax tips are:
- Double quotes are optional unless the value is a string.-- String and regex operations are not case sensitive.
+- String and regex operations aren't case sensitive.
- When a string value contains double quotes, both quotes should be escaped using the \` character, for example, user.department -eq \`"Sales\`" is the proper syntax when "Sales" is the value. Single quotes should be escaped by using two single quotes instead of one each time. - You can also perform Null checks, using null as a value, for example, `user.department -eq null`.
All operators are listed below in order of precedence from highest to lowest. Op
-any -all ```
-The following is an example of operator precedence where two expressions are being evaluated for the user:
+The following example illustrates operator precedence where two expressions are being evaluated for the user:
``` user.department -eq "Marketing" -and user.country -eq "US" ```
-Parentheses are needed only when precedence does not meet your requirements. For example, if you want department to be evaluated first, the following shows how parentheses can be used to determine order:
+Parentheses are needed only when precedence doesn't meet your requirements. For example, if you want department to be evaluated first, the following shows how parentheses can be used to determine order:
user.country -eq "US" -and (user.department -eq "Marketing" -or user.department -eq "Sales")
user.assignedPlans -all (assignedPlan.servicePlanId -eq "")
### Using the underscore (\_) syntax
-The underscore (\_) syntax matches occurrences of a specific value in one of the multivalued string collection properties to add users or devices to a dynamic group. It is used with the -any or -all operators.
+The underscore (\_) syntax matches occurrences of a specific value in one of the multivalued string collection properties to add users or devices to a dynamic group. It's used with the -any or -all operators.
Here's an example of using the underscore (\_) in a rule to add members based on user.proxyAddress (it works the same for user.otherMails). This rule adds any user with proxy address that contains "contoso" to the group.
The direct reports rule is constructed using the following syntax:
Direct Reports for "{objectID_of_manager}" ```
-Here's an example of a valid rule where "62e19b97-8b3d-4d4a-a106-4ce66896a863" is the objectID of the
+Here's an example of a valid rule, where "62e19b97-8b3d-4d4a-a106-4ce66896a863" is the objectID of the
``` Direct Reports for "62e19b97-8b3d-4d4a-a106-4ce66896a863"
The following tips can help you use the rule properly.
You can create a group containing all users within an organization using a membership rule. When users are added or removed from the organization in the future, the group's membership is adjusted automatically.
-The "All users" rule is constructed using single expression using the -ne operator and the null value. This rule adds B2B guest users as well as member users to the group.
+The "All users" rule is constructed using single expression using the -ne operator and the null value. This rule adds B2B guest users and member users to the group.
``` user.objectId -ne null
The following device attributes can be used.
managementType | MDM (for mobile devices) | device.managementType -eq "MDM"
memberOf | Any string value (valid group object ID) | device.memberof -any (group.objectId -in ['value'])
objectId | a valid Azure AD object ID | device.objectId -eq "76ad43c9-32c5-45e8-a272-7b58b58f596d"
- profileType | a valid [profile type](https://docs.microsoft.com/graph/api/resources/device?view=graph-rest-1.0#properties) in Azure AD | device.profileType -eq "RegisteredDevice"
+ profileType | a valid [profile type](/graph/api/resources/device?view=graph-rest-1.0#properties&preserve-view=true) in Azure AD | device.profileType -eq "RegisteredDevice"
systemLabels | any string matching the Intune device property for tagging Modern Workplace devices | device.systemLabels -contains "M365Managed" > [!NOTE]
-> When using deviceOwnership to create Dynamic Groups for devices, you need to set the value equal to "Company". On Intune the device ownership is represented instead as Corporate. Refer to [OwnerTypes](/intune/reports-ref-devices#ownertypes) for more details.
+> When using deviceOwnership to create Dynamic Groups for devices, you need to set the value equal to "Company." On Intune the device ownership is represented instead as Corporate. For more information, see [OwnerTypes](/intune/reports-ref-devices#ownertypes).
> When using deviceTrustType to create Dynamic Groups for devices, you need to set the value equal to "AzureAD" to represent Azure AD joined devices, "ServerAD" to represent Hybrid Azure AD joined devices or "Workplace" to represent Azure AD registered devices.
-> When using extensionAttribute1-15 to create Dynamic Groups for devices you need to set the value for extensionAttribute1-15 on the device. Learn more on [how to write extensionAttributes on an Azure AD device object](https://docs.microsoft.com/graph/api/device-update?view=graph-rest-1.0&tabs=http#example-2--write-extensionattributes-on-a-device)
+> When using extensionAttribute1-15 to create Dynamic Groups for devices you need to set the value for extensionAttribute1-15 on the device. Learn more on [how to write extensionAttributes on an Azure AD device object](/graph/api/device-update?view=graph-rest-1.0&tabs=http#example-2--write-extensionattributes-on-a-device&preserve-view=true)
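As a quick illustration of the note above, the linked Graph operation can be driven from Azure CLI roughly as follows; the device object ID and attribute value are placeholders.
```azurecli
# Set extensionAttribute1 on a device object so a dynamic device rule can match it.
az rest --method PATCH \
  --url "https://graph.microsoft.com/v1.0/devices/{device-object-id}" \
  --headers "Content-Type=application/json" \
  --body '{"extensionAttributes": {"extensionAttribute1": "M365Managed"}}'
```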
## Next steps
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
Previously updated : 06/02/2022 Last updated : 06/23/2022
# Group membership in a dynamic group (preview) in Azure Active Directory
-This feature preview enables admins to create dynamic groups in Azure Active Directory (Azure AD) that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignment and role-based access control. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
+This feature preview in Azure Active Directory (Azure AD), part of Microsoft Entra, enables admins to create dynamic groups that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignment and role-based access control. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
:::image type="content" source="./media/groups-dynamic-rule-member-of/member-of-diagram.png" alt-text="Diagram showing how the memberOf attribute works.":::
active-directory Groups Dynamic Rule More Efficient https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-more-efficient.md
Previously updated : 03/29/2022 Last updated : 06/23/2022
# Create simpler, more efficient rules for dynamic groups in Azure Active Directory
-The team for Azure Active Directory (Azure AD) sees numerous incidents related to dynamic groups and the processing time for their membership rules. This article contains the methods by which our engineering team helps customers to simplify their membership rules. Simpler and more efficient rules result in better dynamic group processing times. When writing membership rules for dynamic groups, these are steps you can take to ensure the rules are as efficient as possible.
+The team for Azure Active Directory (Azure AD), part of Microsoft Entra, receives reports of incidents related to dynamic groups and the processing time for their membership rules. This article uses that reported information to present the most common methods by which our engineering team helps customers to simplify their membership rules. Simpler and more efficient rules result in better dynamic group processing times. When writing membership rules for dynamic groups, follow these steps to ensure that your rules are as efficient as possible.
## Minimize use of MATCH
-Minimize the usage of the 'match' operator in rules as much as possible. Instead, explore if it's possible to use the `contains`, `startswith`, or `-eq` operators. Considering using other properties that allow you to write rules to select the users you want to be in the group without using the `-match` operator. For example, if you want a rule for the group for all users whose city is Lagos, then instead of using rules like:
+Minimize the usage of the `match` operator in rules as much as possible. Instead, explore if it's possible to use the `contains`, `startswith`, or `-eq` operators. Consider using other properties that allow you to write rules to select the users you want to be in the group without using the `-match` operator. For example, if you want a rule for the group for all users whose city is Lagos, then instead of using rules like:
- `user.city -match "ago"` - `user.city -match ".*?ago.*"`
active-directory Groups Dynamic Rule Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-validation.md
Title: Validate rules for dynamic group membership (preview) - Azure AD | Microsoft Docs
-description: How to test members against a membership rule for a dynamic groups in Azure Active Directory.
+description: How to test members against a membership rule for a dynamic group in Azure Active Directory.
documentationcenter: ''
Previously updated : 09/02/2021 Last updated : 06/24/2022
# Validate a dynamic group membership rule (preview) in Azure Active Directory
-Azure Active Directory (Azure AD) now provides the means to validate dynamic group rules (in public preview). On the **Validate rules** tab, you can validate your dynamic rule against sample group members to confirm the rule is working as expected. When creating or updating dynamic group rules, administrators want to know whether a user or a device will be a member of the group. This helps evaluate whether user or device meets the rule criteria and aid in troubleshooting when membership is not expected.
+Azure Active Directory (Azure AD), part of Microsoft Entra, now provides the means to validate dynamic group rules (in public preview). On the **Validate rules** tab, you can validate your dynamic rule against sample group members to confirm the rule is working as expected. When you create or update dynamic group rules, you want to know whether a user or a device will be a member of the group. This knowledge helps you evaluate whether a user or device meets the rule criteria and helps you troubleshoot when membership isn't expected.
## Prerequisites
-To use the evaluate dynamic group rule membership feature, the administrator must have one of the following rules assigned directly: Global Administrator, Groups Administrator, or Intune Administrator.
+To evaluate the dynamic group rule membership feature, the administrator must have one of the following roles assigned directly: Global Administrator, Groups Administrator, or Intune Administrator.
> [!TIP] > Assigning one of required roles via indirect group membership is not yet supported.
To use the evaluate dynamic group rule membership feature, the administrator mus
## Step-by-step walk-through
-To get started, go to **Azure Active Directory** > **Groups**. Select an existing dynamic group or create a new dynamic group and click on Dynamic membership rules. You can then see the **Validate Rules** tab.
+To get started, go to **Azure Active Directory** > **Groups**. Select an existing dynamic group or create a new dynamic group and select **Dynamic membership rules**. You can then see the **Validate Rules** tab.
![Find the Validate rules tab and start with an existing rule](./media/groups-dynamic-rule-validation/validate-tab.png)
On **Validate rules** tab, you can select users to validate their memberships. 2
![Add users to validate the existing rule against](./media/groups-dynamic-rule-validation/validate-tab-add-users.png)
-After choosing the users or devices from the picker, and **Select**, validation will automatically start and validation results will appear.
+After you select users or devices from the picker and choose **Select**, validation will automatically start and validation results will appear.
![View the results of the rule validation](./media/groups-dynamic-rule-validation/validate-tab-results.png)
-The results tell whether a user is a member of the group or not. If the rule is not valid or there is a network issue, the result will show as **Unknown**. In case of **Unknown**, the detailed error message will describe the issue and actions needed.
+The results tell whether a user is a member of the group or not. If the rule isn't valid or there's a network issue, the result will show as **Unknown**. If the value is **Unknown**, the detailed error message will describe the issue and actions needed.
![View the details of the results of the rule validation](./media/groups-dynamic-rule-validation/validate-tab-view-details.png)
-You can modify the rule and validation of memberships will be triggered. To see why user is not a member of the group, click on "View details" and verification details will show the result of each expression composing the rule. Click **OK** to exit.
+You can modify the rule, and validation of memberships will be triggered. To see why a user isn't a member of the group, select **View details** and verification details will show the result of each expression composing the rule. Select **OK** to exit.
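If you want to script a similar check, Microsoft Graph exposes a beta action for evaluating dynamic membership. Treat the endpoint and payload below as an assumption to verify against current Graph documentation; the IDs are placeholders.
```azurecli
# Beta endpoint (subject to change): test whether a specific user satisfies the group's rule.
az rest --method POST \
  --url "https://graph.microsoft.com/beta/groups/{group-id}/evaluateDynamicMembership" \
  --headers "Content-Type=application/json" \
  --body '{"memberId": "{user-id}"}'
```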
## Next steps
active-directory Groups Dynamic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-tutorial.md
Previously updated : 09/02/2021 Last updated : 06/24/2022
# Tutorial: Add or remove group members automatically
-In Azure Active Directory (Azure AD), you can automatically add or remove users to security groups or Microsoft 365 groups, so you don't always have to do it manually. Whenever any properties of a user or device change, Azure AD evaluates all dynamic group rules in your Azure AD organization to see if the change should add or remove members.
+In Azure Active Directory (Azure AD), part of Microsoft Entra, you can automatically add or remove users to security groups or Microsoft 365 groups, so you don't always have to do it manually. Whenever any properties of a user or device change, Azure AD evaluates all dynamic group rules in your Azure AD organization to see if the change should add or remove members.
In this tutorial, you learn how to: > [!div class="checklist"]
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md
Previously updated : 10/22/2021 Last updated : 06/24/2022
# Configure the expiration policy for Microsoft 365 groups
-This article tells you how to manage the lifecycle of Microsoft 365 groups by setting an expiration policy for them. You can set expiration policy only for Microsoft 365 groups in Azure Active Directory (Azure AD).
+This article tells you how to manage the lifecycle of Microsoft 365 groups by setting an expiration policy for them. You can set expiration policy only for Microsoft 365 groups in Azure Active Directory (Azure AD), part of Microsoft Entra.
Once you set a group to expire:
active-directory Groups Members Owners Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-members-owners-search.md
Previously updated : 10/22/2021 Last updated : 06/24/2022
# Search groups and members in Azure Active Directory
-This article tells you how to search for members and owners of a group and how to use search filters the Azure Active Directory (Azure AD) portal. Search functions for groups include:
+This article tells you how to search for members and owners of a group and how to use search filters in the portal for Azure Active Directory (Azure AD), part of Microsoft Entra. Search functions for groups include:
- Groups search capabilities, such as substring search in group names - Filtering and sorting options on member and owner lists
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md
Previously updated : 09/02/2021 Last updated : 06/24/2022
# Enforce a naming policy on Microsoft 365 groups in Azure Active Directory
-To enforce consistent naming conventions for Microsoft 365 groups created or edited by your users, set up a group naming policy for your organizations in Azure Active Directory (Azure AD). For example, you could use the naming policy to communicate the function of a group, membership, geographic region, or who created the group. You could also use the naming policy to help categorize groups in the address book. You can use the policy to block specific words from being used in group names and aliases.
+To enforce consistent naming conventions for Microsoft 365 groups created or edited by your users, set up a group naming policy for your organizations in Azure Active Directory (Azure AD), part of Microsoft Entra. For example, you could use the naming policy to communicate the function of a group, membership, geographic region, or who created the group. You could also use the naming policy to help categorize groups in the address book. You can use the policy to block specific words from being used in group names and aliases.
> [!IMPORTANT] > Using Azure AD naming policy for Microsoft 365 groups requires that you possess but not necessarily assign an Azure Active Directory Premium P1 license or Azure AD Basic EDU license for each unique user that is a member of one or more Microsoft 365 groups.
active-directory Groups Quickstart Expiration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-quickstart-expiration.md
Previously updated : 09/02/2021 Last updated : 06/24/2022
Expiration policy is simple:
- A deleted Microsoft 365 group can be restored within 30 days by a group owner or by an Azure AD administrator > [!NOTE]
-> Groups now use Azure AD intelligence to automatically renewed based on whether they have been in recent use. This renewal decision is based on user activity in groups across Microsoft 365 services like Outlook, SharePoint, Teams, Yammer, and others.
+> Azure Active Directory (Azure AD), part of Microsoft Entra, uses intelligence to automatically renew groups based on whether they have been in recent use. This renewal decision is based on user activity in groups across Microsoft 365 services like Outlook, SharePoint, Teams, Yammer, and others.
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
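The expiration policy itself can also be created outside the portal. The sketch below uses Microsoft Graph's group lifecycle policy resource from Azure CLI; the lifetime, scope, and notification address are illustrative values.
```azurecli
# Create an expiration policy: 180-day lifetime for selected Microsoft 365 groups.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/groupLifecyclePolicies" \
  --headers "Content-Type=application/json" \
  --body '{
    "groupLifetimeInDays": 180,
    "managedGroupTypes": "Selected",
    "alternateNotificationEmails": "admin@contoso.com"
  }'
```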
active-directory Groups Quickstart Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-quickstart-naming-policy.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Quickstart: Naming policy for groups in Azure Active Directory
-In this quickstart, you will set up naming policy in your Azure Active Directory (Azure AD) organization for user-created Microsoft 365 groups, to help you sort and search your organization's groups. For example, you could use the naming policy to:
+In this quickstart, in Azure Active Directory (Azure AD), part of Microsoft Entra, you will set up naming policy in your Azure AD organization for user-created Microsoft 365 groups, to help you sort and search your groups. For example, you could use the naming policy to:
* Communicate the function of a group, membership, geographic region, or who created the group. * Help categorize groups in the address book.
active-directory Groups Restore Deleted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-restore-deleted.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Restore a deleted Microsoft 365 group in Azure Active Directory
-When you delete a Microsoft 365 group in the Azure Active Directory (Azure AD), the deleted group is retained but not visible for 30 days from the deletion date. This behavior is so that the group and its contents can be restored if needed. This functionality is restricted exclusively to Microsoft 365 groups in Azure AD. It is not available for security groups and distribution groups. Please note that the 30-day group restoration period is not customizable.
+When you delete a Microsoft 365 group in Azure Active Directory (Azure AD), part of Microsoft Entra, the deleted group is retained but not visible for 30 days from the deletion date. This behavior is so that the group and its contents can be restored if needed. This functionality is restricted exclusively to Microsoft 365 groups in Azure AD. It is not available for security groups and distribution groups. Please note that the 30-day group restoration period is not customizable.
> [!NOTE] > Don't use `Remove-MsolGroup` because it purges the group permanently. Always use `Remove-AzureADMSGroup` to delete a Microsoft 365 group.
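Alongside the PowerShell cmdlets mentioned above, soft-deleted groups can also be listed and restored through Microsoft Graph. A minimal sketch with Azure CLI follows; the object ID is a placeholder.
```azurecli
# List Microsoft 365 groups that are still within the 30-day soft-deleted window.
az rest --method GET --url "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.group"

# Restore one of them by its object ID.
az rest --method POST --url "https://graph.microsoft.com/v1.0/directory/deletedItems/{group-id}/restore"
```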
active-directory Groups Saasapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-saasapps.md
Previously updated : 06/30/2021 Last updated : 06/24/2022
# Using a group to manage access to SaaS applications
-Using Azure Active Directory (Azure AD) with an Azure AD Premium license plan, you can use groups to assign access to a SaaS application that's integrated with Azure AD. For example, if you want to assign access for the marketing department to use five different SaaS applications, you can create an Office 365 or security group that contains the users in the marketing department, and then assign that group to these five SaaS applications that are needed by the marketing department. This way you can save time by managing the membership of the marketing department in one place. Users then are assigned to the application when they are added as members of the marketing group, and have their assignments removed from the application when they are removed from the marketing group. This capability can be used with hundreds of applications that you can add from within the Azure AD Application Gallery.
+Using Azure Active Directory (Azure AD), part of Microsoft Entra, with an Azure AD Premium license plan, you can use groups to assign access to a SaaS application that's integrated with Azure AD. For example, if you want to assign access for the marketing department to use five different SaaS applications, you can create an Office 365 or security group that contains the users in the marketing department, and then assign that group to these five SaaS applications that are needed by the marketing department. This way you can save time by managing the membership of the marketing department in one place. Users then are assigned to the application when they are added as members of the marketing group, and have their assignments removed from the application when they are removed from the marketing group. This capability can be used with hundreds of applications that you can add from within the Azure AD Application Gallery.
> [!IMPORTANT] > You can use this feature only after you start an Azure AD Premium trial or purchase Azure AD Premium license plan.
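To make the group-based assignment concrete, this rough sketch grants a group access to an application's service principal through Microsoft Graph; all three IDs are placeholders you would look up first, and the app role ID can be the all-zeros GUID when the application defines no app roles.
```azurecli
# Assign a group to an app role on the application's service principal (placeholder IDs).
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/servicePrincipals/{service-principal-id}/appRoleAssignedTo" \
  --headers "Content-Type=application/json" \
  --body '{
    "principalId": "{group-object-id}",
    "resourceId": "{service-principal-id}",
    "appRoleId": "{app-role-id}"
  }'
```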
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Previously updated : 03/22/2022 Last updated : 06/24/2022
# Set up self-service group management in Azure Active Directory
-You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure Active Directory (Azure AD). The owner of the group can approve or deny membership requests, and can delegate control of group membership. Self-service group management features are not available for mail-enabled security groups or distribution lists.
+You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure Active Directory (Azure AD), part of Microsoft Entra. The owner of the group can approve or deny membership requests, and can delegate control of group membership. Self-service group management features are not available for mail-enabled security groups or distribution lists.
## Self-service group membership defaults
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
Previously updated : 07/19/2021 Last updated : 06/24/2022
# Azure Active Directory cmdlets for configuring group settings
-This article contains instructions for using Azure Active Directory (Azure AD) PowerShell cmdlets to create and update groups. This content applies only to Microsoft 365 groups (sometimes called unified groups).
+This article contains instructions for using PowerShell cmdlets to create and update groups in Azure Active Directory (Azure AD), part of Microsoft Entra. This content applies only to Microsoft 365 groups (sometimes called unified groups).
> [!IMPORTANT] > Some settings require an Azure Active Directory Premium P1 license. For more information, see the [Template settings](#template-settings) table.
active-directory Groups Settings V2 Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
> >
-This article contains examples of how to use PowerShell to manage your groups in Azure Active Directory (Azure AD). It also tells you how to get set up with the Azure AD PowerShell module. First, you must [download the Azure AD PowerShell module](https://www.powershellgallery.com/packages/AzureAD/).
+This article contains examples of how to use PowerShell to manage your groups in Azure Active Directory (Azure AD), part of Microsoft Entra. It also tells you how to get set up with the Azure AD PowerShell module. First, you must [download the Azure AD PowerShell module](https://www.powershellgallery.com/packages/AzureAD/).
## Install the Azure AD PowerShell module
active-directory Linkedin User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-user-consent.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# LinkedIn account connections data sharing and consent
-You can enable users in your Active Directory (Azure AD) organization to consent to connect their Microsoft work or school account with their LinkedIn account. After a user connects their accounts, information and highlights from LinkedIn are available in some Microsoft apps and services. Users can also expect their networking experience on LinkedIn to be improved and enriched with information from Microsoft.
+You can enable users in your organization in Azure Active Directory (Azure AD), part of Microsoft Entra, to consent to connect their Microsoft work or school account with their LinkedIn account. After a user connects their accounts, information and highlights from LinkedIn are available in some Microsoft apps and services. Users can also expect their networking experience on LinkedIn to be improved and enriched with information from Microsoft.
To see LinkedIn information in Microsoft apps and services, users must consent to connect their own Microsoft and LinkedIn accounts. Users are prompted to connect their accounts the first time they click to see someone's LinkedIn information on a profile card in Outlook, OneDrive or SharePoint Online. LinkedIn account connections are not fully enabled for your users until they consent to the experience and to connect their accounts.
active-directory Signin Account Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/signin-account-support.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Sign-in options for Microsoft accounts in Azure Active Directory
-The Microsoft 365 sign-in page for Azure Active Directory (Azure AD) supports work or school accounts and Microsoft accounts, but depending on the user's situation, it could be one or the other or both. For example, the Azure AD sign-in page supports:
+The Microsoft 365 sign-in page for Azure Active Directory (Azure AD), part of Microsoft Entra, supports work or school accounts and Microsoft accounts, but depending on the user's situation, it could be one or the other or both. For example, the Azure AD sign-in page supports:
* Apps that accept sign-ins from both types of account * Organizations that accept guests
active-directory Signin Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/signin-realm-discovery.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Home realm discovery for Azure Active Directory sign-in pages
-We are changing our Azure Active Directory (Azure AD) sign-in behavior to make room for new authentication methods and improve usability. During sign-in, Azure AD determines where a user needs to authenticate. Azure AD makes intelligent decisions by reading organization and user settings for the username entered on the sign-in page. This is a step towards a password-free future that enables additional credentials like FIDO 2.0.
+We are changing sign-in behavior in Azure Active Directory (Azure AD), part of Microsoft Entra, to make room for new authentication methods and improve usability. During sign-in, Azure AD determines where a user needs to authenticate. Azure AD makes intelligent decisions by reading organization and user settings for the username entered on the sign-in page. This is a step towards a password-free future that enables additional credentials like FIDO 2.0.
## Home realm discovery behavior
active-directory Users Bulk Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-add.md
Previously updated : 05/19/2021 Last updated : 06/24/2022
# Bulk create users in Azure Active Directory
-Azure Active Directory (Azure AD) supports bulk user create and delete operations and supports downloading lists of users. Just fill out comma-separated values (CSV) template you can download from the Azure AD portal.
+Azure Active Directory (Azure AD), part of Microsoft Entra, supports bulk user create and delete operations and supports downloading lists of users. Just fill out the comma-separated values (CSV) template that you can download from the Azure AD portal.
## Required permissions
The rows in a downloaded CSV template are as follows:
1. [Sign in to your Azure AD organization](https://aad.portal.azure.com) with an account that is a User administrator in the organization.
1. In Azure AD, select **Users** > **Bulk create**.
-1. On the **Bulk create user** page, select **Download** to receive a valid comma-separated values (CSV) file of user properties, and then add add users you want to create.
+1. On the **Bulk create user** page, select **Download** to receive a valid comma-separated values (CSV) file of user properties, and then add users you want to create.
![Select a local CSV file in which you list the users you want to add](./media/users-bulk-add/upload-button.png)
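If you prepare the CSV programmatically, keep the version row and column headers from the downloaded template and append one row per user. The following Python sketch shows the idea; the header strings, file name, and user values are illustrative assumptions, so copy the exact rows from the template you downloaded.

```python
import csv

# Minimal sketch: build a bulk-create CSV following the downloaded template's layout.
# NOTE: the version row and column headers below are assumptions based on a typical
# downloaded template; always reuse the exact rows from your own download.
version_row = ["version:v1.0"]
column_headers = [
    "Name [displayName] Required",
    "User name [userPrincipalName] Required",
    "Initial password [passwordProfile] Required",
    "Block sign in (Yes/No) [accountEnabled] Required",
]

# Placeholder users; replace with your own values.
users = [
    ["Britta Simon", "B.Simon@contoso.com", "P@ssw0rd!2345", "No"],
    ["Test User", "Test.User@contoso.com", "P@ssw0rd!6789", "No"],
]

with open("bulk-create-users.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(version_row)
    writer.writerow(column_headers)
    writer.writerows(users)
```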
active-directory Users Bulk Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-delete.md
Previously updated : 07/09/2021 Last updated : 06/24/2022
+ # Bulk delete users in Azure Active Directory
-Using the Azure Active Directory (Azure AD) portal, you can remove a large number of members to a group by using a comma-separated values (CSV) file to bulk delete users.
+Using the admin center in Azure Active Directory (Azure AD), part of Microsoft Entra, you can remove a large number of users by using a comma-separated values (CSV) file to bulk delete users.
## CSV template structure
active-directory Users Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-download.md
Previously updated : 10/26/2021 Last updated : 06/24/2022
# Download a list of users in Azure Active Directory portal
-Azure Active Directory (Azure AD) supports bulk user list download operations.
+Azure Active Directory (Azure AD), part of Microsoft Entra, supports bulk user list download operations.
## Required permissions
active-directory Users Bulk Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Bulk restore deleted users in Azure Active Directory
-Azure Active Directory (Azure AD) supports bulk user restore operations and supports downloading lists of users, groups, and group members.
+Azure Active Directory (Azure AD), part of Microsoft Entra, supports bulk user restore operations and supports downloading lists of users, groups, and group members.
## Understand the CSV template
active-directory Users Close Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-close-account.md
# Close your work or school account in an unmanaged Azure AD organization
-If you are a user in an unmanaged Azure Active Directory (Azure AD) organization, and you no longer need to use apps from that organization or maintain any association with it, you can close your account at any time. An unmanaged organization does not have a Global administrator. Users in an unmanaged organization can close their accounts on their own, without having to contact an administrator.
+If you are a user in an unmanaged organization (tenant) in Azure Active Directory (Azure AD), part of Microsoft Entra, and you no longer need to use apps from that organization or maintain any association with it, you can close your account at any time. An unmanaged organization does not have a Global Administrator. Users in an unmanaged organization can close their accounts on their own, without having to contact an administrator.
Users in an unmanaged organization are often created during self-service sign-up. An example might be an information worker in an organization who signs up for a free service. For more information about self-service sign-up, see [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md).
active-directory Users Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-custom-security-attributes.md
description: Assign or remove custom security attributes for a user in Azure Act
Previously updated : 02/03/2022 Last updated : 06/24/2022
> Custom security attributes are currently in PREVIEW.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-[Custom security attributes](../fundamentals/custom-security-attributes-overview.md) in Azure Active Directory (Azure AD) are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. For example, you can assign custom security attribute to filter your employees or to help determine who gets access to resources. This article describes how to assign, update, remove, or filter custom security attributes for Azure AD.
+[Custom security attributes](../fundamentals/custom-security-attributes-overview.md) in Azure Active Directory (Azure AD), part of Microsoft Entra, are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. For example, you can assign a custom security attribute to filter your employees or to help determine who gets access to resources. This article describes how to assign, update, remove, or filter custom security attributes for Azure AD.
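If you prefer to script assignments, custom security attribute values can also be set through Microsoft Graph by updating the `customSecurityAttributes` property on a user. The following Python sketch is illustrative only: the attribute set (`Engineering`), attribute (`ProjectDate`), access token, and user ID are placeholders, and it assumes a caller that has been granted a permission allowing custom security attribute assignment.

```python
import requests

# Minimal sketch: assign a custom security attribute value to a user via Microsoft Graph.
# Assumptions: a valid bearer token with rights to assign custom security attributes,
# and an attribute set "Engineering" with a string attribute "ProjectDate" already defined.
ACCESS_TOKEN = "<access-token>"          # placeholder
USER_ID = "<user-object-id-or-upn>"      # placeholder

body = {
    "customSecurityAttributes": {
        "Engineering": {
            "@odata.type": "#Microsoft.DirectoryServices.CustomSecurityAttributeValue",
            "ProjectDate": "2022-10-01",
        }
    }
}

response = requests.patch(
    f"https://graph.microsoft.com/v1.0/users/{USER_ID}",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json=body,
    timeout=30,
)
response.raise_for_status()  # 204 No Content indicates the assignment succeeded
```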
## Prerequisites
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
Previously updated : 05/04/2022 Last updated : 06/24/2022
# Restrict guest access permissions in Azure Active Directory
-Azure Active Directory (Azure AD) allows you to restrict what external guest users can see in their organization in Azure AD. Guest users are set to a limited permission level by default in Azure AD, while the default for member users is the full set of user permissions. There's another guest user permission level in your Azure AD organization's external collaboration settings for even more restricted access, so that the guest access levels are:
+Azure Active Directory (Azure AD), part of Microsoft Entra, allows you to restrict what external guest users can see in their organization in Azure AD. Guest users are set to a limited permission level by default in Azure AD, while the default for member users is the full set of user permissions. There's another guest user permission level in your Azure AD organization's external collaboration settings for even more restricted access, so that the guest access levels are:
Permission level | Access level | Value
---------------- | ------------ | -----
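The guest access level is a tenant-wide setting that you can also update programmatically. The following Python sketch is a rough illustration, assuming the Microsoft Graph authorization policy resource and a token authorized to update it; the role template GUID is a placeholder you would replace with the documented value for the access level chosen from the table above.

```python
import requests

# Minimal sketch: set the tenant-wide guest user permission level by updating
# guestUserRoleId on the authorization policy via Microsoft Graph.
# Assumptions: a bearer token with rights to update the authorization policy, and a
# role template GUID taken from the documented guest access level values.
ACCESS_TOKEN = "<access-token>"                  # placeholder
GUEST_ROLE_TEMPLATE_ID = "<role-template-guid>"  # placeholder for the chosen level

response = requests.patch(
    "https://graph.microsoft.com/v1.0/policies/authorizationPolicy",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"guestUserRoleId": GUEST_ROLE_TEMPLATE_ID},
    timeout=30,
)
response.raise_for_status()
```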
active-directory Auth Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ssh.md
Previously updated : 03/01/2022 Last updated : 06/22/2022 - # SSH
-Secure Shell (SSH) is a network protocol that provides encryption for operating network services securely over an unsecured network. SSH also provides a command-line sign in, executes remote commands, and securely transfer files. It is commonly used in Unix-based systems such as Linux®. SSH replaces the Telnet protocol, which does not provide encryption in an unsecured network.
+Secure Shell (SSH) is a network protocol that provides encryption for operating network services securely over an unsecured network. SSH also provides a command-line sign-in, executes remote commands, and securely transfers files. It's commonly used in Unix-based systems such as Linux®. SSH replaces the Telnet protocol, which doesn't provide encryption in an unsecured network.
-Azure Active Directory (Azure AD) provides a Virtual Machine (VM) extension for Linux®-based systems running on Azure.
+Azure Active Directory (Azure AD) provides a Virtual Machine (VM) extension for Linux®-based systems running on Azure, and a client extension that integrates with [Azure CLI](/cli/azure/) and the OpenSSH client.
## Use when 
-* Working with Linux®-based VMs that require remote sign in
+* Working with Linux®-based VMs that require remote sign-in
* Executing remote commands in Linux®-based systems
Azure Active Directory (Azure AD) provides a Virtual Machine (VM) extension for
![diagram of Azure AD with SSH protocol](./media/authentication-patterns/ssh-auth.png)
-SSH with Azure AD
- ## Components of system 
-* **User**: Starts SSH client to set up a connection with the Linux® VMs and provides credentials for authentication.
+* **User**: Starts Azure CLI and SSH client to set up a connection with the Linux® VMs and provides credentials for authentication.
+
+* **Azure CLI**: The component that the user interacts with to initiate their session with Azure AD, request short-lived OpenSSH user certificates from Azure AD, and initiate the SSH session.
-* **Web browser**: The component that the user interacts with. It communicates with the Identity Provider (Azure AD) to securely authenticate and authorize the user.
+* **Web browser**: The component that the user interacts with to authenticate their Azure CLI session. It communicates with the Identity Provider (Azure AD) to securely authenticate and authorize the user.
-* **SSH Client**: Drives the connection setup process.
+* **OpenSSH Client**: This client is used by Azure CLI, or (optionally) directly by the end user, to initiate a connection to the Linux VM.
-* **Azure AD**: Authenticates the identity of the user using device flow, and issues token to the Linux VMs.
+* **Azure AD**: Authenticates the identity of the user and issues short-lived OpenSSH user certificates to their Azure CLI client.
-* **Linux VM**: Accepts token and provides successful connection.
+* **Linux VM**: Accepts OpenSSH user certificate and provides successful connection.
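Putting these components together, a typical session is driven from the Azure CLI. The following Python sketch simply wraps the two CLI calls involved; it assumes the Azure CLI and its SSH extension are installed, that the VM has Azure AD login enabled, and the resource names are placeholders.

```python
import subprocess

# Minimal sketch of the flow described above, driven through the Azure CLI.
RESOURCE_GROUP = "my-resource-group"  # placeholder
VM_NAME = "my-linux-vm"               # placeholder

# 1. Interactive sign-in; the browser completes Azure AD authentication.
subprocess.run(["az", "login"], check=True)

# 2. Azure CLI requests a short-lived OpenSSH user certificate from Azure AD and
#    hands the connection to the local OpenSSH client.
subprocess.run(
    ["az", "ssh", "vm", "--resource-group", RESOURCE_GROUP, "--name", VM_NAME],
    check=True,
)
```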
## Implement SSH with Azure AD  * [Log in to a Linux® VM with Azure Active Directory credentials - Azure Virtual Machines ](../devices/howto-vm-sign-in-azure-ad-linux.md) -
-* [OAuth 2.0 device code flow - Microsoft identity platform ](../develop/v2-oauth2-device-code.md)
-
-* [Integrate with Azure Active Directory (akamai.com)](https://learn.akamai.com/en-us/webhelp/enterprise-application-access/enterprise-application-access/GUID-6B16172C-86CC-48E8-B30D-8E678BF3325F.html)
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
Title: Security operations for privileged accounts in Azure Active Directory description: Learn about baselines, and how to monitor and alert on potential security issues with privileged accounts in Azure Active Directory. -+ Last updated 04/29/2022-+
active-directory Cloud Governed Management For On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-governed-management-for-on-premises.md
In hybrid environments, Microsoft's strategy is to enable deployments where the
## Next steps
-For more information on how to get started on this journey, see the Azure AD deployment plans, located at <https://aka.ms/deploymentplans>. They provide end-to-end guidance about how to deploy Azure Active Directory (Azure AD) capabilities. Each plan explains the business value, planning considerations, design, and operational procedures needed to successfully roll out common Azure AD capabilities. Microsoft continually updates the deployment plans with best practices learned from customer deployments and other feedback when we add new capabilities to managing from the cloud with Azure AD.
+For more information on how to get started on this journey, see the [Azure AD deployment plans](https://aka.ms/deploymentplans). These plans provide end-to-end guidance for deploying Azure Active Directory (Azure AD) capabilities. Each plan explains the business value, planning considerations, design, and operational procedures needed to successfully roll out common Azure AD capabilities. Microsoft continually updates the deployment plans with best practices learned from customer deployments and other feedback when we add new capabilities to managing from the cloud with Azure AD.
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Last updated 03/13/2020 --++
See these resources:
* [Azure AD UserPrincipalName population](./plan-connect-userprincipalname.md)
-* [Microsoft identity platform ID tokens](../develop/id-tokens.md)
+* [Microsoft identity platform ID tokens](../develop/id-tokens.md)
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/end-user-experiences.md
Which method(s) you choose to deploy in your organization is your discretion.
## Azure AD My Apps
-My Apps at <https://myapps.microsoft.com> is a web-based portal that allows an end user with an organizational account in Azure Active Directory to view and launch applications to which they have been granted access by the Azure AD administrator. If you are an end user with [Azure Active Directory Premium](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), you can also utilize self-service group management capabilities through My Apps.
+[My Apps](https://myapps.microsoft.com) is a web-based portal that allows an end user with an organizational account in Azure Active Directory to view and launch applications to which they have been granted access by the Azure AD administrator. If you are an end user with [Azure Active Directory Premium](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), you can also utilize self-service group management capabilities through My Apps.
By default, all applications are listed together on a single page. But you can use collections to group together related applications and present them on a separate tab, making them easier to find. For example, you can use collections to create logical groupings of applications for specific job roles, tasks, projects, and so on. For information, see [Create collections on the My Apps portal](access-panel-collections.md).
active-directory Troubleshoot Saml Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md
To know the patterns pre-configured for the application:
8. Select **SAML-based Sign-on** from the **Mode** dropdown. 9. Go to the **Identifier** or **Reply URL** textbox, under the **Domain and URLs section.** 10. There are three ways to know the supported patterns for the application:
- - In the textbox, you see the supported pattern(s) as a placeholder *Example:* <https://contoso.com>.
+ - In the textbox, you see the supported pattern(s) as a placeholder, for example: `https://contoso.com`.
   - If the pattern is not supported, you see a red exclamation mark when you try to enter the value in the textbox. If you hover your mouse over the red exclamation mark, you see the supported patterns.
   - In the tutorial for the application, you can also get information about the supported patterns under the **Configure Azure AD single sign-on** section. Go to the step for configuring the values under the **Domain and URLs** section.
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
Title: Plan reports & monitoring deployment - Azure AD description: Describes how to plan and execute implementation of reporting and monitoring. -+ Last updated 11/13/2018-+ # Customer intent: As an Azure AD administrator, I want to monitor logs and report on access to increase security
Depending on the decisions you have made earlier using the design guidance above
Consider implementing [Privileged Identity Management](../privileged-identity-management/pim-configure.md)
-Consider implementing [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
+Consider implementing [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
active-directory Aws Clientvpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-clientvpn-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
| > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Reply URL. The Sign on URL and Reply URL can have the same value (http://127.0.0.1:35001). Refer to [AWS Client VPN Documentation](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html#ad) for details. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. Contact [AWS ClientVPN support team](https://aws.amazon.com/contact-us/) for any configuration issues.
+ > These values are not real. Update these values with the actual Sign on URL and Reply URL. The Sign on URL and Reply URL can have the same value (`http://127.0.0.1:35001`). Refer to [AWS Client VPN Documentation](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html#ad) for details. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. Contact [AWS ClientVPN support team](https://aws.amazon.com/contact-us/) for any configuration issues.
1. In the Azure Active Directory service, navigate to **App registrations** and then select **All Applications**.
active-directory Keystone Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keystone-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Keystone'
+description: Learn how to configure single sign-on between Azure Active Directory and Keystone.
++++++++ Last updated : 06/16/2022++++
+# Tutorial: Azure AD SSO integration with Keystone
+
+In this tutorial, you'll learn how to integrate Keystone with Azure Active Directory (Azure AD). When you integrate Keystone with Azure AD, you can:
+
+* Control in Azure AD who has access to Keystone.
+* Enable your users to be automatically signed-in to Keystone with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Keystone single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Keystone supports **SP** initiated SSO.
+
+## Add Keystone from the gallery
+
+To configure the integration of Keystone into Azure AD, you need to add Keystone from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Keystone** in the search box.
+1. Select **Keystone** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Keystone
+
+Configure and test Azure AD SSO with Keystone using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at Keystone.
+
+To configure and test Azure AD SSO with Keystone, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Keystone SSO](#configure-keystone-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Keystone test user](#create-keystone-test-user)** - to have a counterpart of B.Simon in Keystone that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Keystone** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:irdeto:<InstanceName>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://irdeto.auth0.com/login/callback?connection=<InstanceName>`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://fms.live.fm.ks.irdeto.com/`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Keystone support team](mailto:soc@irdeto.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Keystone** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Keystone.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Keystone**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Keystone SSO
+
+To configure single sign-on on **Keystone** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Keystone support team](mailto:soc@irdeto.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Keystone test user
+
+In this section, you create a user called Britta Simon at Keystone. Work with [Keystone support team](mailto:soc@irdeto.com) to add the users in the Keystone platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Keystone Sign-On URL where you can initiate the login flow.
+
+* Go to Keystone Sign-On URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Keystone tile in the My Apps, this will redirect to Keystone Sign-On URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Keystone you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Preset Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/preset-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Preset'
+description: Learn how to configure single sign-on between Azure Active Directory and Preset.
++++++++ Last updated : 06/14/2022++++
+# Tutorial: Azure AD SSO integration with Preset
+
+In this tutorial, you'll learn how to integrate Preset with Azure Active Directory (Azure AD). When you integrate Preset with Azure AD, you can:
+
+* Control in Azure AD who has access to Preset.
+* Enable your users to be automatically signed-in to Preset with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Preset single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Preset supports **SP** and **IDP** initiated SSO.
+
+## Add Preset from the gallery
+
+To configure the integration of Preset into Azure AD, you need to add Preset from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Preset** in the search box.
+1. Select **Preset** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Preset
+
+Configure and test Azure AD SSO with Preset using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Preset.
+
+To configure and test Azure AD SSO with Preset, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Preset SSO](#configure-preset-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Preset test user](#create-preset-test-user)** - to have a counterpart of B.Simon in Preset that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Preset** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:preset-io-prod:<ConnectionID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://auth.app.preset.io/login/callback?connection=<ConnectionID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://manage.app.preset.io/login`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Preset support team](mailto:support@preset.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Preset application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Preset application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements. A sketch for checking these attributes in a captured SAML response follows the table.
+
+ | Name | Source Attribute|
+ | | |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | email | user.mail |
+
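As a quick check, you can decode a captured SAML response and confirm that the attribute names mapped above are present. The following Python sketch assumes you already have the decoded response XML as a string (for example, from a browser SAML tracer) and that the claim names in the response match the mapping in the table.

```python
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
EXPECTED = {"firstName", "lastName", "email"}

def check_preset_attributes(saml_response_xml: str) -> None:
    """Raise if the decoded SAML response is missing an expected attribute."""
    root = ET.fromstring(saml_response_xml)
    found = {
        attr.get("Name"): [v.text for v in attr.findall("saml:AttributeValue", SAML_NS)]
        for attr in root.iter("{urn:oasis:names:tc:SAML:2.0:assertion}Attribute")
    }
    missing = EXPECTED - found.keys()
    if missing:
        raise ValueError(f"SAML response is missing attributes: {sorted(missing)}")
    print("All expected attributes present:", {name: found[name] for name in EXPECTED})
```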
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Preset** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Preset.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Preset**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Preset SSO
+
+To configure single sign-on on **Preset** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Preset support team](mailto:support@preset.io). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Preset test user
+
+In this section, you create a user called Britta Simon in Preset. Work with [Preset support team](mailto:support@preset.io) to add the users in the Preset platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Preset Sign-on URL where you can initiate the login flow.
+
+* Go to Preset Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Preset for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Preset tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Preset for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Preset you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Sap Successfactors Inbound Provisioning Cloud Only Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md
This section provides steps for user account provisioning from SuccessFactors to
**To configure SuccessFactors to Azure AD provisioning:**
-1. Go to <https://portal.azure.com>
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the left navigation bar, select **Azure Active Directory**
active-directory Sap Successfactors Inbound Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md
This section provides steps for user account provisioning from SuccessFactors to
**To configure SuccessFactors to Active Directory provisioning:**
-1. Go to <https://portal.azure.com>
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the left navigation bar, select **Azure Active Directory**
active-directory Sap Successfactors Writeback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-writeback-tutorial.md
This section provides steps for
**To configure SuccessFactors Writeback:**
-1. Go to <https://portal.azure.com>
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the left navigation bar, select **Azure Active Directory**
active-directory Tendium Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tendium-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Tendium'
+description: Learn how to configure single sign-on between Azure Active Directory and Tendium.
++++++++ Last updated : 06/14/2022++++
+# Tutorial: Azure AD SSO integration with Tendium
+
+In this tutorial, you'll learn how to integrate Tendium with Azure Active Directory (Azure AD). When you integrate Tendium with Azure AD, you can:
+
+* Control in Azure AD who has access to Tendium.
+* Enable your users to be automatically signed-in to Tendium with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Tendium single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Tendium supports **SP** initiated SSO.
+* Tendium supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Tendium from the gallery
+
+To configure the integration of Tendium into Azure AD, you need to add Tendium from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Tendium** in the search box.
+1. Select **Tendium** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Tendium
+
+Configure and test Azure AD SSO with Tendium using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Tendium.
+
+To configure and test Azure AD SSO with Tendium, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Tendium SSO](#configure-tendium-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Tendium test user](#create-tendium-test-user)** - to have a counterpart of B.Simon in Tendium that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Tendium** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the value:
+ `urn:amazon:cognito:sp:eu-west-1_bIV0Yblnt`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://auth-prod.app.tendium.com/saml2/idpresponse`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://app.tendium.com/auth/sign-in`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
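If you want to confirm what the metadata URL exposes before sending it to the support team, you can fetch it and list the signing certificates it advertises. The following Python sketch is illustrative; the metadata URL is a placeholder for the value you copied.

```python
import urllib.request
import xml.etree.ElementTree as ET

METADATA_URL = "<App-Federation-Metadata-Url>"  # placeholder for the copied value

# Fetch the federation metadata document published by Azure AD for this app.
with urllib.request.urlopen(METADATA_URL) as response:
    metadata = response.read()

# List the X509 signing certificates advertised in the metadata.
root = ET.fromstring(metadata)
certificates = [
    element.text.strip()
    for element in root.iter("{http://www.w3.org/2000/09/xmldsig#}X509Certificate")
]
print(f"Found {len(certificates)} signing certificate(s)")
```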
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Tendium.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Tendium**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Tendium SSO
+
+To configure single sign-on on **Tendium** side, you need to send the **App Federation Metadata Url** to [Tendium support team](mailto:tech-partners@tendium.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Tendium test user
+
+In this section, a user called B.Simon is created in Tendium. Tendium supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Tendium, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Tendium Sign-on URL where you can initiate the login flow.
+
+* Go to Tendium Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Tendium tile in the My Apps, this will redirect to Tendium Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Tendium you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Training Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/training-platform-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Training Platform'
+description: Learn how to configure single sign-on between Azure Active Directory and Training Platform.
++++++++ Last updated : 06/14/2022++++
+# Tutorial: Azure AD SSO integration with Training Platform
+
+In this tutorial, you'll learn how to integrate Training Platform with Azure Active Directory (Azure AD). When you integrate Training Platform with Azure AD, you can:
+
+* Control in Azure AD who has access to Training Platform.
+* Enable your users to be automatically signed-in to Training Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Training Platform single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Training Platform supports **SP** and **IDP** initiated SSO.
+* Training Platform supports **Just In Time** user provisioning.
+
+## Add Training Platform from the gallery
+
+To configure the integration of Training Platform into Azure AD, you need to add Training Platform from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Training Platform** in the search box.
+1. Select **Training Platform** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Training Platform
+
+Configure and test Azure AD SSO with Training Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Training Platform.
+
+To configure and test Azure AD SSO with Training Platform, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Training Platform SSO](#configure-training-platform-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Training Platform test user](#create-training-platform-test-user)** - to have a counterpart of B.Simon in Training Platform that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Training Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:living-security:<ID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://identity.livingsecurity.com/login/callback?connection=<ID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.livingsecurity.com`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Training Platform support team](mailto:support@livingsecurity.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Training Platform.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Training Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Training Platform SSO
+
+1. Log in to your Training Platform company site as an administrator.
+
+1. Go to **Configuration** section and select **SAML SSO Configuration** tab.
+
+1. Make sure your application is set to **Metadata URL** Mode.
+
+1. In the **Identity Provider Metadata Url** textbox, paste the **App Federation Metadata Url** which you have copied from the Azure portal.
+
+ ![Screenshot that shows the Configuration Settings.](./media/training-platform-tutorial/settings.png "Configuration")
+
+1. Click **Save**.
+
+### Create Training Platform test user
+
+In this section, a user called B.Simon is created in Training Platform. Training Platform supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Training Platform, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Training Platform Sign-on URL where you can initiate the login flow.
+
+* Go to Training Platform Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Training Platform for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Training Platform tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Training Platform for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Training Platform you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Veza Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/veza-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Veza'
+description: Learn how to configure single sign-on between Azure Active Directory and Veza.
++++++++ Last updated : 06/23/2022++++
+# Tutorial: Azure AD SSO integration with Veza
+
+In this tutorial, you'll learn how to integrate Veza with Azure Active Directory (Azure AD). When you integrate Veza with Azure AD, you can:
+
+* Control in Azure AD who has access to Veza.
+* Enable your users to be automatically signed-in to Veza with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Veza single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Veza supports **SP** and **IDP** initiated SSO.
+* Veza supports **Just In Time** user provisioning.
+
+## Add Veza from the gallery
+
+To configure the integration of Veza into Azure AD, you need to add Veza from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Veza** in the search box.
+1. Select **Veza** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Veza
+
+Configure and test Azure AD SSO with Veza using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Veza.
+
+To configure and test Azure AD SSO with Veza, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Veza SSO](#configure-veza-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Veza test user](#create-veza-test-user)** - to have a counterpart of B.Simon in Veza that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Veza** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using one of the following patterns:
+
+ | **Identifier** |
+ |-|
+    | `urn:auth0:<Cookie-auth0-instance-name>:saml-<customer-name>-cookie-connection` |
+    | `urn:auth0:<Veza-auth0-instance-name>:saml-<customer-name>-cookie-connection` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+    |--|
+ | `https://<instancename>.veza.com` |
+ | `https://<instancename>.cookie.ai` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ |--|
+ | `https://<instancename>.cookie.ai/login/callback?connection=saml-<customer-name>-cookie-connection` |
+    | `https://<instancename>.veza.com/login/callback?connection=saml-<customer-name>-veza-connection` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Veza Client support team](mailto:support@veza.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Veza** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
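+
+If you prefer to script this step, the following Azure CLI sketch creates the same test user; the UPN domain and initial password are placeholders for your own values, not values taken from this tutorial.
+
+```azurecli-interactive
+# Create the B.Simon test user (replace the domain and choose your own initial password)
+az ad user create \
+    --display-name "B.Simon" \
+    --user-principal-name "B.Simon@contoso.com" \
+    --password "<initial-password>"
+```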
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Veza.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Veza**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you'll see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
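+
+If you'd rather script the assignment, a hedged sketch against Microsoft Graph follows; the object IDs are hypothetical placeholders that you'd look up first (for example with `az ad user show` and `az ad sp list`), and the all-zeros `appRoleId` requests the default access role.
+
+```azurecli-interactive
+# Assign the user to the Veza service principal (IDs below are placeholders)
+az rest --method POST \
+    --url "https://graph.microsoft.com/v1.0/servicePrincipals/<service-principal-object-id>/appRoleAssignedTo" \
+    --body '{
+        "principalId": "<user-object-id>",
+        "resourceId": "<service-principal-object-id>",
+        "appRoleId": "00000000-0000-0000-0000-000000000000"
+    }'
+```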
+
+## Configure Veza SSO
+
+1. Log in to your Veza company site as an administrator.
+
+1. Go to **Administration** > **Sign-in Settings**, toggle the **Enable MFA** button, and choose to configure SSO.
+
+ ![Screenshot that shows the Configuration Settings.](./media/veza-tutorial/settings.png "Configuration")
+
+1. In the **Configure SSO** page, perform the following steps:
+
+ ![Screenshot that shows the Configuration of SSO Authentication.](./media/veza-tutorial/details.png "Profile")
+
+ a. In the **Sign In Url** textbox, paste the **Login URL** value, which you've copied from the Azure portal.
+
+    b. Open the downloaded **Certificate (Base64)** from the Azure portal and upload the file into the **X509 Signing Certificate** by clicking the **Choose File** option.
+
+ c. In the **Sign Out Url** textbox, paste the **Logout URL** value, which you've copied from the Azure portal.
+
+    d. Toggle the **Enable Request Signing** button and select RSA-SHA-256 and SHA-256 as the **Sign Request Algorithm**.
+
+ e. Click **Save** on the Veza SSO configuration and toggle the option to **Enable SSO**.
+
+### Create Veza test user
+
+In this section, a user called B.Simon is created in Veza. Veza supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Veza, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Veza Sign-On URL where you can initiate the login flow.
+
+* Go to the Veza Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Veza instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Veza tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode, you should be automatically signed in to the Veza instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Veza, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Workday Inbound Cloud Only Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-cloud-only-tutorial.md
The following sections describe steps for configuring user provisioning from Wor
**To configure Workday to Azure Active Directory provisioning for cloud-only users:**
-1. Go to <https://portal.azure.com>.
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the Azure portal, search for and select **Azure Active Directory**.
active-directory Workday Inbound Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-tutorial.md
This section provides steps for user account provisioning from Workday to each A
**To configure Workday to Active Directory provisioning:**
-1. Go to <https://portal.azure.com>.
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the Azure portal, search for and select **Azure Active Directory**.
active-directory Workday Writeback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-writeback-tutorial.md
Follow these instructions to configure writeback of user email addresses and use
**To configure Workday Writeback connector:**
-1. Go to <https://portal.azure.com>.
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the Azure portal, search for and select **Azure Active Directory**.
active-directory Configure Azure Active Directory For Fedramp High Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-azure-active-directory-for-fedramp-high-impact.md
--++ Last updated 4/26/2021
The following is a list of FedRAMP resources:
* [Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center)
-* [Microsoft Compliance Manager](/microsoft-365/compliance/compliance-manager)
+* [Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager)
## Next steps
active-directory Fedramp Access Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-access-controls.md
--++ Last updated 4/26/2021
active-directory Fedramp Identification And Authentication Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-identification-and-authentication-controls.md
--++ Last updated 4/07/2022
active-directory Fedramp Other Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-other-controls.md
--++ Last updated 4/26/2021
active-directory Memo 22 09 Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-authorization.md
--++ Last updated 3/10/2022
active-directory Memo 22 09 Enterprise Wide Identity Management System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-enterprise-wide-identity-management-system.md
--++ Last updated 3/10/2022
active-directory Memo 22 09 Meet Identity Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-meet-identity-requirements.md
--++ Last updated 3/10/2022
active-directory Memo 22 09 Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-multi-factor-authentication.md
--++ Last updated 3/10/2022
The following articles are part of this documentation set:
For more information about Zero Trust, see:
-[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
+[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
active-directory Memo 22 09 Other Areas Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-other-areas-zero-trust.md
--++ Last updated 3/10/2022
active-directory Nist About Authenticator Assurance Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-about-authenticator-assurance-levels.md
--++ Last updated 4/26/2021
In addition, Microsoft is fully committed to [protecting and managing customer d
[Achieve NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md) [Achieve NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
-ΓÇÄ
+ΓÇÄ
active-directory Nist Authentication Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authentication-basics.md
--++ Last updated 4/26/2021
One example is the Microsoft Authenticator app used in passwordless mode. With t
[Achieving NIST AAL2 by using Azure AD](nist-authenticator-assurance-level-2.md)
-[Achieving NIST AAL3 by using Azure AD](nist-authenticator-assurance-level-3.md)
+[Achieving NIST AAL3 by using Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Nist Authenticator Assurance Level 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-1.md
--++ Last updated 4/26/2021
All communications between the claimant and Azure AD are performed over an authe
[Achieve NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md)
-[Achieve NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
+[Achieve NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Nist Authenticator Assurance Level 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-2.md
--++ Last updated 4/26/2021
active-directory Nist Authenticator Assurance Level 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-3.md
--++ Last updated 4/26/2021
active-directory Nist Authenticator Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-types.md
--++ Last updated 4/26/2021
active-directory Nist Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-overview.md
--++ Last updated 4/26/2021
active-directory Standards Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/standards-overview.md
--++ Last updated 4/26/2021
To learn more about supported compliance frameworks, see [Azure compliance offer
[Configure Azure Active Directory to achieve NIST authenticator assurance levels](nist-overview.md)
-[Configure Azure Active directory to meet FedRAMP High Impact level](configure-azure-active-directory-for-fedramp-high-impact.md)
+[Configure Azure Active directory to meet FedRAMP High Impact level](configure-azure-active-directory-for-fedramp-high-impact.md)
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
NODEPOOL_ID=$(az aks nodepool show --name nodepool1 --cluster-name myAKSCluster
Now, to take a snapshot from the previous node pool, you'll use the `az aks nodepool snapshot` CLI command. ```azurecli-interactive
-az aks snapshot create --name MySnapshot --resource-group MyResourceGroup --nodepool-id $NODEPOOL_ID --location eastus
+az aks nodepool snapshot create --name MySnapshot --resource-group MyResourceGroup --nodepool-id $NODEPOOL_ID --location eastus
``` ## Create a node pool from a snapshot
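
As a follow-up sketch (assuming the snapshot name and resource group from the previous step; verify the flags against your CLI version), you can reference the snapshot's resource ID when adding a new node pool:

```azurecli-interactive
# Look up the ID of the snapshot taken earlier
SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group MyResourceGroup --query id -o tsv)

# Add a new node pool whose node configuration is sourced from the snapshot
az aks nodepool add \
    --resource-group MyResourceGroup \
    --cluster-name myAKSCluster \
    --name newnodepool \
    --snapshot-id $SNAPSHOT_ID
```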
api-management Api Management Get Started Publish Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-publish-versions.md
# Tutorial: Publish multiple versions of your API
-There are times when it's impractical to have all callers to your API use exactly the same version. When callers want to upgrade to a later version, they want an approach that's easy to understand. As shown in this tutorial, it is possible to provided multiple *versions* in Azure API Management.
+There are times when it's impractical to have all callers to your API use exactly the same version. When callers want to upgrade to a later version, they want an approach that's easy to understand. As shown in this tutorial, it is possible to provide multiple *versions* in Azure API Management.
For background, see [Versions & revisions](https://azure.microsoft.com/blog/versions-revisions/).
After creating the version, it now appears underneath **Demo Conference API** in
![Versions listed under an API in the Azure portal](media/api-management-getstarted-publish-versions/version-list.png)
-You can now edit and configure **v1** as an API that is separate from **Original**. Changes to one version do not affect another.
- > [!Note] > If you add a version to a non-versioned API, an **Original** is also automatically created. This version responds on the default URL. Creating an Original version ensures that any existing callers are not broken by the process of adding a version. If you create a new API with versions enabled at the start, an Original isn't created.
+## Edit a version
+
+After adding the version, you can now edit and configure it as an API that is separate from an Original. Changes to one version do not affect another. For example, add or remove API operations, or edit the OpenAPI specification. For more information, see [Edit an API](edit-api.md).
+ ## Add the version to a product In order for callers to see the new version, it must be added to a *product*. If you didn't already add the version to a product, you can add it to a product at any time.
api-management Authorizations How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to.md
Four steps are needed to set up an authorization with the authorization code gra
:::image type="content" source="media/authorizations-how-to/register-application.png" alt-text="Screenshot of registering a new OAuth application in GitHub."::: 1. Enter an **Application name** and **Homepage URL** for the application. 1. Optionally, add an **Application description**.
- 1. In **Authorization callback URL** (the redirect URL), enter `https://authorization-manager-test.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the API Management service name that is used.
+ 1. In **Authorization callback URL** (the redirect URL), enter `https://authorization-manager.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the API Management service name that is used.
1. Select **Register application**. 1. In the **General** page, copy the **Client ID**, which you'll use in a later step. 1. Select **Generate a new client secret**. Copy the secret, which won't be displayed again, and which you'll use in a later step.
api-management Edit Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/edit-api.md
Title: Edit an API with the Azure portal | Microsoft Docs
-description: Learn how to use API Management (APIM) to edit an API. Add, delete, or rename operations in the APIM instance, or edit the API's swagger.
+description: Learn how to use API Management to edit an API. Add, delete, or rename operations in the APIM instance, or edit the API's swagger.
documentationcenter: ''
# Edit an API
-The steps in this tutorial show you how to use API Management (APIM) to edit an API.
+The steps in this tutorial show you how to use API Management to edit an API.
+ You can add, rename, or delete operations in the Azure portal. + You can edit your API's swagger.
The steps in this tutorial show you how to use API Management (APIM) to edit an
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)]
-## Edit an API in APIM
+## Edit an operation
-![Screenshot that highlights the process for editing an API in APIM.](./media/edit-api/edit-api001.png)
+![Screenshot that highlights the process for editing an API in API Management.](./media/edit-api/edit-api001.png)
1. Click the **APIs** tab. 2. Select one of the APIs that you previously imported.
The steps in this tutorial show you how to use API Management (APIM) to edit an
## Update the swagger
-You can update your backbend API from the Azure portal by following these steps:
+You can update your backend API from the Azure portal by following these steps:
1. Select **All operations** 2. Click pencil in the **Frontend** window.
You can update your backbend API from the Azure portal by following these steps:
> [!div class="nextstepaction"] > [APIM policy samples](./policy-reference.md)
-> [Transform and protect a published API](transform-api.md)
+> [Transform and protect a published API](transform-api.md)
api-management Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/soft-delete.md
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{reso
## Purge a soft-deleted instance
-Use the API Management [Purge](/rest/api/apimanagement/current-ga/deleted-services/purge) operation, substituting `{subscriptionId}`, `{location}`, and `{serviceName}` with your Azure subscription, resource location, and API Management name:
+Use the API Management [Purge](/rest/api/apimanagement/current-ga/deleted-services/purge) operation, substituting `{subscriptionId}`, `{location}`, and `{serviceName}` with your Azure subscription, resource location, and API Management name.
+
+> [!NOTE]
+> To purge a soft-deleted instance, you must have the following RBAC permissions at the subscription scope: Microsoft.ApiManagement/locations/deletedservices/delete, Microsoft.ApiManagement/deletedservices/read.
```rest DELETE https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/locations/{location}/deletedservices/{serviceName}?api-version=2021-08-01
api-management Visual Studio Code Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/visual-studio-code-tutorial.md
The following example imports an OpenAPI Specification in JSON format into API M
1. In the Explorer pane, expand the API Management instance you created. 1. Right-click **APIs**, and select **Import from OpenAPI Link**. 1. When prompted, enter the following values:
- 1. An **OpenAPI link** for content in JSON format. For this example: *<https://conferenceapi.azurewebsites.net?format=json>*.
+ 1. An **OpenAPI link** for content in JSON format. For this example: `https://conferenceapi.azurewebsites.net?format=json`.
    This URL is the service that implements the example API. API Management forwards requests to this address. 1. An **API name**, such as *demo-conference-api*, that is unique in the API Management instance. This name can contain only letters, numbers, and hyphens. The first and last characters must be alphanumeric. This name is used in the path to call the API.
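
If you prefer to script the import instead of using the extension, a hedged Azure CLI equivalent might look like the following; the resource group and service names are placeholders.

```azurecli-interactive
# Import the example OpenAPI definition into an API Management instance (names are placeholders)
az apim api import \
    --resource-group MyResourceGroup \
    --service-name my-apim-instance \
    --api-id demo-conference-api \
    --path conference \
    --specification-url "https://conferenceapi.azurewebsites.net?format=json" \
    --specification-format OpenApiJson
```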
app-service Quickstart Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-custom-container.md
Create an ASP.NET web app by following these steps:
:::image type="content" source="./media/quickstart-custom-container/select-mvc-template-for-container.png?text=Create ASP.NET Web Application" alt-text="Create ASP.NET Web Application":::
-1. If the _Dockerfile_ file isn't opened automatically, open it from the **Solution Explorer**.
+1. If the `Dockerfile` isn't opened automatically, open it from the **Solution Explorer**.
1. You need a [supported parent image](configure-custom-container.md#supported-parent-images). Change the parent image by replacing the `FROM` line with the following code and save the file:
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
In Solution Explorer, right-click the **CustomFontSample** project and select **
Select **Docker Compose** > **OK**.
-Your project is now set to run in a Windows container. A _Dockerfile_ is added to the **CustomFontSample** project, and a **docker-compose** project is added to the solution.
+Your project is now set to run in a Windows container. A `Dockerfile` is added to the **CustomFontSample** project, and a **docker-compose** project is added to the solution.
From the Solution Explorer, open **Dockerfile**.
A terminal window is opened and displays the image deployment progress. Wait for
## Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com>.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a web app
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
The message displayed in the **Details** column provides more detailed insights
> [!NOTE] > The default probe request is sent in the format of
-\<protocol\>://127.0.0.1:\<port\>/. For example, http://127.0.0.1:80 for an http probe on port 80. Only HTTP status codes of 200 through 399 are considered healthy. The protocol and destination port are inherited from the HTTP settings. If you want Application Gateway to probe on a different protocol, host name, or path and to recognize a different status code as Healthy, configure a custom probe and associate it with the HTTP settings.
+`<protocol>://127.0.0.1:<port>`. For example, `http://127.0.0.1:80` for an HTTP probe on port 80. Only HTTP status codes of 200 through 399 are considered healthy. The protocol and destination port are inherited from the HTTP settings. If you want Application Gateway to probe on a different protocol, host name, or path and to recognize a different status code as Healthy, configure a custom probe and associate it with the HTTP settings.
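+
+For illustration, a hedged Azure CLI sketch of creating such a custom probe and associating it with the HTTP settings follows; the gateway, probe, and HTTP-settings names are placeholders, and you should verify the flags against your CLI version.
+
+```azurecli-interactive
+# Create a custom probe that tests /health over HTTPS (names are placeholders)
+az network application-gateway probe create \
+    --resource-group MyResourceGroup \
+    --gateway-name MyAppGateway \
+    --name health-probe \
+    --protocol Https \
+    --host contoso.internal \
+    --path /health \
+    --interval 30 \
+    --timeout 30 \
+    --threshold 3
+
+# Associate the probe with the HTTP settings that route to the backend pool
+az network application-gateway http-settings update \
+    --resource-group MyResourceGroup \
+    --gateway-name MyAppGateway \
+    --name MyHttpSettings \
+    --probe health-probe
+```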
## Error messages
application-gateway Application Gateway Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-probe-overview.md
In addition to using default health probe monitoring, you can also customize the
An application gateway automatically configures a default health probe when you don't set up any custom probe configuration. The monitoring behavior works by making an HTTP GET request to the IP addresses or FQDN configured in the back-end pool. For default probes, if the backend HTTP settings are configured for HTTPS, the probe uses HTTPS to test the health of the backend servers.
-For example: You configure your application gateway to use back-end servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe will look like http://127.0.0.1/.
+For example: You configure your application gateway to use back-end servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe will look like `http://127.0.0.1/`.
If the default probe check fails for server A, the application gateway stops forwarding requests to this server. The default probe still continues to check for server A every 30 seconds. When server A responds successfully to one request from a default health probe, application gateway starts forwarding the requests to the server again.
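
To approximate what the default probe sends, you can issue the same request locally from a backend server; this is only a quick manual check, not part of the gateway configuration:

```bash
# Mimic the default health probe from the backend server itself; any 200-399 response counts as healthy
curl -i http://127.0.0.1:80/
```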
application-gateway Configure Application Gateway With Private Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-application-gateway-with-private-frontend-ip.md
This article guides you through the steps to configure a Standard v1 Application
## Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com>
+Sign in to the [Azure portal](https://portal.azure.com).
## Create an application gateway
applied-ai-services Applied Ai Services Customer Spotlight Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/applied-ai-services-customer-spotlight-use-cases.md
Customers are using Azure Applied AI Services to add AI horsepower to their busi
- AI-driven search offers strong data security and delivers smart results that add value. - Azure Form Recognizer increases ROI by using automation to streamline data extraction.
+## Use cases
+ | Partner | Description | Customer story | ||-|-| | <center>![Logo of Progressive Insurance, which consists of the word progressive in a slanted font in blue, capital letters.](./media/logo-progressive-02.png) | **Progressive uses Azure Bot Service and Azure Cognitive Search to help customers make smarter insurance decisions.** <br>"One of the great things about Bot Service is that, out of the box, we could use it to quickly put together the basic framework for our bot." *-Matt White, Marketing Manager, Personal Lines Acquisition Experience, Progressive Insurance* | [Insurance shoppers gain new service channel with artificial intelligence chatbot](https://customers.microsoft.com/story/789698-progressive-insurance-cognitive-services-insurance) |
applied-ai-services Form Recognizer Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-image-tags.md
Previously updated : 09/02/2021 Last updated : 06/23/2022 keywords: Docker, container, images
The following tags are available for Form Recognizer:
Release notes for `v2.1` (gated preview):
-| Container | Tags |
-||:|
-| **Layout**| &bullet; `latest` </br> &bullet; `2.1-preview` </br> &bullet; `2.1.0.016140001-08108749-amd64-preview`|
-| **Business Card** | &bullet; `latest` </br> &bullet; `2.1-preview` </br> &bullet; `2.1.016190001-amd64-preview` </br> &bullet; `2.1.016320001-amd64-preview` |
-| **ID Document** | &bullet; `latest` </br> &bullet; `2.1-preview`</br>&bullet; `2.1.016190001-amd64-preview`</br>&bullet; `2.1.016320001-amd64-preview` |
-| **Receipt**| &bullet; `latest` </br> &bullet; `2.1-preview`</br>&bullet; `2.1.016190001-amd64-preview`</br>&bullet; `2.1.016320001-amd64-preview` |
-| **Invoice**| &bullet; `latest` </br> &bullet; `2.1-preview`</br>&bullet; `2.1.016190001-amd64-preview`</br>&bullet; `2.1.016320001-amd64-preview` |
-| **Custom API** | &bullet; `latest` </br> &bullet;`2.1-distroless-20210622013115034-0cc5fcf6`</br>&bullet; `2.1-preview`|
-| **Custom Supervised**| &bullet; `latest` </br> &bullet; `2.1-distroless-20210622013149174-0cc5fcf6`</br>&bullet; `2.1-preview`|
+| Container | Tags | Retrieve image |
+||:||
+| **Layout**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout`|
+| **Business Card** | &bullet; `latest` </br> &bullet; `2.1-preview` |`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt` |
+| **ID Document** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document`|
+| **Receipt**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt` |
+| **Invoice**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
+| **Custom API** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api`|
+| **Custom Supervised**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised` |
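+
+After pulling an image, a hedged `docker run` sketch for the Layout container might look like the following; the billing endpoint, key, and resource sizing are placeholders to replace with values from your own Form Recognizer resource.
+
+```bash
+docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout
+
+# Run the container locally on port 5000 (endpoint, key, and sizing are placeholders)
+docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
+    mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout \
+    Eula=accept \
+    Billing="<your-form-recognizer-endpoint>" \
+    ApiKey="<your-form-recognizer-key>"
+```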
### [Previous versions](#tab/previous)
Release notes for `v2.1` (gated preview):
> [!div class="nextstepaction"] > [Install and run Form Recognizer containers](form-recognizer-container-install-run.md)
->
+>
applied-ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/deploy-label-tool.md
# Deploy the Sample Labeling tool
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
+ > [!NOTE] > The [cloud hosted](https://fott-2-1.azurewebsites.net/) labeling tool is available at [https://fott-2-1.azurewebsites.net/](https://fott-2-1.azurewebsites.net/). Follow the steps in this document only if you want to deploy the sample labeling tool for yourself.
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
Title: "How-to: Analyze documents, Label forms, train a model, and analyze forms with Form Recognizer"
-description: In this how-to, you'll use the Form Recognizer sample tool to analyze documents, invoices, receipts etc. Label and create a custom model to extract text, tables, selection marks, structure and key-value pairs from documents.
+description: How to use the Form Recognizer sample tool to analyze documents, invoices, receipts etc. Label and create a custom model to extract text, tables, selection marks, structure and key-value pairs from documents.
Previously updated : 11/02/2021 Last updated : 06/23/2022 keywords: document processing
keywords: document processing
<!-- markdownlint-disable MD034 --> # Train a custom model using the Sample Labeling tool
-In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data.
+In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data.
> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player] ## Prerequisites
-To complete this quickstart, you must have:
+ You'll need the following resources to complete this project:
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) * Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
+ * You'll need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
* You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production. * A set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*) for this quickstart. Upload the training files to the root of a blob storage container in a standard-performance-tier Azure Storage account.
Try out the [**Form Recognizer Sample Labeling tool**](https://fott-2-1.azureweb
> [!div class="nextstepaction"] > [Try Prebuilt Models](https://fott-2-1.azurewebsites.net/)
-You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer service.
+You'll need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer service.
## Set up the Sample Labeling tool
When you create or open a project, the main tag editor window opens. The tag edi
* The main editor pane that allows you to apply tags. * The tags editor pane that allows users to modify, lock, reorder, and delete tags.
-### Identify text and tables
+### Identify text and tables
Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
-The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left hand of the document to see the extracted table. In this quickstart, because the table content is automatically extracted, we will not be labeling the table content, but rather rely on the automated extraction.
+The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left hand of the document to see the extracted table. In this quickstart, because the table content is automatically extracted, we won't be labeling the table content, but rather rely on the automated extraction.
:::image type="content" source="media/label-tool/table-extraction.png" alt-text="Table visualization in Sample Labeling tool.":::
-In v2.1, if your training document does not have a value filled in, you can draw a box where the value should be. Use **Draw region** on the upper left corner of the window to make the region taggable.
+In v2.1, if your training document doesn't have a value filled in, you can draw a box where the value should be. Use **Draw region** on the upper left corner of the window to make the region taggable.
### Apply labels to text
The following value types and variations are currently supported:
* `number` * default, `currency`
- * Formatted as a Floating point value.
- * Example:1234.98 on the document will be formatted into 1234.98 on the output
+ * Formatted as a Floating point value.
+ * Example: 1234.98 on the document will be formatted into 1234.98 on the output
* `date` * default, `dmy`, `mdy`, `ymd` * `time` * `integer`
- * Formatted as a Integer value.
- * Example:1234.98 on the document will be formatted into 123498 on the output
+ * Formatted as an integer value.
+ * Example: 1234.98 on the document will be formatted into 123498 on the output.
* `selectionMark` > [!NOTE]
The following value types and variations are currently supported:
### Label tables (v2.1 only)
-At times, your data might lend itself better to being labeled as a table rather than key-value pairs. In this case, you can create a table tag by clicking on "Add a new table tag," specify whether the table will have a fixed number of rows or variable number of rows depending on the document, and define the schema.
+At times, your data might lend itself better to being labeled as a table rather than key-value pairs. In this case, you can create a table tag by selecting **Add a new table tag**. Specify whether the table will have a fixed number of rows or variable number of rows depending on the document and define the schema.
:::image type="content" source="media/label-tool/table-tag.png" alt-text="Configuring a table tag.":::
-Once you have defined your table tag, tag the cell values.
+Once you've defined your table tag, tag the cell values.
:::image type="content" source="media/table-labeling.png" alt-text="Labeling a table.":::
Once you have defined your table tag, tag the cell values.
Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you'll see the following information: * **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api&tabs=preview%2cv2-1) or [client library guide](./quickstarts/try-sdk-rest-api.md).
-* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling additional forms and retraining to create a new model. We recommend starting by labeling five forms and adding more forms as needed.
+* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by adding and labeling more forms, then retraining to create a new model. We recommend starting by labeling five forms and adding more forms as needed.
* The list of tags, and the estimated accuracy per tag.
Select the Analyze (light bulb) icon on the left to test your model. Select sour
## Improve results
-Depending on the reported accuracy, you may want to do further training to improve the model. After you've done a prediction, examine the confidence values for each of the applied tags. If the average accuracy training value was high, but the confidence scores are low (or the results are inaccurate), you should add the prediction file to the training set, label it, and train again.
+Depending on the reported accuracy, you may want to do further training to improve the model. After you've done a prediction, examine the confidence values for each of the applied tags. If the average accuracy training value is high, but the confidence scores are low (or the results are inaccurate), add the prediction file to the training set, label it, and train again.
The reported average accuracy, confidence scores, and actual accuracy can be inconsistent when the analyzed documents differ from documents used in training. Keep in mind that some documents look similar when viewed by people but can look distinct to the AI model. For example, you might train with a form type that has two variations, where the training set consists of 20% variation A and 80% variation B. During prediction, the confidence scores for documents of variation A are likely to be lower.
Go to your project settings page (slider icon) and take note of the security tok
### Restore project credentials
-When you want to resume your project, you first need to create a connection to the same blob storage container. To do so, repeat the steps above. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Select **Save** to retain your settings..
+When you want to resume your project, you first need to create a connection to the same blob storage container. To do so, repeat the steps above. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Select **Save** to retain your settings.
### Resume a project
-Finally, go to the main page (house icon) and select **Open Cloud Project**. Then select the blob storage connection, and select your project's **.fott** file. The application will load all of the project's settings because it has the security token.
+Finally, go to the main page (house icon) and select **Open Cloud Project**. Then select the blob storage connection, and select your project's `.fott` file. The application will load all of the project's settings because it has the security token.
## Next steps
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Previously updated : 11/02/2021 Last updated : 06/24/2022 keywords: document processing
keywords: document processing
<!-- markdownlint-disable MD029 --> # Get started with the Form Recognizer Sample Labeling tool
-Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine-learning models to extract key-value pairs, text, and tables from your documents. You can use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](try-v3-rest-api.md) or [**C#**](try-v3-csharp-sdk.md), [**Java**](try-v3-java-sdk.md), [**JavaScript**](try-v3-javascript-sdk.md), or [Python](try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
The Form Recognizer Sample Labeling tool is an open source tool that enables you to test the latest features of Azure Form Recognizer and Optical Character Recognition (OCR)
Form Recognizer offers several prebuilt models to choose from. Each model has it
* [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg). * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
-1. In the **Source: URL** field, paste the selected URL and select the **Fetch** button.
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="../media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
Azure the Form Recognizer Layout API extracts text, tables, selection marks, and
1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
-1. In the **Source: URL** field, paste the following URL `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg` and select the **Fetch** button.
+1. In the **Source** field, select **URL** from the dropdown menu, paste the following URL `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg`, and select the **Fetch** button.
1. Select **Run Layout**. The Form Recognizer Sample Labeling tool will call the Analyze Layout API and analyze the document.
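
Under the hood this calls the v2.1 Analyze Layout REST operation; a hedged curl sketch (the endpoint and key are placeholders for your own resource) looks roughly like this:

```bash
# Submit the sample document to the v2.1 Layout API (endpoint and key are placeholders)
curl -i -X POST "https://<your-resource-name>.cognitiveservices.azure.com/formrecognizer/v2.1/layout/analyze" \
    -H "Content-Type: application/json" \
    -H "Ocp-Apim-Subscription-Key: <your-key>" \
    -d '{"source": "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg"}'

# The response includes an Operation-Location header; poll it with GET (same key header) to retrieve the results.
```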
Train a custom model to analyze and extract data from forms and documents specif
### Prerequisites for training a custom form model
-* An Azure Storage blob container that contains a set of training data. Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. For this project, you can use our [sample data set](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample_data_without_labels.zip).
+* An Azure Storage blob container that contains a set of training data. Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. For this project, you can use our [sample data set](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample_data_without_labels.zip). If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md).
* Configure CORS
- [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS blade of your storage account.
+ [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS tab of your storage account.
- 1. Select the CORS blade for the storage account.
+ 1. Select the CORS tab for the storage account.
:::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
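
    If you prefer to configure CORS from the command line, a hedged Azure CLI sketch follows; the storage account name is a placeholder, and the allowed origin shown assumes the hosted Sample Labeling tool, so adjust it to the tool or Studio URL you're actually using.

    ```azurecli-interactive
    # Allow the labeling tool origin to call Blob storage (account name is a placeholder)
    az storage cors add \
        --account-name <your-storage-account> \
        --services b \
        --origins "https://fott-2-1.azurewebsites.net" \
        --methods DELETE GET HEAD MERGE OPTIONS POST PUT \
        --allowed-headers "*" \
        --exposed-headers "*" \
        --max-age 200
    ```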
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Analyze and extract text, tables, structure, key-value pairs, and named entities
> [!div class="checklist"] > > * For this example, you'll need a **form document file from a URI**. You can use our [sample form document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf) for this quickstart.
-> * To analyze a given file at a URI, you'll use the `StartAnalyzeDocumentFromUri` method. The returned value is an `AnalyzeResult` object containing data about the submitted document.
+> * To analyze a given file at a URI, you'll use the `StartAnalyzeDocumentFromUri` method and pass `prebuilt-document` as the model ID. The returned value is an `AnalyzeResult` object containing data about the submitted document.
> * We've added the file URI value to the `Uri fileUri` variable at the top of the script. > * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see the [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
Analyze and extract common fields from specific document types using a prebuilt
> [!div class="checklist"] > > * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
-> * We've added the file URI value to the `Uri fileUri` variable at the top of the Program.cs file.
+> * We've added the file URI value to the `Uri invoiceUri` variable at the top of the Program.cs file.
> * To analyze a given file at a URI, use the `StartAnalyzeDocumentFromUri` method and pass `prebuilt-invoice` as the model ID. The returned value is an `AnalyzeResult` object containing data from the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
applied-ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/supervised-table-tags.md
# Use table tags to train your custom template model
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
+ In this article, you'll learn how to train your custom template model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs. Such scenarios include extracting information from forms with complex hierarchical structures or encountering items that aren't automatically detected and extracted by the service. In these cases, you can use table tags to train your custom template model. ## When should I use table tags?
In this article, you'll learn how to train your custom template model with table
Here are some examples of when using table tags would be appropriate: - There's data that you wish to extract presented as tables in your forms, and the structure of the tables are meaningful. For instance, each row of the table represents one item and each column of the row represents a specific feature of that item. In this case, you could use a table tag where a column represents features and a row represents information about each feature.-- There's data you wish to extract that is not presented in specific form fields but semantically, the data could fit in a two-dimensional grid. For instance, your form has a list of people, and includes, a first name, a last name, and an email address. You would like to extract this information. In this case, you could use a table tag with first name, last name, and email address as columns and each row is populated with information about a person from your list.
+- There's data you wish to extract that isn't presented in specific form fields but semantically, the data could fit in a two-dimensional grid. For instance, your form has a list of people, and includes a first name, a last name, and an email address. You would like to extract this information. In this case, you could use a table tag with first name, last name, and email address as columns and each row is populated with information about a person from your list.
> [!NOTE] > Form Recognizer automatically finds and extracts all tables in your documents whether the tables are tagged or not. Therefore, you don't have to label every table from your form with a table tag and your table tags don't have to replicate the structure of very table found in your form. Tables extracted automatically by Form Recognizer will be included in the pageResults section of the JSON output.
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/private-link-security.md
Before setting up your Automation account resource, consider your network isolat
Follow the steps below to create a private endpoint for your Automation account. 1. Go to [Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints) in the Azure portal to create a private endpoint to connect your network.
-Once your changes to public Network Access and Private Link are applied, it can take up to 35 minutes for them to take effect.
1. On **Private Link Center**, select **Create private endpoint**.
Once your changes to public Network Access and Private Link are applied, it can
:::image type="content" source="./media/private-link-security/create-private-endpoint-dns-inline.png" alt-text="Screenshot of how to create a private endpoint in DNS tab." lightbox="./media/private-link-security/create-private-endpoint-dns-expanded.png":::
-1. On **Tags**, you can categorize resources. Select **Name** and **Value** and select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
+1. On **Tags**, you can categorize resources. Select **Name** and **Value** and select **Review + create**.
+You're taken to the **Review + create** page where Azure validates your configuration. Once your changes to public Network Access and Private Link are applied, it can take up to 35 minutes for them to take effect.
On the **Private Link Center**, select **Private endpoints** to view your private link resource.
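
For reference, a hedged Azure CLI sketch of creating the same private endpoint outside the portal might look like the following; all resource names and the subscription ID are placeholders, and `DSCAndHybridWorker` is the sub-resource (group ID) typically used for Automation account private links.

```azurecli-interactive
# Create a private endpoint for the Automation account (all names and IDs are placeholders)
az network private-endpoint create \
    --resource-group MyResourceGroup \
    --name MyAutomationPrivateEndpoint \
    --vnet-name MyVNet \
    --subnet MySubnet \
    --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Automation/automationAccounts/MyAutomationAccount" \
    --group-id DSCAndHybridWorker \
    --connection-name MyAutomationPrivateLink
```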
automation Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/create-account-portal.md
The following table describes the fields on the **Basics** tab.
|||| |Subscription|Required |From the drop-down list, select the Azure subscription for the account.| |Resource group|Required |From the drop-down list, select your existing resource group, or select **Create new**.|
-|Automation account name|Required |Enter a name unique for it's location and resource group. Names for Automation accounts that have been deleted might not be immediately available. You can't change the account name once it has been entered in the user interface. |
+|Automation account name|Required |Enter a name unique for its location and resource group. Names for Automation accounts that have been deleted might not be immediately available. You can't change the account name once it has been entered in the user interface. |
|Region|Required |From the drop-down list, select a region for the account. For an updated list of locations that you can deploy an Automation account to, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=automation&regions=all).| The following image shows a standard configuration for a new Automation account. ### Advanced
You can chose to enable managed identities later, and the Automation account is
The following image shows a standard configuration for a new Automation account.
-### Tags tab
+### Networking
+
+On the **Networking** tab, you can connect to your Automation account either publicly (via public IP addresses) or privately (using a private endpoint). The following image shows the connectivity configuration that you can define for a new Automation account.
+
+- **Public Access** – This default option provides a public endpoint for the Automation account that can receive traffic over the internet and doesn't require any additional configuration. However, we don't recommend it for private applications or secure environments. Instead, the second option, **Private access** (described below), can be used to restrict access to Automation endpoints to authorized virtual networks only. Public access can coexist with a private endpoint enabled on the Automation account. If you select public access while creating the Automation account, you can add a private endpoint later from the **Networking** blade of the Automation account.
+
+- **Private Access** – This option provides a private endpoint for the Automation account that uses a private IP address from your virtual network. This network interface connects you privately and securely to the Automation account. You bring the service into your virtual network by enabling a private endpoint. This is the recommended configuration from a security point of view; however, it requires you to configure a Hybrid Runbook Worker connected to an Azure virtual network, and it currently doesn't support cloud jobs.
++
+### Tags
On the **Tags** tab, you can specify Resource Manager tags to help organize your Azure resources. For more information, see [Tag resources, resource groups, and subscriptions for logical organization](../../azure-resource-manager/management/tag-resources.md).
azure-fluid-relay Version Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/version-compatibility.md
Title: Version compatibility with Fluid Framework releases
-description: |
- How to determine what versions of the Fluid Framework releases are compatible with Azure Fluid Relay service.
+description: How to determine what versions of the Fluid Framework releases are compatible with Azure Fluid Relay
npx install-peerdeps @fluidframework/azure-client
``` > [!TIP]
-> During Public Preview, the versions of **@fluidframework/azure-client** and **fluid-framework** will match. That is, if
-> the current release of **@fluidframework/azure-client** is 0.48, then it will be compatible with **fluid-framework** 0.48. The inverse is also true.
+> If you're building with any pre-release version of **@fluidframework/azure-client** and **fluid-framework**, we strongly recommend that you update to the latest 1.0 version. Earlier versions will not be
+> supported with the General Availability of Azure Fluid Relay. With this upgrade, you'll make use of our new multi-region routing capability where
+> Azure Fluid Relay will host your session closer to your end users to improve customer experience. In the latest package, you will need to update your
+> serviceConfig object to the new Azure Fluid Relay service endpoint instead of the storage and orderer endpoints:
+> If your Azure Fluid Relay resource is in West US 2, please use **https://us.fluidrelay.azure.com**. If it is West Europe,
+> use **https://eu.fluidrelay.azure.com**. If it is in Southeast Asia, use **https://global.fluidrelay.azure.com**.
+> These values can also be found in the "Access Key" section of the Fluid Relay resource in the Azure portal. The orderer and storage endpoints will be deprecated soon.
+ ## Compatibility table | npm package | Minimum version | API | | - | :-- | : |
-| @fluidframework/azure-client | [0.48.4][] | [API](https://fluidframework.com/docs/apis/azure-client/) |
-| fluid-framework | [0.48.4][] | [API](https://fluidframework.com/docs/apis/fluid-framework/) |
-| @fluidframework/azure-service-utils | [0.48.4][] | [API](https://fluidframework.com/docs/apis/azure-service-utils/) |
-| @fluidframework/test-client-utils | [0.48.4][] | [API](https://fluidframework.com/docs/apis/test-client-utils/) |
+| @fluidframework/azure-client | [1.0.1][] | [API](https://fluidframework.com/docs/apis/azure-client/) |
+| fluid-framework | [1.0.1][] | [API](https://fluidframework.com/docs/apis/fluid-framework/) |
+| @fluidframework/azure-service-utils | [1.0.1][] | [API](https://fluidframework.com/docs/apis/azure-service-utils/) |
+| @fluidframework/test-client-utils | [1.0.1][] | [API](https://fluidframework.com/docs/apis/test-client-utils/) |
-[0.48.4]: https://fluidframework.com/docs/updates/v0.48/
+[1.0.1]: https://fluidframework.com/docs/updates/v1.0.0/
## Next steps
azure-fluid-relay Connect Fluid Azure Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/connect-fluid-azure-service.md
To connect to an Azure Fluid Relay instance you first need to create an `AzureCl
const config = { tenantId: "myTenantId", tokenProvider: new InsecureTokenProvider("myTenantKey", { id: "userId" }),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
+ type: "remote",
}; const clientProps = {
const config = {
"myAzureFunctionUrl" + "/api/GetAzureToken", { userId: "userId", userName: "Test User" } ),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
+ type: "remote",
}; const clientProps = {
const config = {
"myAzureFunctionUrl" + "/api/GetAzureToken", { userId: "UserId", userName: "Test User", additionalDetails: userDetails } ),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
+ type: "remote",
}; ```
azure-fluid-relay Local Mode With Azure Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/local-mode-with-azure-client.md
This article walks through the steps to configure **AzureClient** in local mode
connection: { tenantId: LOCAL_MODE_TENANT_ID, tokenProvider: new InsecureTokenProvider("", { id: "123", name: "Test User" }),
- orderer: "http://localhost:7070",
- storage: "http://localhost:7070",
+ endpoint: "http://localhost:7070",
+ type: "remote",
}, };
azure-fluid-relay Test Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/test-automation.md
function createAzureClient(): AzureClient {
const user = { id: "userId", name: "Test User" }; const connectionConfig = useAzure ? {
+ type: "remote",
tenantId: "myTenantId", tokenProvider: new InsecureTokenProvider(tenantKey, user),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
} : {
- tenantId: LOCAL_MODE_TENANT_ID,
+ type: "local",
tokenProvider: new InsecureTokenProvider("", user),
- orderer: "http://localhost:7070",
- storage: "http://localhost:7070",
+ endpoint: "http://localhost:7070",
};- const clientProps = { connection: config, };
azure-fluid-relay Quickstart Dice Roll https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/quickstarts/quickstart-dice-roll.md
To run against the Azure Fluid Relay service, you'll need to update your app's c
### Configure and create an Azure client
-To configure the Azure client, replace the values in the `serviceConfig` object in `app.js` with your Azure Fluid Relay
-service configuration values. These values can be found in the "Access Key" section of the Fluid Relay resource in the Azure portal.
+To configure the Azure client, replace the local connection `serviceConfig` object in `app.js` with your Azure Fluid Relay
+service configuration values. These values can be found in the "Access Key" section of the Fluid Relay resource in the Azure portal. Your `serviceConfig` object should look like this, with the values replaced:
```javascript const serviceConfig = { connection: {
- tenantId: LOCAL_MODE_TENANT_ID, // REPLACE WITH YOUR TENANT ID
+ tenantId: "MY_TENANT_ID", // REPLACE WITH YOUR TENANT ID
tokenProvider: new InsecureTokenProvider("" /* REPLACE WITH YOUR PRIMARY KEY */, { id: "userId" }),
- orderer: "http://localhost:7070", // REPLACE WITH YOUR ORDERER ENDPOINT
- storage: "http://localhost:7070", // REPLACE WITH YOUR STORAGE ENDPOINT
+ endpoint: "https://myServiceEndpointUrl", // REPLACE WITH YOUR SERVICE ENDPOINT
+ type: "remote",
} }; ```
azure-fluid-relay Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/reference/service-limits.md
-# Azure Fluid Relay Limits
+# Azure Fluid Relay limits
-This article outlines known limitation of Azure Fluid Relay.
+This article outlines known limitations of Azure Fluid Relay.
## Distributed Data Structures
The Azure Fluid Relay doesn't support [experimental distributed data structures
The maximum number of simultaneous users in one session on Azure Fluid Relay is 100. This limit applies to simultaneous users: the 101st user who tries to join the session will be denied. If an existing user leaves the session, a new user can join, because the number of simultaneous users is then below the limit.
-## Fluid Summaries
+## Fluid summaries
Incremental summaries uploaded to Azure Fluid Relay can't exceed 28 MB in size. For more information, see [Summarizer](https://fluidframework.com/docs/concepts/summarizer).
azure-fluid-relay Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/resources/faq.md
The following are frequently asked questions about Azure Fluid Relay.
+## When will Azure Fluid Relay be Generally Available?
+
+Azure Fluid Relay will be Generally Available on 8/1/2022. At that point, the service will no longer be free; charges will apply based on your usage. The service will meter four activities:
+
+- Operations in: As end users join, leave, and contribute to a collaborative session, the Fluid Framework client libraries send messages (also referred to as operations or ops) to the service. Each incoming message from a client is counted as one message. Heartbeat messages and other session messages are also counted. Messages larger than 2 KB are counted as multiple 2 KB messages (for example, an 11 KB message is counted as 6 messages).
+- Operations out: After the service processes incoming messages, it broadcasts them to all participants in the collaborative session. Each message sent to each client is counted as one message (for example, in a 3-user session, one op sent by a user generates 3 ops out).
+- Client connectivity minutes: The time each user is connected to a session is charged on a per-user basis (for example, if 3 users collaborate on a session for an hour, that's charged as 180 connectivity minutes).
+- Storage: Each collaborative Fluid session stores session artifacts in the service. Storage of this data is charged per GB per month (prorated as appropriate).
+
+Refer to the following table for the prices (in USD) that we will start to charge at General Availability for each of these meters, in the regions where Azure Fluid Relay is currently offered. Additional regions and information about other currencies will be available on our pricing page soon. A rough cost-estimation sketch follows the table.
+
+| Meter | Unit | West US 2 | West Europe | Southeast Asia |
+|--|--|--|--|--|
+| Operations In | 1 million ops | 1.50 | 1.95 | 1.95 |
+| Operations Out | 1 million ops | 0.50 | 0.65 | 0.65 |
+| Client Connectivity Minutes | 1 million minutes | 1.50 | 1.95 | 1.95 |
+| Storage | 1 GB/month | 0.20 | 0.26 | 0.26 |
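
The following Python sketch is illustrative only and isn't an official pricing calculator. It assumes the West US 2 prices listed above and the 2 KB message-counting rule described earlier; the traffic numbers in the example are hypothetical.

```python
import math

# Illustrative West US 2 prices from the table above (USD).
PRICE_OPS_IN_PER_MILLION = 1.50
PRICE_OPS_OUT_PER_MILLION = 0.50
PRICE_CONN_MIN_PER_MILLION = 1.50
PRICE_STORAGE_PER_GB_MONTH = 0.20

def billable_ops(message_count: int, avg_message_kb: float) -> int:
    """Messages larger than 2 KB are counted as multiple 2 KB messages."""
    units_per_message = max(1, math.ceil(avg_message_kb / 2))
    return message_count * units_per_message

def estimate_monthly_cost(ops_in: int, ops_out: int,
                          connectivity_minutes: int, storage_gb: float) -> float:
    return (
        ops_in / 1_000_000 * PRICE_OPS_IN_PER_MILLION
        + ops_out / 1_000_000 * PRICE_OPS_OUT_PER_MILLION
        + connectivity_minutes / 1_000_000 * PRICE_CONN_MIN_PER_MILLION
        + storage_gb * PRICE_STORAGE_PER_GB_MONTH
    )

# Hypothetical example: a 3-user session where one user sends 10,000 ops of ~1 KB each.
ops_in = billable_ops(10_000, 1)   # 10,000 billable ops in
ops_out = ops_in * 3               # each op is broadcast to all 3 clients
minutes = 3 * 60                   # 3 users connected for 1 hour
print(f"${estimate_monthly_cost(ops_in, ops_out, minutes, 0.1):.4f}")
```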
+++ ## Which Azure regions currently provide Fluid Relay? For a complete list of available regions, see [Azure Fluid Relay regions and availability](https://azure.microsoft.com/global-infrastructure/services/?products=fluid-relay).
azure-fluid-relay Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/resources/support.md
-# Help and Support options for Azure Fluid Relay
+# Help and support options for Azure Fluid Relay
If you have an issue or question involving Azure Fluid Relay, the following options are available.
-## Check out Frequently Asked Questions
+## Check out frequently asked questions
You can see if your question is already answered on our Frequently Asked Questions [page](faq.md).
-## Create an Azure Support Request
+## Create an Azure support request
With Azure, there are many [support options and plans](https://azure.microsoft.com/support/plans/) available, which you can explore and review. You can create a support ticket in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
azure-functions Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-java.md
To complete this tutorial, you need:
- [Apache Maven](https://maven.apache.org), version 3.0 or above. - Latest version of the [Azure Functions Core Tools](../functions-run-local.md).
+ - For Azure Functions 3.x, Core Tools **v3.0.4585** or newer is required.
+ - For Azure Functions 4.x, Core Tools **v4.0.4590** or newer is required.
- An Azure Storage account, which requires that you have an Azure subscription.
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
zone_pivot_groups: programming-languages-set-functions-lang-workers
This reference shows how to connect to Azure Event Grid using Azure Functions triggers and bindings. | Action | Type | |||
Add this version of the extension to your project by installing the [NuGet packa
# [Extension v2.x](#tab/extensionv2/in-process)
-Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent).
+Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent). Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
Add the extension to your project by installing the [NuGet package], version 2.x. # [Functions 1.x](#tab/functionsv1/in-process)
-Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
The Event Grid output binding is only available for Functions 2.x and higher.
Add the extension to your project by installing the [NuGet package](https://www.
# [Extension v2.x](#tab/extensionv2/isolated-process)
-Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.EventGrid), version 2.x.
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.EventGrid), version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
# [Functions 1.x](#tab/functionsv1/isolated-process)
You can install this version of the extension in your function app by registerin
# [Extension v2.x](#tab/extensionv2/csharp-script)
-Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent).
+Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent). Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
You can install this version of the extension in your function app by registering the [extension bundle], version 2.x. # [Functions 1.x](#tab/functionsv1/csharp-script)
-Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
The Event Grid output binding is only available for Functions 2.x and higher.
To learn more, see [Update your extensions].
# [Bundle v2.x](#tab/extensionv2)
-You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
# [Functions 1.x](#tab/functionsv1)
-The Event Grid output binding is only available for Functions 2.x and higher.
+The Event Grid output binding is only available for Functions 2.x and higher. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
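
As a hedged illustration of the HTTP-trigger workaround mentioned above, the following Python sketch accepts a CloudEvents 1.0 JSON payload over HTTP. It assumes the function's *function.json* allows both the `POST` and `OPTIONS` methods; the `OPTIONS` branch answers the abuse-protection handshake that CloudEvents delivery performs, and `type`, `source`, and `data` are standard CloudEvents attributes. This is a sketch, not the Event Grid extension's own binding.

```python
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # CloudEvents delivery validates the endpoint with an HTTP OPTIONS
    # abuse-protection handshake before sending events.
    if req.method == "OPTIONS":
        origin = req.headers.get("WebHook-Request-Origin", "")
        return func.HttpResponse(status_code=200,
                                 headers={"WebHook-Allowed-Origin": origin})

    event = req.get_json()  # a single CloudEvents 1.0 envelope
    return func.HttpResponse(
        f"Received {event.get('type')} from {event.get('source')}",
        status_code=200)
```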
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Title: Python developer reference for Azure Functions
-description: Understand how to develop functions with Python
+description: Understand how to develop functions with Python.
Last updated 05/25/2022 ms.devlang: python
# Azure Functions Python developer guide
-This article is an introduction to developing Azure Functions using Python. The content below assumes that you've already read the [Azure Functions developers guide](functions-reference.md).
+This article is an introduction to developing for Azure Functions by using Python. It assumes that you've already read the [Azure Functions developer guide](functions-reference.md).
-As a Python developer, you may also be interested in one of the following articles:
+As a Python developer, you might also be interested in one of the following articles:
| Getting started | Concepts| Scenarios/Samples | |--|--|--| | <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure Automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> | > [!NOTE]
-> While you can [develop your Python based Azure Functions locally on Windows](create-first-function-vs-code-python.md#run-the-function-locally), Python is only supported on a Linux based hosting plan when running in Azure. See the list of supported [operating system/runtime](functions-scale.md#operating-systemruntime) combinations.
+> Although you can [develop your Python-based functions locally on Windows](create-first-function-vs-code-python.md#run-the-function-locally), Python functions are supported in Azure only when they're running on Linux. See the [list of supported operating system/runtime combinations](functions-scale.md#operating-systemruntime).
## Programming model
-Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method called `main()` in the `__init__.py` file. You can also [specify an alternate entry point](#alternate-entry-point).
+Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method called `main()` in the *\__init\__.py* file. You can also [specify an alternate entry point](#alternate-entry-point).
-Data from triggers and bindings is bound to the function via method attributes using the `name` property defined in the *function.json* file. For example, the _function.json_ below describes a simple function triggered by an HTTP request named `req`:
+Data from triggers and bindings is bound to the function via method attributes that use the `name` property defined in the *function.json* file. For example, the following _function.json_ file describes a simple function triggered by an HTTP request named `req`:
:::code language="json" source="~/functions-quickstart-templates/Functions.Templates/Templates/HttpTrigger-Python/function.json":::
-Based on this definition, the `__init__.py` file that contains the function code might look like the following example:
+Based on this definition, the *\__init\__.py* file that contains the function code might look like the following example:
```python def main(req):
def main(req):
return f'Hello, {user}!' ```
-You can also explicitly declare the attribute types and return type in the function using Python type annotations. This action helps you to use the IntelliSense and autocomplete features provided by many Python code editors.
+You can also explicitly declare the attribute types and return type in the function by using Python type annotations. This action helps you to use the IntelliSense and autocomplete features that many Python code editors provide.
```python import azure.functions
def main(req: azure.functions.HttpRequest) -> str:
return f'Hello, {user}!' ```
-Use the Python annotations included in the [azure.functions.*](/python/api/azure-functions/azure.functions) package to bind input and outputs to your methods.
+Use the Python annotations included in the [azure.functions.*](/python/api/azure-functions/azure.functions) package to bind inputs and outputs to your methods.
## Alternate entry point
-You can change the default behavior of a function by optionally specifying the `scriptFile` and `entryPoint` properties in the *function.json* file. For example, the _function.json_ below tells the runtime to use the `customentry()` method in the _main.py_ file, as the entry point for your Azure Function.
+You can change the default behavior of a function by optionally specifying the `scriptFile` and `entryPoint` properties in the *function.json* file. For example, the following _function.json_ file tells the runtime to use the `customentry()` method in the _main.py_ file as the entry point for your function:
```json {
You can change the default behavior of a function by optionally specifying the `
## Folder structure
-The recommended folder structure for a Python Functions project looks like the following example:
+The recommended folder structure for an Azure Functions project in Python looks like the following example:
``` <project_root>/
The recommended folder structure for a Python Functions project looks like the f
| - requirements.txt | - Dockerfile ```
-The main project folder (<project_root>) can contain the following files:
+The main project folder (*<project_root>*) can contain the following files:
-* *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file).
-* *requirements.txt*: Contains the list of Python packages the system installs when publishing to Azure.
-* *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).
-* *.vscode/*: (Optional) Contains store VSCode configuration. To learn more, see [VSCode setting](https://code.visualstudio.com/docs/getstarted/settings).
-* *.venv/*: (Optional) Contains a Python virtual environment used by local development.
-* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](functions-create-function-linux-custom-image.md).
+* *local.settings.json*: Used to store app settings and connection strings when functions are running locally. This file isn't published to Azure. To learn more, see [Local settings file](functions-develop-local.md#local-settings-file).
+* *requirements.txt*: Contains the list of Python packages that the system installs when you're publishing to Azure.
+* *host.json*: Contains configuration options that affect all functions in a function app instance. This file is published to Azure. Not all options are supported when functions are running locally. To learn more, see the [host.json reference](functions-host-json.md).
+* *.vscode/*: (Optional) Contains stored Visual Studio Code configurations. To learn more, see [User and Workspace Settings](https://code.visualstudio.com/docs/getstarted/settings).
+* *.venv/*: (Optional) Contains a Python virtual environment that's used for local development.
+* *Dockerfile*: (Optional) Used when you're publishing your project in a [custom container](functions-create-function-linux-custom-image.md).
* *tests/*: (Optional) Contains the test cases of your function app.
-* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings being published.
+* *.funcignore*: (Optional) Declares files that shouldn't be published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore the local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings from being published.
-Each function has its own code file and binding configuration file (function.json).
+Each function has its own code file and binding configuration file (*function.json*).
-When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself, which means `host.json` should be in the package root. We recommend that you maintain your tests in a folder along with other functions, in this example `tests/`. For more information, see [Unit Testing](#unit-testing).
+When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself. That means *host.json* should be in the package root. We recommend that you maintain your tests in a folder along with other functions. In this example, the folder is *tests/*. For more information, see [Unit testing](#unit-testing).
## Import behavior
-You can import modules in your function code using both absolute and relative references. Based on the folder structure shown above, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*:
+You can import modules in your function code by using both absolute and relative references. Based on the folder structure shown earlier, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*:
```python from shared_code import my_first_helper_function #(absolute)
from . import example #(relative)
``` > [!NOTE]
-> The *shared_code/* folder needs to contain an \_\_init\_\_.py file to mark it as a Python package when using absolute import syntax.
+> The *shared_code/* folder needs to contain an *\_\_init\_\_.py* file to mark it as a Python package when you're using absolute import syntax.
-The following \_\_app\_\_ import and beyond top-level relative import are deprecated, since it isn't supported by static type checker and not supported by Python test frameworks:
+The following *\_\_app\_\_* import and beyond top-level relative import are deprecated. The static type checker and the Python test frameworks don't support them.
```python from __app__.shared_code import my_first_helper_function #(deprecated __app__ import)
from __app__.shared_code import my_first_helper_function #(deprecated __app__ im
from ..shared_code import my_first_helper_function #(deprecated beyond top-level relative import) ```
-## Triggers and Inputs
+## Triggers and inputs
-Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're different in the `function.json` file, usage is identical in Python code. Connection strings or secrets for trigger and input sources map to values in the `local.settings.json` file when running locally, and the application settings when running in Azure.
+Inputs are divided into two categories in Azure Functions: trigger input and other binding input. Although they're different in the *function.json* file, usage is identical in Python code. When functions are running locally, connection strings or secrets required by trigger and input sources are maintained in the `Values` collection of the *local.settings.json* file. When functions are running in Azure, those same connection strings or secrets are stored securely as [application settings](functions-how-to-use-azure-function-app-settings.md#settings).
-For example, the following code demonstrates the difference between the two:
+The following example code demonstrates the difference between the two:
```json // function.json
def main(req: func.HttpRequest,
logging.info(f'Python HTTP triggered function processed: {obj.read()}') ```
-When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the storage account specified is the connection string found in the AzureWebJobsStorage app setting, which is the same storage account used by the function app.
-
+When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from Azure Blob Storage based on the ID in the route URL and made available as `obj` in the function body. Here, the storage account specified is the connection string found in the `AzureWebJobsStorage` app setting, which is the same storage account that the function app uses.
## Outputs
-Output can be expressed both in return value and output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters.
+Output can be expressed in the return value and in output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters.
-To use the return value of a function as the value of an output binding, the `name` property of the binding should be set to `$return` in `function.json`.
+To use the return value of a function as the value of an output binding, set the `name` property of the binding to `$return` in *function.json*.
-To produce multiple outputs, use the `set()` method provided by the [`azure.functions.Out`](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function can push a message to a queue and also return an HTTP response.
+To produce multiple outputs, use the `set()` method provided by the [azure.functions.Out](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function can push a message to a queue and return an HTTP response:
```json {
def main(req: func.HttpRequest,
Access to the Azure Functions runtime logger is available via a root [`logging`](https://docs.python.org/3/library/logging.html#module-logging) handler in your function app. This logger is tied to Application Insights and allows you to flag warnings and errors that occur during the function execution.
-The following example logs an info message when the function is invoked via an HTTP trigger.
+The following example logs an info message when the function is invoked via an HTTP trigger:
```python import logging
More logging methods are available that let you write to the console at differen
| Method | Description | | - | |
-| **`critical(_message_)`** | Writes a message with level CRITICAL on the root logger. |
-| **`error(_message_)`** | Writes a message with level ERROR on the root logger. |
-| **`warning(_message_)`** | Writes a message with level WARNING on the root logger. |
-| **`info(_message_)`** | Writes a message with level INFO on the root logger. |
-| **`debug(_message_)`** | Writes a message with level DEBUG on the root logger. |
+| `critical(_message_)` | Writes a message with level CRITICAL on the root logger. |
+| `error(_message_)` | Writes a message with level ERROR on the root logger. |
+| `warning(_message_)` | Writes a message with level WARNING on the root logger. |
+| `info(_message_)` | Writes a message with level INFO on the root logger. |
+| `debug(_message_)` | Writes a message with level DEBUG on the root logger. |
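
For example, a minimal HTTP-triggered sketch (assuming the same `req` binding as the earlier examples) might use several of these levels:

```python
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.debug('Query parameters received: %s', dict(req.params))

    name = req.params.get('name')
    if not name:
        logging.warning("No 'name' query parameter was provided.")
        return func.HttpResponse("Pass a 'name' query parameter.", status_code=400)

    try:
        greeting = f'Hello, {name}!'
        logging.info('Processed request for %s.', name)
        return func.HttpResponse(greeting)
    except Exception:
        logging.error('Unexpected failure while building the response.', exc_info=True)
        raise
```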
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.md). ### Log custom telemetry
-By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). This extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
+By default, the Azure Functions runtime collects logs and other telemetry data that your functions generate. This telemetry ends up as traces in Application Insights. By default, [triggers and bindings](functions-triggers-bindings.md#supported-bindings) also collect request and dependency telemetry for certain Azure services.
+
+To collect custom request and custom dependency telemetry outside bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). The Azure Functions extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
>[!NOTE] >To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1`. You also need to switch to using the Application Insights connection string by adding the [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string) setting to your [application settings](functions-how-to-use-azure-function-app-settings.md#settings), if it's not already there.
def main(req, context):
}) ```
-## HTTP Trigger and bindings
+## HTTP trigger and bindings
+
+The HTTP trigger is defined in the *function.json* file. The `name` parameter of the binding must match the named parameter in the function.
-The HTTP trigger is defined in the function.json file. The `name` of the binding must match the named parameter in the function.
-In the previous examples, a binding name `req` is used. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned.
+The previous examples use the binding name `req`. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned.
-From the [HttpRequest] object, you can get request headers, query parameters, route parameters, and the message body.
+From the `HttpRequest` object, you can get request headers, query parameters, route parameters, and the message body.
-The following example is from the [HTTP trigger template for Python](https://github.com/Azure/azure-functions-templates/tree/dev/Functions.Templates/Templates/HttpTrigger-Python).
+The following example is from the [HTTP trigger template for Python](https://github.com/Azure/azure-functions-templates/tree/dev/Functions.Templates/Templates/HttpTrigger-Python):
```python def main(req: func.HttpRequest) -> func.HttpResponse:
def main(req: func.HttpRequest) -> func.HttpResponse:
) ```
-In this function, the value of the `name` query parameter is obtained from the `params` parameter of the [HttpRequest] object. The JSON-encoded message body is read using the `get_json` method.
+In this function, the value of the `name` query parameter is obtained from the `params` parameter of the `HttpRequest` object. The JSON-encoded message body is read using the `get_json` method.
-Likewise, you can set the `status_code` and `headers` for the response message in the returned [HttpResponse] object.
+Likewise, you can set the `status_code` and `headers` information for the response message in the returned `HttpResponse` object.
## Web frameworks You can use WSGI and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
-First, the function.json file must be updated to include a `route` in the HTTP trigger, as shown in the following example:
+First, the *function.json* file must be updated to include `route` in the HTTP trigger, as shown in the following example:
```json {
First, the function.json file must be updated to include a `route` in the HTTP t
} ```
-The host.json file must also be updated to include an HTTP `routePrefix`, as shown in the following example.
+The *host.json* file must also be updated to include an HTTP `routePrefix` value, as shown in the following example:
```json {
The host.json file must also be updated to include an HTTP `routePrefix`, as sho
} ```
-Update the Python code file `init.py`, depending on the interface used by your framework. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask:
+Update the Python code file *init.py*, based on the interface that your framework uses. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask:
# [ASGI](#tab/asgi)
def main(req: func.HttpRequest, context) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.') return func.WsgiMiddleware(app).handle(req, context) ```
-For a full example, see [Using Flask Framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
+For a full example, see [Using the Flask framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
+## Scaling and performance
-## Scaling and Performance
-
-For scaling and performance best practices for Python function apps, see the [Python scale and performance article](python-scale-performance-reference.md).
+For scaling and performance best practices for Python function apps, see [Improve throughput performance of Python apps in Azure Functions](python-scale-performance-reference.md).
## Context
def main(req: azure.functions.HttpRequest,
return f'{context.invocation_id}' ```
-The [**Context**](/python/api/azure-functions/azure.functions.context) class has the following string attributes:
+The [Context](/python/api/azure-functions/azure.functions.context) class has the following string attributes:
-`function_directory`
-The directory in which the function is running.
+- `function_directory`: Directory in which the function is running.
-`function_name`
-Name of the function.
+- `function_name`: Name of the function.
-`invocation_id`
-ID of the current function invocation.
+- `invocation_id`: ID of the current function invocation.
-`trace_context`
-Context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/).
+- `trace_context`: Context for distributed tracing. For more information, see [Trace Context](https://www.w3.org/TR/trace-context/) on the W3C website.
-`retry_context`
-Context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies).
+- `retry_context`: Context for retries to the function. For more information, see [Retry policies](./functions-bindings-errors.md#retry-policies).
## Global variables
-It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. In order to cache the results of an expensive computation, declare it as a global variable.
+It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. To cache the results of an expensive computation, declare it as a global variable:
```python CACHED_DATA = None
def main(req):
## Environment variables
-In Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code.
+In Azure Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code:
| Method | Description | | | |
-| **`os.environ["myAppSetting"]`** | Tries to get the application setting by key name, raising an error when unsuccessful. |
-| **`os.getenv("myAppSetting")`** | Tries to get the application setting by key name, returning null when unsuccessful. |
+| `os.environ["myAppSetting"]` | Tries to get the application setting by key name. It raises an error when unsuccessful. |
+| `os.getenv("myAppSetting")` | Tries to get the application setting by key name. It returns `None` when unsuccessful. |
Both of these ways require you to declare `import os`.
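
For example (a minimal sketch; `MY_API_BASE_URL` is a hypothetical app setting, while `AzureWebJobsStorage` is a standard Functions setting):

```python
import os

# Raises a KeyError if the setting isn't defined.
storage_connection = os.environ["AzureWebJobsStorage"]

# Returns None (or the supplied default) if the setting isn't defined.
api_base_url = os.getenv("MY_API_BASE_URL", "https://localhost:5001")
```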
For local development, application settings are [maintained in the local.setting
## Python version
-Azure Functions supports the following Python versions:
+Azure Functions supports the following Python versions. These are official Python distributions.
-| Functions version | Python<sup>*</sup> versions |
+| Functions version | Python versions |
| -- | -- | | 4.x | 3.9<br/> 3.8<br/>3.7 | | 3.x | 3.9<br/> 3.8<br/>3.7<br/>3.6 | | 2.x | 3.7<br/>3.6 |
-<sup>*</sup>Official Python distributions
-
-To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The Functions runtime version is set by the `--functions-version` option. The Python version is set when the function app is created and can't be changed.
+To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The `--functions-version` option sets the Azure Functions runtime version. The Python version is set when the function app is created and can't be changed.
The runtime uses the available Python version when you run it locally. ### Changing Python version
-To set a Python function app to a specific language version, you need to specify the language and the version of the language in `LinuxFxVersion` field in site config. For example, to change Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
+To set a Python function app to a specific language version, you need to specify the language and the version of the language in the `linuxFxVersion` field in the site configuration. For example, to change a Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
-To learn more about Azure Functions runtime support policy, refer to this [article](./language-support-policy.md)
+To learn more about the Azure Functions runtime support policy, see [Language runtime support policy](./language-support-policy.md).
-To see the full list of supported Python versions functions apps, refer to this [article](./supported-languages.md)
+To see the full list of supported Python versions for function apps, see [Supported languages in Azure Functions](./supported-languages.md).
# [Azure CLI](#tab/azurecli-linux)
-You can view and set the `linuxFxVersion` from the Azure CLI.
-
-Using the Azure CLI, view the current `linuxFxVersion` with the [az functionapp config show](/cli/azure/functionapp/config) command.
+You can view and set `linuxFxVersion` from the Azure CLI by using the [az functionapp config show](/cli/azure/functionapp/config) command. Replace `<function_app>` with the name of your function app. Replace `<my_resource_group>` with the name of the resource group for your function app.
```azurecli-interactive az functionapp config show --name <function_app> \ --resource-group <my_resource_group> ```
-In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app.
-
-You see the `linuxFxVersion` in the following output, which has been truncated for clarity:
+You see `linuxFxVersion` in the following output, which has been truncated for clarity:
```output {
You see the `linuxFxVersion` in the following output, which has been truncated f
} ```
-You can update the `linuxFxVersion` setting in the function app with the [az functionapp config set](/cli/azure/functionapp/config) command.
+You can update the `linuxFxVersion` setting in the function app by using the [az functionapp config set](/cli/azure/functionapp/config) command. In the following code:
+
+- Replace `<FUNCTION_APP>` with the name of your function app.
+- Replace `<RESOURCE_GROUP>` with the name of the resource group for your function app.
+- Replace `<LINUX_FX_VERSION>` with the Python version that you want to use, prefixed by `python|`. For example: `python|3.9`.
```azurecli-interactive az functionapp config set --name <FUNCTION_APP> \
az functionapp config set --name <FUNCTION_APP> \
--linux-fx-version <LINUX_FX_VERSION> ```
-Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<LINUX_FX_VERSION>` with the Python version you want to use, prefixed by `python|` for example, `python|3.9`.
-
-You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az-login) to sign in.
+You can run the command from [Azure Cloud Shell](../cloud-shell/overview.md) by selecting **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to run the command after you use [az login](/cli/azure/reference-index#az-login) to sign in.
-The function app restarts after the change is made to the site config.
+The function app restarts after you change the site configuration.
## Package management
-When developing locally using the Azure Functions Core Tools or Visual Studio Code, add the names and versions of the required packages to the `requirements.txt` file and install them using `pip`.
+When you're developing locally by using the Azure Functions Core Tools or Visual Studio Code, add the names and versions of the required packages to the *requirements.txt* file and install them by using `pip`.
-For example, the following requirements file and pip command can be used to install the `requests` package from PyPI.
+For example, you can use the following requirements file and `pip` command to install the `requests` package from PyPI:
```txt requests==2.19.1
pip install -r requirements.txt
## Publishing to Azure
-When you're ready to publish, make sure that all your publicly available dependencies are listed in the requirements.txt file. You can locate this file at the root of your project directory.
+When you're ready to publish, make sure that all your publicly available dependencies are listed in the *requirements.txt* file. This file is at the root of your project directory.
-Project files and folders that are excluded from publishing, including the virtual environment folder, you can find them in the root directory of your project.
+You can also find project files and folders that are excluded from publishing, including the virtual environment folder, in the root directory of your project.
-There are three build actions supported for publishing your Python project to Azure: remote build, local build, and builds using custom dependencies.
+Three build actions are supported for publishing your Python project to Azure: remote build, local build, and builds that use custom dependencies.
-You can also use Azure Pipelines to build your dependencies and publish using continuous delivery (CD). To learn more, see [Continuous delivery by using Azure DevOps](functions-how-to-azure-devops.md).
+You can also use Azure Pipelines to build your dependencies and publish by using continuous delivery (CD). To learn more, see [Continuous delivery by using Azure DevOps](functions-how-to-azure-devops.md).
### Remote build
-When you use remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use remote build when developing Python apps on Windows. If your project has custom dependencies, you can [use remote build with extra index URL](#remote-build-with-extra-index-url).
+When you use a remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use a remote build when you're developing Python apps on Windows. If your project has custom dependencies, you can [use a remote build with an extra index URL](#remote-build-with-extra-index-url).
-Dependencies are obtained remotely based on the contents of the requirements.txt file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, the Azure Functions Core Tools requests a remote build when you use the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish your Python project to Azure.
+Dependencies are obtained remotely based on the contents of the *requirements.txt* file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, Azure Functions Core Tools requests a remote build when you use the following [func azure functionapp publish](functions-run-local.md#publish) command to publish your Python project to Azure. Replace `<APP_NAME>` with the name of your function app in Azure.
```bash func azure functionapp publish <APP_NAME> ```
-Remember to replace `<APP_NAME>` with the name of your function app in Azure.
-
-The [Azure Functions Extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) also requests a remote build by default.
+The [Azure Functions extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) also requests a remote build by default.
### Local build
-Dependencies are obtained locally based on the contents of the requirements.txt file. You can prevent doing a remote build by using the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish with a local build.
+Dependencies are obtained locally based on the contents of the *requirements.txt* file. You can prevent a remote build by using the following [func azure functionapp publish](functions-run-local.md#publish) command to publish with a local build. Replace `<APP_NAME>` with the name of your function app in Azure.
```command func azure functionapp publish <APP_NAME> --build local ```
-Remember to replace `<APP_NAME>` with the name of your function app in Azure.
-
-When you use the `--build local` option, project dependencies are read from the requirements.txt file and those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in a larger deployment package being uploaded to Azure. If for some reason, you can't get requirements.txt file by Core Tools, you must use the custom dependencies option for publishing.
+When you use the `--build local` option, project dependencies are read from the *requirements.txt* file. Those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in the upload of a larger deployment package to Azure. If you can't get the *requirements.txt* file by using Core Tools, you must use the custom dependencies option for publishing.
-We don't recommend using local builds when developing locally on Windows.
+We don't recommend using local builds when you're developing locally on Windows.
### Custom dependencies
-When your project has dependencies not found in the [Python Package Index](https://pypi.org/), there are two ways to build the project. The build method depends on how you build the project.
+When your project has dependencies not found in the [Python Package Index](https://pypi.org/), there are two ways to build the project.
#### Remote build with extra index URL
-When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named `PIP_EXTRA_INDEX_URL`. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run `pip install` using the `--extra-index-url` option. To learn more, see the [Python pip install documentation](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
+When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named `PIP_EXTRA_INDEX_URL`. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run `pip install` with the `--extra-index-url` option. To learn more, see the [Python pip install documentation](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
You can also use basic authentication credentials with your extra package index URLs. To learn more, see [Basic authentication credentials](https://pip.pypa.io/en/stable/user_guide/#basic-authentication-credentials) in Python documentation.
-#### Install local packages
+#### Installing local packages
-If your project uses packages not publicly available to our tools, you can make them available to your app by putting them in the \_\_app\_\_/.python_packages directory. Before publishing, run the following command to install the dependencies locally:
+If your project uses packages that aren't publicly available, you can make them available to your app by putting them in the *\_\_app\_\_/.python_packages* directory. Before publishing, run the following command to install the dependencies locally:
```command pip install --target="<PROJECT_DIR>/.python_packages/lib/site-packages" -r requirements.txt ```
-When using custom dependencies, you should use the `--no-build` publishing option, since you've already installed the dependencies into the project folder.
+When you're using custom dependencies, use the following `--no-build` publishing option because you've already installed the dependencies into the project folder. Replace `<APP_NAME>` with the name of your function app in Azure.
```command func azure functionapp publish <APP_NAME> --no-build ```
-Remember to replace `<APP_NAME>` with the name of your function app in Azure.
-
-## Unit Testing
+## Unit testing
-Functions written in Python can be tested like other Python code using standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the `azure.functions` package. Since the [`azure.functions`](https://pypi.org/project/azure-functions/) package isn't immediately available, be sure to install it via your `requirements.txt` file as described in the [package management](#package-management) section above.
+You can test functions written in Python the same way that you test other Python code: through standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the [azure.functions](https://pypi.org/project/azure-functions/) package. Because the `azure.functions` package isn't immediately available, be sure to install it via your *requirements.txt* file as described in the earlier [Package management](#package-management) section.
-Take *my_second_function* as an example, following is a mock test of an HTTP triggered function:
+Take *my_second_function* as an example. The following is a mock test of an HTTP-triggered function.
-First we need to create *<project_root>/my_second_function/function.json* file and define this function as an http trigger.
+First, to create the *<project_root>/my_second_function/function.json* file and define this function as an HTTP trigger, use the following code:
```json {
First we need to create *<project_root>/my_second_function/function.json* file a
} ```
-Now, we can implement the *my_second_function* and the *shared_code.my_second_helper_function*.
+Now, you can implement *my_second_function* and *shared_code.my_second_helper_function*:
```python # <project_root>/my_second_function/__init__.py
import logging
# Use absolute import to resolve shared_code modules from shared_code import my_second_helper_function
-# Define an http trigger which accepts ?value=<int> query parameter
+# Define an HTTP trigger that accepts the ?value=<int> query parameter
# Double the value and return the result in HttpResponse def main(req: func.HttpRequest) -> func.HttpResponse: logging.info('Executing my_second_function.')
def double(value: int) -> int:
return value * 2 ```
-We can start writing test cases for our http trigger.
+You can start writing test cases for your HTTP trigger:
```python # <project_root>/tests/test_my_second_function.py
class TestFunction(unittest.TestCase):
) ```
-Inside your `.venv` Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
+Inside your *.venv* Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
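For reference, here's a minimal, self-contained sketch of such a mock test. It's an assumption-based illustration: it assumes the doubling function shown earlier is importable as `my_second_function.main` and that the doubled value appears in the HTTP response body, so adjust the import path and assertions to match your project.

```python
# Illustrative sketch only; file name, import path, and assertions are assumptions.
import unittest

import azure.functions as func

from my_second_function import main  # assumes <project_root> is on sys.path


class TestMySecondFunctionSketch(unittest.TestCase):
    def test_doubles_query_value(self):
        # Build a mock HTTP request carrying ?value=21 in the query string.
        req = func.HttpRequest(
            method='GET',
            body=None,
            url='/api/my_second_function',
            params={'value': '21'},
        )

        # Call the function entry point directly with the mock request.
        resp = main(req)

        # Assumption: the function returns 200 and the doubled value in the body.
        self.assertEqual(resp.status_code, 200)
        self.assertIn('42', resp.get_body().decode())


if __name__ == '__main__':
    unittest.main()
```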
## Temporary files
-The `tempfile.gettempdir()` method returns a temporary folder, which on Linux is `/tmp`. Your application can use this directory to store temporary files generated and used by your functions during execution.
+The `tempfile.gettempdir()` method returns a temporary folder, which on Linux is */tmp*. Your application can use this directory to store temporary files that your functions generate and use during execution.
> [!IMPORTANT]
-> Files written to the temporary directory aren't guaranteed to persist across invocations. During scale out, temporary files aren't shared between instances.
+> Files written to the temporary directory aren't guaranteed to persist across invocations. During scale-out, temporary files aren't shared between instances.
-The following example creates a named temporary file in the temporary directory (`/tmp`):
+The following example creates a named temporary file in the temporary directory (*/tmp*):
```python import logging
from os import listdir
filesDirListInTemp = listdir(tempFilePath) ```
-We recommend that you maintain your tests in a folder separate from the project folder. This action keeps you from deploying test code with your app.
+We recommend that you maintain your tests in a folder that's separate from the project folder. This action keeps you from deploying test code with your app.
## Preinstalled libraries
-There are a few libraries that come with the Python Functions runtime.
+A few libraries come with the runtime for Azure Functions on Python.
### Python Standard Library
-The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On the Unix-based systems, they're provided by package collections.
+The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On Unix systems, package collections provide them.
-To view the full details of the list of these libraries, see the links below:
+To view the full details of these libraries, use these links:
* [Python 3.6 Standard Library](https://docs.python.org/3.6/library/) * [Python 3.7 Standard Library](https://docs.python.org/3.7/library/) * [Python 3.8 Standard Library](https://docs.python.org/3.8/library/) * [Python 3.9 Standard Library](https://docs.python.org/3.9/library/)
-### Azure Functions Python worker dependencies
+### Worker dependencies
-The Functions Python worker requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they may not be available to your code when running outside of Azure Functions. You can find a detailed list of dependencies in the **install\_requires** section in the [setup.py](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file.
+The Python worker for Azure Functions requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they might not be available to your code when you're running outside Azure Functions. You can find a detailed list of dependencies in the `install_requires` section in the [setup.py](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file.
> [!NOTE]
-> If your function app's requirements.txt contains an `azure-functions-worker` entry, remove it. The functions worker is automatically managed by Azure Functions platform, and we regularly update it with new features and bug fixes. Manually installing an old version of worker in requirements.txt may cause unexpected issues.
+> If your function app's *requirements.txt* file contains an `azure-functions-worker` entry, remove it. The Azure Functions platform automatically manages this worker, and we regularly update it with new features and bug fixes. Manually installing an old version of the worker in *requirements.txt* might cause unexpected problems.
> [!NOTE]
-> If your package contains certain libraries that may collide with worker's dependencies (e.g. protobuf, tensorflow, grpcio), please configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring worker's dependencies. This feature is in preview.
+> If your package contains certain libraries that might collide with the worker's dependencies (for example, protobuf, TensorFlow, or grpcio), configure [PYTHON_ISOLATE_WORKER_DEPENDENCIES](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring to the worker's dependencies. This feature is in preview.
-### Azure Functions Python library
+### Python library for Azure Functions
-Every Python worker update includes a new version of [Azure Functions Python library (azure.functions)](https://github.com/Azure/azure-functions-python-library). This approach makes it easier to continuously update your Python function apps, because each update is backwards-compatible. A list of releases of this library can be found in [azure-functions PyPi](https://pypi.org/project/azure-functions/#history).
+Every Python worker update includes a new version of the [Python library for Azure Functions (azure.functions)](https://github.com/Azure/azure-functions-python-library). This approach makes it easier to continuously update your Python function apps, because each update is backward compatible. You can find a list of releases of this library in the [azure-functions information on the PyPI website](https://pypi.org/project/azure-functions/#history).
-The runtime library version is fixed by Azure, and it can't be overridden by requirements.txt. The `azure-functions` entry in requirements.txt is only for linting and customer awareness.
+The runtime library version is fixed by Azure, and *requirements.txt* can't override it. The `azure-functions` entry in *requirements.txt* is only for linting and customer awareness.
-Use the following code to track the actual version of the Python Functions library in your runtime:
+Use the following code to track the version of the Python library for Azure Functions in your runtime:
```python getattr(azure.functions, '__version__', '< 1.2.1')
getattr(azure.functions, '__version__', '< 1.2.1')
### Runtime system libraries
-For a list of preinstalled system libraries in Python worker Docker images, see the links below:
+The following table lists preinstalled system libraries in Docker images for the Python worker:
| Functions runtime | Debian version | Python versions | ||||
Extensions are imported in your function code much like a standard Python librar
| Scope | Description | | | |
-| **Application-level** | When imported into any function trigger, the extension applies to every function execution in the app. |
-| **Function-level** | Execution is limited to only the specific function trigger into which it's imported. |
+| **Application level** | When the extension is imported into any function trigger, it applies to every function execution in the app. |
+| **Function level** | Execution is limited to only the specific function trigger into which it's imported. |
-Review the information for a given extension to learn more about the scope in which the extension runs.
+Review the information for an extension to learn more about the scope in which the extension runs.
-Extensions implement a Python worker extension interface. This action lets the Python worker process call into the extension code during the function execution lifecycle. To learn more, see [Creating extensions](#creating-extensions).
+Extensions implement a Python worker extension interface. This action lets the Python worker process call into the extension code during the function execution lifecycle.
### Using extensions You can use a Python worker extension library in your Python functions by following these basic steps:
-1. Add the extension package in the requirements.txt file for your project.
+1. Add the extension package in the *requirements.txt* file for your project.
1. Install the library into your app. 1. Add the application setting `PYTHON_ENABLE_WORKER_EXTENSIONS`:
- + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file).
- + Azure: add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings).
+ + To add the setting locally, add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file).
+ + To add the setting in Azure, add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings).
1. Import the extension module into your function trigger.
-1. Configure the extension instance, if needed. Configuration requirements should be called-out in the extension's documentation.
+1. Configure the extension instance, if needed. Configuration requirements should be called out in the extension's documentation.
> [!IMPORTANT]
-> Third-party Python worker extension libraries are not supported or warrantied by Microsoft. You must make sure that any extensions you use in your function app is trustworthy, and you bear the full risk of using a malicious or poorly written extension.
+> Microsoft doesn't support or warranty third-party Python worker extension libraries. Make sure that any extensions you use in your function app are trustworthy. You bear the full risk of using a malicious or poorly written extension.
-Third-parties should provide specific documentation on how to install and consume their specific extension in your function app. For a basic example of how to consume an extension, see [Consuming your extension](develop-python-worker-extensions.md#consume-your-extension-locally).
+Third parties should provide specific documentation on how to install and consume their specific extension in your function app. For a basic example of how to consume an extension, see [Consuming your extension](develop-python-worker-extensions.md#consume-your-extension-locally).
Here are examples of using extensions in a function app, by scope:
-# [Application-level](#tab/application-level)
+# [Application level](#tab/application-level)
```python # <project_root>/requirements.txt
AppExtension.configure(key=value)
def main(req, context): # Use context.app_ext_attributes here ```
-# [Function-level](#tab/function-level)
+# [Function level](#tab/function-level)
```python # <project_root>/requirements.txt function-level-extension==1.0.0
def main(req, context):
### Creating extensions
-Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer design, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
+Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
To learn how to create, package, publish, and consume a Python worker extension package, see [Develop Python worker extensions for Azure Functions](develop-python-worker-extensions.md).
An extension inherited from [`AppExtensionBase`](https://github.com/Azure/azure-
| Method | Description | | | |
-| **`init`** | Called after the extension is imported. |
-| **`configure`** | Called from function code when needed to configure the extension. |
-| **`post_function_load_app_level`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to local file in this directory fails. |
-| **`pre_invocation_app_level`** | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
-| **`post_invocation_app_level`** | Called right after the function execution completes. The function context, function invocation arguments, and the invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
+| `init` | Called after the extension is imported. |
+| `configure` | Called from function code, when needed, to configure the extension. |
+| `post_function_load_app_level` | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only. Any attempt to write to a local file in this directory fails. |
+| `pre_invocation_app_level` | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
+| `post_invocation_app_level` | Called right after the function execution finishes. The function context, function invocation arguments, and invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
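To make the application-level hooks concrete, here's a minimal sketch of an extension that inherits from `AppExtensionBase` and times every invocation in the app. It's loosely modeled on the timer example in the extension development guide; treat the exact hook signatures as illustrative, because they can vary slightly between library versions.

```python
# Illustrative sketch of an application-level extension; not an official sample.
import time
import typing
from logging import Logger

from azure.functions import AppExtensionBase, Context


class TimerExtensionSketch(AppExtensionBase):
    """Logs how long each function invocation in the app takes."""

    @classmethod
    def init(cls):
        # Called once, after the extension is imported.
        cls.start_times: typing.Dict[str, float] = {}

    @classmethod
    def pre_invocation_app_level(cls, logger: Logger, context: Context,
                                 func_args: typing.Dict[str, object],
                                 *args, **kwargs) -> None:
        # Called right before any function in the app is triggered.
        cls.start_times[context.invocation_id] = time.time()

    @classmethod
    def post_invocation_app_level(cls, logger: Logger, context: Context,
                                  func_args: typing.Dict[str, object],
                                  func_ret: typing.Optional[object],
                                  *args, **kwargs) -> None:
        # Called right after any function execution in the app finishes.
        started = cls.start_times.pop(context.invocation_id, time.time())
        logger.info('%s took %.3f seconds', context.function_name,
                    time.time() - started)
```

Importing this class into any function trigger applies it to every function execution in the app, which matches the application-level scope described earlier.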
#### Function-level extensions
An extension that inherits from [FuncExtensionBase](https://github.com/Azure/azu
| Method | Description | | | |
-| **`__init__`** | This method is the constructor of the extension. It's called when an extension instance is initialized in a specific function. When implementing this abstract method, you may want to accept a `filename` parameter and pass it to the parent's method `super().__init__(filename)` for proper extension registration. |
-| **`post_function_load`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to local file in this directory fails. |
-| **`pre_invocation`** | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
-| **`post_invocation`** | Called right after the function execution completes. The function context, function invocation arguments, and the invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
+| `__init__` | Called when an extension instance is initialized in a specific function. This method is the constructor of the extension. When you're implementing this abstract method, you might want to accept a `filename` parameter and pass it to the parent's `super().__init__(filename)` method for proper extension registration. |
+| `post_function_load` | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only. Any attempt to write to a local file in this directory fails. |
+| `pre_invocation` | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
+| `post_invocation` | Called right after the function execution finishes. The function context, function invocation arguments, and invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
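Here's a matching sketch of a function-level extension that inherits from `FuncExtensionBase`. Again, this is an illustration built from the hook descriptions in the table above rather than an official sample; the class name and logging behavior are assumptions.

```python
# Illustrative sketch of a function-level extension; not an official sample.
import typing
from logging import Logger

from azure.functions import Context, FuncExtensionBase


class RequestLoggerSketch(FuncExtensionBase):
    """Logs the start and end of a single function's invocations."""

    def __init__(self, file_name: str):
        # Pass the trigger's file name to the parent for proper registration.
        super().__init__(file_name)

    def pre_invocation(self, logger: Logger, context: Context,
                       func_args: typing.Dict[str, object],
                       *args, **kwargs) -> None:
        logger.info('Starting %s (invocation %s)',
                    context.function_name, context.invocation_id)

    def post_invocation(self, logger: Logger, context: Context,
                        func_args: typing.Dict[str, object],
                        func_ret: typing.Optional[object],
                        *args, **kwargs) -> None:
        logger.info('Finished %s (invocation %s)',
                    context.function_name, context.invocation_id)
```

In the one function trigger where you want this behavior, you'd create an instance once, for example `req_logger = RequestLoggerSketch(__file__)`, so the extension runs only for that function.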
## Cross-origin resource sharing
By default, a host instance for Python can process only one function invocation
## <a name="shared-memory"></a>Shared memory (preview)
-To improve throughput, Functions let your out-of-process Python language worker share memory with the Functions host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
+To improve throughput, Azure Functions lets your out-of-process Python language worker share memory with the host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
-For example, you might enable shared memory to reduce bottlenecks when using Blob storage bindings to transfer payloads larger than 1 MB.
+For example, you might enable shared memory to reduce bottlenecks when using Azure Blob Storage bindings to transfer payloads larger than 1 MB.
-This functionality is available only for function apps running in Premium and Dedicated (App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory).
+This functionality is available only for function apps running in Premium and Dedicated (Azure App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory).
-## Known issues and FAQ
+## Known issues and FAQs
-Following is a list of troubleshooting guides for common issues:
+Here's a list of troubleshooting guides for common issues:
* [ModuleNotFoundError and ImportError](recover-python-functions.md#troubleshoot-modulenotfounderror) * [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc)
-All known issues and feature requests are tracked using [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
+All known issues and feature requests are tracked through the [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
## Next steps
For more information, see the following resources:
* [Azure Functions package API documentation](/python/api/azure-functions/azure.functions) * [Best practices for Azure Functions](functions-best-practices.md) * [Azure Functions triggers and bindings](functions-triggers-bindings.md)
-* [Blob storage bindings](functions-bindings-storage-blob.md)
-* [HTTP and Webhook bindings](functions-bindings-http-webhook.md)
-* [Queue storage bindings](functions-bindings-storage-queue.md)
+* [Blob Storage bindings](functions-bindings-storage-blob.md)
+* [HTTP and webhook bindings](functions-bindings-http-webhook.md)
+* [Azure Queue Storage bindings](functions-bindings-storage-queue.md)
* [Timer trigger](functions-bindings-timer.md) [Having issues? Let us know.](https://aka.ms/python-functions-ref-survey)
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
You can see an example of the Walls layer in the [sample Drawing package](https:
You can include a DWG layer that contains doors. Each door must overlap the edge of a unit from the Unit layer.
-Door openings in an Azure Maps dataset are represented as a single-line segment that overlaps multiple unit boundaries. The following images show how to convert geometry in the Door layer to opening features in a dataset.
+Door openings in an Azure Maps dataset are represented as a single-line segment that overlaps multiple unit boundaries. The following images show how Azure Maps converts Door layer geometry into opening features in a dataset.
![Four graphics that show the steps to generate openings](./media/drawing-requirements/opening-steps.png)
azure-monitor Agent Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md
# Log Analytics agent data sources in Azure Monitor The data that Azure Monitor collects from virtual machines with the legacy [Log Analytics](./log-analytics-agent.md) agent is defined by the data sources that you configure on the [Log Analytics workspace](../logs/data-platform-logs.md). Each data source creates records of a particular type with each type having its own set of properties.
-> [!IMPORTANT]
-> This article only covers data sources for the legacy [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. This agent **will be deprecated by August, 2024**. Please plan to [migrate to Azure Monitor agent](./azure-monitor-agent-migration.md) before that. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](agents-overview.md) for a list of the available agents and the data they can collect.
- ![Log data collection](media/agent-data-sources/overview.png) + > [!IMPORTANT] > The data sources described in this article apply only to virtual machines running the Log Analytics agent.
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
This article provides details on installing the Log Analytics agent on Linux computers using the following methods: * [Install the agent for Linux using a wrapper-script](#install-the-agent-using-wrapper-script) hosted on GitHub. This is the recommended method to install and upgrade the agent when the computer has connectivity with the Internet, directly or through a proxy server.
-* [Manually download and install](#install-the-agent-manually) the agent. This is required when the Linux computer does not have access to the Internet and will be communicating with Azure Monitor or Azure Automation through the [Log Analytics gateway](./gateway.md).
+* [Manually download and install](#install-the-agent-manually) the agent. This is required when the Linux computer doesn't have access to the Internet and will be communicating with Azure Monitor or Azure Automation through the [Log Analytics gateway](./gateway.md).
The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
->[!IMPORTANT]
-> The Log Analytics agents are on a **deprecation path** and will no longer be supported after **August 31, 2024**. If you use the Log Analytics agents to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
-- ## Supported operating systems
See [Overview of Azure Monitor agents](agents-overview.md#supported-operating-sy
>[!NOTE] >Running the Log Analytics Linux Agent in containers is not supported. If you need to monitor containers, please leverage the [Container Monitoring solution](../containers/containers.md) for Docker hosts or [Container insights](../containers/container-insights-overview.md) for Kubernetes.
-Starting with versions released after August 2018, we are making the following changes to our support model:
+Starting with versions released after August 2018, we're making the following changes to our support model:
* Only the server versions are supported, not client.
-* Focus support on any of the [Azure Linux Endorsed distros](../../virtual-machines/linux/endorsed-distros.md). Note that there may be some delay between a new distro/version being Azure Linux Endorsed and it being supported for the Log Analytics Linux agent.
+* Focus support on any of the [Azure Linux Endorsed distros](../../virtual-machines/linux/endorsed-distros.md). There may be some delay between a new distro/version being Azure Linux Endorsed and it being supported for the Log Analytics Linux agent.
* All minor releases are supported for each major version listed.
-* Versions that have passed their manufacturer's end-of-support date are not supported.
-* Only support VM images; containers, even those derived from official distro publishers' images, are not supported.
-* New versions of AMI are not supported.
+* Versions that have passed their manufacturer's end-of-support date aren't supported.
+* Only support VM images; containers, even those derived from official distro publishers' images, aren't supported.
+* New versions of AMI aren't supported.
* Only versions that run OpenSSL 1.x by default are supported. >[!NOTE]
Starting with versions released after August 2018, we are making the following c
Starting from Agent version 1.13.27, the Linux Agent will support both Python 2 and 3. We always recommend using the latest agent.
-If you are using an older version of the agent, you must have the Virtual Machine use Python 2 by default. If your virtual machine is using a distro that doesn't include Python 2 by default then you must install it. The following sample commands will install Python 2 on different distros.
+If you're using an older version of the agent, you must have the Virtual Machine use Python 2 by default. If your virtual machine is using a distro that doesn't include Python 2 by default, then you must install it. The following sample commands will install Python 2 on different distros.
- Red Hat, CentOS, Oracle: `yum install -y python2` - Ubuntu, Debian: `apt-get install -y python2` - SUSE: `zypper install -y python2`
-Again, only if you are using an older version of the agent, the python2 executable must be aliased to *python*. Following is one method that you can use to set this alias:
+Again, only if you're using an older version of the agent, the python2 executable must be aliased to *python*. Following is one method that you can use to set this alias:
1. Run the following command to remove any existing aliases.
The following are currently supported:
- FIPS - SELinux (Marketplace images for CentOS and RHEL with their default settings)
-The following are not supported:
+The following aren't supported:
- CIS - SELinux (custom hardening like MLS)
-CIS and SELinux hardening support is planned for [Azure Monitoring Agent](./azure-monitor-agent-overview.md). Further hardening and customization methods are not supported nor planned for OMS Agent. For instance, OS images like GitHub Enterprise Server which include customizations such as limitations to user account privileges are not supported.
+CIS and SELinux hardening support is planned for [Azure Monitoring Agent](./azure-monitor-agent-overview.md). Further hardening and customization methods aren't supported nor planned for OMS Agent. For instance, OS images like GitHub Enterprise Server which include customizations such as limitations to user account privileges aren't supported.
## Agent prerequisites
See [Log Analytics agent overview](./log-analytics-agent.md#network-requirements
## Workspace ID and key
-Regardless of the installation method used, you will require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then select **Agents management** in the **Settings** section.
+Regardless of the installation method used, you'll require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then select **Agents management** in the **Settings** section.
[![Workspace details](media/log-analytics-agent/workspace-details.png)](media/log-analytics-agent/workspace-details.png#lightbox)
docker-cimprov | 1.0.0 | Docker provider for OMI. Only installed if Docker is de
### Agent installation details
-After installing the Log Analytics agent for Linux packages, the following additional system-wide configuration changes are applied. These artifacts are removed when the omsagent package is uninstalled.
+After installing the Log Analytics agent for Linux packages, the following system-wide configuration changes are also applied. These artifacts are removed when the omsagent package is uninstalled.
* A non-privileged user named: `omsagent` is created. The daemon runs under this credential.
-* A sudoers *include* file is created in `/etc/sudoers.d/omsagent`. This authorizes `omsagent` to restart the syslog and omsagent daemons. If sudo *include* directives are not supported in the installed version of sudo, these entries will be written to `/etc/sudoers`.
+* A sudoers *include* file is created in `/etc/sudoers.d/omsagent`. This authorizes `omsagent` to restart the syslog and omsagent daemons. If sudo *include* directives aren't supported in the installed version of sudo, these entries will be written to `/etc/sudoers`.
* The syslog configuration is modified to forward a subset of events to the agent. For more information, see [Configure Syslog data collection](data-sources-syslog.md). On a monitored Linux computer, the agent is listed as `omsagent`. `omsconfig` is the Log Analytics agent for Linux configuration agent that looks for new portal side configuration every 5 minutes. The new and updated configuration is applied to the agent configuration files located at `/etc/opt/microsoft/omsagent/conf/omsagent.conf`.
Upgrading from a previous version, starting with version 1.0.0-47, is supported
## Cache information Data from the Log Analytics agent for Linux is cached on the local machine at *%STATE_DIR_WS%/out_oms_common*.buffer* before it's sent to Azure Monitor. Custom log data is buffered in *%STATE_DIR_WS%/out_oms_blob*.buffer*. The path may be different for some [solutions and data types](https://github.com/microsoft/OMS-Agent-for-Linux/search?utf8=%E2%9C%93&q=+buffer_path&type=).
-The agent attempts to upload every 20 seconds. If it fails, it will wait an exponentially increasing length of time until it succeeds: 30 seconds before the second attempt, 60 seconds before the third, 120 seconds ... and so on up to a maximum of 16 minutes between retries until it successfully connects again. The agent will retry up to 6 times for a given chunk of data before discarding and moving to the next one. This continues until the agent can successfully upload again. This means that data may be buffered up to approximately 30 minutes before being discarded.
+The agent attempts to upload every 20 seconds. If it fails, it waits an exponentially increasing length of time until it succeeds: 30 seconds before the second attempt, 60 seconds before the third, 120 seconds, and so on, up to a maximum of 16 minutes between retries until it successfully connects again. The agent retries up to 6 times for a given chunk of data before discarding and moving to the next one. This continues until the agent can successfully upload again. This means that data may be buffered up to approximately 30 minutes before being discarded.
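For illustration only, the following sketch models the retry schedule just described. It isn't the agent's code; it simply shows why a chunk of data can sit in the buffer for roughly 30 minutes before being discarded.

```python
# Illustrative model of the documented retry schedule; not agent source code.
MAX_RETRIES = 6              # retries per chunk before the chunk is discarded
MAX_WAIT_SECONDS = 16 * 60   # documented cap between retries


def wait_before_retry(attempt: int) -> int:
    """Seconds to wait before retry number `attempt` (1-based): 30, 60, 120, ..."""
    return min(30 * 2 ** (attempt - 1), MAX_WAIT_SECONDS)


waits = [wait_before_retry(n) for n in range(1, MAX_RETRIES + 1)]
print(waits)            # [30, 60, 120, 240, 480, 960]
print(sum(waits) / 60)  # 31.5 -> roughly 30 minutes before a chunk is dropped
```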
The default cache size is 10 MB but can be modified in the [omsagent.conf file](https://github.com/microsoft/OMS-Agent-for-Linux/blob/e2239a0714ae5ab5feddcc48aa7a4c4f971417d4/installer/conf/omsagent.conf).
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
After initial deployment of the Log Analytics Windows or Linux agent in Azure Monitor, you may need to reconfigure the agent, upgrade it, or remove it from the computer if it has reached the retirement stage in its lifecycle. You can easily manage these routine maintenance tasks manually or through automation, which reduces both operational error and expenses. + ## Upgrading agent The Log Analytics agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on the deployment scenario and environment the VM is running in. The following methods can be used to upgrade the agent.
azure-monitor Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md
This article provides details on installing the Log Analytics agent on Windows c
The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
->[!IMPORTANT]
->The Log Analytics agents are on a **deprecation path** and will no longer be supported after **August 31, 2024**. If you use the Log Analytics agents to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
> [!NOTE] > Installing the Log Analytics agent will typically not require you to restart the machine.
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
You can use data collection endpoints to enable the Azure Monitor agent to commu
![Data collection endpoint network isolation](media/azure-monitor-agent-dce/data-collection-endpoint-network-isolation.png) ## Next steps-- [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal)
+- [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association)
- [Add endpoint to AMPLS resource](../logs/private-link-configure.md#connect-azure-monitor-resources)
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
The following prerequisites must be met prior to installing the Azure Monitor ag
- **Authentication**: [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) must be enabled on Azure virtual machines. Both system-assigned and user-assigned managed identities are supported. - **User-assigned**: This is recommended for large scale deployments, configurable via [built-in Azure policies](#using-azure-policy). It can be created once and shared across multiple VMs, and is thus more scalable than system-assigned. - **System-assigned**: This is suited for initial testing or small deployments. When used at scale (for example, for all VMs in a subscription) it results in substantial number of identities created (and deleted) in Azure AD (Azure Active Directory). To avoid this churn of identities, it is recommended to use user-assigned managed identities instead. **For Arc-enabled servers, system-assigned managed identity is enabled automatically** (as soon as you install the Arc agent) as it's the only supported type for Arc-enabled servers.
- - This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal).
+ - This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association).
- **Networking**: The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. Additionally, the virtual machine must have access to the following HTTPS endpoints: - global.handler.control.monitor.azure.com - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.azure.com)
The following prerequisites must be met prior to installing the Azure Monitor ag
## Using the Azure portal ### Install
-To install the Azure Monitor agent using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal) in the Azure portal. This not only creates the rule, but it also associates it to the selected resources and installs the Azure Monitor agent on them if not already installed.
+To install the Azure Monitor agent using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) in the Azure portal. This not only creates the rule, but it also associates it to the selected resources and installs the Azure Monitor agent on them if not already installed.
### Uninstall To uninstall the Azure Monitor agent using the Azure portal, navigate to your virtual machine, scale set or Arc-enabled server, select the **Extensions** tab and click on **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that pops up, click **Uninstall**.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
The following tables show gap analyses for the **log types** that are currently
## Test migration by using the Azure portal To ensure safe deployment during migration, you should begin testing with a few resources in your nonproduction environment that are running the existing Log Analytics agent. After you can validate the data collected on these test resources, roll out to production by following the same steps.
-See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal) to start collecting some of the existing data types. Once you validate data is flowing as expected with the Azure Monitor agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
+See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to start collecting some of the existing data types. Once you validate data is flowing as expected with the Azure Monitor agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
## At-scale migration using Azure Policy
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Here is a comparison between client installer and VM extension for Azure Monitor
- `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.azure.com) - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opsinsights.azure.com) (If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
-6. Existing data collection rule(s) you wish to associate with the devices. If it doesn't exist already, [follow the guidance here to create data collection rule(s)](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi). **Do not associate the rule to any resources yet**.
+6. Existing data collection rule(s) you wish to associate with the devices. If they don't already exist, [follow the guidance here to create data collection rule(s)](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association). **Do not associate the rules to any resources yet**.
## Install the agent 1. Download the Windows MSI installer for the agent using [this link](https://go.microsoft.com/fwlink/?linkid=2192409). You can also download it from **Monitor** > **Data Collection Rules** > **Create** experience on Azure portal (shown below):
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
#### 3. Associate DCR to Monitored Object
-Now we associate the Data Collection Rules (DCR) to the Monitored Object by creating a Data Collection Rule Associations. If you haven't already, [follow instructions here](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi) to create data collection rule(s) first.
+Now we associate the Data Collection Rules (DCRs) with the Monitored Object by creating Data Collection Rule Associations. If you haven't already, [follow instructions here](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to create data collection rules first.
**Permissions required**: Anyone who has ΓÇÿMonitored Object ContributorΓÇÖ at an appropriate scope can perform this operation, as assigned in step 1. **Request URI**
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Title: Configure data collection for the Azure Monitor agent
-description: Describes how to create a data collection rule to collect events and performance data from virtual machines using the Azure Monitor agent.
+ Title: Monitor data from virtual machines with Azure Monitor agent
+description: Describes how to collect events and performance data from virtual machines using the Azure Monitor agent.
Previously updated : 03/16/2022 Last updated : 06/23/2022+++
-# Configure data collection for the Azure Monitor agent
-This article describes how to create a [data collection rule](../essentials/data-collection-rule-overview.md) to collect events and performance counters from virtual machines using the Azure Monitor agent. The data collection rule defines data coming into Azure Monitor and specify where it should be sent.
+# Collect data from virtual machines with the Azure Monitor agent
-> [!NOTE]
-> This article describes how to configure data for virtual machines with the Azure Monitor agent only.
+This article describes how to collect events and performance counters from virtual machines using the Azure Monitor agent.
-## Data collection rule associations
+To collect data from virtual machines using the Azure Monitor agent, you'll:
-To apply a DCR to a virtual machine, you create an association for the virtual machine. A virtual machine may have an association to multiple DCRs, and a DCR may have multiple virtual machines associated to it. This allows you to define a set of DCRs, each matching a particular requirement, and apply them to only the virtual machines where they apply.
+1. Create [data collection rules (DCR)](../essentials/data-collection-rule-overview.md) that define which data Azure Monitor agent sends to which destinations.
+1. Associate the data collection rule to specific virtual machines.
-For example, consider an environment with a set of virtual machines running a line of business application and others running SQL Server. You might have one default data collection rule that applies to all virtual machines and separate data collection rules that collect data specifically for the line of business application and for SQL Server. The associations for the virtual machines to the data collection rules would look similar to the following diagram.
+## How data collection rule associations work
-![Diagram shows virtual machines hosting line of business application and SQL Server associated with data collection rules named central-i t-default and lob-app for line of business application and central-i t-default and s q l for SQL Server.](media/data-collection-rule-azure-monitor-agent/associations.png)
+You can associate virtual machines to multiple data collection rules. This allows you to define each data collection rule to address a particular requirement, and associate the data collection rules to virtual machines based on the specific data you want to collect from each machine.
+For example, consider an environment with a set of virtual machines running a line of business application and other virtual machines running SQL Server. You might have:
-## Create rule and association in Azure portal
+- One default data collection rule that applies to all virtual machines.
+- Separate data collection rules that collect data specifically for the line of business application and for SQL Server.
+
+The following diagram illustrates the associations for the virtual machines to the data collection rules.
-You can use the Azure portal to create a data collection rule and associate virtual machines in your subscription to that rule. The Azure Monitor agent will be automatically installed and a managed identity created for any virtual machines that don't already have it installed.
+![A diagram showing one virtual machine hosting a line of business application and one virtual machine hosting SQL Server. Both virtual machine are associated with data collection rule named central-i t-default. The virtual machine hosting the line of business application is also associated with a data collection rule called lob-app. The virtual machine hosting SQL Server is associated with a data collection rule called s q l.](media/data-collection-rule-azure-monitor-agent/associations.png)
-> [!IMPORTANT]
-> Creating a data collection rule using the portal also enables System-Assigned managed identity on the target resources, in addition to existing User-Assigned Identities (if any). For existing applications unless they specify the User-Assigned identity in the request, the machine will default to using System-Assigned Identity instead. [Learn More](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request)
-
+## Create data collection rule and association
-> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
+To send data to Log Analytics, create the data collection rule in the **same region** where your Log Analytics workspace resides. You can still associate the rule to machines in other supported regions.
-In the **Monitor** menu in the Azure portal, select **Data Collection Rules** from the **Settings** section. Click **Create** to create a new Data Collection Rule and assignment.
+### [Portal](#tab/portal)
-[![Data Collection Rules](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
+1. From the **Monitor** menu, select **Data Collection Rules**.
+1. Select **Create** to create a new Data Collection Rule and associations.
-Click **Create** to create a new rule and set of associations. Provide a **Rule name** and specify a **Subscription**, **Resource Group** and **Region**. This specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
-Additionally, choose the appropriate **Platform Type** which specifies the type of resources this rule can apply to. Custom will allow for both Windows and Linux types. This allows for pre-curated creation experiences with options scoped to the selected platform type.
+ [![Screenshot showing the Create button on the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
+
+1. Provide a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**.
-[![Data Collection Rule Basics](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
+ **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
-In the **Resources** tab, add the resources (virtual machines, virtual machine scale sets, Arc for servers) that should have the Data Collection Rule applied. The Azure Monitor Agent will be installed on resources that don't already have it installed, and will enable Azure Managed Identity as well.
+ **Platform Type** specifies the type of resources this rule can apply to. Custom allows for both Windows and Linux types.
-### Private link configuration using data collection endpoints
-If you need network isolation using private links for collecting data using agents from your resources, simply select existing endpoints (or create a new endpoint) from the same region for the respective resource(s) as shown below. See [how to create data collection endpoint](../essentials/data-collection-endpoint-overview.md).
+ [![Screenshot showing the Basics tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
-[![Data Collection Rule virtual machines](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
+1. On the **Resources** tab, add the resources (virtual machines, virtual machine scale sets, Arc for servers) to which to associate the data collection rule. The portal will install Azure Monitor Agent on resources that don't already have it installed, and will also enable Azure Managed Identity.
-On the **Collect and deliver** tab, click **Add data source** to add a data source and destination set. Select a **Data source type**, and the corresponding details to select will be displayed. For performance counters, you can select from a predefined set of objects and their sampling rate. For events, you can select from a set of logs or facilities and the severity level.
+ > [!IMPORTANT]
+ > The portal enables System-Assigned managed identity on the target resources, in addition to existing User-Assigned Identities (if any). For existing applications, unless you specify the User-Assigned identity in the request, the machine will default to using System-Assigned Identity instead.
-[![Data source basic](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
+ If you need network isolation using private links, select existing endpoints from the same region for the respective resources, or [create a new endpoint](../essentials/data-collection-endpoint-overview.md).
+ [![Data Collection Rule virtual machines](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
-To specify other logs and performance counters from the [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to filter events using XPath queries, select **Custom**. You can then specify an [XPath ](https://www.w3schools.com/xml/xpath_syntax.asp) for any specific values to collect. See [Sample DCR](data-collection-rule-sample-agent.md) for an example.
+1. On the **Collect and deliver** tab, select **Add data source** to add a data source and set a destination.
+1. Select a **Data source type**.
+1. Select which data you want to collect. For performance counters, you can select from a predefined set of objects and their sampling rate. For events, you can select from a set of logs and severity levels.
-[![Data source custom](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
+ [![Data source basic](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
-On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of same of different types, for instance multiple Log Analytics workspaces (i.e. "multi-homing"). Windows event and Syslog data sources can only send to Azure Monitor Logs. Performance counters can send to both Azure Monitor Metrics and Azure Monitor Logs.
+1. Select **Custom** to collect logs and performance counters that are not [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to [filter events using XPath queries](#filter-events-using-xpath-queries). You can then specify an [XPath](https://www.w3schools.com/xml/xpath_syntax.asp) to collect any specific values. See [Sample DCR](data-collection-rule-sample-agent.md) for an example.
-[![Destination](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
+ [![Data source custom](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
-Click **Add Data Source** and then **Review + create** to review the details of the data collection rule and association with the set of VMs. Click **Create** to create it.
+1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types, for instance multiple Log Analytics workspaces (known as "multi-homing").
-> [!NOTE]
-> After the data collection rule and associations have been created, it might take up to 5 minutes for data to be sent to the destinations.
+   You can send Windows event and Syslog data sources to Azure Monitor Logs only. You can send performance counters to both Azure Monitor Metrics and Azure Monitor Logs.
-## Limit data collection with custom XPath queries
-Since you're charged for any data collected in a Log Analytics workspace, you should collect only the data that you require. Using basic configuration in the Azure portal, you only have limited ability to filter events to collect. For Application and System logs, this is all logs with a particular severity. For Security logs, this is all audit success or all audit failure logs.
+ [![Destination](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
-To specify additional filters, you must use Custom configuration and specify an XPath that filters out the events you don't. XPath entries are written in the form `LogName!XPathQuery`. For example, you may want to return only events from the Application event log with an event ID of 1035. The XPathQuery for these events would be `*[System[EventID=1035]]`. Since you want to retrieve the events from the Application event log, the XPath would be `Application!*[System[EventID=1035]]`
-
-### Extracting XPath queries from Windows Event Viewer
-One of the ways to create XPath queries is to use Windows Event Viewer to extract XPath queries as shown below.
-*In step 5 when pasting over the 'Select Path' parameter value, you must append the log type category followed by '!' and then paste the copied value.
-
-[![Extract XPath](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
-
-See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.
-
-> [!TIP]
-> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery locally on your machine first. The following script shows an example.
->
-> ```powershell
-> $XPath = '*[System[EventID=1035]]'
-> Get-WinEvent -LogName 'Application' -FilterXPath $XPath
-> ```
->
-> - **In the cmdlet above, the value for '-LogName' parameter is the initial part of the XPath query until the '!', while only the rest of the XPath query goes into the $XPath parameter.**
-> - If events are returned, the query is valid.
-> - If you receive the message *No events were found that match the specified selection criteria.*, the query may be valid, but there are no matching events on the local machine.
-> - If you receive the message *The specified query is invalid* , the query syntax is invalid.
-
-The following table shows examples for filtering events using a custom XPath.
-
-| Description | XPath |
-|:|:|
-| Collect only System events with Event ID = 4648 | `System!*[System[EventID=4648]]`
-| Collect Security Log events with Event ID = 4648 and a process name of consent.exe | `Security!*[System[(EventID=4648)]] and *[EventData[Data[@Name='ProcessName']='C:\Windows\System32\consent.exe']]` |
-| Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` |
-| Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` |
--
-## Create rule and association using REST API
-
-Follow the steps below to create a data collection rule and associations using the REST API.
+1. Select **Add Data Source** and then **Review + create** to review the details of the data collection rule and association with the set of virtual machines.
+1. Select **Create** to create the data collection rule.
> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
+> It might take up to 5 minutes for data to be sent to the destinations after you create the data collection rule and associations.
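
One way to confirm that data is flowing is to query the destination workspace. The following is a minimal sketch, not part of the original procedure; the workspace GUID is a placeholder and the `Az.OperationalInsights` module is assumed to be installed.

```powershell
# Minimal sketch: query the destination Log Analytics workspace for recently collected events.
# The workspace GUID below is a placeholder; run Connect-AzAccount first.
$workspaceId = "00000000-0000-0000-0000-000000000000"
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query "Event | where TimeGenerated > ago(15m) | take 5"
$result.Results
```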
+
+### [API](#tab/api)
1. Manually create the DCR file using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
Follow the steps below to create a data collection rule and association
3. Create an association for each virtual machine to the data collection rule using the [REST API](/rest/api/monitor/datacollectionruleassociations/create#examples). -
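
As an illustration only, here is a rough PowerShell sketch of those two REST calls using `Invoke-AzRestMethod`; the subscription, resource group, resource names, API version, and file name are placeholder assumptions rather than values from this article.

```powershell
# Rough sketch: create a DCR from a local JSON definition, then associate it with a VM.
# All IDs, names, and the API version are placeholder assumptions.
$dcrId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/dataCollectionRules/my-dcr"
$vmId  = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/my-vm"
$apiVersion = "2021-09-01-preview"

# 1. PUT the data collection rule definition (the JSON file follows the Sample DCR format).
Invoke-AzRestMethod -Path "$($dcrId)?api-version=$apiVersion" -Method PUT -Payload (Get-Content -Raw ".\dcr-sample.json")

# 2. PUT an association between the rule and the virtual machine.
$body = @{ properties = @{ dataCollectionRuleId = $dcrId } } | ConvertTo-Json -Depth 5
Invoke-AzRestMethod -Path "$($vmId)/providers/Microsoft.Insights/dataCollectionRuleAssociations/my-association?api-version=$apiVersion" -Method PUT -Payload $body
```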
-## Create rule and association using Resource Manager template
-
-> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
-
-You can create a rule and an association for an Azure virtual machine or Azure Arc-enabled server using Resource Manager templates. See [Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md) for sample templates).
--
-## Manage rules and association using PowerShell
-
-> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
+### [PowerShell](#tab/powershell)
**Data collection rules**
You can create a rule and an association for an Azure virtual machine or Azure A
| Create an association | [New-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/new-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) | | Delete an association | [Remove-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/remove-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
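
As an illustration, a minimal sketch that chains two of the cmdlets listed above; the resource names, region, and rule file path are placeholders.

```powershell
# Minimal sketch using the Az.Monitor cmdlets referenced above; names and paths are placeholders.
$rg = "my-resource-group"
$vm = Get-AzVM -ResourceGroupName $rg -Name "my-vm"

# Create the data collection rule from a JSON definition (see the Sample DCR article for the format).
$dcr = New-AzDataCollectionRule -Location "eastus" -ResourceGroupName $rg -RuleName "my-dcr" -RuleFile ".\dcr-sample.json"

# Associate the rule with the virtual machine.
New-AzDataCollectionRuleAssociation -TargetResourceId $vm.Id -AssociationName "my-association" -RuleId $dcr.Id
```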
+### [Azure CLI](#tab/cli)
+Data collection rules and associations can be managed with the Azure CLI **monitor-control-service** extension. [View all commands](/cli/azure/monitor/data-collection/rule)
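
For example, a minimal sketch run from a PowerShell (or any) session; the resource names and IDs are placeholders, and command options may vary with the extension version.

```powershell
# Minimal sketch: install the extension, create a DCR from a JSON file, and associate it with a VM.
# Resource names, the region, and the resource IDs below are placeholders.
az extension add --name monitor-control-service

az monitor data-collection rule create --resource-group "my-resource-group" --location "eastus" --name "my-dcr" --rule-file "dcr-sample.json"

az monitor data-collection rule association create --name "my-association" --rule-id "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/my-dcr" --resource "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
```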
-## Manage rules and association using Azure CLI
+### [Resource Manager template](#tab/arm)
-> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
+See [Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md) for sample templates.
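
For instance, deploying one of those sample templates could look like this minimal sketch; the resource group and file names are placeholders.

```powershell
# Minimal sketch: deploy a sample DCR template to a resource group; file names are placeholders.
New-AzResourceGroupDeployment -ResourceGroupName "my-resource-group" -TemplateFile ".\dcr-template.json" -TemplateParameterFile ".\dcr-template.parameters.json"
```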
-This is enabled as part of Azure CLI **monitor-control-service** Extension. [View all commands](/cli/azure/monitor/data-collection/rule)
+
+## Filter events using XPath queries
+Since you're charged for any data you collect in a Log Analytics workspace, collect only the data you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+To specify additional filters, use Custom configuration and specify an XPath that filters out the events you don't need. XPath entries are written in the form `LogName!XPathQuery`. For example, you may want to return only events from the Application event log with an event ID of 1035. The XPathQuery for these events would be `*[System[EventID=1035]]`. Since you want to retrieve the events from the Application event log, the XPath is `Application!*[System[EventID=1035]]`.
+
+### Extracting XPath queries from Windows Event Viewer
+In Windows, you can use Event Viewer to extract XPath queries as shown below.
+
+When you paste the XPath query into the field on the **Add data source** screen (step 5 in the picture below), you must append the log type category followed by '!'.
+
+[![Extract XPath](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
+
+See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.
+
+> [!TIP]
+> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery locally on your machine first. The following script shows an example.
+>
+> ```powershell
+> $XPath = '*[System[EventID=1035]]'
+> Get-WinEvent -LogName 'Application' -FilterXPath $XPath
+> ```
+>
+> - **In the cmdlet above, the value of the *-LogName* parameter is the initial part of the XPath query until the '!'. The rest of the XPath query goes into the *$XPath* parameter.**
+> - If the script returns events, the query is valid.
+> - If you receive the message *No events were found that match the specified selection criteria.*, the query may be valid, but there are no matching events on the local machine.
+> - If you receive the message *The specified query is invalid*, the query syntax is invalid.
+
+Examples of filtering events using a custom XPath:
+
+| Description | XPath |
+|:|:|
+| Collect only System events with Event ID = 4648 | `System!*[System[EventID=4648]]` |
+| Collect Security Log events with Event ID = 4648 and a process name of consent.exe | `Security!*[System[(EventID=4648)]] and *[EventData[Data[@Name='ProcessName']='C:\Windows\System32\consent.exe']]` |
+| Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` |
+| Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` |
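
As with the earlier tip, any entry in this table can be tested locally by splitting it at the '!' character. The following sketch uses the last Security example; an elevated PowerShell session is typically required to read the Security log.

```powershell
# Split the table entry at '!': the part before it becomes -LogName, the remainder is the XPath.
$XPath = '*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]'
Get-WinEvent -LogName 'Security' -FilterXPath $XPath -MaxEvents 5
```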
## Next steps
azure-monitor Data Collection Rule Sample Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-sample-agent.md
The sample [data collection rule](../essentials/data-collection-rule-overview.md
- Sends all data to a Log Analytics workspace named centralWorkspace. > [!NOTE]
-> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries)
+> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries)
## Sample DCR
azure-monitor Data Sources Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-collectd.md
# Collect data from CollectD on Linux agents in Azure Monitor
-[CollectD](https://collectd.org/) is an open source Linux daemon that periodically collects performance metrics from applications and system level information. Example applications include the Java Virtual Machine (JVM), MySQL Server, and Nginx. This article provides information on collecting performance data from CollectD in Azure Monitor.
+[CollectD](https://collectd.org/) is an open source Linux daemon that periodically collects performance metrics from applications and system level information. Example applications include the Java Virtual Machine (JVM), MySQL Server, and Nginx. This article provides information on collecting performance data from CollectD in Azure Monitor using the Log Analytics agent.
+ A full list of available plugins can be found at [Table of Plugins](https://collectd.org/wiki/index.php/Table_of_Plugins).
azure-monitor Data Sources Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-custom-logs.md
The Custom Logs data source for the Log Analytics agent in Azure Monitor allows you to collect events from text files on both Windows and Linux computers. Many applications log information to text files instead of standard logging services such as Windows Event log or Syslog. Once collected, you can either parse the data into individual fields in your queries or extract the data during collection to individual fields.
-> [!IMPORTANT]
-> This article covers collecting text logs with the [Log Analytics agent](./log-analytics-agent.md). See [Collect text logs with Azure Monitor agent (preview)](../agents/data-collection-text-log.md) for details on collecting text logs with [Azure Monitor agent](azure-monitor-agent-overview.md).
![Custom log collection](media/data-sources-custom-logs/overview.png)
azure-monitor Data Sources Event Tracing Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-event-tracing-windows.md
Last updated 02/07/2022
*Event Tracing for Windows (ETW)* provides a mechanism for instrumentation of user-mode applications and kernel-mode drivers. The Log Analytics agent is used to [collect Windows events](./data-sources-windows-events.md) written to the Administrative and Operational [ETW channels](/windows/win32/wes/eventmanifestschema-channeltype-complextype). However, it is occasionally necessary to capture and analyze other events, such as those written to the Analytic channel. + ## Event flow To successfully collect [manifest-based ETW events](/windows/win32/etw/about-event-tracing#types-of-providers) for analysis in Azure Monitor Logs, you must use the [Azure diagnostics extension](./diagnostics-extension-overview.md) for Windows (WAD). In this scenario, the diagnostics extension acts as the ETW consumer, writing events to Azure Storage (tables) as an intermediate store. Here it will be stored in a table named **WADETWEventTable**. Log Analytics then collects the table data from Azure storage, presenting it as a table named **ETWEvent**.
azure-monitor Data Sources Iis Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-iis-logs.md
# Collect IIS logs with Log Analytics agent in Azure Monitor Internet Information Services (IIS) stores user activity in log files that can be collected by the Log Analytics agent and stored in [Azure Monitor Logs](../data-platform.md).
+![IIS logs](media/data-sources-iis-logs/overview.png)
+ > [!IMPORTANT]
-> This article covers collecting IIS logs with the [Log Analytics agent](./log-analytics-agent.md). See [Collect text logs with Azure Monitor agent (preview)](../agents/data-collection-text-log.md) for details on collecting IIS logs with [Azure Monitor agent](azure-monitor-agent-overview.md).
+> This article covers collecting IIS logs with the [Log Analytics agent](./log-analytics-agent.md), which **will be deprecated by August 2024**. Please be sure to [migrate to Azure Monitor agent](./azure-monitor-agent-manage.md) before August 2024 to continue ingesting data. See [Collect text logs with Azure Monitor agent (preview)](../agents/data-collection-text-log.md) for details on collecting IIS logs with [Azure Monitor agent](azure-monitor-agent-overview.md).
-![IIS logs](media/data-sources-iis-logs/overview.png)
## Configuring IIS logs Azure Monitor collects entries from log files created by IIS, so you must [configure IIS for logging](/previous-versions/orphan-topics/ws.11/hh831775(v=ws.11)).
azure-monitor Data Sources Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-json.md
Custom JSON data sources can be collected into [Azure Monitor](../data-platform.
> [!NOTE]
-> Log Analytics agent for Linux v1.1.0-217+ is required for Custom JSON Data
+> Log Analytics agent for Linux v1.1.0-217+ is required for Custom JSON Data.
## Configuration
azure-monitor Data Sources Linux Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-linux-applications.md
This article provides details for configuring the [Log Analytics agent for Linux
- [MySQL](#mysql) - [Apache HTTP Server](#apache-http-server) ++ ## MySQL If MySQL Server or MariaDB Server is detected on the computer when the Log Analytics agent is installed, a performance monitoring provider for MySQL Server will be automatically installed. This provider connects to the local MySQL/MariaDB server to expose performance statistics. MySQL user credentials must be configured so that the provider can access the MySQL Server.
azure-monitor Data Sources Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-performance-counters.md
Last updated 02/26/2021
# Collect Windows and Linux performance data sources with Log Analytics agent Performance counters in Windows and Linux provide insight into the performance of hardware components, operating systems, and applications. Azure Monitor can collect performance counters from Log Analytics agents at frequent intervals for Near Real Time (NRT) analysis in addition to aggregating performance data for longer term analysis and reporting.
-> [!IMPORTANT]
-> This article covers collecting performance data with the [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a list of the available agents and the data they can collect.
![Performance counters](media/data-sources-performance-counters/overview.png)
azure-monitor Data Sources Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-syslog.md
# Collect Syslog data sources with Log Analytics agent Syslog is an event logging protocol that is common to Linux. Applications will send messages that may be stored on the local machine or delivered to a Syslog collector. When the Log Analytics agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent. The agent then sends the message to Azure Monitor where a corresponding record is created.
-> [!IMPORTANT]
-> This article covers collecting Syslog events with the [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a list of the available agents and the data they can collect.
+ > [!NOTE] > Azure Monitor supports collection of messages sent by rsyslog or syslog-ng, where rsyslog is the default daemon. The default syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux version (sysklog) is not supported for syslog event collection. To collect syslog data from this version of these distributions, the [rsyslog daemon](http://rsyslog.com) should be installed and configured to replace sysklog.
->
->
+ ![Syslog collection](media/data-sources-syslog/overview.png)
azure-monitor Data Sources Windows Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-windows-events.md
# Collect Windows event log data sources with Log Analytics agent
-Windows Event logs are one of the most common [data sources](../agents/agent-data-sources.md) for Log Analytics agents on Windows virtual machines since many applications write to the Windows event log. You can collect events from standard logs such as System and Application in addition to specifying any custom logs created by applications you need to monitor.
+Windows Event logs are one of the most common [data sources](../agents/agent-data-sources.md) for Log Analytics agents on Windows virtual machines since many applications write to the Windows event log. You can collect events from standard logs, such as System and Application, and any custom logs created by applications you need to monitor.
-> [!IMPORTANT]
-> This article covers collecting Windows events with the [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a list of the available agents and the data they can collect.
+![Diagram that shows the Log Analytics agent sending Windows events to the Event table in Azure Monitor.](media/data-sources-windows-events/overview.png)
-![Windows Events](media/data-sources-windows-events/overview.png)
## Configuring Windows Event logs Configure Windows Event logs from the [Agents configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics workspace.
-Azure Monitor only collects events from the Windows event logs that are specified in the settings. You can add an event log by typing in the name of the log and clicking **+**. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You cannot provide any additional criteria to filter events.
+Azure Monitor only collects events from the Windows event logs that are specified in the settings. You can add an event log by typing in the name of the log and clicking **+**. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You can't provide any additional criteria to filter events.
-As you type the name of an event log, Azure Monitor provides suggestions of common event log names. If the log you want to add does not appear in the list, you can still add it by typing in the full name of the log. You can find the full name of the log by using event viewer. In event viewer, open the *Properties* page for the log and copy the string from the *Full Name* field.
+As you type the name of an event log, Azure Monitor provides suggestions of common event log names. If the log you want to add doesn't appear in the list, you can still add it by typing in the full name of the log. You can find the full name of the log by using event viewer. In event viewer, open the *Properties* page for the log and copy the string from the *Full Name* field.
-[![Configure Windows events](media/data-sources-windows-events/configure.png)](media/data-sources-windows-events/configure.png#lightbox)
+[![Screenshot showing the Windows event logs tab on the Agents configuration screen.](media/data-sources-windows-events/configure.png)](media/data-sources-windows-events/configure.png#lightbox)
> [!IMPORTANT] > You can't configure collection of security events from the workspace using Log Analytics agent. You must use [Microsoft Defender for Cloud](../../security-center/security-center-enable-data-collection.md) or [Microsoft Sentinel](../../sentinel/connect-windows-security-events.md) to collect security events. [Azure Monitor agent](azure-monitor-agent-overview.md) can also be used to collect security events.
As you type the name of an event log, Azure Monitor provides suggestions of comm
> Critical events from the Windows event log will have a severity of "Error" in Azure Monitor Logs. ## Data collection
-Azure Monitor collects each event that matches a selected severity from a monitored event log as the event is created. The agent records its place in each event log that it collects from. If the agent goes offline for a period of time, then it collects events from where it last left off, even if those events were created while the agent was offline. There is a potential for these events to not be collected if the event log wraps with uncollected events being overwritten while the agent is offline.
+Azure Monitor collects each event that matches a selected severity from a monitored event log as the event is created. The agent records its place in each event log that it collects from. If the agent goes offline for a while, it collects events from where it last left off, even if those events were created while the agent was offline. There's a potential for these events to not be collected if the event log wraps with uncollected events being overwritten while the agent is offline.
>[!NOTE] >Azure Monitor does not collect audit events created by SQL Server from source *MSSQLSERVER* with event ID 18453 that contains keywords - *Classic* or *Audit Success* and keyword *0xa0000000000000*.
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
# Log Analytics agent overview
-The Log Analytics agents are on a **deprecation path** and will no longer be supported after **August 31, 2024**. If you use the Log Analytics agents to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
The Azure Log Analytics agent collects telemetry from Windows and Linux virtual machines in any cloud, on-premises machines, and those monitored by [System Center Operations Manager](/system-center/scom/) and sends collected data to your Log Analytics workspace in Azure Monitor. The Log Analytics agent also supports insights and other services in Azure Monitor such as [VM insights](../vm/vminsights-enable-overview.md), [Microsoft Defender for Cloud](../../security-center/index.yml), and [Azure Automation](../../automation/automation-intro.md). This article provides a detailed overview of the agent, system and network requirements, and deployment methods.
+>[!IMPORTANT]
+>The Log Analytics agent is on a **deprecation path** and won't be supported after **August 31, 2024**. If you use the Log Analytics agent to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
+ > [!NOTE] > You may also see the Log Analytics agent referred to as the Microsoft Monitoring Agent (MMA).
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
# Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications
-This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Java offering. After you finish the instructions in this article, you'll be able to use Azure Monitor Application Insights to monitor your application.
+This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Java offering. It can be used for any environment, including on-premises. After you finish the instructions in this article, you'll be able to use Azure Monitor Application Insights to monitor your application.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Get started
-Java auto-instrumentation can be enabled without any code changes.
+Java auto-instrumentation is enabled through configuration changes; no code changes are required.
### Prerequisites
To provide feedback:
## Next steps
+- Review [Java auto-instrumentation configuration options](java-standalone-config.md).
- To review the source code, see the [Azure Monitor Java auto-instrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation). - To enable usage experiences, see [Enable web or browser user monitoring](javascript.md).
azure-monitor Java On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-on-premises.md
- Title: Monitor Java applications running on premises - Azure Monitor Application Insights
-description: Application performance monitoring for Java applications running on premises without instrumenting the app. Distributed tracing and application map.
-- Previously updated : 04/16/2020---
-# Java codeless application monitoring on-premises - Azure Monitor Application Insights
-
-Java codeless application monitoring is all about simplicity - there are no code changes, the Java agent can be enabled through just a couple of configuration changes.
-
-## Overview
-
-Once enabled, the Java agent will automatically collect a multitude of requests, dependencies, logs, and metrics from the most widely used libraries and frameworks.
-
-Please follow [the detailed instructions](./java-in-process-agent.md) for all of the environments, including on-premises.
-
-## Next steps
-
-* [Application Insights Java 3.x](./java-in-process-agent.md)
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
# Application Insights for web pages
-Find out about the performance and usage of your web page or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures, as well as users and session counts. All these can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. And by inserting trace calls in your JavaScript code, you can track how the different features of your web page application are used.
+> [!NOTE]
+> We continue to assess the viability of OpenTelemetry for browser scenarios. The Application Insights JavaScript SDK, which is fully compatible with OpenTelemetry distributed tracing, is recommended for the foreseeable future.
-Application Insights can be used with any web pages - you just add a short piece of JavaScript, Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
+Find out about the performance and usage of your web page or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures, as well as users and session counts. All of this telemetry can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. And by inserting trace calls in your JavaScript code, you can track how the different features of your web page application are used.
+Application Insights can be used with any web pages - you just add a short piece of JavaScript, Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
## Adding the JavaScript SDK
Application Insights can be used with any web pages - you just add a short piece
### npm based setup
-Install via NPM.
+Install via Node Package Manager (npm).
```sh npm i --save @microsoft/applicationinsights-web
appInsights.trackPageView(); // Manually call trackPageView to establish the cur
If your app doesn't use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each of your pages. Preferably, it should be the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies and optionally any JavaScript errors. If you're using a Blazor Server app, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section.
-To assist with tracking which version of the snippet your application is using, starting from version 2.5.5 the page view event will include a new tag "ai.internal.snippet" that will contain the identified snippet version.
+Starting from version 2.5.5, the page view event will include a new tag "ai.internal.snippet" that contains the identified snippet version. This feature assists with tracking which version of the snippet your application is using.
The current Snippet (listed below) is version "5", the version is encoded in the snippet as sv:"#" and the [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
cfg: { // Application Insights Configuration
#### Reporting Script load failures
-This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser), this exception provides visibility into failures of this type so that you're aware your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK didn't load or initialize which can lead to:
+This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser). The exception provides visibility into failures of this type so that you're aware your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK didn't load or initialize which can lead to:
- Under-reporting of how users are using (or trying to use) your site; - Missing telemetry on how your end users are using your site; - Missing JavaScript errors that could potentially be blocking your end users from successfully using your site.
For details on this exception see the [SDK load failure](javascript-sdk-load-fai
Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the application insights configuration and therefore if this failure occurs it will always be reported by the snippet, even when the window.onerror support is disabled.
-Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier. This reduces the minified size of the snippet by assuming that most environments aren't exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you'll need to either include a fetch poly fill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```, it's recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
+Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier. This behavior reduces the minified size of the snippet by assuming that most environments aren't exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you'll need to either include a fetch polyfill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```. It's recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
> [!NOTE] > If you are using a previous version of the snippet, it is highly recommended that you update to the latest version so that you will receive these previously unreported issues. #### Snippet configuration options
-All configuration options have now been move towards the end of the script to help avoid accidentally introducing JavaScript errors that wouldn't just cause the SDK to fail to load, but also it would disable the reporting of the failure.
+All configuration options have been moved toward the end of the script. This placement avoids accidentally introducing JavaScript errors that wouldn't just cause the SDK to fail to load, but would also disable the reporting of the failure.
Each configuration option is shown above on a new line. If you don't wish to override the default value of an item listed as [optional], you can remove that line to minimize the resulting size of your returned page.
The available configuration options are
### Connection String Setup + ```js import { ApplicationInsights } from '@microsoft/applicationinsights-web'
appInsights.trackPageView();
### Sending telemetry to the Azure portal
-By default the Application Insights JavaScript SDK autocollects many telemetry items that are helpful in determining the health of your application and the underlying user experience. These include:
+By default, the Application Insights JavaScript SDK auto-collects many telemetry items that are helpful in determining the health of your application and the underlying user experience.
+
+This telemetry includes:
- **Uncaught exceptions** in your app, including information on - Stack trace
Most configuration fields are named such that they can be defaulted to false. Al
| maxBatchInterval | How long to batch telemetry for before sending (milliseconds) | numeric<br/>15000 | | disable&#8203;ExceptionTracking | If true, exceptions aren't autocollected. | boolean<br/> false | | disableTelemetry | If true, telemetry isn't collected or sent. | boolean<br/>false |
-| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean<br/>false |
+| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This setting can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean<br/>false |
| loggingLevelConsole | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 | | loggingLevelTelemetry | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 | | diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue | numeric<br/> 10000 |
-| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this if you wish to preserve your data cap for large-scale applications. | numeric<br/>100 |
+| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this option if you wish to preserve your data cap for large-scale applications. | numeric<br/>100 |
| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | boolean<br/>false | | disableAjaxTracking | If true, Ajax calls aren't autocollected. | boolean<br/> false | | disableFetchTracking | If true, Fetch requests aren't autocollected.|boolean<br/>true |
Most configuration fields are named such that they can be defaulted to false. Al
| enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load | boolean<br />true | | cookieCfg | Defaults to cookie usage enabled see [ICookieCfgConfig](#icookiemgrconfig) settings for full defaults. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined | | ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage blades and experiences useless. isCookieUseDisable is deprecated in favor of disableCookiesUsage, when both are provided disableCookiesUsage takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined it will take precedence over these values, Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). | alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
-| cookieDomain | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it will take precedence over this value. | alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null |
-| cookiePath | Custom cookie path. This is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined it will take precedence over this value. | alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null |
+| cookieDomain | Custom cookie domain. This option is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it will take precedence over this value. | alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null |
+| cookiePath | Custom cookie path. This option is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined it will take precedence over this value. | alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null |
| isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) | boolean<br/>false | | isStorageUseDisabled | If true, the SDK won't store or read any data from local and session storage. | boolean<br/> false | | isBeaconApiDisabled | If false, the SDK will send all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/>true |
Most configuration fields are named such that they can be defaulted to false. Al
| distributedTracingMode | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) will be generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services. See example [here](./correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).| `DistributedTracingModes`or<br/>numeric<br/>(Since v2.6.0) `DistributedTracingModes.AI_AND_W3C`<br />(v2.5.11 or earlier) `DistributedTracingModes.AI` | | enable&#8203;AjaxErrorStatusText | If true, include response error data text in dependency event on failed AJAX requests. | boolean<br/> false | | enable&#8203;AjaxPerfTracking |Flag to enable looking up and including more browser window.performance timings in the reported `ajax` (XHR and fetch) reported metrics. | boolean<br/> false |
-| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings (if available), this is required as not all browsers populate the window.performance before reporting the end of the XHR request and for fetch requests this is added after its complete.| numeric<br/> 3 |
+| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings (if available). This option is sometimes required as not all browsers populate the window.performance before reporting the end of the XHR request, and for fetch requests this is added after it's complete. | numeric<br/> 3 |
| ajaxPerfLookupDelay | The amount of time to wait before reattempting to find the window.performance timings for an `ajax` request, time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms | | enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections won't be reported. | boolean<br/> false |
-| enablePerfMgr | When enabled (true) this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More details are available by the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
+| enablePerfMgr | When enabled (true) this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This option can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More details are available by the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean<br />false | | idLength | The default length used to generate new random session and user ID values. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set this value to 5. | numeric<br />22 |
Cookie Configuration for instance-based cookie management added in version 2.6.0
| Name | Description | Type and Default | ||-|| | enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration won't store or read any data from cookies | boolean<br/> true |
-| domain | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains. If not provided uses the value from root `cookieDomain` value. | string<br/>null |
+| domain | Custom cookie domain, which is helpful if you want to share Application Insights cookies across subdomains. If not provided, the value from the root `cookieDomain` is used. | string<br/>null |
| path | Specifies the path to use for the cookie, if not provided it will use any value from the root `cookiePath` value. | string <br/> / | | getCookie | Function to fetch the named cookie value, if not provided it will use the internal cookie parsing / caching. | `(name: string) => string` <br/> null | | setCookie | Function to set the named cookie with the specified value, only called when adding or updating a cookie. | `(name: string, value: string) => void` <br/> null |
cfg: { // Application Insights Configuration
</script> ```
-# [NPM](#tab/npm)
+# [npm](#tab/npm)
```javascript // excerpt of the config section of the JavaScript SDK snippet with correlation
By default, this SDK will **not** handle state-based route changing that occurs
### Single Page Applications
-For Single Page Applications, please reference plugin documentation for plugin specific guidance.
+For Single Page Applications, reference plugin documentation for plugin specific guidance.
| Plugins | ||
For Single Page Applications, please reference plugin documentation for plugin s
### Advanced Correlation
-When a page is first loading and the SDK has not fully initialized, we are unable to generate the Operation ID for the first request. As a result, distributed tracing is incomplete until the SDK fully initializes.
-To remedy this problem, you can include dynamic JavaScript on the returned HTML page and the SDK will use a callback function during initialization to retroactively pull the Operation ID from the serverside and populate the clientside with it.
+When a page is first loading and the SDK hasn't fully initialized, we're unable to generate the Operation ID for the first request. As a result, distributed tracing is incomplete until the SDK fully initializes.
+To remedy this problem, you can include dynamic JavaScript on the returned HTML page. The SDK will use a callback function during initialization to retroactively pull the Operation ID from the server side and populate the client side with it.
# [Snippet](#tab/snippet)
Here's a sample of how to create a dynamic JS using Razor:
}}); </script> ```
-# [NPM](#tab/npm)
+# [npm](#tab/npm)
```js import { ApplicationInsights } from '@microsoft/applicationinsights-web'
appInsights.context.telemetryContext.parentID = serverId;
appInsights.loadAppInsights(); ```
-When using a npm based configuration, a location must be determined to store the Operation ID (generally global) to enable access for the SDK initialization bundle to `appInsights.context.telemetryContext.parentID` so it can populate it before the first page view event is sent.
+When using an npm-based configuration, you must determine a location to store the Operation ID so that the SDK initialization bundle can access `appInsights.context.telemetryContext.parentID` and populate it before the first page view event is sent.
Test in internal environment to verify monitoring telemetry is working as expect
## SDK performance/overhead
-At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of loadtime to your website. By using the snippet, minimal components of the library are quickly loaded. In the meantime, the full script is downloaded in the background.
+At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of load time to your website. Minimal components of the library are quickly loaded when using this snippet. In the meantime, the full script is downloaded in the background.
While the script is downloading from the CDN, all tracking of your page is queued. Once the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you won't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system, invisible to your users.
While the script is downloading from the CDN, all tracking of your page is queue
![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.githubusercontent.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![IE](https://raw.githubusercontent.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![Opera](https://raw.githubusercontent.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Safari](https://raw.githubusercontent.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | | | | |
-Chrome Latest ✓ | Firefox Latest ✓ | IE 9+ & Edge ✓<br>IE 8- Compatible | Opera Latest ✓ | Safari Latest ✓ |
+Chrome Latest ✓ | Firefox Latest ✓ | IE 9+ & Microsoft Edge ✓<br>IE 8- Compatible | Opera Latest ✓ | Safari Latest ✓ |
## ES3/IE8 Compatibility
-As an SDK there are numerous users that can't control the browsers that their customers use. As such we need to ensure that this SDK continues to "work" and doesn't break the JS execution when loaded by an older browser. While it would be ideal to not support IE8 and older generation (ES3) browsers, there are numerous large customers/users that continue to require pages to "work" and as noted they may or can't control which browser that their end users choose to use.
+We need to ensure that this SDK continues to "work" and doesn't break the JS execution when loaded by an older browser. It would be ideal to not support older browsers, but numerous large customers can't control which browser their end users choose to use.
-This does NOT mean that we'll only support the lowest common set of features, just that we need to maintain ES3 code compatibility and when adding new features they'll need to be added in a manner that wouldn't break ES3 JavaScript parsing and added as an optional feature.
+This statement does NOT mean that we'll only support the lowest common set of features. We need to maintain ES3 code compatibility, and any new features must be added in a manner that doesn't break ES3 JavaScript parsing and must be added as optional features.
[See GitHub for full details on IE8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility)
For the latest updates and bug fixes, [consult the release notes](./release-note
## Troubleshooting
-### I am getting an error message of Failed to get Request-Context correlation header as it may be not included in the response or not accessible
+### I'm getting an error message of Failed to get Request-Context correlation header as it may be not included in the response or not accessible
-The `correlationHeaderExcludedDomains` configuration property is an exclude list that disables correlation headers for specific domains, this is useful for when including those headers would cause the request to fail or not be sent due to third-party server configuration. This property supports wildcards.
+The `correlationHeaderExcludedDomains` configuration property is an exclude list that disables correlation headers for specific domains. This option is useful when including those headers would cause the request to fail or not be sent due to third-party server configuration. This property supports wildcards.
An example would be `*.queue.core.windows.net`, as seen in the code sample above. Adding the application domain to this property should be avoided as it stops the SDK from including the required distributed tracing `Request-Id`, `Request-Context` and `traceparent` headers as part of the request.
The server-side needs to be able to accept connections with those headers presen
Access-Control-Allow-Headers: `Request-Id`, `traceparent`, `Request-Context`, `<your header>`
-### I am receiving duplicate telemetry data from the Application Insights JavaScript SDK
+### I'm receiving duplicate telemetry data from the Application Insights JavaScript SDK
-If the SDK reports correlation recursively enable the configuration setting of `excludeRequestFromAutoTrackingPatterns` to exclude the duplicate data, this can occur when using connection strings. The syntax for the configuration setting is `excludeRequestFromAutoTrackingPatterns: [<endpointUrl>]`.
+If the SDK reports correlation recursively, enable the configuration setting of `excludeRequestFromAutoTrackingPatterns` to exclude the duplicate data. This scenario can occur when using connection strings. The syntax for the configuration setting is `excludeRequestFromAutoTrackingPatterns: [<endpointUrl>]`.
## <a name="next"></a> Next steps
+* [Source map for JavaScript](source-map-support.md)
* [Track usage](usage-overview.md) * [Custom events and metrics](api-custom-events-metrics.md) * [Build-measure-learn](usage-overview.md)
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Virtual machines can vary significantly in the amount of data they collect, depe
| Source | Strategy | Log Analytics agent | Azure Monitor agent | |:|:|:|:|
-| Event logs | Collect only required event logs and levels. For example, *Information* level events are rarely used and should typically not be collected. For Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md) | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific event IDs. |
-| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific events. |
-| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific counters. |
+| Event logs | Collect only required event logs and levels. For example, *Information* level events are rarely used and should typically not be collected. For Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md) | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific event IDs. |
+| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific events. |
+| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific counters. |
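As an illustration of the XPath filtering called out in the table above, the query `Application!*[System[(Level=1 or Level=2 or Level=3)]]` collects only Critical, Error, and Warning events and drops *Information* events. A minimal sketch of deploying a data collection rule that carries such filters follows; the file and resource names are assumptions, and flag availability can vary by Azure CLI version.

```azurecli
# Login first with az login if not using Cloud Shell
# dcr-windows-events.json is an assumed file whose windowsEventLogs data source lists
# xPathQueries such as "Application!*[System[(Level=1 or Level=2 or Level=3)]]".
az monitor data-collection rule create \
    --resource-group MyResourceGroup \
    --name filtered-windows-events \
    --location eastus \
    --rule-file dcr-windows-events.json
```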
### Use transformations to filter events
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md
The sample data collection endpoint below is for virtual machines with Azure Mon
``` ## Next steps-- [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal)
+- [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association)
- [Add endpoint to AMPLS resource](../logs/private-link-configure.md#connect-azure-monitor-resources)
azure-monitor Azure Cli Log Analytics Workspace Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-cli-log-analytics-workspace-sample.md
Use the Azure CLI commands described here to manage your log analytics workspace in Azure Monitor.
-> [!NOTE]
-> On August 31, 2024, Microsoft will retire the Log Analytics agent. You can use the Azure Monitor agent after that time. For more information, see [Overview of Azure Monitor agents](../agents/agents-overview.md).
[!INCLUDE [Prepare your Azure CLI environment](../../../includes/azure-cli-prepare-your-environment.md)]
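For example, a few common workspace operations look like the following with the Azure CLI (a minimal sketch; the resource group, workspace name, and retention value are placeholders):

```azurecli
# Login first with az login if not using Cloud Shell

# Create a Log Analytics workspace
az monitor log-analytics workspace create \
    --resource-group MyResourceGroup \
    --workspace-name MyWorkspace

# List the workspaces in a resource group
az monitor log-analytics workspace list --resource-group MyResourceGroup

# Change the data retention period (in days) for an existing workspace
az monitor log-analytics workspace update \
    --resource-group MyResourceGroup \
    --workspace-name MyWorkspace \
    --retention-time 45
```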
azure-monitor Vminsights Health Configure Dcr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-configure-dcr.md
The following table lists the default configuration for each monitor. This defau
## Overrides An *override* changes one or more properties of a monitor. For example, an override could disable a monitor that's enabled by default, define warning criteria for the monitor, or modify the monitor's critical threshold.
-Overrides are defined in a [Data Collection Rule (DCR)](../essentials/data-collection-rule-overview.md). You can create multiple DCRs with different sets of overrides and apply them to multiple virtual machines. You apply a DCR to a virtual machine by creating an association as described in [Configure data collection for the Azure Monitor agent (preview)](../agents/data-collection-rule-azure-monitor-agent.md#data-collection-rule-associations).
+Overrides are defined in a [Data Collection Rule (DCR)](../essentials/data-collection-rule-overview.md). You can create multiple DCRs with different sets of overrides and apply them to multiple virtual machines. You apply a DCR to a virtual machine by creating an association as described in [Configure data collection for the Azure Monitor agent (preview)](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association).
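The association can also be created directly with the Azure CLI (a minimal sketch with placeholder resource IDs; assumes the `az monitor data-collection` commands available in recent CLI versions):

```azurecli
# Login first with az login if not using Cloud Shell
az monitor data-collection rule association create \
    --name "vm-guest-health-overrides" \
    --rule-id "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>" \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
```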
## Multiple overrides
azure-monitor Vminsights Health Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-migrate.md
Before you can remove the data collection rule for VM insights guest health, you
From the **Monitor** menu in the Azure portal, select **Data Collection Rules**. Click on the DCR for VM insights guest health, and then select **Resources**. Select the VMs to remove and click **Delete**.
-You can also remove the Data Collection Rule Association using [Azure PowerShell](../agents/data-collection-rule-azure-monitor-agent.md#manage-rules-and-association-using-powershell) or [Azure CLI](/cli/azure/monitor/data-collection/rule/association#az-monitor-data-collection-rule-association-delete).
+You can also remove the Data Collection Rule Association using [Azure PowerShell](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) or [Azure CLI](/cli/azure/monitor/data-collection/rule/association#az-monitor-data-collection-rule-association-delete).
### 3. Delete Data Collection Rule created for VM insights guest health
-To remove the data collection rule, click **Delete** from the DCR page in the Azure portal. You can also delete the Data Collection Rule using [Azure PowerShell](../agents/data-collection-rule-azure-monitor-agent.md#manage-rules-and-association-using-powershell) or [Azure CLI](/cli/azure/monitor/data-collection/rule#az-monitor-data-collection-rule-delete).
+To remove the data collection rule, click **Delete** from the DCR page in the Azure portal. You can also delete the Data Collection Rule using [Azure PowerShell](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) or [Azure CLI](/cli/azure/monitor/data-collection/rule#az-monitor-data-collection-rule-delete).
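As a hedged sketch, both cleanup steps look like the following with the Azure CLI (placeholder names and resource IDs):

```azurecli
# Login first with az login if not using Cloud Shell

# Remove the association between the virtual machine and the guest health DCR
az monitor data-collection rule association delete \
    --name "<association-name>" \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>"

# Delete the data collection rule itself
az monitor data-collection rule delete \
    --resource-group "<rg>" \
    --name "<dcr-name>"
```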
## Next steps
azure-netapp-files Performance Impact Kerberos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-impact-kerberos.md
na Previously updated : 02/18/2021 Last updated : 06/25/2021 # Performance impact of Kerberos on Azure NetApp Files NFSv4.1 volumes
-Azure NetApp Files supports [NFS client encryption in Kerberos](configure-kerberos-encryption.md) modes (krb5, krb5i, and krb5p) with AES-256 encryption. This article describes the performance impact of Kerberos on NFSv4.1 volumes.
+Azure NetApp Files supports [NFS client encryption in Kerberos](configure-kerberos-encryption.md) modes (krb5, krb5i, and krb5p) with AES-256 encryption. This article describes the performance impact of Kerberos on NFSv4.1 volumes. **Performance comparisons referenced in this article are made against the `sec=sys` security parameter, testing on a single volume with a single client.**
## Available security options
This section describes the single client-side performance impact of the various
## Expected performance impact
-There are two areas of focus: light load and upper limit. The following lists describe the performance impact security setting by security setting and scenario by scenario. All comparisons are made against the `sec=sys` security parameter. The test was done on a single volume, using a single client.
+There are two areas of focus: light load and upper limit. The following lists describe the performance impact security setting by security setting and scenario by scenario.
-Performance impact of krb5:
+**Testing Scope**
+* All comparisons are made against the `sec=sys` security parameter.
+* The test was done on a single volume, using a single client.
+
+**Performance impact of krb5:**
* Low concurrency (r/w): * Sequential latency increased 0.3 ms.
Performance impact of krb5:
* Maximum random I/O decreased by 30% for pure read workloads with the overall impact dropping to zero as the workload shifts to pure write. * Maximum metadata workload decreased 30%.
-Performance impact of krb5i:
+**Performance impact of krb5i:**
* Low concurrency (r/w): * Sequential latency increased 0.5 ms.
Performance impact of krb5i:
* Maximum random I/O decreased by 50% for pure read workloads with the overall impact decreasing to 25% as the workload shifts to pure write. * Maximum metadata workload decreased 30%.
-Performance impact of krb5p:
+**Performance impact of krb5p:**
* Low concurrency (r/w): * Sequential latency increased 0.8 ms.
azure-resource-manager Concepts Built In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/concepts-built-in-policy.md
Title: Deploy associations for managed application using policy
-description: Learn about deploying associations for a managed application using Azure Policy service.
+ Title: Deploy associations for managed application using Azure Policy
+description: Learn about deploying associations for a managed application using Azure Policy.
Previously updated : 09/06/2019 Last updated : 06/23/2022
Azure policies can be used to deploy associations to associate resources to a ma
## Built-in policy to deploy associations
-Deploy associations for a managed application is a built-in policy that can be used to deploy association to associate a resource to a managed application. The policy accepts three parameters:
+Deploy associations for a managed application is a built-in policy that associates a resource type to a managed application. The policy deployment doesn't support nested resource types. The policy accepts three parameters:
- Managed application ID - This ID is the resource ID of the managed application to which the resources need to be associated. - Resource types to associate - These resource types are the list of resource types to be associated to the managed application. You can associate multiple resource types to a managed application using the same policy.-- Association name prefix - This string is the prefix to be added to the name of the association resource being created. The default value is "DeployedByPolicy".
+- Association name prefix - This string is the prefix to be added to the name of the association resource being created. The default value is `DeployedByPolicy`.
-The policy uses DeployIfNotExists evaluation. It runs after a Resource Provider has handled a create or update resource request of the selected resource type(s) and the evaluation has returned a success status code. After that, the association resource gets deployed using a template deployment.
+The policy uses `DeployIfNotExists` evaluation. It runs after a Resource Provider has handled a create or update resource request of the selected resource type and the evaluation has returned a success status code. After that, the association resource gets deployed using a template deployment.
For more information on associations, see [Azure Custom Providers resource onboarding](../custom-providers/concepts-resource-onboarding.md)
-## How to use the deploy associations built-in policy
+For more information, see [Deploy associations for a managed application](../../governance/policy/samples/built-in-policies.md#managed-application).
+
+## How to use the deploy associations built-in policy
### Prerequisites+ If the managed application needs permissions to the subscription to perform an action, the policy deployment of association resource wouldn't work without granting the permissions. ### Policy assignment
-To use the built-in policy, create a policy assignment and assign the Deploy associations for a managed application policy. Once the policy has been assigned successfully,
+
+To use the built-in policy, create a policy assignment and assign the Deploy associations for a managed application policy. Once the policy has been assigned successfully,
the policy will identify non-compliant resources and deploy association for those resources.
-![Assign built-in policy](media/concepts-built-in-policy/assign-builtin-policy-managedapp.png)
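For example, the assignment can be created with the Azure CLI. This is a sketch only: the policy definition ID and the parameter names shown here are illustrative placeholders rather than the exact names used by the built-in definition, and a `DeployIfNotExists` assignment also needs a managed identity and a location.

```azurecli
# Login first with az login if not using Cloud Shell
# <built-in-policy-definition-id> and the parameter names below are placeholders;
# look up the real values in the built-in policy definition before assigning.
az policy assignment create \
    --name "deploy-managed-app-associations" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
    --policy "<built-in-policy-definition-id>" \
    --location eastus \
    --mi-system-assigned \
    --params '{ "managedApplicationId": { "value": "<managed-application-resource-id>" } }'
```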
## Getting help
-If you have questions about Azure Custom Resource Providers development, try asking them on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-custom-providers). A similar question might have already been answered, so check first before posting. Add the tag ```azure-custom-providers``` to get a fast response!
+If you have questions about Azure Custom Resource Providers development, try asking them on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-custom-providers). A similar question might have already been answered, so check first before posting. Use the tag `azure-custom-providers`.
## Next steps
-In this article, you learnt about using built-in policy to deploy associations. See these articles to learn more:
+In this article, you learned about using built-in policy to deploy associations. See these articles to learn more:
- [Concepts: Azure Custom Providers resource onboarding](../custom-providers/concepts-resource-onboarding.md) - [Tutorial: Resource onboarding with custom providers](../custom-providers/tutorial-resource-onboarding.md)
azure-resource-manager Create Private Link Access Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-commands.md
+
+ Title: Manage resources through private link
+description: Restrict management access for resources to private link
+ Last updated : 06/16/2022++
+# Use APIs to create a private link for managing Azure resources
+
+This article explains how you can use [Azure Private Link](../../private-link/index.yml) to restrict access for managing resources in your subscriptions.
++
+## Create resource management private link
+
+To create a resource management private link, send the following request:
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az resourcemanagement private-link create --location WestUS --resource-group PrivateLinkTestRG --name NewRMPL --public-network-access enabled
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ New-AzResourceManagementPrivateLink -ResourceGroupName PrivateLinkTestRG -Name NewRMPL
+ ```
+
+# [REST](#tab/REST)
+ REST call
+ ```http
+ PUT
+ https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
+ ```
+
+ In the request body, include the location you want for the resource:
+
+ ```json
+ {
+ "location":"{region}"
+ }
+ ```
+
+ The operation returns:
+
+ ```json
+ {
+ "id": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{name}",
+ "location": "{region}",
+ "name": "{rmplName}",
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "resourceGroup": "{rgName}",
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks"
+ }
+ ```
+
++
+Note the ID that is returned for the new resource management private link. You'll use it for creating the private link association.
+
+## Create private link association
+The resource name of a private link association resource must be a GUID, and disabling the `publicNetworkAccess` field isn't yet supported.
+
+To create the private link association, use:
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az private-link association create --management-group-id fc096d27-0434-4460-a3ea-110df0422a2d --name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 --privatelink "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/PrivateLinkTestRG/providers/Microsoft.Authorization/resourceManagementPrivateLinks/newRMPL"
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ New-AzPrivateLinkAssociation -ManagementGroupId fc096d27-0434-4460-a3ea-110df0422a2d -Name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 -PrivateLink "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/PrivateLinkTestRG/providers/Microsoft.Authorization/resourceManagementPrivateLinks/newRMPL" -PublicNetworkAccess enabled | fl
+ ```
+
+# [REST](#tab/REST)
+ REST call
+
+ ```http
+ PUT
+ https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupId}/providers/Microsoft.Authorization/privateLinkAssociations/{GUID}?api-version=2020-05-01
+ ```
+
+ In the request body, include:
+
+ ```json
+ {
+ "properties": {
+ "privateLink": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}",
+ "publicNetworkAccess": "enabled"
+ }
+ }
+ ```
+
+ The operation returns:
+
+ ```json
+ {
+ "id": {plaResourceId},
+ "name": {plaName},
+ "properties": {
+ "privateLink": {rmplResourceId},
+ "publicNetworkAccess": "Enabled",
+ "tenantId": "{tenantId}",
+ "scope": "/providers/Microsoft.Management/managementGroups/{managementGroupId}"
+ },
+ "type": "Microsoft.Authorization/privateLinkAssociations"
+ }
+ ```
++
+## Add private endpoint
+
+This article assumes you already have a virtual network. In the subnet that will be used for the private endpoint, you must turn off private endpoint network policies. If you haven't turned off private endpoint network policies, see [Disable network policies for private endpoints](../../private-link/disable-private-endpoint-network-policy.md).
+
+To create a private endpoint, see Private Endpoint documentation for creating via [Portal](../../private-link/create-private-endpoint-portal.md), [PowerShell](../../private-link/create-private-endpoint-powershell.md), [CLI](../../private-link/create-private-endpoint-cli.md), [Bicep](../../private-link/create-private-endpoint-bicep.md), or [template](../../private-link/create-private-endpoint-template.md).
+
+In the request body, set the `privateServiceLinkId` to the ID from your resource management private link. The `groupIds` must contain `ResourceManagement`. The location of the private endpoint must be the same as the location of the subnet.
+
+```json
+{
+ "location": "westus2",
+ "properties": {
+ "privateLinkServiceConnections": [
+ {
+ "name": "{connection-name}",
+ "properties": {
+ "privateLinkServiceId": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{name}",
+ "groupIds": [
+ "ResourceManagement"
+ ]
+ }
+ }
+ ],
+ "subnet": {
+ "id": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Network/virtualNetworks/{vnet-name}/subnets/{subnet-name}"
+ }
+ }
+}
+```
+
+The next step varies depending on whether you're using automatic or manual approval. For more information about approval, see [Access to a private link resource using approval workflow](../../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow).
+
+The response includes approval state.
+
+```json
+"privateLinkServiceConnectionState": {
+ "actionsRequired": "None",
+ "description": "",
+ "status": "Approved"
+},
+```
+
+If your request is automatically approved, you can continue to the next section. If your request requires manual approval, wait for the network admin to approve your private endpoint connection.
+
+## Next steps
+
+To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
azure-resource-manager Create Private Link Access Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-rest.md
- Title: Manage resources through private link
-description: Restrict management access for resource to private link
- Previously updated : 04/26/2022--
-# Use REST API to create private link for managing Azure resources (preview)
-
-This article explains how you can use [Azure Private Link](../../private-link/index.yml) to restrict access for managing resources in your subscriptions.
--
-## Create resource management private link
-
-To create resource management private link, send the following request:
-
-```http
-PUT
-https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
-```
-
-In the request body, include the location you want for the resource:
-
-```json
-{
- "location":"{region}"
-}
-```
-
-The operation returns:
-
-```json
-{
- "id": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{name}",
- "location": "{region}",
- "name": "{rmplName}",
- "properties": {
- "privateEndpointConnections": []
- },
- "resourceGroup": "{rgName}",
- "type": "Microsoft.Authorization/resourceManagementPrivateLinks"
-}
-```
-
-Note the ID that is returned for the new resource management private link. You'll use it for creating the private link association.
-
-## Create private link association
-
-To create the private link association, use:
-
-```http
-PUT
-https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupId}/providers/Microsoft.Authorization/privateLinkAssociations/{GUID}?api-version=2020-05-01
-```
-
-In the request body, include:
-
-```json
-{
- "properties": {
- "privateLink": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}",
- "publicNetworkAccess": "enabled"
- }
-}
-```
-
-The operation returns:
-
-```json
-{
- "id": {plaResourceId},
- "name": {plaName},
- "properties": {
- "privateLink": {rmplResourceId},
- "publicNetworkAccess": "Enabled",
- "tenantId": "{tenantId}",
- "scope": "/providers/Microsoft.Management/managementGroups/{managementGroupId}"
- },
- "type": "Microsoft.Authorization/privateLinkAssociations"
-}
-```
-
-## Add private endpoint
-
-This article assumes you already have a virtual network. In the subnet that will be used for the private endpoint, you must turn off private endpoint network policies. If you haven't turned off private endpoint network policies, see [Disable network policies for private endpoints](../../private-link/disable-private-endpoint-network-policy.md).
-
-To create a private endpoint, use the following operation:
-
-```http
-PUT
-https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/privateEndpoints/{privateEndpointName}?api-version=2020-11-01
-```
-
-In the request body, set the `privateServiceLinkId` to the ID from your resource management private link. The `groupIds` must contain `ResourceManagement`. The location of the private endpoint must be the same as the location of the subnet.
-
-```json
-{
- "location": "westus2",
- "properties": {
- "privateLinkServiceConnections": [
- {
- "name": "{connection-name}",
- "properties": {
- "privateLinkServiceId": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{name}",
- "groupIds": [
- "ResourceManagement"
- ]
- }
- }
- ],
- "subnet": {
- "id": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Network/virtualNetworks/{vnet-name}/subnets/{subnet-name}"
- }
- }
-}
-```
-
-The next step varies depending whether you're using automatic or manual approval. For more information about approval, see [Access to a private link resource using approval workflow](../../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow).
-
-The response includes approval state.
-
-```json
-"privateLinkServiceConnectionState": {
- "actionsRequired": "None",
- "description": "",
- "status": "Approved"
-},
-```
-
-If your request is automatically approved, you can continue to the next section. If your request requires manual approval, wait for the network admin to approve your private endpoint connection.
-
-## Next steps
-
-To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
azure-resource-manager Manage Private Link Access Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-private-link-access-commands.md
+
+ Title: Manage resource management private links
+description: Use APIs to manage existing resource management private links
+ Last updated : 06/16/2022++
+# Manage resource management private links
+
+This article explains how to work with existing resource management private links. It shows API operations for getting and deleting existing resources.
+
+If you need to create a resource management private link, see [Use portal to create private link for managing Azure resources](create-private-link-access-portal.md) or [Use APIs to create private link for managing Azure resources](create-private-link-access-commands.md).
+
+## Resource management private links
+
+To **get a specific** resource management private link, send the following request:
+
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az resourcemanagement private-link show --resource-group PrivateLinkTestRG --name NewRMPL
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ Get-AzResourceManagementPrivateLink -ResourceGroupName PrivateLinkTestRG -Name NewRMPL
+ ```
+
+# [REST](#tab/REST)
+ REST call
+ ```http
+ GET https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
+ ```
+
+The operation returns:
+
+```json
+{
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "id": {rmplResourceId},
+ "name": {rmplName},
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
+ "location": {region}
+}
+```
+++
+To **get all** resource management private links in a subscription, use:
+# [Azure CLI](#tab/azure-cli)
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az resourcemanagement private-link list
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ Get-AzResourceManagementPrivateLink
+ ```
+
+# [REST](#tab/REST)
+ REST call
+ ```http
+ GET
+ https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.Authorization/resourceManagementPrivateLinks?api-version=2020-05-01
+ ```
+
+ The operation returns:
+
+ ```json
+ [
+ {
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "id": {rmplResourceId},
+ "name": {rmplName},
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
+ "location": {region}
+ },
+ {
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "id": {rmplResourceId},
+ "name": {rmplName},
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
+ "location": {region}
+ }
+ ]
+ ```
+++
+To **delete a specific** resource management private link, use:
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az resourcemanagement private-link delete --resource-group PrivateLinkTestRG --name NewRMPL
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ Remove-AzResourceManagementPrivateLink -ResourceGroupName PrivateLinkTestRG -Name NewRMPL
+ ```
+
+# [REST](#tab/REST)
+ REST call
+ ```http
+ DELETE
+ https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
+ ```
+
+ The operation returns: `Status 200 OK`.
+++
+## Private link association
+
+To **get a specific** private link association for a management group, use:
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az private-link association show --management-group-id fc096d27-0434-4460-a3ea-110df0422a2d --name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ Get-AzPrivateLinkAssociation -ManagementGroupId fc096d27-0434-4460-a3ea-110df0422a2d -Name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 | fl
+ ```
+
+# [REST](#tab/REST)
+ REST call
+ ```http
+ GET
+ https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations?api-version=2020-05-01
+ ```
+
+ The operation returns:
+
+ ```json
+ {
+ "value": [
+ {
+ "properties": {
+ "privateLink": {rmplResourceID},
+ "tenantId": {tenantId},
+ "scope": "/providers/Microsoft.Management/managementGroups/{managementGroupId}"
+ },
+ "id": {plaResourceId},
+ "type": "Microsoft.Authorization/privateLinkAssociations",
+ "name": {plaName}
+ }
+ ]
+ }
+ ```
+++
+To **delete** a private link association, use:
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az private-link association delete --management-group-id 24f15700-370c-45bc-86a7-aee1b0c4eb8a --name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ Remove-AzPrivateLinkAssociation -ManagementGroupId 24f15700-370c-45bc-86a7-aee1b0c4eb8a -Name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4
+ ```
+
+# [REST](#tab/REST)
+ REST call
+
+ ```http
+ DELETE
+ https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations/{plaID}?api-version=2020-05-01
+ ```
+
+The operation returns: `Status 200 OK`.
++++
+## Next steps
+
+* To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
+* To manage your private endpoints, see [Manage Private Endpoints](../../private-link/manage-private-endpoint.md).
+* To create a resource management private link, see [Use portal to create private link for managing Azure resources](create-private-link-access-portal.md) or [Use REST API to create private link for managing Azure resources](create-private-link-access-rest.md).
azure-resource-manager Manage Private Link Access Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-private-link-access-rest.md
- Title: Manage resource management private links
-description: Use REST API to manage existing resource management private links
- Previously updated : 07/29/2021--
-# Manage resource management private links with REST API (preview)
-
-This article explains how you to work with existing resource management private links. It shows REST API operations for getting and deleting existing resources.
-
-If you need to create a resource management private link, see [Use portal to create private link for managing Azure resources](create-private-link-access-portal.md) or [Use REST API to create private link for managing Azure resources](create-private-link-access-rest.md).
-
-## Resource management private links
-
-To **get a specific** resource management private link, send the following request:
-
-```http
-GET
-https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
-```
-
-The operation returns:
-
-```json
-{
- "properties": {
- "privateEndpointConnections": []
- },
- "id": {rmplResourceId},
- "name": {rmplName},
- "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
- "location": {region}
-}
-```
-
-To **get all** resource management private links in a subscription, use:
-
-```http
-GET
-https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.Authorization/resourceManagementPrivateLinks?api-version=2020-05-01
-```
-
-The operation returns:
-
-```json
-[
- {
- "properties": {
- "privateEndpointConnections": []
- },
- "id": {rmplResourceId},
- "name": {rmplName},
- "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
- "location": {region}
- },
- {
- "properties": {
- "privateEndpointConnections": []
- },
- "id": {rmplResourceId},
- "name": {rmplName},
- "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
- "location": {region}
- }
-]
-```
-
-To **delete a specific** resource management private link, use:
-
-```http
-DELETE
-https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
-```
-
-The operation returns: `Status 200 OK`.
-
-## Private link association
-
-To **get a specific** private link association for a management group, use:
-
-```http
-GET
-https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations?api-version=2020-05-01
-```
-
-The operation returns:
-
-```json
-{
- "value": [
- {
- "properties": {
- "privateLink": {rmplResourceID},
- "tenantId": {tenantId},
- "scope": "/providers/Microsoft.Management/managementGroups/{managementGroupId}"
- },
- "id": {plaResourceId},
- "type": "Microsoft.Authorization/privateLinkAssociations",
- "name": {plaName}
- }
- ]
-}
-```
-
-To **delete** a private link association, use:
-
-```http
-DELETE
-https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations/{plaID}?api-version=2020-05-01
-```
-
-The operation returns: `Status 200 OK`.
-
-## Private endpoints
-
-To **get all** private endpoints in a subscription, use:
-
-```http
-GET
-https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/privateEndpoints?api-version=2020-04-01
-```
-
-The operation returns:
-
-```json
-{
- "value": [
- {
- "name": {privateEndpointName},
- "id": {privateEndpointResourceId},
- "etag": {etag},
- "type": "Microsoft.Network/privateEndpoints",
- "location": {region},
- "properties": {
- "provisioningState": "Updating",
- "resourceGuid": {GUID},
- "privateLinkServiceConnections": [
- {
- "name": {connectionName},
- "id": {connectionResourceId},
- "etag": {etag},
- "properties": {
- "provisioningState": "Succeeded",
- "privateLinkServiceId": {rmplResourceId},
- "groupIds": [
- "ResourceManagement"
- ],
- "privateLinkServiceConnectionState": {
- "status": "Approved",
- "description": "",
- "actionsRequired": "None"
- }
- },
- "type": "Microsoft.Network/privateEndpoints/privateLinkServiceConnections"
- }
- ],
- "manualPrivateLinkServiceConnections": [],
- "subnet": {
- "id": {subnetResourceId}
- },
- "networkInterfaces": [
- {
- "id": {networkInterfaceResourceId}
- }
- ],
- "customDnsConfigs": [
- {
- "fqdn": "management.azure.com",
- "ipAddresses": [
- "10.0.0.4"
- ]
- }
- ]
- }
- }
- ]
-}
-```
-
-## Next steps
-
-* To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
-* To create a resource management private links, see [Use portal to create private link for managing Azure resources](create-private-link-access-portal.md) or [Use REST API to create private link for managing Azure resources](create-private-link-access-rest.md).
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Azure Video Indexer is now part of [Network Service Tags](network-security.md).
### Celebrity recognition toggle
-You can now enable or disable the celebrity recognition model on the account level (on classic account only). To turn on or off the model, go to the account settings > and toggle on/off the model. Once you disable the model, Video Indexer insights will not include the output of celebrity model and will not run the celebrity model pipeline.
+You can now enable or disable the celebrity recognition model on the account level (on classic account only). To turn on or off the model, go to the **Model customization** > toggle on/off the model. Once you disable the model, Video Indexer insights will not include the output of the celebrity model and will not run the celebrity model pipeline.
+ ### Azure Video Indexer repository name
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 6/21/2022 Last updated : 6/24/2022
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 22-06 | [5014692] | Latest Cumulative Update(LCU) | 6.44 | Jun 14, 2022 | | Rel 22-06 | [5014678] | Latest Cumulative Update(LCU) | 7.12 | Jun 14, 2022 | | Rel 22-06 | [5014702] | Latest Cumulative Update(LCU) | 5.68 | Jun 14, 2022 |
+| Rel 22-06 | [5011486] | IE Cumulative Updates | 2.124, 3.111, 4.104 | Mar 8, 2022 |
| Rel 22-06 | [5013641] | . NET Framework 3.5 and 4.7.2 Cumulative Update | 6.45 | May 10, 2022 | | Rel 22-06 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | 7.13 | May 10, 2022 | | Rel 22-06 | [5014026] | Servicing Stack update | 5.69 | May 10, 2022 | | Rel 22-06 | [4494175] | Microcode | 5.69 | Sep 1, 2020 | | Rel 22-06 | [4494174] | Microcode | 6.45 | Sep 1, 2020 |
+| Rel 22-06 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | 2.124 | Jun 14, 2022 |
+| Rel 22-06 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup | 2.124 | May 10, 2022 |
+| Rel 22-06 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | 4.104 | Jun 14, 2022 |
+| Rel 22-06 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup | 4.104 | May 10, 2022 |
+| Rel 22-06 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | 3.111 | Jun 14, 2022 |
+| Rel 22-06 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup | 3.111 | May 10, 2022 |
+| Rel 22-06 | [5014748] | Monthly Rollup | 2.124 | Jun 14, 2022 |
+| Rel 22-06 | [5014747] | Monthly Rollup | 3.111 | Jun 14, 2022 |
+| Rel 22-06 | [5014738] | Monthly Rollup | 4.104 | Jun 14, 2022 |
+| Rel 22-06 | [5014027] | Servicing Stack update | 3.111 | May 10, 2022 |
+| Rel 22-06 | [5014025] | Servicing Stack update | 4.104 | May 10, 2022 |
+| Rel 22-06 | [4578013] | Standalone Security Update | 4.104 | Aug 19, 2020 |
+| Rel 22-06 | [5011649] | Servicing Stack update | 2.124 | Mar 8, 2022 |
[5014692]: https://support.microsoft.com/kb/5014692 [5014678]: https://support.microsoft.com/kb/5014678
The following tables show the Microsoft Security Response Center (MSRC) updates
[5014026]: https://support.microsoft.com/kb/5014026 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174
+[5011486]: https://support.microsoft.com/kb/5011486
+[5013637]: https://support.microsoft.com/kb/5013637
+[5013644]: https://support.microsoft.com/kb/5013644
+[5013638]: https://support.microsoft.com/kb/5013638
+[5013643]: https://support.microsoft.com/kb/5013643
+[5013635]: https://support.microsoft.com/kb/5013635
+[5013642]: https://support.microsoft.com/kb/5013642
+[5014748]: https://support.microsoft.com/kb/5014748
+[5014747]: https://support.microsoft.com/kb/5014747
+[5014738]: https://support.microsoft.com/kb/5014738
+[5014027]: https://support.microsoft.com/kb/5014027
+[5014025]: https://support.microsoft.com/kb/5014025
+[4578013]: https://support.microsoft.com/kb/4578013
+[5011649]: https://support.microsoft.com/kb/5011649
## May 2022 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 6/21/2022 Last updated : 6/22/2022 # Azure Guest OS releases and SDK compatibility matrix
The September Guest OS has released.
## Family 4 releases **Windows Server 2012 R2**
-.NET Framework installed: 3.5, 4.5.1, 4.5.2
+.NET Framework installed: 3.5, 4.6.2
| Configuration string | Release date | Disable date | | | | |
The September Guest OS has released.
## Family 3 releases **Windows Server 2012**
-.NET Framework installed: 3.5, 4.5
+.NET Framework installed: 3.5, 4.6.2
| Configuration string | Release date | Disable date | | | | |
The September Guest OS has released.
## Family 2 releases **Windows Server 2008 R2 SP1**
-.NET Framework installed: 3.5 (includes 2.0 and 3.0), 4.5
+.NET Framework installed: 3.5 (includes 2.0 and 3.0), 4.6.2
| Configuration string | Release date | Disable date | | | | |
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
The Speech service is available in these regions for speech-to-text, pronunciati
If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can use the [Speech-to-text REST API v3.0](rest-speech-to-text.md) to [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region. > [!TIP]
-> For pronunciation assessment, `en-US` and `en-GB` are available in all regions listed above, `zh-CN` is available in East Asia and Southeast Asia regions, `es-ES` and `fr-FR` are available in West Europe region, and `en-AU` is available in Australia East region.
+> For pronunciation assessment, `en-US` and `en-GB` are available in all regions listed above, `zh-CN` is available in East Asia and Southeast Asia regions, `de-DE`, `es-ES`, and `fr-FR` are available in West Europe region, and `en-AU` is available in Australia East region.
### Intent recognition
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
Title: Overview about built-in connectors in Azure Logic Apps
-description: Learn about built-in connectors that run natively to create automated integration workflows in Azure Logic Apps.
+ Title: Built-in connector overview
+description: Learn about built-in connectors that run natively in Azure Logic Apps.
ms.suite: integration
This article provides a general overview about built-in connectors in Consumptio
## Built-in connectors in Consumption versus Standard
-The following table lists the current and expanding galleries of built-in connectors available for Consumption versus Standard logic app workflows. An asterisk (**\***) marks [service provider-based built-in connectors](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation).
+The following table lists the current and expanding galleries of built-in connectors available for Consumption versus Standard logic app workflows. For Standard workflows, an asterisk (**\***) marks [built-in connectors based on the *service provider* model](#service-provider-interface-implementation), which is described in more detail later.
| Consumption | Standard | |-|-| | Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | Azure Blob* <br>Azure Cosmos DB* <br>Azure Functions <br>Azure Queue* <br>Azure Table Storage* <br>Control <br>Data Operations <br>Date Time <br>DB2* <br>Event Hubs* <br>Flat File <br>FTP* <br>HTTP <br>IBM Host File* <br>Inline Code <br>Liquid operations <br>MQ* <br>Request <br>Schedule <br>Service Bus* <br>SFTP* <br>SQL Server* <br>Variables <br>Workflow operations <br>XML operations | |||
+<a name="service-provider-interface-implementation"></a>
+
+## Service provider-based built-in connectors
+
+In Standard logic app workflows, a built-in connector that has the following attributes is informally known as a *service provider*:
+
+* Is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md).
+
+* Provides access from a Standard logic app workflow to a service, such as Azure Blob Storage, Azure Service Bus, Azure Event Hubs, SFTP, and SQL Server.
+
+ Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity.
+
+* Runs in the same process as the redesigned Azure Logic Apps runtime.
+
+These service provider-based built-in connectors are available alongside their [managed connector versions](managed.md).
+
+In contrast, a built-in connector that's *not a service provider* has the following attributes:
+
+* Isn't based on the Azure Functions extensibility model.
+
+* Is directly implemented as a job within the Azure Logic Apps runtime, such as Schedule, HTTP, Request, and XML operations.
+ <a name="custom-built-in"></a> ## Custom built-in connectors
-For Standard logic apps, if a built-connector isn't available for your scenario, you can create your own built-in connector. You can use the same [*service provider interface implementation*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation) that's used by service provider-based built-in connectors, such as SQL Server, Service Bus, Blob Storage, Event Hubs, and Blob Storage. This interface implementation is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and provides the capability for you to create custom built-in connectors that anyone can use in Standard logic apps.
+For Standard logic apps, you can create your own built-in connector with the same [built-in connector extensibility model](../logic-apps/custom-connector-overview.md#built-in-connector-extensibility-model) that's used by service provider-based built-in connectors, such as Azure Blob, Azure Event Hubs, Azure Service Bus, SQL Server, and more. This interface implementation is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and provides the capability for you to create custom built-in connectors that anyone can use in Standard logic apps.
+
+For Consumption logic apps, you can't create your own built-in connectors, but you can create your own managed connectors.
For more information, review the following documentation:
-* [Custom connectors for Standard logic apps](../logic-apps/custom-connector-overview.md#custom-connector-standard)
+* [Custom connectors in Azure Logic Apps](../logic-apps/custom-connector-overview.md#custom-connector-standard)
* [Create custom built-in connectors for Standard logic apps](../logic-apps/create-custom-built-in-connector-standard.md) <a name="general-built-in"></a>
You can use the following built-in connectors to perform general tasks, for exam
:::row-end::: :::row::: :::column:::
- [![FTP icon][ftp-icon]][ftp-doc]
+ ![FTP icon][ftp-icon]
\ \
- [**FTP**][ftp-doc]<br>(*Standard logic app only*)
+ **FTP**<br>(*Standard logic app only*)
\ \ Connect to FTP or FTPS servers you can access from the internet so that you can work with your files and folders. :::column-end::: :::column:::
- [![SFTP-SSH icon][sftp-ssh-icon]][sftp-ssh-doc]
+ ![SFTP-SSH icon][sftp-ssh-icon]
\ \
- [**SFTP-SSH**][sftp-ssh-doc]<br>(*Standard logic app only*)
+ **SFTP-SSH**<br>(*Standard logic app only*)
\ \ Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders.
You can use the following built-in connectors to perform general tasks, for exam
<a name="service-built-in"></a>
-## Service-based built-in connectors
+## Built-in connectors for specific services and systems
-Connectors for some services provide both built-in connectors and managed connectors, which might differ across these versions.
+You can use the following built-in connectors to access specific services and systems. In Standard logic app workflows, some of these built-in connectors are also informally known as *service providers*, which can differ from their managed connector counterparts in some ways.
:::row::: :::column:::
Connectors for some services provide both built-in connectors and managed connec
When Swagger is included, the triggers and actions defined by these apps appear like any other first-class triggers and actions in Azure Logic Apps. :::column-end::: :::column:::
- [![Azure Blob icon icon][azure-blob-storage-icon]][azure-app-services-doc]
+ ![Azure Blob icon][azure-blob-storage-icon]
\ \
- [**Azure Blob**][azure-blob-storage-doc]<br>(*Standard logic app only*)
+ **Azure Blob**<br>(*Standard logic app only*)
\ \ Connect to your Azure Blob Storage account so you can create and manage blob content. :::column-end::: :::column:::
- [![Azure Cosmos DB icon][azure-cosmos-db-icon]][azure-cosmos-db-doc]
+ ![Azure Cosmos DB icon][azure-cosmos-db-icon]
\ \
- [**Azure Cosmos DB**][azure-cosmos-db-doc]<br>(*Standard logic app only*)
+ **Azure Cosmos DB**<br>(*Standard logic app only*)
\ \ Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents. :::column-end::: :::column:::
- [![Azure Functions icon][azure-functions-icon]][azure-functions-doc]
+ ![Azure Event Hubs icon][azure-event-hubs-icon]
\ \
- [**Azure Functions**][azure-functions-doc]
+ **Azure Event Hubs**<br>(*Standard logic app only*)
\ \
- Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow.
+ Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
:::column-end::: :::row-end::: :::row::: :::column:::
- [![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc]
+ [![Azure Functions icon][azure-functions-icon]][azure-functions-doc]
\ \
- [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption logic app*) <br><br>-or-<br><br>[**Workflow operations**][nested-logic-app-doc]<br>(*Standard logic app*)
+ [**Azure Functions**][azure-functions-doc]
\ \
- Call other workflows that start with the Request trigger named **When a HTTP request is received**.
+ Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow.
:::column-end::: :::column:::
- [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc]
+ [![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc]
\ \
- [**Azure Service Bus**][azure-service-bus-doc]<br>(*Standard logic app only*)
+ [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption logic app*) <br><br>-or-<br><br>**Workflow operations**<br>(*Standard logic app*)
\ \
- Manage asynchronous messages, queues, sessions, topics, and topic subscriptions.
+ Call other workflows that start with the Request trigger named **When a HTTP request is received**.
:::column-end::: :::column:::
- [![Azure Table Storage icon][azure-table-storage-icon]][azure-table-storage-doc]
+ ![Azure Service Bus icon][azure-service-bus-icon]
\ \
- [**Azure Table Storage**][azure-table-storage-doc]<br>(*Standard logic app only*)
+ **Azure Service Bus**<br>(*Standard logic app only*)
\ \
- Connect to your Azure Storage account so that you can create, update, query, and manage tables.
+ Manage asynchronous messages, queues, sessions, topics, and topic subscriptions.
:::column-end::: :::column:::
- [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
+ ![Azure Table Storage icon][azure-table-storage-icon]
\ \
- [**Event Hubs**][azure-event-hubs-doc]<br>(*Standard logic app only*)
+ **Azure Table Storage**<br>(*Standard logic app only*)
\ \
- Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
+ Connect to your Azure Storage account so that you can create, update, query, and manage tables.
:::column-end::: :::column:::
- [![IBM DB2 icon][ibm-db2-icon]][ibm-db2-doc]
+ ![IBM DB2 icon][ibm-db2-icon]
\ \
- [**DB2**][ibm-db2-doc]<br>(*Standard logic app only*)
+ **DB2**<br>(*Standard logic app only*)
\ \ Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more.
Connectors for some services provide both built-in connectors and managed connec
Connect to IBM Host File and generate or parse contents. :::column-end::: :::column:::
- [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]
+ ![IBM MQ icon][ibm-mq-icon]
\ \
- [**MQ**][ibm-mq-doc]<br>(*Standard logic app only*)
+ **IBM MQ**<br>(*Standard logic app only*)
\ \ Connect to IBM MQ on-premises or in Azure to send and receive messages.
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
When allocating resources, the total amount of CPUs and memory requested for all
## Multiple containers
-You can define multiple containers in a single container app. The containers in a container app share hard disk and network resources and experience the same [application lifecycle](application-lifecycle-management.md).
+You can define multiple containers in a single container app to implement the [sidecar pattern](/azure/architecture/patterns/sidecar). The containers in a container app share hard disk and network resources and experience the same [application lifecycle](./application-lifecycle-management.md).
-To run multiple containers in a container app, add more than one container in the `containers` array of the container app template.
+Examples of sidecar containers include:
-Reasons to run containers together in a container app include:
+- An agent that reads logs from the primary app container on a [shared volume](storage-mounts.md?pivots=aca-cli#temporary-storage) and forwards them to a logging service.
+- A background process that refreshes a cache used by the primary app container in a shared volume.
-- Use a container as a sidecar to your primary app.-- Share disk space and the same virtual network.-- Share scale rules among containers.-- Group multiple containers that need to always run together.-- Enable direct communication among containers.
+> [!NOTE]
+> Running multiple containers in a single container app is an advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled. In most situations where you want to run multiple containers, such as when implementing a microservice architecture, deploy each service as a separate container app.
+
+To run multiple containers in a container app, add more than one container in the `containers` array of the container app template.
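As an illustration only, the following sketch shows what a two-container app could look like when deployed with the Azure CLI from a YAML template file. The resource group, environment ID, app name, and image references are placeholders, and the YAML shape follows the commonly documented Container Apps template schema; adapt it to your own app.

```azurecli
# Illustrative sketch only: placeholder names, images, and environment ID.
# app.yaml defines two entries in the template's containers array (primary app plus a log-forwarding sidecar).
cat > app.yaml <<'EOF'
properties:
  managedEnvironmentId: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/my-rg/providers/Microsoft.App/managedEnvironments/my-environment
  configuration:
    ingress:
      external: true
      targetPort: 80
  template:
    containers:
      - name: main-app
        image: mcr.microsoft.com/azuredocs/containerapps-helloworld:latest
        resources:
          cpu: 0.5
          memory: 1Gi
      - name: log-forwarder
        # Hypothetical sidecar image that reads logs from a shared volume.
        image: <REGISTRY>/log-forwarder:latest
        resources:
          cpu: 0.25
          memory: 0.5Gi
EOF

az containerapp create \
  --name my-container-app \
  --resource-group my-rg \
  --yaml app.yaml
```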
## Container registries
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following example shows you how to create a Container Apps environment in an
<!-- Create --> [!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]
+> [!NOTE]
+> The network address prefix requires a CIDR range of `/23`.
+ 7. Select the **Networking** tab to create a VNET. 8. Select **Yes** next to *Use your own virtual network*. 9. Next to the *Virtual network* box, select the **Create new** link and enter the following value.
$VNET_NAME="my-custom-vnet"
Now create an Azure virtual network to associate with the Container Apps environment. The virtual network must have a subnet available for the environment deployment. > [!NOTE]
-> You can use an existing virtual network, but a dedicated subnet is required for use with Container Apps.
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` is required for use with Container Apps.
# [Bash](#tab/bash)
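As a minimal sketch of that requirement, the dedicated subnet can be created with a `/23` prefix when you create the virtual network. The resource group name, location, and address ranges below are placeholder values you'd replace with your own.

```azurecli
# Sketch only: replace the names, location, and address ranges with your own values.
az network vnet create \
  --resource-group my-resource-group \
  --name my-custom-vnet \
  --location eastus \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name infrastructure-subnet \
  --subnet-prefixes 10.0.0.0/23   # Container Apps needs a dedicated /23 subnet
```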
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Synapse Link isn't recommended if you're looking for traditional data warehouse
## Limitations
-* Azure Synapse Link for Azure Cosmos DB is supported for SQL API and Azure Cosmos DB API for MongoDB. It isn't supported for Gremlin API, Cassandra API, and Table API.
+* Azure Synapse Link for Azure Cosmos DB isn't supported for Gremlin API, Cassandra API, and Table API. It's supported for SQL API and the API for MongoDB.
* Accessing the Azure Cosmos DB analytics store with Azure Synapse Dedicated SQL Pool currently isn't supported.
-* Enabling Synapse Link on existing Cosmos DB containers is only supported for SQL API accounts. Synapse Link can be enabled on new containers for both SQL API and MongoDB API accounts.
-
-* Backup and restore of your data in analytical store isn't supported at this time. You can recreate your analytical store data in some scenarios as below:
- * Azure Synapse Link and periodic backup mode can coexist in the same database account. In this mode, your transactional store data will be automatically backed up. However, analytical store data isn't included in backups and restores. If you use `transactional TTL` equal or bigger than your `analytical TTL` on your container, you can
- fully recreate your analytical store data by enabling analytical store on the restored container. Please note, at present, you can only recreate analytical store on your
- restored containers for SQL API.
- * Synapse Link and continuous backup mode (point=in-time restore) coexistence in the same database account isn't supported. If you enable continuous backup mode, you can't
- turn on Synapse Link, and vice versa.
-
-* Role-Based Access (RBAC) isn't supported when querying using Synapse SQL serverless pools.
+* Enabling Synapse Link on existing Cosmos DB containers is only supported for SQL API accounts. Synapse Link can be enabled on new containers for both SQL API and MongoDB API accounts.
+
+* Backup and restore:
+  * You can recreate your analytical store data in the scenarios below. With periodic backups, your transactional store data is automatically backed up. If `transactional TTL` is equal to or greater than your `analytical TTL` on your container, you can fully recreate your analytical store data by enabling analytical store on the restored container:
+ - Azure Synapse Link can be enabled on accounts configured with periodic backups.
+ - If continuous backup (point-in-time restore) is enabled on your account, you can now restore your analytical data. To enable Synapse Link for such accounts, please reach out to cosmosdbsynapselink@microsoft.com. This is applicable only for SQL API.
+  * Restoring analytical data isn't supported in the following scenarios, for SQL API and the API for MongoDB:
+    - If you already enabled Synapse Link on your database account, you can't enable point-in-time restore on that account.
+ - If `analytical TTL` is greater than `transactional TTL`, data that only exists in analytical store cannot be restored. You can continue to access full data from analytical store in the parent region.
+
+* Granular role-based access control (RBAC) isn't supported when querying from Synapse. Users who have access to your Synapse workspace and to the Cosmos DB account can access all containers within that account. We currently don't support more granular access to the containers.
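As a hedged illustration of the backup guidance above, re-enabling the analytical store on a restored SQL API container comes down to setting its analytical TTL. The account, database, and container names below are placeholders.

```azurecli
# Sketch only: placeholder account, database, and container names.
# -1 keeps analytical store data without expiry; use a positive value (in seconds) for a finite TTL.
az cosmosdb sql container update \
  --resource-group my-resource-group \
  --account-name my-restored-account \
  --database-name my-database \
  --name my-container \
  --analytical-storage-ttl -1
```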
## Security
data-factory Format Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-parquet.md
Previously updated : 03/25/2022 Last updated : 06/22/2022
For a list of supported features for all available connectors, visit the [Connec
## Using Self-hosted Integration Runtime > [!IMPORTANT]
-> For copy empowered by Self-hosted Integration Runtime e.g. between on-premises and cloud data stores, if you are not copying Parquet files **as-is**, you need to install the **64-bit JRE 8 (Java Runtime Environment) or OpenJDK** and **Microsoft Visual C++ 2010 Redistributable Package** on your IR machine. Check the following paragraph with more details.
+> For copy empowered by Self-hosted Integration Runtime, for example between on-premises and cloud data stores, if you are not copying Parquet files **as-is**, you need to install the **64-bit JRE 8 (Java Runtime Environment) or OpenJDK** on your IR machine. See the following paragraph for more details.
For copy running on Self-hosted IR with Parquet file serialization/deserialization, the service locates the Java runtime by firstly checking the registry *`(SOFTWARE\JavaSoft\Java Runtime Environment\{Current Version}\JavaHome)`* for JRE, if not found, secondly checking system variable *`JAVA_HOME`* for OpenJDK. - **To use JRE**: The 64-bit IR requires 64-bit JRE. You can find it from [here](https://go.microsoft.com/fwlink/?LinkId=808605).-- **To use OpenJDK**: It's supported since IR version 3.13. Package the jvm.dll with all other required assemblies of OpenJDK into Self-hosted IR machine, and set system environment variable JAVA_HOME accordingly.-- **To install Visual C++ 2010 Redistributable Package**: Visual C++ 2010 Redistributable Package is not installed with self-hosted IR installations. You can find it from [here](https://www.microsoft.com/download/details.aspx?id=26999).
+- **To use OpenJDK**: It's supported since IR version 3.13. Package the jvm.dll with all other required assemblies of OpenJDK onto the Self-hosted IR machine, set the system environment variable JAVA_HOME accordingly, and then restart the Self-hosted IR for the change to take effect immediately.
> [!TIP] > If you copy data to/from Parquet format using Self-hosted Integration Runtime and hit error saying "An error occurred when invoking java, message: **java.lang.OutOfMemoryError:Java heap space**", you can add an environment variable `_JAVA_OPTIONS` in the machine that hosts the Self-hosted IR to adjust the min/max heap size for JVM to empower such copy, then rerun the pipeline.
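For example, a common way to do this on the Windows machine that hosts the self-hosted IR is to set the variable machine-wide from an elevated command prompt. The heap sizes below are illustrative; tune them to the memory available on that machine.

```cmd
REM Illustrative values only: adjust -Xms/-Xmx to the memory available on the self-hosted IR machine.
REM /M sets a machine-wide variable; restart the self-hosted IR service afterwards so the new setting is picked up.
setx _JAVA_OPTIONS "-Xms256m -Xmx16g" /M
```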
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
Previously updated : 06/16/2022 Last updated : 06/24/2022 # Azure Data Factory managed virtual network
Only a managed private endpoint in an approved state can send traffic to a speci
## Interactive authoring
-Interactive authoring capabilities are used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when you create or edit an integration runtime in a Data Factory managed virtual network. The back-end service preallocates the compute for interactive authoring functionalities. Otherwise, the compute is allocated every time any interactive operation is performed, which takes more time.
-
-The Time-To-Live (TTL) for interactive authoring is 60 minutes. This means it will be automatically disabled 60 minutes after the last interactive authoring operation.
+Interactive authoring capabilities are used for functionality such as test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when you create or edit an Azure integration runtime that's in an Azure Data Factory managed virtual network. The back-end service preallocates compute for interactive authoring functionality. Otherwise, the compute is allocated every time an interactive operation is performed, which takes more time. The time to live (TTL) for interactive authoring is 60 minutes by default, which means it's automatically disabled 60 minutes after the last interactive authoring operation. You can change the TTL value according to your needs.
:::image type="content" source="./media/managed-vnet/interactive-authoring.png" alt-text="Screenshot that shows interactive authoring.":::
-## Activity execution time using a managed virtual network
+## Time to live
+
+### Copy activity
-By design, an integration runtime in a managed virtual network takes longer queue time than a global integration runtime. One compute node isn't reserved per data factory, so warm-up is required before each activity starts. Warm-up occurs primarily on the virtual network join rather than the integration runtime.
+By default, every copy activity spins up new compute based on the configuration in the copy activity. With a managed virtual network enabled, cold compute start-up takes a few minutes, and data movement can't start until it's complete. If your pipelines contain multiple sequential copy activities, or you have many copy activities in a foreach loop and can't run them all in parallel, you can enable a time to live (TTL) value in the Azure integration runtime configuration. Specifying a TTL value and the DIU count required for the copy activity keeps the corresponding compute alive for a certain period after the activity completes. If a new copy activity starts during the TTL period, it reuses the existing compute and start-up time is greatly reduced. After the second copy activity completes, the compute again stays alive for the TTL period.
+
+> [!NOTE]
+> Reconfiguring the DIU number will not affect the current copy activity execution.
-For non-Copy activities, including pipeline activity and external activity, there's a 60-minute TTL when you trigger them the first time. Within TTL, the queue time is shorter because the node is already warmed up.
+### Pipeline and external activity
+
+Unlike the copy activity, pipeline and external activities have a default time to live (TTL) of 60 minutes. You can change the default TTL in the Azure integration runtime configuration according to your needs, but disabling the TTL isn't supported.
+
+> [!NOTE]
+> Time to live (TTL) is only applicable to managed virtual network.
-The Copy activity doesn't have TTL support yet.
> [!NOTE] > The data integration unit (DIU) measure of 2 DIU isn't supported for the Copy activity in a managed virtual network.
data-factory Data Factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-movement-security-considerations.md
The following cloud data stores require approving of IP address of the gateway m
**Answer:** We do not support this feature yet. We are actively working on it. **Question:** What are the port requirements for the gateway to work?
-**Answer:** Gateway makes HTTP-based connections to open internet. The **outbound ports 443 and 80** must be opened for gateway to make this connection. Open **Inbound Port 8050** only at the machine level (not at corporate firewall level) for Credential Manager application. If Azure SQL Database or Azure Synapse Analytics is used as source/ destination, then you need to open **1433** port as well. For more information, see [Firewall configurations and filtering IP addresses](#firewall-configurations-and-filtering-ip-address-of-gateway) section.
+**Answer:** Gateway makes HTTP-based connections to open internet. The **outbound ports 443 and 80** must be opened for gateway to make this connection. Open **inbound port 8050** only at the machine level (not at corporate firewall level) for Credential Manager application. If Azure SQL Database or Azure Synapse Analytics is used as source or destination, then you need to open **port 1433** as well. For more information, see [Firewall configurations and filtering IP addresses](#firewall-configurations-and-filtering-ip-address-of-gateway) section.
**Question:** What are certificate requirements for Gateway? **Answer:** Current gateway requires a certificate that is used by the credential manager application for securely setting data store credentials. This certificate is a self-signed certificate created and configured by the gateway setup. You can use your own TLS/SSL certificate instead. For more information, see [click-once credential manager application](#click-once-credentials-manager-app) section. ## Next steps
-For information about performance of copy activity, see [Copy activity performance and tuning guide](data-factory-copy-activity-performance.md).
+For information about performance of copy activity, see [Copy activity performance and tuning guide](data-factory-copy-activity-performance.md).
databox-online Azure Stack Edge Deploy Nvidia Deepstream Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-nvidia-deepstream-module.md
+
+ Title: Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU | Microsoft Docs
+description: Learn how to deploy the Nvidia Deepstream module on an Ubuntu virtual machine that is running on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 06/23/2022+++
+# Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU
++
+This article walks you through deploying Nvidia's DeepStream module on an Ubuntu VM running on your Azure Stack Edge device. The DeepStream module is supported only on GPU devices.
+
+## Prerequisites
+
+Before you begin, make sure you have:
+
+- Deployed an IoT Edge runtime on a GPU VM running on an Azure Stack Edge device. For detailed steps, see [Deploy IoT Edge on an Ubuntu VM on Azure Stack Edge](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md).
+
+## Get module from IoT Edge Module Marketplace
+
+1. In the [Azure portal](https://portal.azure.com), go to **Device management** > **IoT Edge**.
+1. Select the IoT Hub device that you configured while deploying the IoT Edge runtime.
+
+ ![Screenshot of the Azure portal, I o T Edge, I o T Hub device.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-select-iot-edge-device.png)
+
+1. Select **Set modules**.
+
+ ![Screenshot of the Azure portal, I o T Hub, set modules page.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-iot-hub-set-module.png)
+
+1. Select **Add** > **Marketplace Module**.
+
+ ![Screenshot of the Azure portal, Marketplace Module, Add Marketplace Module selection.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-add-iot-edge-module.png)
+
+1. Search for **NVIDIA DeepStream SDK 5.1 for x86/AMD64** and then select it.
+
+ ![Screenshot of the Azure portal, I o T Edge Module Marketplace, modules options.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-iot-edge-module-marketplace.png)
+
+1. Select **Review + Create**, and then select **Create module**.
+
+## Verify module runtime status
+
+1. Verify that the module is running.
+
+ ![Screenshot of the Azure portal, modules runtime status.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-verify-module-status.png)
+
+1. Verify that the module provides the following output in the troubleshooting page of the IoT Edge device on IoT Hub:
+
+ ![Screenshot of the Azure portal, NVIDIADeepStreamSDK log file output.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-troubleshoot-iot-edge-module.png)
+
+After a certain period of time, the module runtime will complete and quit, causing the module status to return an error. This error condition is expected behavior.
+
+![Screenshot of the Azure portal, NVIDIADeepStreamSDK module runtime status with error condition.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-add-iot-edge-module-error.png)
databox-online Azure Stack Edge Gpu Deploy Iot Edge Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md
+
+ Title: Deploy IoT Edge runtime on Ubuntu VM on Azure Stack Edge Pro with GPU | Microsoft Docs
+description: Learn how to deploy IoT Edge runtime and run IoT Edge module on an Ubuntu virtual machine that is running on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 06/23/2022+++
+# Deploy IoT Edge on an Ubuntu VM on Azure Stack Edge
++
+This article describes how to deploy an IoT Edge runtime on an Ubuntu VM running on your Azure Stack Edge device. For new development work, use the self-serve deployment method described in this article as it uses the latest software version.
+
+## High-level flow
+
+The high-level flow is as follows:
+
+1. Create or identify the IoT Hub or [Azure IoT Hub Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md) instance.
+1. Use Azure CLI to acquire the Ubuntu 20.04 LTS VM image.
+1. Upload the Ubuntu image onto the Azure Stack Edge VM image library.
+1. Deploy the Ubuntu image as a VM using the following steps:
+ 1. Provide the name of the VM, the username, and the password. Creating another disk is optional.
+ 1. Set up the network configuration.
+ 1. Provide a prepared *cloud-init* script on the *Advanced* tab.
+
+## Prerequisites
+
+Before you begin, make sure you have:
+
+- An Azure Stack Edge device that you've activated. For detailed steps, see [Activate Azure Stack Edge Pro GPU](azure-stack-edge-gpu-deploy-activate.md).
+- Access to the latest Ubuntu 20.04 VM image, either the image from Azure Marketplace or a custom image that you're bringing:
+
+ ```$urn = Canonical:0001-com-ubuntu-server-focal:20_04-lts:20.04.202007160```
+
+ Use steps in [Search for Azure Marketplace images](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#search-for-azure-marketplace-images) to acquire the VM image.
+
+## Prepare the cloud-init script
+
+To deploy the IoT Edge runtime onto the Ubuntu VM, use a *cloud-init* script during the VM deployment.
+
+Use steps in one of the following sections:
+
+- [Prepare the cloud-init script with symmetric key provisioning](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md#use-symmetric-key-provisioning).
+- [Prepare the cloud-init script with IoT Hub DPS](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md#use-dps).
+
+### Use symmetric key provisioning
+
+To connect your device to IoT Hub without DPS, use the steps in this section to prepare a *cloud-init* script for the VM creation *Advanced* page to deploy the IoT Edge runtime and Nvidia's container runtime.
+
+1. Use an existing IoT Hub or create a new Hub. Use these steps to [create an IoT Hub](../iot-hub/iot-hub-create-through-portal.md).
+
+1. Use these steps to [register your Azure Stack Edge device in IoT Hub](../iot-edge/how-to-provision-single-device-linux-symmetric.md#register-your-device).
+
+1. Retrieve the Primary Connection String from IoT Hub for your device, and then paste it into the location below for *DeviceConnectionString*.
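If you prefer the Azure CLI over the portal for registering the device and retrieving its connection string, the following sketch (which assumes the `azure-iot` CLI extension is installed and uses placeholder hub and device names) registers the device as an IoT Edge device and prints the value to paste in as *DeviceConnectionString*.

```azurecli
# Sketch only: requires the azure-iot extension (az extension add --name azure-iot); names are placeholders.
az iot hub device-identity create \
  --hub-name my-iot-hub \
  --device-id my-ase-edge-vm \
  --edge-enabled

# Print the primary connection string to paste into the cloud-init script as DeviceConnectionString.
az iot hub device-identity connection-string show \
  --hub-name my-iot-hub \
  --device-id my-ase-edge-vm \
  --output tsv
```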
+
+**Cloud-init script for symmetric key provisioning**
+
+```azurecli
+
+#cloud-config
+
+runcmd:
+ - dcs="<DeviceConnectionString>"
+ - |
+ set -x
+ (
+
+ # Wait for docker daemon to start
+
+ while [ $(ps -ef | grep -v grep | grep docker | wc -l) -le 0 ]; do
+ sleep 3
+ done
+
+ if [ $(lspci | grep NVIDIA | wc -l) -gt 0 ]; then
+
+ #install Nvidia drivers
+
+ apt install -y ubuntu-drivers-common
+ ubuntu-drivers devices
+ ubuntu-drivers autoinstall
+
+ # Install NVIDIA Container Runtime
+
+ curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add -
+ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+ curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list
+ apt update
+ apt install -y nvidia-container-runtime
+ fi
+
+ # Restart Docker
+
+ systemctl daemon-reload
+ systemctl restart docker
+
+ # Install IoT Edge
+
+ apt install -y aziot-edge
+
+ if [ ! -z $dcs ]; then
+ iotedge config mp --connection-string $dcs
+ iotedge config apply
+ fi
+ if [ $(lspci | grep NVIDIA | wc -l) -gt 0 ]; then
+ reboot
+ fi ) &
+
+apt:
+ preserve_sources_list: true
+ sources:
+ msft.list:
+ source: "deb https://packages.microsoft.com/ubuntu/20.04/prod focal main"
+ key: |
+      -----BEGIN PGP PUBLIC KEY BLOCK-----
+ Version: GnuPG v1.4.7 (GNU/Linux)
+
+ mQENBFYxWIwBCADAKoZhZlJxGNGWzqV+1OG1xiQeoowKhssGAKvd+buXCGISZJwT
+ LXZqIcIiLP7pqdcZWtE9bSc7yBY2MalDp9Liu0KekywQ6VVX1T72NPf5Ev6x6DLV
+ 7aVWsCzUAF+eb7DC9fPuFLEdxmOEYoPjzrQ7cCnSV4JQxAqhU4T6OjbvRazGl3ag
+ OeizPXmRljMtUUttHQZnRhtlzkmwIrUivbfFPD+fEoHJ1+uIdfOzZX8/oKHKLe2j
+ H632kvsNzJFlROVvGLYAk2WRcLu+RjjggixhwiB+Mu/A8Tf4V6b+YppS44q8EvVr
+ M+QvY7LNSOffSO6Slsy9oisGTdfE39nC7pVRABEBAAG0N01pY3Jvc29mdCAoUmVs
+ ZWFzZSBzaWduaW5nKSA8Z3Bnc2VjdXJpdHlAbWljcm9zb2Z0LmNvbT6JATUEEwEC
+ AB8FAlYxWIwCGwMGCwkIBwMCBBUCCAMDFgIBAh4BAheAAAoJEOs+lK2+EinPGpsH
+ /32vKy29Hg51H9dfFJMx0/a/F+5vKeCeVqimvyTM04C+XENNuSbYZ3eRPHGHFLqe
+ MNGxsfb7C7ZxEeW7J/vSzRgHxm7ZvESisUYRFq2sgkJ+HFERNrqfci45bdhmrUsy
+ 7SWw9ybxdFOkuQoyKD3tBmiGfONQMlBaOMWdAsic965rvJsd5zYaZZFI1UwTkFXV
+ KJt3bp3Ngn1vEYXwijGTa+FXz6GLHueJwF0I7ug34DgUkAFvAs8Hacr2DRYxL5RJ
+ XdNgj4Jd2/g6T9InmWT0hASljur+dJnzNiNCkbn9KbX7J/qK1IbR8y560yRmFsU+
+ NdCFTW7wY0Fb1fWJ+/KTsC4=
+ =J6gs
+      -----END PGP PUBLIC KEY BLOCK-----
+packages:
+ - moby-cli
+ - moby-engine
+write_files:
+ - path: /etc/systemd/system/docker.service.d/override.conf
+ permissions: "0644"
+ content: |
+ [Service]
+ ExecStart=
+ ExecStart=/usr/bin/dockerd --host=fd:// --add-runtime=nvidia=/usr/bin/nvidia-container-runtime --log-driver local
+
+```
+
+### Use DPS
+
+Use steps in this section to connect your device to DPS and IoT Central. You'll prepare a *script.sh* file to deploy the IoT Edge runtime as you create the VM.
+
+1. Use the existing IoT Hub and DPS, or create a new IoT Hub.
+
+ - Use these steps to [create an IoT Hub](../iot-hub/iot-hub-create-through-portal.md).
+ - Use these steps to [create the DPS, and then link the IoT Hub to the DPS scope](../iot-dps/quick-setup-auto-provision.md).
+
+1. Go to the DPS resource and create an individual enrollment. 
+
+ 1. Go to **Device Provisioning Service** > **Manage enrollments** > **Add individual enrollment**.
+    1. Make sure **Symmetric Key** is selected as the attestation type and that the **IoT Edge device** option is set to **True**. The default selection is **False**.
+ 1. Retrieve the following information from the DPS resource page:
+ - **Registration ID**. We recommend that you use the same ID as the **Device ID** for your IoT Hub.
+ - **ID Scope** which is available in the [Overview menu](../iot-dps/quick-create-simulated-device-symm-key.md#prepare-and-run-the-device-provisioning-code).
+ - **Primary SAS Key** from the Individual Enrollment menu.
+1. Copy and paste values from IoT Hub (IDScope) and DPS (RegistrationID, Symmetric Key) into the script arguments.
+
+**Cloud-init script for IoT Hub DPS**
+
+```azurecli
+
+#cloud-config
+
+runcmd:
+ - dps_idscope="<DPS IDScope>"
+ - registration_device_id="<RegistrationID>"
+ - key="<Symmetric Key>"
+ - |
+ set -x
+ (
+
+ wget https://github.com/Azure/iot-edge-config/releases/latest/download/azure-iot-edge-installer.sh -O azure-iot-edge-installer.sh \
+ && chmod +x azure-iot-edge-installer.sh \
+ && sudo -H ./azure-iot-edge-installer.sh -s $dps_idscope -r $registration_device_id -k $key \
+ && rm -rf azure-iot-edge-installer.sh
+
+ # Wait for docker daemon to start
+
+ while [ $(ps -ef | grep -v grep | grep docker | wc -l) -le 0 ]; do
+ sleep 3
+ done
+
+ systemctl stop aziot-edge
+
+ if [ $(lspci | grep NVIDIA | wc -l) -gt 0 ]; then
+
+ #install Nvidia drivers
+
+ apt install -y ubuntu-drivers-common
+ ubuntu-drivers devices
+ ubuntu-drivers autoinstall
+
+ # Install NVIDIA Container Runtime
+
+ curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add -
+ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+ curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list
+ apt update
+ apt install -y nvidia-container-runtime
+ fi
+
+ # Restart Docker
+
+ systemctl daemon-reload
+ systemctl restart docker
+
+ systemctl start aziot-edge
+ if [ $(lspci | grep NVIDIA | wc -l) -gt 0 ]; then
+ reboot
+ fi
+ ) &
+write_files:
+ - path: /etc/systemd/system/docker.service.d/override.conf
+ permissions: "0644"
+ content: |
+ [Service]
+ ExecStart=
+ ExecStart=/usr/bin/dockerd --host=fd:// --add-runtime=nvidia=/usr/bin/nvidia-container-runtime --log-driver local
+
+```
+
+## Deploy IoT Edge runtime
+
+Deploying the IoT Edge runtime is part of VM creation, using the *cloud-init* script mentioned above.
+
+Here are the high-level steps to deploy the VM and IoT Edge runtime:
+
+1. In the [Azure portal](https://portal.azure.com), go to Azure Marketplace.
+ 1. Connect to the Azure Cloud Shell or a client with Azure CLI installed. For detailed steps, see [Quickstart for Bash in Azure Cloud Shell](../cloud-shell/quickstart.md).
+ 1. Use steps in [Search for Azure Marketplace images](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#search-for-azure-marketplace-images) to search the Azure Marketplace for the following Ubuntu 20.04 LTS image:
+
+ ```azurecli
+ $urn = Canonical:0001-com-ubuntu-server-focal:20_04-lts:20.04.202007160
+ ```
+
+ 1. Create a new managed disk from the Marketplace image.
+
+ 1. Export a VHD from the managed disk to an Azure Storage account.
+
+ For detailed steps, follow the instructions in [Use Azure Marketplace image to create VM image for your Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md).
+
+1. Follow these steps to create an Ubuntu VM using the VM image.
+ 1. Specify the *cloud-init* script on the **Advanced** tab. To create a VM, see [Deploy GPU VM via Azure portal](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md?tabs=portal) or [Deploy VM via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
+
+ ![Screenshot of the Advanced tab of V M configuration in the Azure portal.](media/azure-stack-edge-gpu-deploy-iot-edge-linux-vm/azure-portal-create-vm-advanced-page-2.png)
+
+ 1. Specify the appropriate device connection strings in the *cloud-init* to connect to the IoT Hub or DPS device. For detailed steps, see [Provision with symmetric keys](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md#use-symmetric-key-provisioning) or [Provision with IoT Hub DPS](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md#use-dps).
+
+ ![Screenshot of the Custom data field of V M configuration in the Azure portal.](media/azure-stack-edge-gpu-deploy-iot-edge-linux-vm/azure-portal-create-vm-init-script.png)
+
+ If you didn't specify the *cloud-init* during VM creation, you'll have to manually deploy the IoT Edge runtime after the VM is created:
+
+ 1. Connect to the VM via SSH.
+ 1. Install the container engine on the VM. For detailed steps, see [Create and provision an IoT Edge device on Linux using symmetric keys](../iot-edge/how-to-provision-single-device-linux-symmetric.md#install-a-container-engine) or [Quickstart - Set up IoT Hub DPS with the Azure portal](../iot-dps/quick-setup-auto-provision.md).
++
+## Verify the IoT Edge runtime
+
+Use these steps to verify that your IoT Edge runtime is running.
+
+1. Go to IoT Hub resource in the Azure portal.
+1. Select the IoT Edge device.
+1. Verify that the IoT Edge runtime is running.
+
+ ![Screenshot of the I o T Edge runtime status in the Azure portal.](media/azure-stack-edge-gpu-deploy-iot-edge-linux-vm/azure-portal-iot-edge-runtime-status.png)
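You can also check the runtime from the VM itself. As a quick sketch, run the following IoT Edge commands over SSH on the Ubuntu VM to report the service status and the deployed modules.

```bash
# Run these on the Ubuntu VM over SSH.
sudo iotedge system status   # shows whether the aziot-edge services are running
sudo iotedge list            # lists the modules currently deployed to the device
```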
+
+## Update the IoT Edge runtime
+
+To update the VM, follow the instructions in [Update IoT Edge](../iot-edge/how-to-update-iot-edge.md?view=iotedge-2020-11&tabs=linux&preserve-view=true). To find the latest version of Azure IoT Edge, see [Azure IoT Edge releases](../iot-edge/how-to-update-iot-edge.md?view=iotedge-2020-11&tabs=linux&preserve-view=true).
+
+## Next steps
+
+To deploy and run an IoT Edge module on your Ubuntu VM, see the steps in [Deploy IoT Edge modules](../iot-edge/how-to-deploy-modules-portal.md?view=iotedge-2020-11&preserve-view=true).
+
+To deploy Nvidia's DeepStream module, see [Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU](azure-stack-edge-deploy-nvidia-deepstream-module.md).
governance Guest Configuration Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/guest-configuration-create-definition.md
Parameters of the `New-GuestConfigurationPolicy` cmdlet:
- **DisplayName**: Policy display name. - **Description**: Policy description. - **Parameter**: Policy parameters provided in hashtable format.-- **Version**: Policy version.
+- **PolicyVersion**: Policy version.
- **Path**: Destination path where policy definitions are created. - **Platform**: Target platform (Windows/Linux) for guest configuration policy and content package.
New-GuestConfigurationPolicy `
-Description 'Details about my policy.' ` -Path './policies' ` -Platform 'Windows' `
- -Version 1.0.0 `
+ -PolicyVersion 1.0.0 `
-Verbose ```
New-GuestConfigurationPolicy `
-Description 'Details about my policy.' ` -Path './policies' ` -Platform 'Windows' `
- -Version 1.0.0 `
+ -PolicyVersion 1.0.0 `
-Mode 'ApplyAndAutoCorrect' ` -Verbose ```
New-GuestConfigurationPolicy `
-Description 'Audit if a Windows Service isn't enabled on Windows machine.' ` -Path '.\policies' ` -Parameter $PolicyParameterInfo `
- -Version 1.0.0
+ -PolicyVersion 1.0.0
``` ### Publish the Azure Policy definition
hdinsight Hdinsight Hadoop Port Settings For Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-port-settings-for-services.md
Linux-based HDInsight clusters only expose three ports publicly on the internet:
HDInsight is implemented by several Azure Virtual Machines (cluster nodes) running on an Azure Virtual Network. From within the virtual network, you can access ports not exposed over the internet. If you connect via SSH to the head node, you can directly access services running on the cluster nodes.
-> [!IMPORTANT]
+> [!IMPORTANT]
> If you do not specify an Azure Virtual Network as a configuration option for HDInsight, one is created automatically. However, you can't join other machines (such as other Azure Virtual Machines or your client development machine) to this virtual network. To join additional machines to the virtual network, you must create the virtual network first, and then specify it when creating your HDInsight cluster. For more information, see [Plan a virtual network for HDInsight](hdinsight-plan-virtual-network-deployment.md).
All services publicly exposed on the internet must be authenticated:
## Non-public ports
-> [!NOTE]
+> [!NOTE]
> Some services are only available on specific cluster types. For example, HBase is only available on HBase cluster types.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Some services only run on one headnode at a time. If you attempt to connect to the service on the primary headnode and receive an error, retry using the secondary headnode. ### Ambari
Examples:
| | | | | | | HMaster |Head nodes |16000 |&nbsp; |&nbsp; | | HMaster info Web UI |Head nodes |16010 |HTTP |The port for the HBase Master web UI |
-| Region server |All worker nodes |16020 |&nbsp; |&nbsp; |
-| &nbsp; |&nbsp; |2181 |&nbsp; |The port that clients use to connect to ZooKeeper |
+| Region server | All worker nodes | 16020 | &nbsp; | &nbsp; |
+| Region server info Web UI | All worker nodes | 16030 | HTTP | The port for the HBase Region server web UI |
+| &nbsp; | &nbsp; | 2181 | &nbsp; | The port that clients use to connect to ZooKeeper |
### Kafka ports
hdinsight Apache Hive Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-migrate-workloads.md
Title: Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0
description: Learn how to migrate Apache Hive workloads on HDInsight 3.6 to HDInsight 4.0. - Last updated 11/4/2020
Last updated 11/4/2020
# Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0
-HDInsight 4.0 has several advantages over HDInsight 3.6. Here is an [overview of what's new in HDInsight 4.0](../hdinsight-version-release.md).
+HDInsight 4.0 has several advantages over HDInsight 3.6. Here's an [overview of what's new in HDInsight 4.0](../hdinsight-version-release.md).
This article covers steps to migrate Hive workloads from HDInsight 3.6 to 4.0, including
Migration of Hive tables to a new Storage Account needs to be done as a separate
### 1. Prepare the data
-* HDInsight 3.6 by default does not support ACID tables. If ACID tables are present, however, run 'MAJOR' compaction on them. See the [Hive Language Manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/Partition/Compact) for details on compaction.
+* HDInsight 3.6 by default doesn't support ACID tables. If ACID tables are present, however, run 'MAJOR' compaction on them. See the [Hive Language Manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/Partition/Compact) for details on compaction.
* If using [Azure Data Lake Storage Gen1](../overview-data-lake-storage-gen1.md), Hive table locations are likely dependent on the cluster's HDFS configurations. Run the following script action to make these locations portable to other clusters. See [Script action to a running cluster](../hdinsight-hadoop-customize-cluster-linux.md#script-action-to-a-running-cluster).
- |Property | Value |
- |||
- |Bash script URI|`https://hdiconfigactions.blob.core.windows.net/linuxhivemigrationv01/hive-adl-expand-location-v01.sh`|
- |Node type(s)|Head|
- |Parameters||
+ |Property | Value |
+ |||
+ |Bash script URI|`https://hdiconfigactions.blob.core.windows.net/linuxhivemigrationv01/hive-adl-expand-location-v01.sh`|
+ |Node type(s)|Head|
+ |Parameters||
### 2. Copy the SQL database
Migration of Hive tables to a new Storage Account needs to be done as a separate
This step uses the [`Hive Schema Tool`](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool) from HDInsight 4.0 to upgrade the metastore schema.
-> [!Warning]
+> [!WARNING]
> This step is not reversible. Run this only on a copy of the metastore. 1. Create a temporary HDInsight 4.0 cluster to access the 4.0 Hive `schematool`. You can use the [default Hive metastore](../hdinsight-use-external-metadata-stores.md#default-metastore) for this step.
-1. From the HDInsight 4.0 cluster, execute `schematool` to upgrade the target HDInsight 3.6 metastore:
+1. From the HDInsight 4.0 cluster, execute `schematool` to upgrade the target HDInsight 3.6 metastore. Edit the following shell script to add your SQL server name, database name, username, and password. Open an [SSH Session](../hdinsight-hadoop-linux-use-ssh-unix.md) on the headnode and run it.
- ```sh
- SERVER='servername.database.windows.net' # replace with your SQL Server
- DATABASE='database' # replace with your 3.6 metastore SQL Database
- USERNAME='username' # replace with your 3.6 metastore username
- PASSWORD='password' # replace with your 3.6 metastore password
- STACK_VERSION=$(hdp-select status hive-server2 | awk '{ print $3; }')
- /usr/hdp/$STACK_VERSION/hive/bin/schematool -upgradeSchema -url "jdbc:sqlserver://$SERVER;databaseName=$DATABASE;trustServerCertificate=false;encrypt=true;hostNameInCertificate=*.database.windows.net;" -userName "$USERNAME" -passWord "$PASSWORD" -dbType "mssql" --verbose
- ```
+ ```sh
+ SERVER='servername.database.windows.net' # replace with your SQL Server
+ DATABASE='database' # replace with your 3.6 metastore SQL Database
+ USERNAME='username' # replace with your 3.6 metastore username
+ PASSWORD='password' # replace with your 3.6 metastore password
+ STACK_VERSION=$(hdp-select status hive-server2 | awk '{ print $3; }')
+ /usr/hdp/$STACK_VERSION/hive/bin/schematool -upgradeSchema -url "jdbc:sqlserver://$SERVER;databaseName=$DATABASE;trustServerCertificate=false;encrypt=true;hostNameInCertificate=*.database.windows.net;" -userName "$USERNAME" -passWord "$PASSWORD" -dbType "mssql" --verbose
+ ```
- > [!NOTE]
- > This utility uses client `beeline` to execute SQL scripts in `/usr/hdp/$STACK_VERSION/hive/scripts/metastore/upgrade/mssql/upgrade-*.mssql.sql`.
- >
- > SQL Syntax in these scripts is not necessarily compatible to other client tools. For example, [SSMS](/sql/ssms/download-sql-server-management-studio-ssms) and [Query Editor on Azure Portal](/azure/azure-sql/database/connect-query-portal) require keyword `GO` after each command.
- >
- > If any script fails due to resource capacity or transaction timeouts, scale up the SQL Database.
+ > [!NOTE]
+ > This utility uses client `beeline` to execute SQL scripts in `/usr/hdp/$STACK_VERSION/hive/scripts/metastore/upgrade/mssql/upgrade-*.mssql.sql`.
+ >
+ > SQL Syntax in these scripts is not necessarily compatible to other client tools. For example, [SSMS](/sql/ssms/download-sql-server-management-studio-ssms) and [Query Editor on Azure Portal](/azure/azure-sql/database/connect-query-portal) require keyword `GO` after each command.
+ >
+ > If any script fails due to resource capacity or transaction timeouts, scale up the SQL Database.
1. Verify the final version with query `select schema_version from dbo.version`.
- The output should match that of the following bash command from the HDInsight 4.0 cluster.
+ The output should match that of the following bash command from the HDInsight 4.0 cluster.
- ```bash
- grep . /usr/hdp/$(hdp-select --version)/hive/scripts/metastore/upgrade/mssql/upgrade.order.mssql | tail -n1 | rev | cut -d'-' -f1 | rev
- ```
+ ```bash
+ grep . /usr/hdp/$(hdp-select --version)/hive/scripts/metastore/upgrade/mssql/upgrade.order.mssql | tail -n1 | rev | cut -d'-' -f1 | rev
+ ```
1. Delete the temporary HDInsight 4.0 cluster.
Create a new HDInsight 4.0 cluster, [selecting the upgraded Hive metastore](../h
* If Hive jobs fail due to storage inaccessibility, verify that the table location is in a Storage Account added to the cluster.
- Use the following Hive command to identify table location:
+ Use the following Hive command to identify table location:
- ```sql
- SHOW CREATE TABLE ([db_name.]table_name|view_name);
- ```
+ ```sql
+ SHOW CREATE TABLE ([db_name.]table_name|view_name);
+ ```
### 5. Convert Tables for ACID Compliance
HDInsight optionally integrates with Azure Active Directory using HDInsight Ente
* `HiveCLI` is replaced with `Beeline`.
-Refer to [HDInsight 4.0 Announcement](../hdinsight-version-release.md) for additional changes.
+Refer to [HDInsight 4.0 Announcement](../hdinsight-version-release.md) for other changes.
## Troubleshooting guide
Refer to [HDInsight 4.0 Announcement](../hdinsight-version-release.md) for addit
* [HDInsight 4.0 Announcement](../hdinsight-version-release.md) * [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
-* [Hive 3 ACID Tables](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/using-hiveql/content/hive_3_internals.html)
+* [Hive 3 ACID Tables](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/using-hiveql/content/hive_3_internals.html)
hdinsight Interactive Query Troubleshoot Error Message Hive View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-error-message-hive-view.md
Title: Error message not shown in Apache Hive View - Azure HDInsight
description: Query fails in Apache Hive View without any details on Azure HDInsight cluster. Previously updated : 07/30/2019 Last updated : 06/24/2022 # Scenario: Query error message not displayed in Apache Hive View in Azure HDInsight
Check the Notifications tab on the Top-right corner of the Hive_view to see the
## Next steps
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
> Key Vault resource provider supports two resource types: **vaults** and **managed HSMs**. Access control described in this article only applies to **vaults**. To learn more about access control for managed HSM, see [Managed HSM access control](../managed-hsm/access-control.md). > [!NOTE]
-> Azure App Service certificate configuration does not support Key Vault RBAC permission model.
+> Azure App Service certificate configuration through the Azure portal does not support the Key Vault RBAC permission model. It is supported when you use tools such as Azure PowerShell, the Azure CLI, or ARM template deployments with **Key Vault Secrets User** and **Key Vault Reader** role assignments.
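As a minimal sketch of such a role assignment with the Azure CLI (the vault name and the principal's object ID are placeholders), you might grant the calling identity the **Key Vault Secrets User** role scoped to the vault.

```azurecli
# Sketch only: placeholder vault name and principal object ID.
VAULT_ID=$(az keyvault show --name my-key-vault --query id --output tsv)

az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee-object-id <principal-object-id> \
  --assignee-principal-type ServicePrincipal \
  --scope $VAULT_ID
```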
Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
logic-apps Custom Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/custom-connector-overview.md
ms.suite: integration Previously updated : 05/17/2022 Last updated : 06/10/2022 # As a developer, I want learn about the capability to create custom connectors with operations that I can use in my Azure Logic Apps workflows.
In [multi-tenant Azure Logic Apps](logic-apps-overview.md), you can create [cust
In [single-tenant Azure Logic Apps](logic-apps-overview.md), the redesigned Azure Logic Apps runtime powers Standard logic app workflows. This runtime differs from the multi-tenant Azure Logic Apps runtime that powers Consumption logic app workflows. The single-tenant runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md), which provides a key capability for you to create your own [built-in connectors](../connectors/built-in.md) for anyone to use in Standard workflows. In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
-When single-tenant Azure Logic Apps officially released, new built-in connectors included Azure Blob Storage, Azure Event Hubs, Azure Service Bus, and SQL Server. Over time, this list of built-in connectors continues to grow. However, if you need connectors that aren't available in Standard logic app workflows, you can [create your own built-in connectors](create-custom-built-in-connector-standard.md) using the same extensibility model that's used by built-in connectors in Standard workflows.
+When single-tenant Azure Logic Apps officially released, new built-in connectors included Azure Blob Storage, Azure Event Hubs, Azure Service Bus, and SQL Server. Over time, this list of built-in connectors continues to grow. However, if you need connectors that aren't available in Standard logic app workflows, you can [create your own built-in connectors](create-custom-built-in-connector-standard.md) using the same extensibility model that's used by *service provider-based* built-in connectors in Standard workflows.
<a name="service-provider-interface-implementation"></a>
-### Built-in connectors as service providers
+### Service provider-based built-in connectors
-In single-tenant Azure Logic Apps, a built-in connector that has the following attributes is called a *service provider*:
+In single-tenant Azure Logic Apps, a [built-in connector with specific attributes is informally known as a *service provider*](../connectors/built-in.md#service-provider-interface-implementation). For example, these connectors are based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md), which provides the capability for you to create your own custom built-in connectors to use in Standard logic app workflows.
-* Is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md).
-
-* Provides access from a Standard logic app workflow to a service, such as Azure Blob Storage, Azure Service Bus, Azure Event Hubs, SFTP, and SQL Server.
-
- Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity.
-
-* Runs in the same process as the redesigned Azure Logic Apps runtime.
-
-A built-in connector that's *not a service provider* has the following attributes:
+In contrast, non-service provider built-in connectors have the following attributes:
* Isn't based on the Azure Functions extensibility model.
This method has a default implementation, so you don't need to explicitly implem
When you're ready to start the implementation steps, continue to the following article:
-* [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md)
+* [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md)
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
The single-tenant model and **Logic App (Standard)** resource type include many
* Create logic apps and their workflows from [hundreds of managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
- * More managed connectors are now available as built-in connectors in Standard logic app workflows. The built-in versions run natively on the single-tenant Azure Logic Apps runtime. Some built-in connectors are also [*service provider-based* connectors](custom-connector-overview.md#service-provider-interface-implementation). For a list, review the [Built-in connectors for Standard logic apps](#built-connectors-standard) section later in this article.
+ * More managed connectors are now available as built-in connectors in Standard logic app workflows. The built-in versions run natively on the single-tenant Azure Logic Apps runtime. Some built-in connectors are also informally known as [*service provider* connectors](../connectors/built-in.md#service-provider-interface-implementation). For a list, review [Built-in connectors in Consumption and Standard](../connectors/built-in.md#built-in-connectors).
* You can create your own custom built-in connectors for any service that you need by using the single-tenant Azure Logic Apps extensibility framework. Similar to built-in connectors such as Azure Service Bus and SQL Server, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime. However, custom built-in connectors aren't similar to [custom managed connectors](../connectors/apis-list.md#custom-connectors-and-apis), which aren't currently supported. For more information, review [Custom connector overview](custom-connector-overview.md#custom-connector-standard) and [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md).
A Standard logic app workflow has many of the same built-in connectors as a Cons
For example, a Standard logic app workflow has both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, SQL Server, and others. Although a Consumption logic app workflow doesn't have these same built-in connector versions, other built-in connectors such as Azure API Management, Azure App Services, and Batch, are available.
-In single-tenant Azure Logic Apps, [built-in connectors with specific attributes are informally known as *service providers*](custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity. All built-in connectors run in the same process as the redesigned Azure Logic Apps runtime. For more information, review the [built-in connector list for Standard logic app workflows](../connectors/built-in.md).
+In single-tenant Azure Logic Apps, [built-in connectors with specific attributes are informally known as *service providers*](../connectors/built-in.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity. All built-in connectors run in the same process as the redesigned Azure Logic Apps runtime. For more information, review the [built-in connector list for Standard logic app workflows](../connectors/built-in.md#built-in-connectors).
<a name="limited-unavailable-unsupported"></a>
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
When you select a node size for a managed compute resource in Azure Machine Lear
There are a few exceptions and limitations to choosing a VM size: * Some VM series aren't supported in Azure Machine Learning.
-* Some VM series are restricted. To use a restricted series, contact support and request a quota increase for the series. Please note that for GPUs and specialty SKUs, you would always have to request for quota due to high demand and limited supply. For information on how to contact support, see [Azure support options](https://azure.microsoft.com/support/options/).
-
-See the following table to learn more about supported series and restrictions.
+* There are some VM series, such as GPUs and other special SKUs, which may not initially appear in your list of available VMs. But you can still use them, once you request a quota change. For more information about requesting quotas, see [Request quota increases](how-to-manage-quotas.md#request-quota-increases).
+See the following table to learn more about supported series.
| **Supported VM series** | **Category** | **Supported by** | |||||
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
When you create a compute instance or compute cluster, the following resources a
* A Network Security Group with required outbound rules. These rules allow __inbound__ access from the Azure Machine Learning (TCP on port 44224) and Azure Batch service (TCP on ports 29876-29877). > [!IMPORTANT]
- > If you usee a firewall to block internet access into the VNet, you must configure the firewall to allow this traffic. For example, with Azure Firewall you can create user-defined routes. For more information, see [How to use Azure Machine Learning with a firewall](how-to-access-azureml-behind-firewall.md#inbound-configuration).
+ > If you use a firewall to block internet access into the VNet, you must configure the firewall to allow this traffic. For example, with Azure Firewall you can create user-defined routes. For more information, see [How to use Azure Machine Learning with a firewall](how-to-access-azureml-behind-firewall.md#inbound-configuration).
* A load balancer with a public IP.
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-fpga-web-service.md
Title: Deploy ML models to FPGAs
+ Title: Deploy ML models to FPGAs
description: Learn about field-programmable gate arrays. You can deploy a web service on an FPGA with Azure Machine Learning for ultra-low latency inference.
Next, create a Docker image from the converted model and all dependencies. This
#### Deploy to a local edge server
-All [Azure Data Box Edge devices](../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
+All [Azure Data Box Edge devices](../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
### Consume the deployed model
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
In addition, the maximum **run time** is 30 days and the maximum number of **met
[Request a quota increase](#request-quota-increases) to raise the limits for various VM family core quotas, total subscription core quotas, cluster quota and resources in this section. Available resources:
-+ **Dedicated cores per region** have a default limit of 24 to 300, depending on your subscription offer type. You can increase the number of dedicated cores per subscription for each VM family. Specialized VM families like NCv2, NCv3, or ND series start with a default of zero cores.
++ **Dedicated cores per region** have a default limit of 24 to 300, depending on your subscription offer type. You can increase the number of dedicated cores per subscription for each VM family. Specialized VM families like NCv2, NCv3, or ND series start with a default of zero cores. GPUs also default to zero cores. + **Low-priority cores per region** have a default limit of 100 to 3,000, depending on your subscription offer type. The number of low-priority cores per subscription can be increased and is a single value across VM families.
The following table shows additional limits in the platform. Please reach out to
| **Resource or Action** | **Maximum limit** | | | | | Workspaces per resource group | 800 |
-| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** setup as a non communication-enabled pool (i.e. cannot run MPI jobs) | 100 nodes but configurable up to 65000 nodes |
-| Nodes in a single Parallel Run Step **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but configurable up to 65000 nodes if your cluster is setup to scale per above |
-| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** setup as a communication-enabled pool | 300 nodes but configurable up to 4000 nodes |
-| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** setup as a communication-enabled pool on an RDMA enabled VM Family | 100 nodes |
+| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a non communication-enabled pool (i.e. cannot run MPI jobs) | 100 nodes but configurable up to 65000 nodes |
+| Nodes in a single Parallel Run Step **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but configurable up to 65000 nodes if your cluster is set up to scale per above |
+| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a communication-enabled pool | 300 nodes but configurable up to 4000 nodes |
+| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a communication-enabled pool on an RDMA enabled VM Family | 100 nodes |
| Nodes in a single MPI **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but can be increased to 300 nodes |
-| GPU MPI processes per node | 1-4 |
-| GPU workers per node | 1-4 |
| Job lifetime | 21 days<sup>1</sup> |
| Job lifetime on a low-priority node | 7 days<sup>2</sup> |
| Parameter servers per node | 1 |
You can't set a negative value or a value higher than the subscription-level quo
> [!NOTE] > You need subscription-level permissions to set a quota at the workspace level.
-## View your usage and quotas
+## View quotas in the studio
-To view your quota for various Azure resources like virtual machines, storage, or network, use the Azure portal:
+1. When you create a new compute resource, by default you'll see only VM sizes that you already have quota to use. Switch the view to **Select from all options**.
+
+ :::image type="content" source="media/how-to-manage-quotas/select-all-options.png" alt-text="Screenshot shows select all options to see compute resources that need more quota":::
+
+1. Scroll down until you see the list of VM sizes you do not have quota for.
+
+ :::image type="content" source="media/how-to-manage-quotas/scroll-to-zero-quota.png" alt-text="Screenshot shows list of zero quota":::
+
+1. Use the link to go directly to the online customer support request for more quota.
+
+## View your usage and quotas in the Azure portal
+
+To view your quota for various Azure resources like virtual machines, storage, or network, use the [Azure portal](https://portal.azure.com):
1. On the left pane, select **All services** and then select **Subscriptions** under the **General** category.
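
If you prefer the Azure CLI, a rough equivalent is to query usage directly; the region here is just a placeholder example:

```azurecli
# Per-VM-family core quota and current usage in a region
az vm list-usage --location eastus --output table

# Network resource quotas and current usage in the same region
az network list-usages --location eastus --output table
```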
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
Previously updated : 03/17/2022 Last updated : 06/24/2022 # Use customer-managed keys with Azure Machine Learning
In the [customer-managed keys concepts article](concept-customer-managed-keys.md
* The customer-managed key for resources the workspace depends on can't be updated after workspace creation.
* Resources managed by Microsoft in your subscription can't transfer ownership to you.
* You can't delete Microsoft-managed resources used for customer-managed keys without also deleting your workspace.
+* The key vault that contains your customer-managed key must be in the same Azure subscription as the Azure Machine Learning workspace.
> [!IMPORTANT] > When using a customer-managed key, the costs for your subscription will be higher because of the additional resources in your subscription. To estimate the cost, use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
In the [customer-managed keys concepts article](concept-customer-managed-keys.md
To create the key vault, see [Create a key vault](../key-vault/general/quick-create-portal.md). When creating Azure Key Vault, you must enable __soft delete__ and __purge protection__.
+> [!IMPORTANT]
+> The key vault must be in the same Azure subscription that will contain your Azure Machine Learning workspace.
+ ### Create a key > [!TIP]
To create the key vault, see [Create a key vault](../key-vault/general/quick-cre
Create an Azure Machine Learning workspace. When creating the workspace, you must select the __Azure Key Vault__ and the __key__. Depending on how you create the workspace, you specify these resources in different ways:
+> [!WARNING]
+> The key vault that contains your customer-managed key must be in the same Azure subscription as the workspace.
+ * __Azure portal__: Select the key vault and key from a dropdown input box when configuring the workspace. * __SDK, REST API, and Azure Resource Manager templates__: Provide the Azure Resource Manager ID of the key vault and the URL for the key. To get these values, use the [Azure CLI](/cli/azure/install-azure-cli) and the following commands:
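
As a rough sketch (the key vault and key names below are placeholders), the two values can be retrieved with the Azure CLI like this:

```azurecli
# Azure Resource Manager ID of the key vault
az keyvault show --name <keyvault-name> --query id --output tsv

# URL (key identifier) of the key
az keyvault key show --vault-name <keyvault-name> --name <key-name> --query key.kid --output tsv
```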
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-managed.md
Indicate who should have management access to this managed application in each s
Complete the following steps for Global Azure and Azure Government Cloud, as applicable. 1. In the **Azure Active Directory Tenant ID** box, enter the Azure AD Tenant ID (also known as directory ID) containing the identities of the users, groups, or applications you want to grant permissions to.
-1. In the **Principal ID** box, provide the Azure AD object ID of the user, group, or application that you want to be granted permission to the managed resource group. Identify the user by their Principal ID, which can be found at the [Azure Active Directory users blade](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsersManagementMenuBlade/AllUsers) on the Azure portal.
+1. In the **Principal ID** box, provide the Azure AD object ID of the user, group, or application that you want to be granted permission to the managed resource group. Identify the user by their Principal ID, which can be found at the [Azure Active Directory users blade](https://portal.azure.com/#view/Microsoft_AAD_UsersAndTenants/UserManagementMenuBlade/~/AllUsers) on the Azure portal.
1. From the **Role definition** list, select an Azure AD built-in role. The role you select describes the permissions the principal will have on the resources in the customer subscription. 1. To add another authorization, select the **Add authorization (max 100)** link, and repeat steps 1 through 3.
marketplace Private Offers Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/private-offers-api.md
Sample absolute pricing resource:
"paymentOption": { "type": "month", "value": 1
- }
+ },
"billingTerm": { "type": "year", "value": 1
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md
You can specify the replication properties as follows.
Target VM size | Mandatory | Specify the Azure VM size to be used for the replicating VM by using (`TargetVMSize`) parameter. For instance, to migrate a VM to D2_v2 VM in Azure, specify the value for (`TargetVMSize`) as "Standard_D2_v2". License | Mandatory | To use Azure Hybrid Benefit for your Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, specify the value for (`LicenseType`) parameter as **WindowsServer**. Otherwise, specify the value as **NoLicenseType**. OS Disk | Mandatory | Specify the unique identifier of the disk that has the operating system bootloader and installer. The disk ID to be used is the unique identifier (UUID) property for the disk retrieved using the [Get-AzMigrateDiscoveredServer](/powershell/module/az.migrate/get-azmigratediscoveredserver) cmdlet.
- Disk Type | Mandatory | Specify the name of the load balancer to be created.
+ Disk Type | Mandatory | Specify the type of disk to be used.
 Infrastructure redundancy | Optional | Specify infrastructure redundancy option as follows. <br/><br/> - **Availability Zone** to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. This option is only available if the target region selected for the migration supports Availability Zones. To use availability zones, specify the availability zone value for (`TargetAvailabilityZone`) parameter. <br/> - **Availability Set** to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets to use this option. To use availability set, specify the availability set ID for (`TargetAvailabilitySet`) parameter. Boot Diagnostic Storage Account | Optional | To use a boot diagnostic storage account, specify the ID for (`TargetBootDiagnosticStorageAccount`) parameter. <br/> - The storage account used for boot diagnostics should be in the same subscription that you're migrating your VMs to. <br/> - By default, no value is set for this parameter. Tags | Optional | Add tags to your migrated virtual machines, disks, and NICs. <br/> Use (`Tag`) to add tags to virtual machines, disks, and NICs. <br/> or <br/> Use (`VMTag`) for adding tags to your migrated virtual machines.<br/> Use (`DiskTag`) for adding tags to disks. <br/> Use (`NicTag`) for adding tags to network interfaces. <br/> For example, add the required tags to a variable $tags and pass the variable in the required parameter. $tags = @{Organization="Contoso"}
mysql Tutorial Archive Laravel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-archive-laravel.md
+
+ Title: 'Tutorial: Build a PHP (Laravel) app with Azure Database for MySQL Flexible Server'
+description: This tutorial explains how to build a PHP app with flexible server.
+++++
+ms.devlang: php
Last updated : 9/21/2020+++
+# Tutorial: Build a PHP (Laravel) and MySQL Flexible Server app in Azure App Service
+++
+[Azure App Service](../../app-service/overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a PHP app in Azure and connect it to a MySQL database. When you're finished, you'll have a [Laravel](https://laravel.com/) app running on Azure App Service on Linux.
+
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+>
+> * Set up a PHP (Laravel) app with local MySQL
+> * Create a MySQL Flexible Server
+> * Connect a PHP app to MySQL Flexible Server
+> * Deploy the app to Azure App Service
+> * Update the data model and redeploy the app
+> * Manage the app in the Azure portal
++
+## Prerequisites
+
+To complete this tutorial:
+
+1. [Install Git](https://git-scm.com/)
+2. [Install PHP 5.6.4 or above](https://php.net/downloads.php)
+3. [Install Composer](https://getcomposer.org/doc/00-intro.md)
+4. Enable the following PHP extensions that Laravel needs: OpenSSL, PDO-MySQL, Mbstring, Tokenizer, and XML
+5. [Install and start MySQL](https://dev.mysql.com/doc/refman/5.7/en/installing.html)
+
+## Prepare local MySQL
+
+In this step, you create a database in your local MySQL server for your use in this tutorial.
+
+### Connect to local MySQL server
+
+In a terminal window, connect to your local MySQL server. You can use this terminal window to run all the commands in this tutorial.
+
+```bash
+mysql -u root -p
+```
+
+If you're prompted for a password, enter the password for the `root` account. If you don't remember your root account password, see [MySQL: How to Reset the Root Password](https://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html).
+
+If your command runs successfully, then your MySQL server is running. If not, make sure that your local MySQL server is started by following the [MySQL post-installation steps](https://dev.mysql.com/doc/refman/5.7/en/postinstallation.html).
+
+### Create a database locally
+
+At the `mysql` prompt, create a database.
+
+```sql
+CREATE DATABASE sampledb;
+```
+
+Exit your server connection by typing `quit`.
+
+```sql
+quit
+```
+
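+If you'd rather skip the interactive prompt, the same database can be created with a single shell command (it prompts for the root password):
+
+```bash
+# Create the sampledb database non-interactively
+mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS sampledb;"
+```
+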
+<a name="step2"></a>
+
+## Create a PHP app locally
+
+In this step, you get a Laravel sample application, configure its database connection, and run it locally.
+
+### Clone the sample
+
+In the terminal window, navigate to an empty directory where you can clone the sample application. Run the following command to clone the sample repository.
+
+```bash
+git clone https://github.com/Azure-Samples/laravel-tasks
+```
+
+`cd` to your cloned directory.
+Install the required packages.
+
+```bash
+cd laravel-tasks
+composer install
+```
+
+### Configure MySQL connection
+
+In the repository root, create a file named *.env*. Copy the following variables into the *.env* file. Replace the _&lt;root_password>_ placeholder with the MySQL root user's password.
+
+```txt
+APP_ENV=local
+APP_DEBUG=true
+APP_KEY=
+
+DB_CONNECTION=mysql
+DB_HOST=127.0.0.1
+DB_DATABASE=sampledb
+DB_USERNAME=root
+DB_PASSWORD=<root_password>
+```
+
+For information on how Laravel uses the *.env* file, see [Laravel Environment Configuration](https://laravel.com/docs/5.4/configuration#environment-configuration).
+
+### Run the sample locally
+
+Run [Laravel database migrations](https://laravel.com/docs/5.4/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the *database/migrations* directory in the Git repository.
+
+```bash
+php artisan migrate
+```
+
+Generate a new Laravel application key.
+
+```bash
+php artisan key:generate
+```
+
+Run the application.
+
+```bash
+php artisan serve
+```
+
+Navigate to `http://localhost:8000` in a browser. Add a few tasks in the page.
++
+To stop PHP, type `Ctrl + C` in the terminal.
+
+## Create a MySQL Flexible Server
+
+In this step, you create a MySQL database in [Azure Database for MySQL Flexible Server](../index.yml). Later, you configure the PHP application to connect to this database. In the [Azure Cloud Shell](../../cloud-shell/overview.md), create a server with the [`az mysql flexible-server create`](/cli/azure/mysql/server#az-mysql-flexible-server-create) command.
+
+```azurecli-interactive
+az mysql flexible-server create --resource-group myResourceGroup --public-access <IP-Address>
+```
+
+> [!IMPORTANT]
+>
+>* Make a note of the **server name** and **connection string** to use them in the next step to connect and run the Laravel data migration.
+> * For the **IP-Address** argument, provide the IP address of your client machine. The server is locked down when created, and you need to permit access from your client machine to manage the server locally.
+
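+As a more explicit sketch, using the same placeholders that appear elsewhere in this tutorial, the create command can also be written as:
+
+```azurecli-interactive
+az mysql flexible-server create \
+  --resource-group myResourceGroup \
+  --name <mysql-server-name> \
+  --admin-user <admin-user> \
+  --admin-password <admin-password> \
+  --public-access <IP-Address>
+```
+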
+### Configure server firewall to allow web app to connect to the server
+
+In the Cloud Shell, create a firewall rule for your MySQL server to allow client connections by using the `az mysql flexible-server firewall-rule create` command. When both the start IP and end IP are set to `0.0.0.0`, the firewall is opened only for other Azure services that don't have a static IP to connect to the server.
+
+```azurecli
+az mysql flexible-server firewall-rule create --name allanyAzureIPs --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+```
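+
+To confirm the rule was created, you can optionally list the server's firewall rules:
+
+```azurecli
+az mysql flexible-server firewall-rule list --name <mysql-server-name> --resource-group myResourceGroup --output table
+```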
+
+### Connect to production MySQL server locally
+
+In the local terminal window, connect to the MySQL server in Azure. Use the value you specified previously for ```<admin-user>``` and ```<mysql-server-name>``` . When prompted for a password, use the password you specified when you created the database in Azure.
+
+```bash
+mysql -u <admin-user> -h <mysql-server-name>.mysql.database.azure.com -P 3306 -p
+```
+
+### Create a production database
+
+At the `mysql` prompt, create a database.
+
+```sql
+CREATE DATABASE sampledb;
+```
+
+### Create a user with permissions
+
+Create a database user called *phpappuser* and give it all privileges in the `sampledb` database. For simplicity of the tutorial, use *MySQLAzure2020* as the password.
+
+```sql
+CREATE USER 'phpappuser' IDENTIFIED BY 'MySQLAzure2020';
+GRANT ALL PRIVILEGES ON sampledb.* TO 'phpappuser';
+```
+
+Exit the server connection by typing `quit`.
+
+```sql
+quit
+```
+
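+To optionally double-check the new user from your local shell, using the server name and the password above, a one-liner such as the following should work:
+
+```bash
+# List the privileges granted to phpappuser
+mysql -u phpappuser -h <mysql-server-name>.mysql.database.azure.com -p -e "SHOW GRANTS;"
+```
+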
+## Connect app to MySQL flexible server
+
+In this step, you connect the PHP application to the MySQL database you created in Azure Database for MySQL.
+
+<a name="devconfig"></a>
+
+### Configure the database connection
+
+In the repository root, create an *.env.production* file and copy the following variables into it. Replace the placeholder _&lt;mysql-server-name>_ in both *DB_HOST* and *DB_USERNAME*.
+
+```
+APP_ENV=production
+APP_DEBUG=true
+APP_KEY=
+
+DB_CONNECTION=mysql
+DB_HOST=<mysql-server-name>.mysql.database.azure.com
+DB_DATABASE=sampledb
+DB_USERNAME=phpappuser
+DB_PASSWORD=MySQLAzure2020
+MYSQL_SSL=true
+```
+
+Save the changes.
+
+> [!TIP]
+> To secure your MySQL connection information, this file is already excluded from the Git repository (See *.gitignore* in the repository root). Later, you learn how to configure environment variables in App Service to connect to your database in Azure Database for MySQL. With environment variables, you don't need the *.env* file in App Service.
+>
+
+### Configure TLS/SSL certificate
+
+By default, MySQL Flexible Server enforces TLS connections from clients. To connect to your MySQL database in Azure, you must use the [*.pem* certificate supplied by Azure Database for MySQL Flexible Server](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem). Download [this certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) and place it in the **SSL** folder in the local copy of the sample app repository.
+
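+For example, on Linux or macOS you could download the certificate into the repository with `curl`:
+
+```bash
+# Download the DigiCert Global Root CA certificate into the repository's SSL folder
+curl --create-dirs -o ssl/DigiCertGlobalRootCA.crt.pem https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem
+```
+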
+Open *config/database.php* and add the `sslmode` and `options` parameters to `connections.mysql`, as shown in the following code.
+
+```php
+'mysql' => [
+ ...
+ 'sslmode' => env('DB_SSLMODE', 'prefer'),
+ 'options' => (env('MYSQL_SSL') && extension_loaded('pdo_mysql')) ? [
+        PDO::MYSQL_ATTR_SSL_CA => '/ssl/DigiCertGlobalRootCA.crt.pem',
+ ] : []
+],
+```
+
+### Test the application locally
+
+Run Laravel database migrations with *.env.production* as the environment file to create the tables in your MySQL database in Azure Database for MySQL. Remember that _.env.production_ has the connection information to your MySQL database in Azure.
+
+```bash
+php artisan migrate --env=production --force
+```
+
+*.env.production* doesn't have a valid application key yet. Generate a new one for it in the terminal.
+
+```bash
+php artisan key:generate --env=production --force
+```
+
+Run the sample application with *.env.production* as the environment file.
+
+```bash
+php artisan serve --env=production
+```
+
+Navigate to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
+
+Add a few tasks in the page.
++
+To stop PHP, type `Ctrl + C` in the terminal.
+
+### Commit your changes
+
+Run the following Git commands to commit your changes:
+
+```bash
+git add .
+git commit -m "database.php updates"
+```
+
+Your app is ready to be deployed.
+
+## Deploy to Azure
+
+In this step, you deploy the MySQL-connected PHP application to Azure App Service.
+
+### Configure a deployment user
+
+FTP and local Git can deploy to an Azure web app by using a deployment user. Once you configure your deployment user, you can use it for all your Azure deployments. Your account-level deployment username and password are different from your Azure subscription credentials.
+
+To configure the deployment user, run the [az webapp deployment user set](/cli/azure/webapp/deployment/user#az-webapp-deployment-user-set) command in Azure Cloud Shell. Replace _&lt;username>_ and _&lt;password>_ with your deployment user username and password.
+
+The username must be unique within Azure, and for local Git pushes, must not contain the '@' symbol.
+The password must be at least eight characters long, with two of the following three elements: letters, numbers, and symbols.
+
+```azurecli
+az webapp deployment user set --user-name <username> --password <password>
+```
+
+The JSON output shows the password as `null`. If you get a `'Conflict'. Details: 409` error, change the username. If you get a `'Bad Request'. Details: 400` error, use a stronger password. **Record your username and password; you'll use them to deploy your web apps.**
+
+### Create an App Service plan
+
+In the Cloud Shell, create an App Service plan in the resource group with the [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create) command. The following example creates an App Service plan named myAppServicePlan in the Free pricing tier (--sku F1) and in a Linux container (--is-linux).
+
+```azurecli
+az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku F1 --is-linux
+```
+
+<a name="create"></a>
+
+### Create a web app
+
+Create a [web app](../../app-service/overview.md#app-service-on-linux) in the myAppServicePlan App Service plan.
+
+In the Cloud Shell, you can use the [az webapp create](/cli/azure/webapp#az-webapp-create) command. In the following example, replace _&lt;app-name>_ with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.3`. To see all supported runtimes, run [az webapp list-runtimes --os linux](/cli/azure/webapp#az-webapp-list-runtimes).
+
+```azurecli
+az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.3" --deployment-local-git
+```
+
+When the web app has been created, the Azure CLI shows output similar to the following example:
+
+```
+Local git is configured with url of 'https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git'
+{
+ "availabilityState": "Normal",
+ "clientAffinityEnabled": true,
+ "clientCertEnabled": false,
+ "cloningInfo": null,
+ "containerSize": 0,
+ "dailyMemoryTimeQuota": 0,
+ "defaultHostName": "<app-name>.azurewebsites.net",
+ "deploymentLocalGitUrl": "https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git",
+ "enabled": true,
+ < JSON data removed for brevity. >
+}
+```
+
+You've created an empty new web app, with git deployment enabled.
+
+> [!NOTE]
+> The URL of the Git remote is shown in the deploymentLocalGitUrl property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL as you need it later.
+
+### Configure database settings
+
+In App Service, you set environment variables as *app settings* by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command.
+
+The following command configures the app settings `DB_HOST`, `DB_DATABASE`, `DB_USERNAME`, and `DB_PASSWORD`. Replace the placeholders _&lt;app-name>_ and _&lt;mysql-server-name>_.
+
+```azurecli-interactive
+az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DB_HOST="<mysql-server-name>.mysql.database.azure.com" DB_DATABASE="sampledb" DB_USERNAME="phpappuser" DB_PASSWORD="MySQLAzure2020" MYSQL_SSL="true"
+```
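+
+To verify the settings, you can optionally list them back:
+
+```azurecli-interactive
+az webapp config appsettings list --name <app-name> --resource-group myResourceGroup --output table
+```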
+
+You can use the PHP [getenv](https://www.php.net/manual/en/function.getenv.php) method to access the settings. The Laravel code uses an [env](https://laravel.com/docs/5.4/helpers#method-env) wrapper over the PHP `getenv`. For example, the MySQL configuration in *config/database.php* looks like the following code:
+
+```php
+'mysql' => [
+ 'driver' => 'mysql',
+ 'host' => env('DB_HOST', 'localhost'),
+ 'database' => env('DB_DATABASE', 'forge'),
+ 'username' => env('DB_USERNAME', 'forge'),
+ 'password' => env('DB_PASSWORD', ''),
+ ...
+],
+```
+
+### Configure Laravel environment variables
+
+Laravel needs an application key in App Service. You can configure it with app settings.
+
+In the local terminal window, use `php artisan` to generate a new application key without saving it to *.env*.
+
+```bash
+php artisan key:generate --show
+```
+
+In the Cloud Shell, set the application key in the App Service app by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command. Replace the placeholders _&lt;app-name>_ and _&lt;outputofphpartisankey:generate>_.
+
+```azurecli-interactive
+az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings APP_KEY="<output_of_php_artisan_key:generate>" APP_DEBUG="true"
+```
+
+`APP_DEBUG="true"` tells Laravel to return debugging information when the deployed app encounters errors. When running a production application, set it to `false`, which is more secure.
+
+### Set the virtual application path
+
+[Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the *public* directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. However, you can use `.htaccess` to rewrite all requests to point to */public* instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
+
+For more information, see [Change site root](../../app-service/configure-language-php.md?pivots=platform-linux#change-site-root).
+
+### Push to Azure from Git
+
+Back in the local terminal window, add an Azure remote to your local Git repository. Replace _&lt;deploymentLocalGitUrl-from-create-step>_ with the URL of the Git remote that you saved from [Create a web app](#create-a-web-app).
+
+```bash
+git remote add azure <deploymentLocalGitUrl-from-create-step>
+```
+
+Push to the Azure remote to deploy your app with the following command. When Git Credential Manager prompts you for credentials, make sure you enter the credentials you created in **Configure a deployment user**, not the credentials you use to sign in to the Azure portal.
+
+```bash
+git push azure main
+```
+
+This command may take a few minutes to run. While running, it displays information similar to the following example:
+
+<pre>
+Counting objects: 3, done.
+Delta compression using up to 8 threads.
+Compressing objects: 100% (3/3), done.
+Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
+Total 3 (delta 2), reused 0 (delta 0)
+remote: Updating branch 'main'.
+remote: Updating submodules.
+remote: Preparing deployment for commit id 'a5e076db9c'.
+remote: Running custom deployment command...
+remote: Running deployment command...
+...
+&lt; Output has been truncated for readability &gt;
+</pre>
+
+### Browse to the Azure app
+
+Browse to `http://<app-name>.azurewebsites.net` and add a few tasks to the list.
++
+Congratulations, you're running a data-driven PHP app in Azure App Service.
+
+## Update model locally and redeploy
+
+In this step, you make a simple change to the `task` data model and the webapp, and then publish the update to Azure.
+
+For the tasks scenario, you modify the application so that you can mark a task as complete.
+
+### Add a column
+
+In the local terminal window, navigate to the root of the Git repository.
+
+Generate a new database migration for the `tasks` table:
+
+```bash
+php artisan make:migration add_complete_column --table=tasks
+```
+
+This command shows you the name of the migration file that's generated. Find this file in *database/migrations* and open it.
+
+Replace the `up` method with the following code:
+
+```php
+public function up()
+{
+ Schema::table('tasks', function (Blueprint $table) {
+ $table->boolean('complete')->default(False);
+ });
+}
+```
+
+The preceding code adds a boolean column in the `tasks` table called `complete`.
+
+Replace the `down` method with the following code for the rollback action:
+
+```php
+public function down()
+{
+ Schema::table('tasks', function (Blueprint $table) {
+ $table->dropColumn('complete');
+ });
+}
+```
+
+In the local terminal window, run Laravel database migrations to make the change in the local database.
+
+```bash
+php artisan migrate
+```
+
+Based on the [Laravel naming convention](https://laravel.com/docs/5.4/eloquent#defining-models), the model `Task` (see *app/Task.php*) maps to the `tasks` table by default.
+
+### Update application logic
+
+Open the *routes/web.php* file. The application defines its routes and business logic here.
+
+At the end of the file, add a route with the following code:
+
+```php
+/**
+ * Toggle Task completeness
+ */
+Route::post('/task/{id}', function ($id) {
+ error_log('INFO: post /task/'.$id);
+ $task = Task::findOrFail($id);
+
+ $task->complete = !$task->complete;
+ $task->save();
+
+ return redirect('/');
+});
+```
+
+The preceding code makes a simple update to the data model by toggling the value of `complete`.
+
+### Update the view
+
+Open the *resources/views/tasks.blade.php* file. Find the `<tr>` opening tag and replace it with:
+
+```html
+<tr class="{{ $task->complete ? 'success' : 'active' }}" >
+```
+
+The preceding code changes the row color depending on whether the task is complete.
+
+In the next line, you have the following code:
+
+```html
+<td class="table-text"><div>{{ $task->name }}</div></td>
+```
+
+Replace the entire line with the following code:
+
+```html
+<td>
+ <form action="{{ url('task/'.$task->id) }}" method="POST">
+ {{ csrf_field() }}
+
+ <button type="submit" class="btn btn-xs">
+ <i class="fa {{$task->complete ? 'fa-check-square-o' : 'fa-square-o'}}"></i>
+ </button>
+ {{ $task->name }}
+ </form>
+</td>
+```
+
+The preceding code adds the submit button that references the route that you defined earlier.
+
+### Test the changes locally
+
+In the local terminal window, run the development server from the root directory of the Git repository.
+
+```bash
+php artisan serve
+```
+
+To see the task status change, navigate to `http://localhost:8000` and select the checkbox.
++
+To stop PHP, type `Ctrl + C` in the terminal.
+
+### Publish changes to Azure
+
+In the local terminal window, run Laravel database migrations with the production connection string to make the change in the Azure database.
+
+```bash
+php artisan migrate --env=production --force
+```
+
+Commit all the changes in Git, and then push the code changes to Azure.
+
+```bash
+git add .
+git commit -m "added complete checkbox"
+git push azure main
+```
+
+Once the `git push` is complete, navigate to the Azure app and test the new functionality.
++
+If you added any tasks, they are retained in the database. Updates to the data schema leave existing data intact.
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
+
+```azurecli
+az group delete --name myResourceGroup
+```
+
+<a name="next"></a>
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to manage your resources in Azure portal](../../azure-resource-manager/management/manage-resources-portal.md) <br/>
+> [!div class="nextstepaction"]
+> [How to manage your server](how-to-manage-server-cli.md)
mysql Tutorial Php Database App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-php-database-app.md
Title: 'Tutorial: Build a PHP (Laravel) app with Azure Database for MySQL Flexible Server'
-description: This tutorial explains how to build a PHP app with flexible server.
--
+ Title: 'Tutorial: Build a PHP app with Azure Database for MySQL - Flexible Server'
+description: This tutorial explains how to build a PHP app with flexible server and deploy it on Azure App Service.
++ ms.devlang: php Previously updated : 9/21/2020 Last updated : 6/21/2022
-# Tutorial: Build a PHP (Laravel) and MySQL Flexible Server app in Azure App Service
+# Tutorial: Deploy a PHP and MySQL - Flexible Server app on Azure App Service
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
+[Azure App Service](../../app-service/overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system.
-[Azure App Service](../../app-service/overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a PHP app in Azure and connect it to a MySQL database. When you're finished, you'll have a [Laravel](https://laravel.com/) app running on Azure App Service on Linux.
+This tutorial shows how to build and deploy a sample PHP application to Azure App Service, and integrate it with Azure Database for MySQL - Flexible Server on the back end.
-In this tutorial, you learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"] >
-> * Setup a PHP (Laravel) app with local MySQL
-> * Create a MySQL Flexible Server
-> * Connect a PHP app to MySQL Flexible Server
+> * Create a MySQL flexible server
+> * Connect a PHP app to the MySQL flexible server
> * Deploy the app to Azure App Service
-> * Update the data model and redeploy the app
-> * Manage the app in the Azure portal
+> * Update and redeploy the app
[!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)] ## Prerequisites
-To complete this tutorial:
+- [Install Git](https://git-scm.com/).
+- The [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli).
+- An Azure subscription [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]
-1. [Install Git](https://git-scm.com/)
-2. [Install PHP 5.6.4 or above](https://php.net/downloads.php)
-3. [Install Composer](https://getcomposer.org/doc/00-intro.md)
-4. Enable the following PHP extensions Laravel needs: OpenSSL, PDO-MySQL, Mbstring, Tokenizer, XML
-5. [Install and start MySQL](https://dev.mysql.com/doc/refman/5.7/en/installing.html)
+## Create an Azure Database for MySQL flexible server
-## Prepare local MySQL
+First, we'll provision a MySQL flexible server with public access connectivity, configure firewall rules to allow the application to access the server, and create a production database.
-In this step, you create a database in your local MySQL server for your use in this tutorial.
+To learn how to use private access connectivity instead and isolate app and database resources in a virtual network, see [Tutorial: Connect an App Services Web app to an Azure Database for MySQL flexible server in a virtual network](tutorial-webapp-server-vnet.md).
-### Connect to local MySQL server
+### Create a resource group
-In a terminal window, connect to your local MySQL server. You can use this terminal window to run all the commands in this tutorial.
+An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group *rg-php-demo* using the [az group create](/cli/azure/group#az-group-create) command in the *centralus* location.
-```bash
-mysql -u root -p
-```
-
-If you're prompted for a password, enter the password for the `root` account. If you don't remember your root account password, see [MySQL: How to Reset the Root Password](https://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html).
-
-If your command runs successfully, then your MySQL server is running. If not, make sure that your local MySQL server is started by following the [MySQL post-installation steps](https://dev.mysql.com/doc/refman/5.7/en/postinstallation.html).
-
-### Create a database locally
-
-At the `mysql` prompt, create a database.
-
-```sql
-CREATE DATABASE sampledb;
-```
-
-Exit your server connection by typing `quit`.
-
-```sql
-quit
-```
-
-<a name="step2"></a>
-
-## Create a PHP app locally
-
-In this step, you get a Laravel sample application, configure its database connection, and run it locally.
-
-### Clone the sample
-
-In the terminal window, navigate to an empty directory where you can clone the sample application. Run the following command to clone the sample repository.
-
-```bash
-git clone https://github.com/Azure-Samples/laravel-tasks
-```
-
-`cd` to your cloned directory.
-Install the required packages.
-
-```bash
-cd laravel-tasks
-composer install
-```
-
-### Configure MySQL connection
+1. Open a command prompt.
+1. Sign in to your Azure account.
+ ```azurecli-interactive
+ az login
+ ```
+1. Choose your Azure subscription.
+ ```azurecli-interactive
+ az account set -s <your-subscription-ID>
+ ```
+1. Create the resource group.
+ ```azurecli-interactive
+ az group create --name rg-php-demo --location centralus
+ ```
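+
+If you want to confirm the resource group was created, `az group exists` simply returns true or false:
+
+```azurecli-interactive
+az group exists --name rg-php-demo
+```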
-In the repository root, create a file named *.env*. Copy the following variables into the *.env* file. Replace the _&lt;root_password>_ placeholder with the MySQL root user's password.
+### Create a MySQL flexible server
-```txt
-APP_ENV=local
-APP_DEBUG=true
-APP_KEY=
+1. To create a MySQL flexible server with public access connectivity, run the following [`az mysql flexible-server create`](/cli/azure/mysql/server#az-mysql-flexible-server-create) command. Replace the placeholder values with your own server name, admin username, and password.
-DB_CONNECTION=mysql
-DB_HOST=127.0.0.1
-DB_DATABASE=sampledb
-DB_USERNAME=root
-DB_PASSWORD=<root_password>
-```
-
-For information on how Laravel uses the *.env* file, see [Laravel Environment Configuration](https://laravel.com/docs/5.4/configuration#environment-configuration).
-
-### Run the sample locally
-
-Run [Laravel database migrations](https://laravel.com/docs/5.4/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the *database/migrations* directory in the Git repository.
-
-```bash
-php artisan migrate
-```
-
-Generate a new Laravel application key.
-
-```bash
-php artisan key:generate
-```
-
-Run the application.
+ ```azurecli-interactive
+ az mysql flexible-server create \
+ --name <your-mysql-server-name> \
+ --resource-group rg-php-demo \
+ --location centralus \
+ --admin-user <your-mysql-admin-username> \
+ --admin-password <your-mysql-admin-password>
+ ```
-```bash
-php artisan serve
-```
-
-Navigate to `http://localhost:8000` in a browser. Add a few tasks in the page.
--
-To stop PHP, type `Ctrl + C` in the terminal.
-
-## Create a MySQL Flexible Server
+    You've now created a flexible server in the CentralUS region. The server is based on the Burstable B1MS compute SKU, with 32 GB of storage, a 7-day backup retention period, and public access connectivity.
-In this step, you create a MySQL database in [Azure Database for MySQL Flexible Server](../index.yml). Later, you configure the PHP application to connect to this database. In the [Azure Cloud Shell](../../cloud-shell/overview.md), create a server in with the [`az flexible-server create`](/cli/azure/mysql/server#az-mysql-flexible-server-create) command.
+1. Next, to create a firewall rule for your MySQL flexible server to allow client connections, run the following command. When both the start IP and end IP are set to 0.0.0.0, only other Azure resources (such as App Service apps, VMs, and AKS clusters) can connect to the flexible server.
-```azurecli-interactive
-az mysql flexible-server create --resource-group myResourceGroup --public-access <IP-Address>
-```
+ ```azurecli-interactive
+ az mysql flexible-server firewall-rule create \
+ --name <your-mysql-server-name> \
+ --resource-group rg-php-demo \
+ --rule-name AllowAzureIPs \
+ --start-ip-address 0.0.0.0 \
+ --end-ip-address 0.0.0.0
+ ```
-> [!IMPORTANT]
->
->* Make a note of the **servername** and **connection string** to use it in the next step to connect and run laravel data migration.
-> * For **IP-Address** argument, provide the IP of your client machine. The server is locked when created and you need to permit access to your client machine to manage the server locally.
+1. To create a new MySQL production database *sampledb* to use with the PHP application, run the following command:
-### Configure server firewall to allow web app to connect to the server
+ ```azurecli-interactive
+ az mysql flexible-server db create \
+ --resource-group rg-php-demo \
+ --server-name <your-mysql-server-name> \
+ --database-name sampledb
+ ```
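+
+Optionally, you can confirm the database exists before moving on:
+
+```azurecli-interactive
+az mysql flexible-server db show --resource-group rg-php-demo --server-name <your-mysql-server-name> --database-name sampledb
+```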
-In the Cloud Shell, create a firewall rule for your MySQL server to allow client connections by using the az mysql server firewall-rule create command. When both starting IP and end IP are set to ```0.0.0.0```, the firewall is only opened for other Azure services that do not have a static IP to connect to the server.
-```azurecli
-az mysql flexible-server firewall-rule create --name allanyAzureIPs --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-```
+## Build your application
-### Connect to production MySQL server locally
+For the purposes of this tutorial, we'll use a sample PHP application that displays and manages a product catalog. The application provides basic functionality such as viewing the products in the catalog, adding new products, updating item prices, and removing products.
-In the local terminal window, connect to the MySQL server in Azure. Use the value you specified previously for ```<admin-user>``` and ```<mysql-server-name>``` . When prompted for a password, use the password you specified when you created the database in Azure.
+To learn more about the application code, explore the app in the [GitHub repository](https://github.com/Azure-Samples/php-mysql-app-service). To learn how to connect a PHP app to MySQL flexible server, see [Quickstart: Connect using PHP](connect-php.md).
-```bash
-mysql -u <admin-user> -h <mysql-server-name>.mysql.database.azure.com -P 3306 -p
-```
+In this tutorial, we'll clone the sample app directly and learn how to deploy it on Azure App Service.
-### Create a production database
+1. To clone the sample application repository and change to the repository root, run the following commands:
-At the `mysql` prompt, create a database.
+ ```bash
+ git clone https://github.com/Azure-Samples/php-mysql-app-service.git
+ cd php-mysql-app-service
+ ```
-```sql
-CREATE DATABASE sampledb;
-```
+1. Run the following command to ensure that the default branch is `main`.
-### Create a user with permissions
+ ```bash
+ git branch -m main
+ ```
-Create a database user called *phpappuser* and give it all privileges in the `sampledb` database. For simplicity of the tutorial, use *MySQLAzure2020* as the password.
+## Create and configure an Azure App Service Web App
-```sql
-CREATE USER 'phpappuser' IDENTIFIED BY 'MySQLAzure2020';
-GRANT ALL PRIVILEGES ON sampledb.* TO 'phpappuser';
-```
+In Azure App Service (Web Apps, API Apps, or Mobile Apps), an app always runs in an App Service plan. An App Service plan defines a set of compute resources for a web app to run. In this step, we'll create an Azure App Service plan and an App Service web app within it, which will host the sample application.
-Exit the server connection by typing `quit`.
+1. To create an App Service plan in the Free pricing tier, run the following command:
-```sql
-quit
-```
+ ```azurecli-interactive
+ az appservice plan create --name plan-php-demo \
+ --resource-group rg-php-demo \
+ --location centralus \
+ --sku FREE --is-linux
+ ```
-## Connect app to MySQL flexible server
+1. If you want to deploy an application to an Azure web app using deployment methods like FTP or Local Git, you need to configure a deployment user with username and password credentials. After you configure your deployment user, you can use it for all your Azure App Service deployments.
-In this step, you connect the PHP application to the MySQL database you created in Azure Database for MySQL.
+ ```azurecli-interactive
+ az webapp deployment user set \
+ --user-name <your-deployment-username> \
+ --password <your-deployment-password>
+ ```
-<a name="devconfig"></a>
+1. To create an App Service web app with PHP 8.0 runtime and to configure the Local Git deployment option to deploy your app from a Git repository on your local computer, run the following command. Replace `<your-app-name>` with a globally unique app name (valid characters are a-z, 0-9, and -).
-### Configure the database connection
+ ```azurecli-interactive
+ az webapp create \
+ --resource-group rg-php-demo \
+ --plan plan-php-demo \
+ --name <your-app-name> \
+ --runtime "PHP|8.0" \
+ --deployment-local-git
+ ```
-In the repository root, create an *.env.production* file and copy the following variables into it. Replace the placeholder _&lt;mysql-server-name>_ in both *DB_HOST* and *DB_USERNAME*.
+ > [!IMPORTANT]
+ > In the Azure CLI output, the URL of the Git remote is displayed in the deploymentLocalGitUrl property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL, as you'll need it later.
-```
-APP_ENV=production
-APP_DEBUG=true
-APP_KEY=
-
-DB_CONNECTION=mysql
-DB_HOST=<mysql-server-name>.mysql.database.azure.com
-DB_DATABASE=sampledb
-DB_USERNAME=phpappuser
-DB_PASSWORD=MySQLAzure2017
-MYSQL_SSL=true
-```
+1. Next, we'll configure the MySQL flexible server database connection settings on the web app.
-Save the changes.
+ The `config.php` file in the sample PHP application retrieves the database connection information (server name, database name, server username and password) from environment variables using the `getenv()` function. In App Service, to set environment variables as **Application Settings** (*appsettings*), run the following command:
-> [!TIP]
-> To secure your MySQL connection information, this file is already excluded from the Git repository (See *.gitignore* in the repository root). Later, you learn how to configure environment variables in App Service to connect to your database in Azure Database for MySQL. With environment variables, you don't need the *.env* file in App Service.
->
+ ```azurecli-interactive
+ az webapp config appsettings set \
+ --name <your-app-name> \
+ --resource-group rg-php-demo \
+ --settings DB_HOST="<your-server-name>.mysql.database.azure.com" \
+ DB_DATABASE="sampledb" \
+ DB_USERNAME="<your-mysql-admin-username>" \
+ DB_PASSWORD="<your-mysql-admin-password>" \
+ MYSQL_SSL="true"
+ ```
+
+ Alternatively, you can use Service Connector to establish a connection between the App Service app and the MySQL flexible server. For more details, see [Integrate Azure Database for MySQL with Service Connector](../../service-connector/how-to-integrate-mysql.md).
-### Configure TLS/SSL certificate
+## Deploy your application using Local Git
-By default, MySQL Flexible Server enforces TLS connections from clients. To connect to your MySQL database in Azure, you must use the [*.pem* certificate supplied by Azure Database for MySQL Flexible Server](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem). Download [this certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)) and place it in the **SSL** folder in the local copy of the sample app repository.
+Now, we'll deploy the sample PHP application to Azure App Service using the Local Git deployment option.
-Open *config/database.php* and add the `sslmode` and `options` parameters to `connections.mysql`, as shown in the following code.
+1. Since you're deploying the main branch, you need to set the default deployment branch for your App Service app to main. To set the DEPLOYMENT_BRANCH under **Application Settings**, run the following command:
-```php
-'mysql' => [
- ...
- 'sslmode' => env('DB_SSLMODE', 'prefer'),
- 'options' => (env('MYSQL_SSL') && extension_loaded('pdo_mysql')) ? [
- PDO::MYSQL_ATTR_SSL_KEY => '/ssl/DigiCertGlobalRootCA.crt.pem',
- ] : []
-],
-```
+ ```azurecli-interactive
+ az webapp config appsettings set \
+ --name <your-app-name> \
+ --resource-group rg-php-demo \
+ --settings DEPLOYMENT_BRANCH='main'
+ ```
-### Test the application locally
+1. Verify that you are in the application repository's root directory.
-Run Laravel database migrations with *.env.production* as the environment file to create the tables in your MySQL database in Azure Database for MySQL. Remember that _.env.production_ has the connection information to your MySQL database in Azure.
+1. To add an Azure remote to your local Git repository, run the following command.
-```bash
-php artisan migrate --env=production --force
-```
+ **Note:** Replace `<deploymentLocalGitUrl>` with the URL of the Git remote that you saved in the **Create an App Service web app** step.
-*.env.production* doesn't have a valid application key yet. Generate a new one for it in the terminal.
+ ```azurecli-interactive
+ git remote add azure <deploymentLocalGitUrl>
+ ```
-```bash
-php artisan key:generate --env=production --force
-```
+1. To deploy your app by performing a `git push` to the Azure remote, run the following command. When Git Credential Manager prompts you for credentials, enter the deployment credentials that you created in **Configure a deployment user** step.
-Run the sample application with *.env.production* as the environment file.
+ ```azurecli-interactive
+ git push azure main
+ ```
-```bash
-php artisan serve --env=production
-```
+The deployment may take a few minutes to succeed.
-Navigate to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
+## Test your application
-Add a few tasks in the page.
+Finally, test the application by browsing to `https://<app-name>.azurewebsites.net`, and then add, view, update, or delete items from the product catalog.
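+
+As a quick optional smoke test from the command line, you can check that the site responds (replace the placeholder with your app name):
+
+```bash
+# Expect an HTTP 200 response once the deployment has finished
+curl -I https://<app-name>.azurewebsites.net
+```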
-To stop PHP, type `Ctrl + C` in the terminal.
+Congratulations! You have successfully deployed a sample PHP application to Azure App Service and integrated it with Azure Database for MySQL - Flexible Server on the back end.
-### Commit your changes
+## Update and redeploy the app
-Run the following Git commands to commit your changes:
+To update the Azure app, make the necessary code changes, commit all the changes in Git, and then push the code changes to Azure.
```bash git add .
-git commit -m "database.php updates"
-```
-
-Your app is ready to be deployed.
-
-## Deploy to Azure
-
-In this step, you deploy the MySQL-connected PHP application to Azure App Service.
-
-### Configure a deployment user
-
-FTP and local Git can deploy to an Azure web app by using a deployment user. Once you configure your deployment user, you can use it for all your Azure deployments. Your account-level deployment username and password are different from your Azure subscription credentials.
-
-To configure the deployment user, run the [az webapp deployment user set](/cli/azure/webapp/deployment/user#az-webapp-deployment-user-set) command in Azure Cloud Shell. Replace _&lt;username>_ and _&lt;password>_ with your deployment user username and password.
-
-The username must be unique within Azure, and for local Git pushes, must not contain the '@' symbol.
-The password must be at least eight characters long, with two of the following three elements: letters, numbers, and symbols.
-
-```azurecli
-az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku F1 --is-linux
-```
-
-The JSON output shows the password as null. If you get a 'Conflict'. Details: 409 error, change the username. If you get a 'Bad Request'. Details: 400 error, use a stronger password. **Record your username and password to use to deploy your web apps.**
-
-### Create an App Service plan
-
-In the Cloud Shell, create an App Service plan in the resource group with the [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create) command. The following example creates an App Service plan named myAppServicePlan in the Free pricing tier (--sku F1) and in a Linux container (--is-linux).
-
-az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku F1 --is-linux
-
-<a name="create"></a>
-
-### Create a web app
-
-Create a [web app](../../app-service/overview.md#app-service-on-linux) in the myAppServicePlan App Service plan.
-
-In the Cloud Shell, you can use the [az webapp create](/cli/azure/webapp#az-webapp-create) command. In the following example, replace _&lt;app-name>_ with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.0`. To see all supported runtimes, run [az webapp list-runtimes --os linux](/cli/azure/webapp#az-webapp-list-runtimes).
-
-```azurecli
-az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.3" --deployment-local-git
-```
-
-When the web app has been created, the Azure CLI shows output similar to the following example:
-
-```
-Local git is configured with url of 'https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git'
-{
- "availabilityState": "Normal",
- "clientAffinityEnabled": true,
- "clientCertEnabled": false,
- "cloningInfo": null,
- "containerSize": 0,
- "dailyMemoryTimeQuota": 0,
- "defaultHostName": "<app-name>.azurewebsites.net",
- "deploymentLocalGitUrl": "https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git",
- "enabled": true,
- < JSON data removed for brevity. >
-}
-```
-
-You've created an empty new web app, with git deployment enabled.
-
-> [!NOTE]
-> The URL of the Git remote is shown in the deploymentLocalGitUrl property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL as you need it later.
-
-### Configure database settings
-
-In App Service, you set environment variables as *app settings* by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command.
-
-The following command configures the app settings `DB_HOST`, `DB_DATABASE`, `DB_USERNAME`, and `DB_PASSWORD`. Replace the placeholders _&lt;app-name>_ and _&lt;mysql-server-name>_.
-
-```azurecli-interactive
-az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DB_HOST="<mysql-server-name>.mysql.database.azure.com" DB_DATABASE="sampledb" DB_USERNAME="phpappuser" DB_PASSWORD="MySQLAzure2017" MYSQL_SSL="true"
-```
-
-You can use the PHP [getenv](https://www.php.net/manual/en/function.getenv.php) method to access the settings. the Laravel code uses an [env](https://laravel.com/docs/5.4/helpers#method-env) wrapper over the PHP `getenv`. For example, the MySQL configuration in *config/database.php* looks like the following code:
-
-```php
-'mysql' => [
- 'driver' => 'mysql',
- 'host' => env('DB_HOST', 'localhost'),
- 'database' => env('DB_DATABASE', 'forge'),
- 'username' => env('DB_USERNAME', 'forge'),
- 'password' => env('DB_PASSWORD', ''),
- ...
-],
-```
-
-### Configure Laravel environment variables
-
-Laravel needs an application key in App Service. You can configure it with app settings.
-
-In the local terminal window, use `php artisan` to generate a new application key without saving it to *.env*.
-
-```bash
-php artisan key:generate --show
-```
-
-In the Cloud Shell, set the application key in the App Service app by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command. Replace the placeholders _&lt;app-name>_ and _&lt;outputofphpartisankey:generate>_.
-
-```azurecli-interactive
-az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings APP_KEY="<output_of_php_artisan_key:generate>" APP_DEBUG="true"
-```
-
-`APP_DEBUG="true"` tells Laravel to return debugging information when the deployed app encounters errors. When running a production application, set it to `false`, which is more secure.
-
-### Set the virtual application path
-
-[Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the *public* directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. However, you can use `.htaccess` to rewrite all requests to point to */public* instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
-
-For more information, see [Change site root](../../app-service/configure-language-php.md?pivots=platform-linux#change-site-root).
-
-### Push to Azure from Git
-
-Back in the local terminal window, add an Azure remote to your local Git repository. Replace _&lt;deploymentLocalGitUrl-from-create-step>_ with the URL of the Git remote that you saved from [Create a web app](#create-a-web-app).
-
-```bash
-git remote add azure <deploymentLocalGitUrl-from-create-step>
-```
-
-Push to the Azure remote to deploy your app with the following command. When Git Credential Manager prompts you for credentials, make sure you enter the credentials you created in **Configure a deployment user**, not the credentials you use to sign in to the Azure portal.
-
-```bash
+git commit -m "Update Azure app"
git push azure main
```
-This command may take a few minutes to run. While running, it displays information similar to the following example:
-
-<pre>
-Counting objects: 3, done.
-Delta compression using up to 8 threads.
-Compressing objects: 100% (3/3), done.
-Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
-Total 3 (delta 2), reused 0 (delta 0)
-remote: Updating branch 'main'.
-remote: Updating submodules.
-remote: Preparing deployment for commit id 'a5e076db9c'.
-remote: Running custom deployment command...
-remote: Running deployment command...
-...
-&lt; Output has been truncated for readability &gt;
-</pre>
-
-### Browse to the Azure app
-
-Browse to `http://<app-name>.azurewebsites.net` and add a few tasks to the list.
--
-Congratulations, you're running a data-driven PHP app in Azure App Service.
-
-## Update model locally and redeploy
-
-In this step, you make a simple change to the `task` data model and the webapp, and then publish the update to Azure.
-
-For the tasks scenario, you modify the application so that you can mark a task as complete.
-
-### Add a column
-
-In the local terminal window, navigate to the root of the Git repository.
-
-Generate a new database migration for the `tasks` table:
-
-```bash
-php artisan make:migration add_complete_column --table=tasks
-```
-
-This command shows you the name of the migration file that's generated. Find this file in *database/migrations* and open it.
-
-Replace the `up` method with the following code:
-
-```php
-public function up()
-{
- Schema::table('tasks', function (Blueprint $table) {
- $table->boolean('complete')->default(False);
- });
-}
-```
-
-The preceding code adds a boolean column in the `tasks` table called `complete`.
-
-Replace the `down` method with the following code for the rollback action:
-
-```php
-public function down()
-{
- Schema::table('tasks', function (Blueprint $table) {
- $table->dropColumn('complete');
- });
-}
-```
-
-In the local terminal window, run Laravel database migrations to make the change in the local database.
-
-```bash
-php artisan migrate
-```
-
-Based on the [Laravel naming convention](https://laravel.com/docs/5.4/eloquent#defining-models), the model `Task` (see *app/Task.php*) maps to the `tasks` table by default.
-
-### Update application logic
-
-Open the *routes/web.php* file. The application defines its routes and business logic here.
-
-At the end of the file, add a route with the following code:
-
-```php
-/**
- * Toggle Task completeness
- */
-Route::post('/task/{id}', function ($id) {
- error_log('INFO: post /task/'.$id);
- $task = Task::findOrFail($id);
-
- $task->complete = !$task->complete;
- $task->save();
-
- return redirect('/');
-});
-```
-
-The preceding code makes a simple update to the data model by toggling the value of `complete`.
-
-### Update the view
-
-Open the *resources/views/tasks.blade.php* file. Find the `<tr>` opening tag and replace it with:
-
-```html
-<tr class="{{ $task->complete ? 'success' : 'active' }}" >
-```
-
-The preceding code changes the row color depending on whether the task is complete.
-
-In the next line, you have the following code:
-
-```html
-<td class="table-text"><div>{{ $task->name }}</div></td>
-```
-
-Replace the entire line with the following code:
-
-```html
-<td>
- <form action="{{ url('task/'.$task->id) }}" method="POST">
- {{ csrf_field() }}
-
- <button type="submit" class="btn btn-xs">
- <i class="fa {{$task->complete ? 'fa-check-square-o' : 'fa-square-o'}}"></i>
- </button>
- {{ $task->name }}
- </form>
-</td>
-```
-
-The preceding code adds the submit button that references the route that you defined earlier.
-
-### Test the changes locally
-
-In the local terminal window, run the development server from the root directory of the Git repository.
-
-```bash
-php artisan serve
-```
-
-To see the task status change, navigate to `http://localhost:8000` and select the checkbox.
--
-To stop PHP, type `Ctrl + C` in the terminal.
-
-### Publish changes to Azure
-
-In the local terminal window, run Laravel database migrations with the production connection string to make the change in the Azure database.
-
-```bash
-php artisan migrate --env=production --force
-```
-
-Commit all the changes in Git, and then push the code changes to Azure.
-
-```bash
-git add .
-git commit -m "added complete checkbox"
-git push azure main
-```
-
-Once the `git push` is complete, navigate to the Azure app and test the new functionality.
--
-If you added any tasks, they are retained in the database. Updates to the data schema leave existing data intact.
+Once the `git push` is complete, navigate to or refresh the Azure app to test the new functionality.
## Clean up resources
-In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
+In this tutorial, you created all the Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
-```azurecli
-az group delete --name myResourceGroup
+```azurecli-interactive
+az group delete --name rg-php-demo
```
-<a name="next"></a>
- ## Next steps > [!div class="nextstepaction"]
-> [How to manage your resources in Azure portal](../../azure-resource-manager/management/manage-resources-portal.md) <br/>
+> [How to manage your resources in Azure portal](../../azure-resource-manager/management/manage-resources-portal.md)
+ > [!div class="nextstepaction"] > [How to manage your server](how-to-manage-server-cli.md)+
mysql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-webapp-server-vnet.md
Title: 'Tutorial: Create Azure Database for MySQL Flexible Server and Azure App Service Web App in same virtual network'
-description: Quickstart guide to create Azure Database for MySQL Flexible Server with Web App in a virtual network
+ Title: 'Tutorial: Connect an App Service web app to an Azure Database for MySQL flexible server in a virtual network'
+description: Tutorial to create and connect Web App to Azure Database for MySQL Flexible Server in a virtual network
Last updated 03/18/2021
-# Tutorial: Create an Azure Database for MySQL - Flexible Server with App Services Web App in virtual network
+# Tutorial: Connect an App Service web app to an Azure Database for MySQL flexible server in a virtual network
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-This tutorial shows you how create a Azure App Service Web App with MySQL Flexible Server inside a [Virtual network](../../virtual-network/virtual-networks-overview.md).
+This tutorial shows you how to create and connect an Azure App Service web app to an Azure Database for MySQL flexible server isolated inside the same or different [virtual networks](../../virtual-network/virtual-networks-overview.md).
In this tutorial you will learn how to: >[!div class="checklist"]
+>
> * Create a MySQL flexible server in a virtual network
-> * Create a subnet to delegate to App Service
-> * Create a web app
+> * Create a subnet to delegate to App Service and create a web app
> * Add the web app to the virtual network
-> * Connect to Postgres from the web app
+> * Connect to MySQL flexible server from the web app
+> * Connect a Web app and MySQL flexible server isolated in different VNets
## Prerequisites
Configure the web app to allow all outbound connections from within the virtual
az webapp config set --name mywebapp --resource-group myresourcesourcegroup --generic-configurations '{"vnetRouteAllEnabled": true}'
```
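For context, the earlier step that adds the web app to the virtual network is typically done with the `az webapp vnet-integration add` command. The following is a minimal sketch that reuses this tutorial's placeholder names; the VNet and subnet names are assumptions, so check the tutorial steps for the exact values:

```azurecli-interactive
# Integrate the web app with a delegated subnet in the virtual network (placeholder names).
az webapp vnet-integration add --name mywebapp --resource-group myresourcesourcegroup --vnet myVNet --subnet webappsubnet
```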
+## App Service Web app and MySQL flexible server in different virtual networks
+
+If you created the App Service app and the MySQL flexible server in different virtual networks (VNets), use one of the following methods to establish a seamless connection (a CLI sketch follows this list):
+
+- **Connect the two VNets using VNet peering** (local or global). See [Connect virtual networks with virtual network peering](../../virtual-network/tutorial-connect-virtual-networks-cli.md) guide.
+- **Link MySQL flexible server's Private DNS zone to the web app's VNet using virtual network links.** If you use the Azure portal or the Azure CLI to create MySQL flexible servers in a VNet, a new private DNS zone is auto-provisioned in your subscription using the server name provided. Navigate to the flexible server's private DNS zone and follow the [How to link the private DNS zone to a virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) guide to set up a virtual network link.
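As a rough illustration of both options, the following Azure CLI sketch uses placeholder VNet, zone, and resource group names that aren't part of this tutorial; substitute your own values and verify the details in the linked guides:

```azurecli-interactive
# Option 1: peer the web app's VNet with the MySQL flexible server's VNet (placeholder names).
# Repeat the command in the reverse direction; use the remote VNet's resource ID if it's in another resource group.
az network vnet peering create --name webapp-to-mysql --resource-group myresourcegroup \
    --vnet-name webapp-vnet --remote-vnet mysql-vnet --allow-vnet-access

# Option 2: link the flexible server's auto-created private DNS zone to the web app's VNet (placeholder zone name).
az network private-dns link vnet create --resource-group myresourcegroup \
    --zone-name mydemoserver.private.mysql.database.azure.com \
    --name webapp-vnet-link --virtual-network webapp-vnet --registration-enabled false
```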
+ ## Clean up resources Clean up all resources you created in the tutorial using the following command. This command deletes all the resources in this resource group.
mysql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-powershell.md
Last updated 06/20/2022 + # Auto grow storage in Azure Database for MySQL server using PowerShell [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Quickstart Create Mysql Server Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-portal.md
Title: 'Quickstart: Create a server - Azure portal - Azure Database for MySQL'
description: This article walks you through using the Azure portal to create a sample Azure Database for MySQL server in about five minutes. + - Last updated 06/20/2022
object-anchors Unity Remoting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/quickstarts/unity-remoting.md
description: In this quickstart, you learn how to enable Unity Remoting in a pro
Previously updated : 04/04/2022 Last updated : 06/22/2022
To complete this quickstart, make sure you have:
|Component |Unity 2019 |Unity 2020 | |--|-|-| |Unity Editor | 2019.4.36f1 | 2020.3.30f1 |
-|Windows Mixed Reality XR Plugin | 2.9.2 | 4.6.2 |
+|Windows Mixed Reality XR Plugin | 2.9.3 | 4.6.3 |
|Holographic Remoting Player | 2.7.5 | 2.7.5 | |Azure Object Anchors SDK | 0.19.0 | 0.19.0 | |Mixed Reality WinRT Projections | 0.5.2009 | 0.5.2009 |
To complete this quickstart, make sure you have:
## One-time setup 1. On your HoloLens, install version 2.7.5 or newer of the [Holographic Remoting Player](https://www.microsoft.com/p/holographic-remoting-player/9nblggh4sv40) via the Microsoft Store. 1. In the <a href="/windows/mixed-reality/develop/unity/welcome-to-mr-feature-tool" target="_blank">Mixed Reality Feature Tool</a>, under the **Platform Support** section, install the **Mixed Reality WinRT Projections** feature package, version 0.5.2009 or newer, into your Unity project folder.
-1. In the Unity **Package Manager** window, ensure that the **Windows XR Plugin** is updated to version 2.9.2 or newer for Unity 2019, or version 4.6.2 or newer for Unity 2020.
-1. In the Unity **Project Settings** window, click on the **XR Plug-in Management** section, select the **PC Standalone** tab, and ensure that the box for **Windows Mixed Reality** is checked, as well as **Initialize XR on Startup**.
-1. Open the **Windows XR Plugin Remoting** window from the **Window/XR** menu, select **Remote to Device** from the drop-down, and enter your device's IP address in the **Remote Machine** box.
-1. Place .ou model files in `%USERPROFILE%\AppData\LocalLow\<companyname>\<productname>` where `<companyname>` and `<productname>` match the values in the **Player** section of your project's **Project Settings** (e.g. `Microsoft\AOABasicApp`). (See the **Windows Editor and Standalone Player** section of [Unity - Scripting API: Application.persistentDataPath](https://docs.unity3d.com/ScriptReference/Application-persistentDataPath.html).)
+1. In the Unity **Package Manager** window, ensure that the **Windows XR Plugin** is updated to version 2.9.3 or newer for Unity 2019, or version 4.6.3 or newer for Unity 2020.
+1. In the Unity **Project Settings** window, select the **XR Plug-in Management** section, select the **PC Standalone** tab, and ensure that the **Windows Mixed Reality** and **Initialize XR on Startup** checkboxes are checked.
+1. Place `.ou` model files in `%USERPROFILE%\AppData\LocalLow\<companyname>\<productname>` where `<companyname>` and `<productname>` match the values in the **Player** section of your project's **Project Settings** (for example, `Microsoft\AOABasicApp`). (See the **Windows Editor and Standalone Player** section of [Unity - Scripting API: Application.persistentDataPath](https://docs.unity3d.com/ScriptReference/Application-persistentDataPath.html).)
## Using Remoting with Object Anchors
+1. Launch the **Holographic Remoting Player** app on your HoloLens. Your device's IP address will be displayed for convenient reference.
1. Open your project in the Unity Editor.
-1. Launch the **Holographic Remoting Player** app on your HoloLens.
-1. *Before* entering **Play Mode** for the first time, *uncheck* the **Connect on Play** checkbox, and manually connect to the HoloLens by pressing **Connect**.
- 1. Enter **Play Mode** to finish initializing the connection.
- 1. After this, you may reenable **Connect on Play** for the remainder of the session.
-1. Enter and exit Play Mode as needed; iterate on changes in the Editor; use Visual Studio to debug script execution, and all the normal Unity development activities you're used to in Play Mode!
+1. Open the **Windows XR Plugin Remoting** window from the **Window/XR** menu, select **Remote to Device** from the drop-down, ensure your device's IP address is entered in the **Remote Machine** box, and make sure that the **Connect on Play** checkbox is checked.
+1. Enter and exit Play Mode as needed - Unity will connect to the Player app running on the device, and display your scene in real time! You can iterate on changes in the Editor, use Visual Studio to debug script execution, and do all the normal Unity development activities you're used to in Play Mode!
## Known limitations
-* Some Object Anchors SDK features are not supported since they rely on access to the HoloLens cameras which is not currently available via Remoting. These include <a href="/dotnet/api/microsoft.azure.objectanchors.objectobservationmode">Active Observation Mode</a> and <a href="/dotnet/api/microsoft.azure.objectanchors.objectinstancetrackingmode">High Accuracy Tracking Mode</a>.
-* The Object Anchors SDK currently only supports Unity Remoting while using the **Windows Mixed Reality XR Plugin**. If the **OpenXR XR Plugin** is used, <a href="/dotnet/api/microsoft.azure.objectanchors.objectobserver.issupported">`ObjectObserver.IsSupported`</a> will return `false` in **Play Mode** and other APIs may throw exceptions.
+* Some Object Anchors SDK features aren't supported since they rely on access to the HoloLens cameras, which isn't currently available via Remoting. These include <a href="/dotnet/api/microsoft.azure.objectanchors.objectobservationmode">Active Observation Mode</a> and <a href="/dotnet/api/microsoft.azure.objectanchors.objectinstancetrackingmode">High Accuracy Tracking Mode</a>.
+* The Object Anchors SDK currently only supports Unity Remoting while using the **Windows Mixed Reality XR Plugin**. If the **OpenXR XR Plugin** is used, <a href="/dotnet/api/microsoft.azure.objectanchors.objectobserver.issupported">`ObjectObserver.IsSupported`</a> will return `false` in **Play Mode** and other APIs may throw exceptions.
openshift Howto Configure Ovn Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-configure-ovn-kubernetes.md
+
+ Title: Configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters (preview)
+description: In this how-to article, learn how to configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters (preview).
++++ Last updated : 06/13/2022
+topic: how-to
+keywords: azure, openshift, aro, red hat, azure CLI, azure portal, ovn, ovn-kubernetes, CNI, Container Network Interface
+Customer intent: I need to configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
++
+# Configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters
+
+This article explains how to configure the OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
+
+## About the OVN-Kubernetes default Container Network Interface (CNI) network provider (preview)
+
+The OVN-Kubernetes Container Network Interface (CNI) for Azure Red Hat OpenShift (ARO) clusters is now available in preview.
+
+The OpenShift Container Platform cluster uses a virtualized network for pod and service networks. The OVN-Kubernetes Container Network Interface (CNI) plug-in is a network provider for the default cluster network. OVN-Kubernetes, which is based on the Open Virtual Network (OVN), provides an overlay-based networking implementation.
+
+A cluster that uses the OVN-Kubernetes network provider also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.
+
+> [!IMPORTANT]
+> Currently, this Azure Red Hat OpenShift feature is being offered in preview only. Preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they are excluded from the service-level agreements and limited warranty. Azure Red Hat OpenShift previews are partially covered by customer support on a best-effort basis. As such, these features are not meant for production use.
+
+## OVN-Kubernetes features
+
+The OVN-Kubernetes CNI cluster network provider offers the following features:
+
+* Uses OVN to manage network traffic flows. OVN is a community developed, vendor-agnostic network virtualization solution.
+* Implements Kubernetes network policy support, including ingress and egress rules.
+* Uses the Generic Network Virtualization Encapsulation (Geneve) protocol rather than the Virtual Extensible LAN (VXLAN) protocol to create an overlay network between nodes.
+
+For more information about OVN-Kubernetes CNI network provider, see [About the OVN-Kubernetes default Container Network Interface (CNI) network provider](https://docs.openshift.com/container-platform/4.10/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.html).
+
+## Prerequisites
+
+Complete the following prerequisites.
+### Install and use the preview Azure Command-Line Interface (CLI)
+
+> [!NOTE]
+> The Azure CLI extension is required for the preview feature only.
+
+If you choose to install and use the CLI locally, ensure you're running Azure CLI version 2.37.0 or later. Run `az --version` to find the version. For details on installing or upgrading Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+1. Use the following URL to download both the Python wheel and the CLI extension:
+
+ [https://aka.ms/az-aroext-latest.whl](https://aka.ms/az-aroext-latest.whl)
+
+2. Run the following command:
+
+```azurecli-interactive
+az extension add --upgrade -s <path to downloaded .whl file>
+```
+
+3. Verify the CLI extension is being used:
+
+```azurecli-interactive
+az extension list
+[
+ {
+ "experimental": false,
+ "extensionType": "whl",
+ "name": "aro",
+ "path": "<path may differ depending on system>",
+ "preview": true,
+ "version": "1.0.6"
+ }
+]
+```
+
+4. Run the following command:
+
+```azurecli-interactive
+az aro create --help
+```
+
+The result should show the `--sdn-type` option, as follows:
+
+```json
+--sdn-type --software-defined-network-type : SDN type either "OpenShiftSDN" (default) or "OVNKubernetes". Allowed values: OVNKubernetes, OpenShiftSDN
+```
+
+## Create an Azure Red Hat OpenShift cluster with OVN as the network provider
+
+The process to create an Azure Red Hat OpenShift cluster with OVN is exactly the same as the existing process explained in [Tutorial: Create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md), with one exception: you must also pass in the SDN type of `OVNKubernetes` in step 5 below.
+
+The following high-level procedure outlines the steps to create an Azure Red Hat OpenShift cluster with OVN as the network provider:
+
+1. Install the preview Azure CLI extension.
+2. Verify your permissions.
+3. Register the resource providers.
+4. Create a virtual network containing two empty subnets.
+5. Create an Azure Red Hat OpenShift cluster by using OVN CNI network provider.
+6. Verify the Azure Red Hat OpenShift cluster is using OVN CNI network provider.
+
+## Verify your permissions
+
+Using OVN CNI network provider for Azure Red Hat OpenShift clusters requires you to create a resource group, which will contain the virtual network for the cluster. You must have either Contributor and User Access Administrator permissions or have Owner permissions either directly on the virtual network or on the resource group or subscription containing it.
+
+You'll also need sufficient Azure Active Directory permissions (either a member user of the tenant, or a guest user assigned with role Application administrator) for the tooling to create an application and service principal on your behalf for the cluster. For more information about user roles, see [Member and guest users](../active-directory/fundamentals/users-default-permissions.md#member-and-guest-users) and [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+
+## Register the resource providers
+
+If you have multiple Azure subscriptions, you must register the resource providers. For information about the registration procedure, see [Register the resource providers](tutorial-create-cluster.md#register-the-resource-providers).
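As a minimal sketch, registration from the Azure CLI typically looks like the following. The provider namespaces listed here are assumptions based on the linked tutorial, so confirm the full list there:

```azurecli-interactive
# Select the subscription to use, then register the resource providers required by Azure Red Hat OpenShift.
az account set --subscription <subscription-id>
az provider register --namespace Microsoft.RedHatOpenShift --wait
az provider register --namespace Microsoft.Compute --wait
az provider register --namespace Microsoft.Storage --wait
az provider register --namespace Microsoft.Authorization --wait
```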
+
+## Create a virtual network containing two empty subnets
+
+If you have an existing virtual network that meets your needs, you can skip this step. For the procedure to create a virtual network, see [Create a virtual network containing two empty subnets](tutorial-create-cluster.md#create-a-virtual-network-containing-two-empty-subnets).
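If you prefer the CLI, a minimal sketch follows. The names and address prefixes are illustrative assumptions; the linked tutorial remains the authoritative reference for the required subnet settings:

```azurecli-interactive
# Create a virtual network with two empty subnets for the control plane and worker nodes (illustrative values).
az network vnet create --resource-group $RESOURCEGROUP --name aro-vnet --address-prefixes 10.0.0.0/22

az network vnet subnet create --resource-group $RESOURCEGROUP --vnet-name aro-vnet \
    --name master-subnet --address-prefixes 10.0.0.0/23 --service-endpoints Microsoft.ContainerRegistry

az network vnet subnet create --resource-group $RESOURCEGROUP --vnet-name aro-vnet \
    --name worker-subnet --address-prefixes 10.0.2.0/23 --service-endpoints Microsoft.ContainerRegistry
```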
+
+## Create an Azure Red Hat OpenShift cluster by using OVN-Kubernetes CNI network provider
+
+Run the following command to create an Azure Red Hat OpenShift cluster that uses the OVN CNI network provider:
+
+```
+az aro create --resource-group $RESOURCEGROUP \
+ --name $CLUSTER \
+ --vnet aro-vnet \
+ --master-subnet master-subnet \
+ --worker-subnet worker-subnet \
+ --sdn-type OVNKubernetes \
+ --pull-secret @pull-secret.txt
+```
+
+## Verify an Azure Red Hat OpenShift cluster is using the OVN CNI network provider
+
+After the cluster is successfully configured to use the OVN CNI network provider, sign in to your account and run the following command:
+
+```
+oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
+```
+
+The value of `status.networkType` must be `OVNKubernetes`.
partner-solutions Add Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/add-connectors.md
Title: Add connectors for Confluent Cloud - Azure partner solutions
-description: This article describes how to install connectors for Confluent Cloud that you use with Azure resources.
+ Title: Azure services and Confluent Cloud integration - Azure partner solutions
+description: This article describes how to use Azure services and install connectors for Confluent Cloud integration.
Previously updated : 09/03/2021 Last updated : 06/24/2022
-# Add connectors for Confluent Cloud
+# Azure services and Confluent Cloud integrations
-This article describes how to install connectors to Azure resources for Confluent Cloud.
+This article describes how to use Azure services like Azure Functions with Confluent Cloud, and how to install connectors to Azure resources for Confluent Cloud.
-## Connector to Azure Cosmos DB
+## Azure Cosmos DB connector
**Azure Cosmos DB Sink Connector fully managed connector** is generally available within Confluent Cloud. The fully managed connector eliminates the need for the development and management of custom integrations, and reduces the overall operational burden of connecting your data between Confluent Cloud and Azure Cosmos DB. The Microsoft Azure Cosmos Sink Connector for Confluent Cloud reads from and writes data to a Microsoft Azure Cosmos database. The connector polls data from Kafka and writes to database containers. To set up your connector, see [Azure Cosmos DB Sink Connector for Confluent Cloud](https://docs.confluent.io/cloud/current/connectors/cc-azure-cosmos-sink.html).
-**Azure Cosmos DB Self Managed connector** must be installed manually. First download an uber JAR from the [Cosmos DB Releases page](https://github.com/microsoft/kafka-connect-cosmosdb/releases). Or, you can [build your own uber JAR directly from the source code](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/README_Sink.md#install-sink-connector). Complete the installation by following the guidance described in the Confluent documentation for [installing connectors manually](https://docs.confluent.io/home/connect/install.html#install-connector-manually).
+**Azure Cosmos DB Self Managed connector** must be installed manually. First download an uber JAR from the [Cosmos DB Releases page](https://github.com/microsoft/kafka-connect-cosmosdb/releases). Or, you can [build your own uber JAR directly from the source code](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/README_Sink.md#install-sink-connector). Complete the installation by following the guidance described in the Confluent documentation for [installing connectors manually](https://docs.confluent.io/home/connect/install.html#install-connector-manually).
+
+## Azure Functions
+
+**Azure Functions Kafka trigger extension** is used to run your function code in response to messages in Kafka topics. You can also use a Kafka output binding to write from your function to a topic. For information about setup and configuration details, see [Apache Kafka bindings for Azure Functions overview](../../azure-functions/functions-bindings-kafka.md).
## Next steps
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
This model of high availability deployment enables Flexible server to be highly
Automatic backups are performed periodically from the primary database server, while the transaction logs are continuously archived to the backup storage from the standby replica. If the region supports availability zones, then backup data is stored on zone-redundant storage (ZRS). In regions that don't support availability zones, backup data is stored on locally redundant storage (LRS). :::image type="content" source="./media/business-continuity/concepts-same-zone-high-availability-architecture.png" alt-text="Same-zone high availability":::
+>[!NOTE]
+> See the [HA limitation section](#high-availability---limitations) for a current restriction with same-zone HA deployment.
+ ## Components and workflow ### Transaction completion
Flexible servers that are configured with high availability, log data is replica
## High availability - limitations
+>[!NOTE]
+> New server creations with **Same-zone HA** are currently restricted when you choose the primary server's AZ. Workarounds are to (a) create your same-zone HA server without choosing the primary AZ, or (b) create a single-instance (non-HA) server and then enable same-zone HA after the server is created.
+ * High availability is not supported with burstable compute tier. * High availability is supported only in regions where multiple zones are available. * Due to synchronous replication to the standby server, especially with zone-redundant HA, applications can experience elevated write and commit latency.
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-csharp.md
Last updated 11/30/2021
# Quickstart: Use .NET (C#) to connect and query data in Azure Database for PostgreSQL - Flexible Server This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using C#, and that you are new to working with Azure Database for PostgreSQL. ## Prerequisites+ For this quickstart you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
For this quickstart you need:
- Install [Npgsql](https://www.nuget.org/packages/Npgsql/) NuGet package in Visual Studio. ## Get connection information+ Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials. 1. Log in to the [Azure portal](https://portal.azure.com/).
Get the connection information needed to connect to the Azure Database for Postg
:::image type="content" source="./media/connect-csharp/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name"::: ## Step 1: Connect and insert data+ Use the following code to connect and load the data using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database. - [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) sets the CommandText property.
namespace Driver
[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues) ## Step 2: Read data+ Use the following code to connect and read the data using a **SELECT** SQL statement. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL. - [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) and [ExecuteReader()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteReader) to run the database commands.
namespace Driver
## Step 3: Update data+ Use the following code to connect and update the data using an **UPDATE** SQL statement. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL. - [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand), sets the CommandText property.
namespace Driver
[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues) ## Step 4: Delete data+ Use the following code to connect and delete data using a **DELETE** SQL statement. The code uses NpgsqlCommand class with method [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database. Then, the code uses the [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) method, sets the CommandText property, and calls the method [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
az group delete \
``` ## Next steps+ > [!div class="nextstepaction"] > [Manage Azure Database for PostgreSQL server using Portal](./how-to-manage-server-portal.md)<br/>
postgresql How To Manage High Availability Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-high-availability-portal.md
Previously updated : 06/16/2022 Last updated : 06/23/2022 # Manage high availability in Flexible Server
This section provides details specifically for HA-related fields. You can follow
4. If you chose the Availability zone in step 2 and if you chose zone-redundant HA, then you can choose the standby zone. :::image type="content" source="./media/how-to-manage-high-availability-portal/choose-standby-availability-zone.png" alt-text="Screenshot of Standby AZ selection.":::
-5. If you want to change the default compute and storage, click **Configure server**.
+>[!NOTE]
+> See the [HA limitation section](concepts-high-availability.md#high-availability---limitations) for a current restriction with same-zone HA deployment.
+
+1. If you want to change the default compute and storage, click **Configure server**.
:::image type="content" source="./media/how-to-manage-high-availability-portal/configure-server.png" alt-text="Screenshot of configure compute and storage screen.":::
-6. If high availability option is checked, the burstable tier will not be available to choose. You can choose either
+2. If high availability option is checked, the burstable tier will not be available to choose. You can choose either
**General purpose** or **Memory Optimized** compute tiers. Then you can select **compute size** for your choice from the dropdown. :::image type="content" source="./media/how-to-manage-high-availability-portal/select-compute.png" alt-text="Compute tier selection screen.":::
-7. Select **storage size** in GiB using the sliding bar and select the **backup retention period** between 7 days and 35 days.
+3. Select **storage size** in GiB using the sliding bar and select the **backup retention period** between 7 days and 35 days.
:::image type="content" source="./media/how-to-manage-high-availability-portal/storage-backup.png" alt-text="Screenshot of Storage Backup.":::
-8. Click **Save**.
+4. Click **Save**.
## Enable high availability post server creation
Follow these steps to perform a planned failover from your primary to the standb
> > * The overall end-to-end operation time may be longer than the actual downtime experienced by the application. Please measure the downtime from the application perspective.
+## Enabling Zone redundant HA after the region supports AZ
+
+Some Azure regions don't support availability zones. If you deployed a non-HA server before the region supported availability zones, you can't directly enable zone-redundant HA on that server. However, you can perform a point-in-time restore and enable HA on the restored server. The following steps show how to enable zone-redundant HA for such a server (a CLI sketch follows these steps).
+
+1. From the overview page of the server, click **Restore** to [perform a PITR](how-to-restore-server-portal.md#restoring-to-the-latest-restore-point). Choose **Latest restore point**.
+2. Choose a server name and an availability zone.
+3. Click **Review+Create**.
+4. A new Flexible server will be created from the backup.
+5. Once the new server is created, from the overview page of the server, follow the [guide](#enable-high-availability-post-server-creation) to enable HA.
+6. After data verification, you can optionally [delete](how-to-manage-server-portal.md#delete-a-server) the old server.
+7. Make sure your clients' connection strings are modified to point to your new HA-enabled server.
+
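If you prefer to script the restore step, the following Azure CLI sketch shows the idea. The server names and timestamp are placeholders; choosing the availability zone and enabling HA can still be done in the portal as described in the preceding steps:

```azurecli-interactive
# Restore the existing server to a new server (placeholder names and example timestamp),
# then enable zone-redundant HA on the new server as described above.
az postgres flexible-server restore --resource-group myresourcegroup \
    --name mydemoserver-restored --source-server mydemoserver \
    --restore-time "2022-06-23T10:00:00Z"
```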
## Next steps
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Previously updated : 06/15/2022 Last updated : 06/23/2022
$ New Zone-redundant high availability deployments are temporarily blocked in th
$$ New server deployments are temporarily blocked in these regions. Already provisioned servers are fully supported.
-** Zone-redundant high availability can now be deployed when you provision new servers in these regions. Pre-existing servers deployed in AZ with *no preference* (which you can check on the Azure portal), the standby will be provisioned in the same AZ. To configure zone-redundant high availability, perform a point-in-time restore of the server and enable HA on the restored server.
+** Zone-redundant high availability can now be deployed when you provision new servers in these regions. For existing servers that were deployed in an AZ with *no preference* (which you can check on the Azure portal) before the region started to support AZs, enabling zone-redundant HA provisions the standby in the same AZ as the primary server (same-zone HA). To enable zone-redundant high availability, [follow these steps](how-to-manage-high-availability-portal.md#enabling-zone-redundant-ha-after-the-region-supports-az).
<!-- We continue to add more regions for flexible server. --> > [!NOTE]
postgresql Application Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/application-best-practices.md
description: Learn about best practices for building an app by using Azure Datab
- + Previously updated : 12/10/2020 Last updated : 06/24/2022 # Best practices for building an application with Azure Database for PostgreSQL
Here are a few tools and practices that you can use to help debug performance is
With connection pooling, a fixed set of connections is established at the startup time and maintained. This also helps reduce the memory fragmentation on the server that is caused by the dynamic new connections established on the database server. The connection pooling can be configured on the application side if the app framework or database driver supports it. If that is not supported, the other recommended option is to leverage a proxy connection pooler service like [PgBouncer](https://pgbouncer.github.io/) or [Pgpool](https://pgpool.net/mediawiki/index.php/Main_Page) running outside the application and connecting to the database server. Both PgBouncer and Pgpool are community based tools that work with Azure Database for PostgreSQL. ### Retry logic to handle transient errors+ Your application might experience transient errors where connections to the database are dropped or lost intermittently. In such situations, the server is up and running after one to two retries in 5 to 10 seconds. A good practice is to wait for 5 seconds before your first retry. Then follow each retry by increasing the wait gradually, up to 60 seconds. Limit the maximum number of retries at which point your application considers the operation failed, so you can then further investigate. See [How to troubleshoot connection errors](./concepts-connectivity.md) to learn more. ### Enable read replication to mitigate failovers
-You can use [Data-in Replication](./concepts-read-replicas.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs. You'll notice a lag between the source and the replica because the replication is asynchronous. Network lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
+You can use [Data-in Replication](./concepts-read-replicas.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs. You'll notice a lag between the source and the replica because the replication is asynchronous. Network lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
## Database deployment ### Configure CI/CD deployment pipeline+ Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub Actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it. ### Define manual database deployment process+ During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment: - Create a copy of a production database on a new database by using pg_dump.
During manual database deployment, follow these steps to minimize downtime or re
> If the application is like an e-commerce app and you can't put it in read-only state, deploy the changes directly on the production database after making a backup. These changes should occur during off-peak hours with low traffic to the app to minimize the impact, because some users might experience failed requests. Make sure your application code also handles any failed requests. ## Database schema and queries+ Here are a few tips to keep in mind when you build your database schema and your queries. ### Use BIGINT or UUID for Primary Keys+ When you build a custom application, or when you use some frameworks, `INT` may be used instead of `BIGINT` for primary keys. When you use ```INT```, you run the risk that the value in your database can exceed the storage capacity of the ```INT``` data type. Making this change to an existing production application can be time consuming and cost more development time. Another option is to use [UUID](https://www.postgresql.org/docs/current/datatype-uuid.html) for primary keys. This identifier uses an auto-generated 128-bit string, for example ```a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11```. Learn more about [PostgreSQL data types](https://www.postgresql.org/docs/8.1/datatype.html). ### Use indexes
When building custom application or some frameworks they maybe using `INT` inste
There are many types of [indexes](https://www.postgresql.org/docs/9.1/indexes.html) in Postgres which can be used in different ways. Using an index helps the server find and retrieve specific rows much faster than it could without an index. But indexes also add overhead to the database server, so avoid having too many indexes. ### Use autovacuum+ You can optimize your server with autovacuum on an Azure Database for PostgreSQL server. PostgreSQL allows greater database concurrency, but every update results in an insert and a delete. For deletes, the records are soft-marked and purged later. To carry out these tasks, PostgreSQL runs a vacuum job. If you don't vacuum from time to time, the dead tuples that accumulate can result in: - Data bloat, such as larger databases and tables.
You can optimize your server with autovacuum on an Azure Database for PostgreSQL
Learn more about [how to optimize with autovacuum](how-to-optimize-autovacuum.md). ### Use pg_stats_statements
-Pg_stat_statements is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. See [how to use pg_statement](how-to-optimize-query-stats-collection.md).
+Pg_stat_statements is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. See [how to use pg_statement](how-to-optimize-query-stats-collection.md).
### Use the Query Store+ The [Query Store](./concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using pg_stats_statements. ### Optimize bulk inserts and use transient data+ If you have workload operations that involve transient data or that insert large datasets in bulk, consider using unlogged tables. It provides atomicity and durability, by default. Atomicity, consistency, isolation, and durability make up the ACID properties. See [how to optimize bulk inserts](how-to-optimize-bulk-inserts.md). ## Next Steps+ [Postgres Guide](http://postgresguide.com/)
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concept-reserved-pricing.md
Previously updated : 10/06/2021 Last updated : 06/24/2022 # Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
Last updated 10/06/2021
Azure Database for PostgreSQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for PostgreSQL reserved capacity, you make an upfront commitment on PostgreSQL server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for PostgreSQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. </br> ## How does the instance reservation work?+ You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. An already running Azure Database for PostgreSQL (or ones that are newly deployed) will automatically get the benefit of reserved pricing. By purchasing a reservation, you're pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation does not cover software, networking, or storage charges associated with the PostgreSQL Database servers. At the end of the reservation term, the billing benefit expires, and the Azure Database for PostgreSQL are billed at the pay-as-you go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/). </br> > [!IMPORTANT]
The size of reservation should be based on the total amount of compute used by t
For example, let's suppose that you are running one general purpose Gen5 - 32 vCore PostgreSQL database, and two memory-optimized Gen5 - 16 vCore PostgreSQL databases. Further, let's suppose that you plan to deploy within the next month an additional general purpose Gen5 - 8 vCore database server, and one memory-optimized Gen5 - 32 vCore database server. Let's suppose that you know that you will need these resources for at least one year. In this case, you should purchase a 40 (32 + 8) vCore, one-year reservation for single database general purpose - Gen5 and a 64 (2x16 + 32) vCore, one-year reservation for single database memory optimized - Gen5 - ## Buy Azure Database for PostgreSQL reserved capacity 1. Sign in to the [Azure portal](https://portal.azure.com/).
For example, let's suppose that you are running one general purpose Gen5 ΓÇô 32
3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases. 4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for PostgreSQL servers that get the discount depend on the scope and quantity selected. - :::image type="content" source="media/concepts-reserved-pricing/postgresql-reserved-price.png" alt-text="Overview of reserved pricing"::: - The following table describes required fields. | Field | Description |
Use Azure APIs to programmatically get information for your organization about A
- View and manage reservation access - Split or merge reservations - Change the scope of reservations
-
+ For more information, see [APIs for Azure reservation automation](../../cost-management-billing/reservations/reservation-apis.md). ## vCore size flexibility
postgresql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-aks.md
Previously updated : 07/14/2020 Last updated : 06/24/2022 # Connecting Azure Kubernetes Service and Azure Database for PostgreSQL - Single Server
Last updated 07/14/2020
Azure Kubernetes Service (AKS) provides a managed Kubernetes cluster you can use in Azure. Below are some options to consider when using AKS and Azure Database for PostgreSQL together to create an application. ## Accelerated networking+ Use accelerated networking-enabled underlying VMs in your AKS cluster. When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. Learn more about how accelerated networking works, the supported OS versions, and supported VM instances for [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md). From November 2018, AKS supports accelerated networking on those supported VM instances. Accelerated networking is enabled by default on new AKS clusters that use those VMs.
az network nic list --resource-group nodeResourceGroup -o table
``` ## Connection pooling
-A connection pooler minimizes the cost and time associated with creating and closing new connections to the database. The pool is a collection of connections that can be reused.
-There are multiple connection poolers you can use with PostgreSQL. One of these is [PgBouncer](https://pgbouncer.github.io/). In the Microsoft Container Registry, we provide a lightweight containerized PgBouncer that can be used in a sidecar to pool connections from AKS to Azure Database for PostgreSQL. Visit the [docker hub page](https://hub.docker.com/r/microsoft/azureossdb-tools-pgbouncer/) to learn how to access and use this image.
+A connection pooler minimizes the cost and time associated with creating and closing new connections to the database. The pool is a collection of connections that can be reused.
+
+There are multiple connection poolers you can use with PostgreSQL. One of these is [PgBouncer](https://pgbouncer.github.io/). In the Microsoft Container Registry, we provide a lightweight containerized PgBouncer that can be used in a sidecar to pool connections from AKS to Azure Database for PostgreSQL. Visit the [docker hub page](https://hub.docker.com/r/microsoft/azureossdb-tools-pgbouncer/) to learn how to access and use this image.
## Next steps
-Create an AKS cluster [using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md).
+Create an AKS cluster [using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md).
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-audit.md
Previously updated : 01/28/2020 Last updated : 06/24/2022 # Audit logging in Azure Database for PostgreSQL - Single Server
To use the [portal](https://portal.azure.com):
1. On the left, under **Settings**, select **Server parameters**. 1. Search for **shared_preload_libraries**. 1. Select **PGAUDIT**.
-
+ :::image type="content" source="./media/concepts-audit/share-preload-parameter.png" alt-text="Screenshot that shows Azure Database for PostgreSQL enabling shared_preload_libraries for PGAUDIT."::: 1. Restart the server to apply the change. 1. Check that `pgaudit` is loaded in `shared_preload_libraries` by executing the following query in psql:
-
+ ```SQL show shared_preload_libraries; ``` You should see `pgaudit` in the query result that will return `shared_preload_libraries`. 1. Connect to your server by using a client like psql, and enable the pgAudit extension:
-
+ ```SQL CREATE EXTENSION pgaudit; ```
To configure pgAudit, in the [portal](https://portal.azure.com):
1. On the left, under **Settings**, select **Server parameters**. 1. Search for the **pgaudit** parameters. 1. Select appropriate settings parameters to edit. For example, to start logging, set **pgaudit.log** to **WRITE**.
-
+ :::image type="content" source="./media/concepts-audit/pgaudit-config.png" alt-text="Screenshot that shows Azure Database for PostgreSQL configuring logging with pgAudit."::: 1. Select **Save** to save your changes. You can also set this parameter from the Azure CLI, as sketched below.
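For reference, this is a minimal sketch of setting the same parameter from the Azure CLI for a single server, with placeholder resource group and server names:

```azurecli-interactive
# Set pgaudit.log to WRITE on an Azure Database for PostgreSQL single server (placeholder names).
az postgres server configuration set --resource-group myresourcegroup \
    --server-name mydemoserver --name pgaudit.log --value WRITE
```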
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-azure-ad-authentication.md
Previously updated : 07/23/2020 Last updated : 06/24/2022 # Use Azure Active Directory for authenticating with PostgreSQL
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-azure-advisor-recommendations.md
Previously updated : 04/08/2021 Last updated : 06/24/2022 + # Azure Advisor for PostgreSQL [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-backup.md
Previously updated : 06/14/2022 Last updated : 06/24/2022 # Backup and restore in Azure Database for PostgreSQL - Single Server
These backup files cannot be exported. The backups can only be used for restore
For servers that support up to 4-TB maximum storage, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes. - #### Servers with up to 16-TB storage
-In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support up to 16-TB storage. Backups on these large storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only. Differential snapshot backups do not occur on a fixed schedule. In a day, multiple differential snapshot backups are performed, but only 3 backups are retained. Transaction log backups occur every five minutes.
+In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support up to 16-TB storage. Backups on these large storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only. Differential snapshot backups do not occur on a fixed schedule. In a day, multiple differential snapshot backups are performed, but only 3 backups are retained. Transaction log backups occur every five minutes.
> [!NOTE] > Automatic backups are performed for [replica servers](./concepts-read-replicas.md) that are configured with up to 4TB storage configuration. ### Backup retention
-Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](./how-to-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./how-to-restore-server-cli.md#set-backup-configuration).
+Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](./how-to-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./how-to-restore-server-cli.md#set-backup-configuration).
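As an illustration of updating the retention period from the Azure CLI, the following sketch uses placeholder names for a single server; confirm the exact parameter against the CLI reference linked above:

```azurecli-interactive
# Increase the backup retention period to 14 days on an existing single server (placeholder names).
az postgres server update --resource-group myresourcegroup --name mydemoserver --backup-retention 14
```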
The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example - if the backup retention period is set to 7 days, the recovery window is considered last 7 days. In this scenario, all the backups required to restore the server in last 7 days are retained. With a backup retention window of seven days: - Servers with up to 4-TB storage will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.
Azure Database for PostgreSQL provides the flexibility to choose between locally
Azure Database for PostgreSQL provides up to 100% of your provisioned server storage as backup storage at no extra cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no extra cost. Storage consumed for backups more than 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/).
-You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available in the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
+You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available in the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
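As a rough monitoring sketch, the metric can also be pulled with the Azure CLI; the resource ID below is a placeholder and the metric name `backup_storage_used` is an assumption to verify against the monitoring article linked above:

```bash
# Show backup storage consumed over the last day; the resource ID is a placeholder
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/servers/mydemoserver" \
  --metric backup_storage_used \
  --interval PT1H \
  --aggregation Average
```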
The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals. You can select a retention period from a range of 7 to 35 days. General Purpose and Memory Optimized servers can choose to have geo-redundant storage for backups. ## Restore
-In Azure Database for PostgreSQL, performing a restore creates a new server from the original server's backups.
+In Azure Database for PostgreSQL, performing a restore creates a new server from the original server's backups.
There are two types of restore available:
There are two types of restore available:
The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time varies depending on the last data backup and the amount of recovery that needs to be performed. It is usually less than 12 hours. > [!NOTE]
-> If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
+> If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
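For reference, a point-in-time restore can be scripted as well; a minimal Azure CLI sketch with hypothetical server names and timestamp:

```bash
# Restore mydemoserver to a new server at a specific point in time (UTC)
az postgres server restore \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2022-06-20T13:10:00Z"
```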
> [!NOTE] > If you want to restore a deleted PostgreSQL server, follow the procedure documented [here](how-to-restore-dropped-server.md).
If you want to restore a dropped table,
5. You can optionally delete the restored server. >[!Note]
-> It is recommended not to create multiple restores for the same server at the same time.
+> It is recommended not to create multiple restores for the same server at the same time.
### Geo-restore
After a restore from either recovery mechanism, you should perform the following
- Learn how to restore using [the Azure portal](how-to-restore-server-portal.md). - Learn how to restore using [the Azure CLI](how-to-restore-server-cli.md).-- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).
+- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-business-continuity.md
Previously updated : 08/07/2020 Last updated : 06/24/2022 # Overview of business continuity with Azure Database for PostgreSQL - Single Server
The following table compares RTO and RPO in a **typical workload** scenario:
| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h | | Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
- \* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
+\* RTO and RPO **can be much higher** in some cases depending on various factors, including latency between sites, the amount of data to be transmitted, and, importantly, the primary database write workload.
## Recover a server after a user or application error
The geo-restore feature restores the server using geo-redundant backups. The bac
> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. If you wish to switch from locally redundant to geo-redundant backups for an existing server, you must take a dump using pg_dump of your existing server and restore it to a newly created server configured with geo-redundant backups. ## Cross-region read replicas
-You can use cross region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag the primary. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
+
+You can use cross-region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag behind the primary. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
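As a hedged example, creating a cross-region read replica with the Azure CLI might look like the following sketch; the server names, resource group, and target region are placeholders:

```bash
# Create a read replica of mydemoserver in another region
az postgres server replica create \
  --resource-group myresourcegroup \
  --name mydemoserver-replica \
  --source-server mydemoserver \
  --location westus2
```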
## FAQ+ ### Where does Azure Database for PostgreSQL store customer data?
-By default, Azure Database for PostgreSQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally chose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
+By default, Azure Database for PostgreSQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create a [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
## Next steps+ - Learn more about the [automated backups in Azure Database for PostgreSQL](concepts-backup.md). - Learn how to restore using [the Azure portal](how-to-restore-server-portal.md) or [the Azure CLI](how-to-restore-server-cli.md). - Learn about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
Previously updated : 09/02/2020 Last updated : 06/24/2022 # Understanding the changes in the Root CA change for Azure Database for PostgreSQL Single server
Azure database for PostgreSQL users can only use the predefined certificate to c
As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for PostgreSQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your PostgreSQL servers.
-The new certificate is rolled out and in effect starting February 15, 2021 (02/15/2021).
+The new certificate is rolled out and in effect starting February 15, 2021 (02/15/2021).
## What change was performed on February 15, 2021 (02/15/2021)?
There is no change required on client side. if you followed our previous recomme
Then replace the original keystore file with the newly generated one: * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file"); * System.setProperty("javax.net.ssl.trustStorePassword","password");
-
+ * For .NET (Npgsql) users on Windows, make sure **Baltimore CyberTrust Root** and **DigiCert Global Root G2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates do not exist, import the missing certificate. ![Azure Database for PostgreSQL .net cert](media/overview/netconnecter-cert.png)
There is no change required on client side. if you followed our previous recomme
* For other PostgreSQL client users, you can merge two CA certificate files like this format below </br>--BEGIN CERTIFICATE--
- </br>(Root CA1: BaltimoreCyberTrustRoot.crt.pem)
- </br>--END CERTIFICATE--
- </br>--BEGIN CERTIFICATE--
- </br>(Root CA2: DigiCertGlobalRootG2.crt.pem)
- </br>--END CERTIFICATE--
+</br>(Root CA1: BaltimoreCyberTrustRoot.crt.pem)
+</br>--END CERTIFICATE--
+</br>--BEGIN CERTIFICATE--
+</br>(Root CA2: DigiCertGlobalRootG2.crt.pem)
+</br>--END CERTIFICATE--
* Replace the original root CA pem file with the combined root CA file and restart your application/client. * In future, after the new certificate deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem. > [!NOTE]
-> Please do not drop or alter **Baltimore certificate** until the cert change is made. We will send a communication once the change is done, after which it is safe for them to drop the Baltimore certificate.
+> Please do not drop or alter the **Baltimore certificate** until the cert change is made. We will send a communication once the change is done, after which it is safe for you to drop the Baltimore certificate.
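To illustrate the merge described in the steps above, here is a small shell sketch that downloads both root CA files (the same URLs referenced in this article) and concatenates them into one PEM file:

```bash
# Download both root CA certificates and combine them into a single file for the client to trust
curl -O https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
curl -O https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > combined-root-ca.pem
# Point your client's root CA setting (for example, sslrootcert in libpq) at combined-root-ca.pem
```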
## Why was BaltimoreCyberTrustRoot certificate not replaced to DigiCertGlobalRootG2 during this change on February 15, 2021?
-We evaluated the customer readiness for this change and realized many customers were looking for additional lead time to manage this change. In the interest of providing more lead time to customers for readiness, we have decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year providing sufficient lead time to the customers and end users.
+We evaluated customer readiness for this change and realized many customers were looking for additional lead time to manage it. To provide that readiness time, we have decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year, giving customers and end users sufficient lead time.
-Our recommendations to users is, use the aforementioned steps to create a combined certificate and connect to your server but do not remove BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
+Our recommendation to users is to use the aforementioned steps to create a combined certificate and connect to your server, but not to remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
## What if we removed the BaltimoreCyberTrustRoot certificate? You will start to see connectivity errors while connecting to your Azure Database for PostgreSQL server. You will need to configure SSL with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity. - ## Frequently asked questions ### 1. If I am not using SSL/TLS, do I still need to update the root CA?
-No actions required if you are not using SSL/TLS.
+
+No action is required if you are not using SSL/TLS.
### 2. If I am using SSL/TLS, do I need to restart my database server to update the root CA?+ No, you do not need to restart the database server to start using the new certificate. This is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server. ### 3. How do I know if I'm using SSL/TLS with root certificate verification? You can identify whether your connections verify the root certificate by reviewing your connection string.-- If your connection string includes `sslmode=verify-ca` or `sslmode=verify-full`, you need to update the certificate.-- If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you do not need to update certificates. -- If your connection string does not specify sslmode, you do not need to update certificates.
+- If your connection string includes `sslmode=verify-ca` or `sslmode=verify-full`, you need to update the certificate.
+- If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you do not need to update certificates.
+- If your connection string does not specify sslmode, you do not need to update certificates.
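For example, a client that verifies the root certificate might connect with a string like the following sketch; the server name, user, and certificate path are hypothetical:

```bash
# Connect with full certificate verification against the combined root CA file (names are placeholders)
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=verify-full sslrootcert=combined-root-ca.pem"
```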
If you are using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates. To understand PostgreSQL sslmode, review the [SSL mode descriptions](https://www.postgresql.org/docs/11/libpq-ssl.html#ssl-mode-descriptions) in the PostgreSQL documentation. ### 4. What is the impact if using App Service with Azure Database for PostgreSQL?+ For Azure App Service apps connecting to Azure Database for PostgreSQL, there are two possible scenarios, depending on how you are using SSL with your application. * This new certificate has been added to App Service at the platform level. If you are using the SSL certificates included on the App Service platform in your application, then no action is needed. * If you are explicitly including the path to the SSL cert file in your code, then you would need to download the new cert and update the code to use it. A good example of this scenario is when you use custom containers in App Service, as shared in the [App Service documentation](../../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress). ### 5. What is the impact if using Azure Kubernetes Services (AKS) with Azure Database for PostgreSQL?+ If you are trying to connect to Azure Database for PostgreSQL using Azure Kubernetes Services (AKS), it is similar to access from a dedicated customer host environment. Refer to the steps [here](../../aks/ingress-own-tls.md). ### 6. What is the impact if using Azure Data Factory to connect to Azure Database for PostgreSQL?+ For connectors using the Azure Integration Runtime, the connector leverages certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible with the newly applied certificates, and therefore no action is needed. For connectors using a Self-hosted Integration Runtime where you explicitly include the path to the SSL cert file in your connection string, you will need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it. ### 7. Do I need to plan a database server maintenance downtime for this change?+ No. Since the change here is only on the client side to connect to the database server, there is no maintenance downtime needed for the database server for this change. ### 8. If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?+ For servers created after February 15, 2021 (02/15/2021), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate for your applications to connect using SSL. ### 9. How often does Microsoft update their certificates or what is the expiry policy?+ The certificates used by Azure Database for PostgreSQL are provided by trusted Certificate Authorities (CAs), so the support of these certificates is tied to their support by the CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft will need to perform a certificate change before the expiry. Also, if there are unforeseen bugs in these predefined certificates, Microsoft will need to rotate the certificate as early as possible, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times. ### 10.
If I am using read replicas, do I need to perform this update only on the primary server or the read replicas?
-Since this update is a client-side change, if the client used to read data from the replica server, you will need to apply the changes for those clients as well.
+
+Since this update is a client-side change, you will need to apply the changes to any clients used to read data from the replica server as well.
### 11. Do we have a server-side query to verify if SSL is being used?+ To verify whether you are using an SSL connection to connect to the server, refer to [SSL verification](concepts-ssl-connection-security.md#applications-that-require-certificate-verification-for-tls-connectivity). ### 12. Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?+ No. There is no action needed if your certificate file already has the **DigiCertGlobalRootG2**. ### 13. What if you are using the docker image of the PgBouncer sidecar provided by Microsoft?
-A new docker image which supports both [**Baltimore**](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and [**DigiCert**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) is published to below [here](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) (Latest tag). You can pull this new image to avoid any interruption in connectivity starting February 15, 2021.
+
+A new docker image which supports both [**Baltimore**](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and [**DigiCert**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) is published [here](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) (Latest tag). You can pull this new image to avoid any interruption in connectivity starting February 15, 2021.
### 14. What if I have further questions?+ If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com)
postgresql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connection-libraries.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Connection libraries for Azure Database for PostgreSQL - Single Server
Most language client libraries used to connect to PostgreSQL server are external
| C++ | [libpqxx](http://pqxx.org/) | New-style C++ interface | [Download](http://pqxx.org/download/software/) | ## Next steps+ Read these quickstarts on how to connect to and query Azure Database for PostgreSQL by using your language of choice: [Python](./connect-python.md) | [Node.JS](./connect-nodejs.md) | [Java](./connect-java.md) | [Ruby](./connect-ruby.md) | [PHP](./connect-php.md) | [.NET (C#)](./connect-csharp.md) | [Go](./connect-go.md)
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md
Previously updated : 10/15/2021 Last updated : 06/24/2022 # Connectivity architecture in Azure Database for PostgreSQL
Connection to your Azure Database for PostgreSQL is established through a gatewa
:::image type="content" source="./media/concepts-connectivity-architecture/connectivity-architecture-overview-proxy.png" alt-text="Overview of the connectivity architecture"::: - As client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 5432. Inside the database cluster, traffic is forwarded to appropriate Azure Database for PostgreSQL. Therefore, in order to connect to your server, such as from corporate networks, it is necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region. ## Azure Database for PostgreSQL gateway IP addresses
-The gateway service is hosted on group of stateless compute nodes sitting behind an IP address, which your client would reach first when trying to connect to an Azure Database for PostgreSQL server.
+The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client would reach first when trying to connect to an Azure Database for PostgreSQL server.
-As part of ongoing service maintenance, we will periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant connectivity experience. When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for PostgreSQL servers and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. The older gateway hardware continues serving existing servers but are planned for decommissioning in future. Before decommissioning a gateway hardware, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal, three months in advance before decommissioning. The decommissioning of gateways can impact the connectivity to your servers if
+As part of ongoing service maintenance, we will periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant connectivity experience. When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for PostgreSQL servers, and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. The older gateway hardware continues serving existing servers but is planned for decommissioning in the future. Before a gateway hardware ring is decommissioned, customers whose servers connect to the older gateway rings will be notified via email and in the Azure portal three months in advance. The decommissioning of gateways can impact the connectivity to your servers if
* You hard code the gateway IP addresses in the connection string of your application. This is **not recommended**. You should use the fully qualified domain name (FQDN) of your server, in the format `<servername>.postgres.database.azure.com`, in the connection string for your application. * You do not update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
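To check which gateway IP address your server's FQDN currently resolves to, a quick sketch (the server name is hypothetical):

```bash
# Resolve the server FQDN to see which gateway IP address is currently in use
nslookup mydemoserver.postgres.database.azure.com
```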
The following table lists the gateway IP addresses of the Azure Database for Pos
* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you are provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column. * **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you are provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound firewall rule for these IP addresses, as they have not been decommissioned yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you are expected to proactively add the new IP addresses listed in the Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This will ensure that when your server is migrated to the latest gateway hardware, there are no interruptions in connectivity to your server.
-* **Gateway IP addresses (decommissioned):** This columns lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
-
+* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings that are decommissioned and no longer in operation. You can safely remove these IP addresses from your outbound firewall rule.
| **Region name** | **Gateway IP addresses** |**Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** | |:-|:-|:-|:|
The following table lists the gateway IP addresses of the Azure Database for Pos
| West US |13.86.216.212, 13.86.217.212 |104.42.238.205 | 23.99.34.75| | West US 2 | 13.66.226.202, 13.66.136.192,13.66.136.195 | | | | West US 3 | 20.150.184.2 | | |
-||||
## Frequently asked questions ### What you need to know about this planned maintenance?+ This is a DNS change only, which makes it transparent to clients. While the IP address for the FQDN is changed in the DNS server, the local DNS cache is refreshed within 5 minutes, automatically, by the operating system. After the local DNS refresh, all new connections will connect to the new IP address, and all existing connections will remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address will take roughly three to four weeks before being decommissioned; therefore, it should have no effect on the client applications. ### What are we decommissioning?+ Only Gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We are decommissioning old gateway rings (not tenant rings where the server is running); refer to the [connectivity architecture](#connectivity-architecture) for more clarification. ### How can you validate if your connections are going to old gateway nodes or new gateway nodes?+ Ping your server's FQDN, for example ``ping xxx.postgres.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the document above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway. You may also test by [PSPing](/sysinternals/downloads/psping) or TCPPing the database server from your client application with port 5432 and ensure that the returned IP address isn't one of the decommissioning IP addresses. ### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned?
-You will receive an email to inform you when we will start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in al regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
+
+You will receive an email to inform you when we will start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in all regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
### What do I do if my client applications are still connecting to the old gateway server?+ This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review connection strings, connection pooling settings, AKS settings, or even the source code. ### Is there any impact for my application connections?+ This maintenance is just a DNS change, so it is transparent to the client. Once the DNS cache is refreshed in the client (automatically done by the operating system), all new connections will connect to the new IP address, and all existing connections will keep working until the old IP addresses are fully decommissioned, usually several weeks later. Retry logic is not required for this case, but it is good practice for the application to have retry logic configured. Please either use the FQDN to connect to the database server or allow-list the new 'Gateway IP addresses' in your application connection string. This maintenance operation will not drop the existing connections. It only makes new connection requests go to the new gateway ring. ### Can I request a specific time window for the maintenance? + As the migration should be transparent, with no impact to customers' connectivity, we expect there will be no issues for the majority of users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or allow-list the new 'Gateway IP addresses' in your application connection string. ### I am using private link, will my connections get affected?
-No, this is a gateway hardware decommission and have no relation to private link or private IP addresses, it will only affect public IP addresses mentioned under the decommissioning IP addresses.
+No. This is a gateway hardware decommission and has no relation to Private Link or private IP addresses; it will only affect the public IP addresses mentioned under the decommissioning IP addresses.
## Next steps * [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md)
-* [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule)
+* [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule)
postgresql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Handling transient connectivity errors for Azure Database for PostgreSQL - Single Server
postgresql Concepts Data Access And Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-access-and-security-private-link.md
Previously updated : 03/10/2020 Last updated : 06/24/2022 # Private Link for Azure Database for PostgreSQL-Single server
Private endpoints are required to enable Private Link. This can be done using th
* [CLI](./how-to-configure-privatelink-cli.md) ### Approval Process
-Once the network admin creates the private endpoint (PE), the PostgreSQL admin can manage the private endpoint Connection (PEC) to Azure Database for PostgreSQL. This separation of duties between the network admin and the DBA is helpful for management of the Azure Database for PostgreSQL connectivity.
+
+Once the network admin creates the private endpoint (PE), the PostgreSQL admin can manage the private endpoint Connection (PEC) to Azure Database for PostgreSQL. This separation of duties between the network admin and the DBA is helpful for management of the Azure Database for PostgreSQL connectivity.
* Navigate to the Azure Database for PostgreSQL server resource in the Azure portal. * Select the private endpoint connections in the left pane
Once the network admin creates the private endpoint (PE), the PostgreSQL admin c
## Use cases of Private Link for Azure Database for PostgreSQL - Clients can connect to the private endpoint from the same VNet, a [peered VNet](../../virtual-network/virtual-network-peering-overview.md) in the same region or across regions, or via a [VNet-to-VNet connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases. :::image type="content" source="media/concepts-data-access-and-security-private-link/show-private-link-overview.png" alt-text="select the private endpoint overview"::: ### Connecting from an Azure VM in Peered Virtual Network (VNet)+ Configure [VNet peering](../../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for PostgreSQL - Single server from an Azure VM in a peered VNet. ### Connecting from an Azure VM in VNet-to-VNet environment+ Configure [VNet-to-VNet VPN gateway connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for PostgreSQL - Single server from an Azure VM in a different region or subscription. ### Connecting from an on-premises environment over VPN+ To establish connectivity from an on-premises environment to the Azure Database for PostgreSQL - Single server, choose and implement one of the options: * [Point-to-Site connection](../../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
The following situations and outcomes are possible when you use Private Link in
## Deny public access for Azure Database for PostgreSQL Single server
-If you want to rely only on private endpoints for accessing their Azure Database for PostgreSQL Single server, you can disable setting all public endpoints ([firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
+If you want to rely only on private endpoints for accessing your Azure Database for PostgreSQL Single server, you can disable all public endpoints ([firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
When this setting is set to *YES*, only connections via private endpoints are allowed to your Azure Database for PostgreSQL. When this setting is set to *NO*, clients can connect to your Azure Database for PostgreSQL based on your firewall or VNet service endpoint setting. Additionally, once the value of the Private network access is set, customers cannot add and/or update existing 'Firewall rules' and 'VNet service endpoint rules'.
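A hedged Azure CLI sketch of toggling this setting is shown below; the `--public-network-access` parameter name is an assumption, so confirm it against the current `az postgres server update` reference:

```bash
# Deny public network access so that only private endpoint connections are allowed
az postgres server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --public-network-access Disabled
```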
postgresql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-access-and-security-vnet.md
Previously updated : 07/17/2020 Last updated : 06/24/2022 # Use Virtual Network service endpoints and rules for Azure Database for PostgreSQL - Single Server
You can also consider using [Private Link](concepts-data-access-and-security-pri
**Subnet:** A virtual network contains **subnets**. Any Azure virtual machine (VM) within the VNet is assigned to a subnet. A subnet can contain multiple VMs and/or other compute nodes. Compute nodes that are outside of your virtual network cannot access your virtual network unless you configure your security to allow access.
-**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article we are interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it will configure service endpoint traffic for Azure Database
+**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article we are interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for PostgreSQL and MySQL services. It is important to note that when applying the **Microsoft.Sql** service tag to a VNet service endpoint, it will configure service endpoint traffic for Azure Database
**Virtual network rule:** A virtual network rule for your Azure Database for PostgreSQL server is a subnet that is listed in the access control list (ACL) of your Azure Database for PostgreSQL server. To be in the ACL for your Azure Database for PostgreSQL server, the subnet must contain the **Microsoft.Sql** type name.
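For illustration, creating such a rule with the Azure CLI could look like the following sketch; the names and subnet resource ID are placeholders:

```bash
# Allow traffic from a subnet that has the Microsoft.Sql service endpoint enabled (IDs are placeholders)
az postgres server vnet-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name myVNetRule \
  --subnet "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/mySubnet"
```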
You can salvage the IP option by obtaining a *static* IP address for your VM. Fo
However, the static IP approach can become difficult to manage, and it is costly when done at scale. Virtual network rules are easier to establish and to manage. - <a name="anch-details-about-vnet-rules-38q"></a> ## Details about virtual network rules
Merely setting a VNet firewall rule does not help secure the server to the VNet.
You can set the **IgnoreMissingServiceEndpoint** flag by using the Azure CLI or portal. ## Related articles+ - [Azure virtual networks][vm-virtual-network-overview] - [Azure virtual network service endpoints][vm-virtual-network-service-endpoints-overview-649d] ## Next steps+ For articles on creating VNet rules, see: - [Create and manage Azure Database for PostgreSQL VNet rules using the Azure portal](how-to-manage-vnet-using-portal.md) - [Create and manage Azure Database for PostgreSQL VNet rules using Azure CLI](how-to-manage-vnet-using-cli.md) - <!-- Link references, to text, Within this same GitHub repo. --> [arm-deployment-model-568f]: ../../azure-resource-manager/management/deployment-models.md
For articles on creating VNet rules, see:
[expressroute-indexmd-744v]: ../../expressroute/index.yml
-[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
postgresql Concepts Data Encryption Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-encryption-postgresql.md
Previously updated : 01/13/2020 Last updated : 06/24/2022 # Azure Database for PostgreSQL Single server data encryption with a customer-managed key
Data encryption with customer-managed keys for Azure Database for PostgreSQL Sin
* Enabling encryption does not have any additional performance impact with or without a customer-managed key (CMK), as PostgreSQL relies on the Azure storage layer for data encryption in both scenarios; the only difference is that when a CMK is used, the **Azure Storage Encryption Key**, which performs the actual data encryption, is encrypted using the CMK. * Ability to implement separation of duties between security officers, DBAs, and system administrators. - ## Terminology and description **Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-extensions.md
Previously updated : 03/25/2021 Last updated : 06/24/2022 + # PostgreSQL extensions in Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
Azure Database for PostgreSQL supports a subset of key extensions as listed belo
## Postgres 11 extensions
-The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 11.
+The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 11.
> [!div class="mx-tableFixed"] > | **Extension**| **Extension version** | **Description** |
The following extensions are available in Azure Database for PostgreSQL servers
> |[unaccent](https://www.postgresql.org/docs/11/unaccent.html) | 1.1 | text search dictionary that removes accents| > |[uuid-ossp](https://www.postgresql.org/docs/11/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
-## Postgres 10 extensions
+## Postgres 10 extensions
The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 10.
The following extensions are available in Azure Database for PostgreSQL servers
> |[unaccent](https://www.postgresql.org/docs/10/unaccent.html) | 1.1 | text search dictionary that removes accents| > |[uuid-ossp](https://www.postgresql.org/docs/10/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
-## Postgres 9.6 extensions
+## Postgres 9.6 extensions
The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 9.6.
The following extensions are available in Azure Database for PostgreSQL servers
> |[unaccent](https://www.postgresql.org/docs/9.5/unaccent.html) | 1.0 | text search dictionary that removes accents| > |[uuid-ossp](https://www.postgresql.org/docs/9.5/uuid-ossp.html) | 1.0 | generate universally unique identifiers (UUIDs)| - ## pg_stat_statements+ The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Database for PostgreSQL server to provide you a means of tracking execution statistics of SQL statements. The setting `pg_stat_statements.track`, which controls what statements are counted by the extension, defaults to `top`, meaning all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`. This setting is configurable as a server parameter through the [Azure portal](./how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](./how-to-configure-server-parameters-using-cli.md). There is a tradeoff between the query execution information pg_stat_statements provides and the impact on server performance as it logs each SQL statement. If you are not actively using the pg_stat_statements extension, we recommend that you set `pg_stat_statements.track` to `none`. Note that some third party monitoring services may rely on pg_stat_statements to deliver query performance insights, so confirm whether this is the case for you or not. ## dblink and postgres_fdw+ [dblink](https://www.postgresql.org/docs/current/contrib-dblink-function.html) and [postgres_fdw](https://www.postgresql.org/docs/current/postgres-fdw.html) allow you to connect from one PostgreSQL server to another, or to another database in the same server. The receiving server needs to allow connections from the sending server through its firewall. When using these extensions to connect between Azure Database for PostgreSQL servers, this can be done by setting "Allow access to Azure services" to ON. This is also needed if you want to use the extensions to loop back to the same server. The "Allow access to Azure services" setting can be found in the Azure portal page for the Postgres server, under Connection Security. Turning "Allow access to Azure services" ON puts all Azure IPs on the allow list. > [!NOTE] > Currently, outbound connections from Azure Database for PostgreSQL via foreign data wrapper extensions such as postgres_fdw are not supported, except for connections to other Azure Database for PostgreSQL servers in the same Azure region. ## uuid+ If you are planning to use `uuid_generate_v4()` from the [uuid-ossp extension](https://www.postgresql.org/docs/current/uuid-ossp.html), consider comparing with `gen_random_uuid()` from the [pgcrypto extension](https://www.postgresql.org/docs/current/pgcrypto.html) for performance benefits. ## pgAudit
-The [pgAudit extension](https://github.com/pgaudit/pgaudit/blob/master/README.md) provides session and object audit logging. To learn how to use this extension in Azure Database for PostgreSQL, visit the [auditing concepts article](concepts-audit.md).
+
+The [pgAudit extension](https://github.com/pgaudit/pgaudit/blob/master/README.md) provides session and object audit logging. To learn how to use this extension in Azure Database for PostgreSQL, visit the [auditing concepts article](concepts-audit.md).
## pg_prewarm+ The pg_prewarm extension loads relational data into cache. Prewarming your caches means that your queries have better response times on their first run after a restart. In Postgres 10 and below, prewarming is done manually using the [prewarm function](https://www.postgresql.org/docs/10/pgprewarm.html).
-In Postgres 11 and above, you can configure prewarming to happen [automatically](https://www.postgresql.org/docs/current/pgprewarm.html). You need to include pg_prewarm in your `shared_preload_libraries` parameter's list and restart the server to apply the change. Parameters can be set from the [Azure portal](how-to-configure-server-parameters-using-portal.md), [CLI](how-to-configure-server-parameters-using-cli.md), REST API, or ARM template.
+In Postgres 11 and above, you can configure prewarming to happen [automatically](https://www.postgresql.org/docs/current/pgprewarm.html). You need to include pg_prewarm in your `shared_preload_libraries` parameter's list and restart the server to apply the change. Parameters can be set from the [Azure portal](how-to-configure-server-parameters-using-portal.md), [CLI](how-to-configure-server-parameters-using-cli.md), REST API, or ARM template.
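For instance, adding pg_prewarm to `shared_preload_libraries` and restarting could be scripted as in this minimal Azure CLI sketch; the server and resource group names are placeholders, and you should confirm the accepted parameter values for your server:

```bash
# Add pg_prewarm to shared_preload_libraries, then restart the server so the change takes effect
az postgres server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name shared_preload_libraries \
  --value pg_prewarm
az postgres server restart --resource-group myresourcegroup --name mydemoserver
```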
## TimescaleDB+ TimescaleDB is a time-series database that is packaged as an extension for PostgreSQL. TimescaleDB provides time-oriented analytical functions, optimizations, and scales Postgres for time-series workloads. [Learn more about TimescaleDB](https://docs.timescale.com/timescaledb/latest/), a registered trademark of [Timescale, Inc.](https://www.timescale.com/). Azure Database for PostgreSQL provides the TimescaleDB [Apache-2 edition](https://www.timescale.com/legal/licenses). ### Installing TimescaleDB+ To install TimescaleDB, you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md). Using the [Azure portal](https://portal.azure.com/):
Using the [Azure portal](https://portal.azure.com/):
4. Select **TimescaleDB**.
-5. Select **Save** to preserve your changes. You get a notification once the change is saved.
+5. Select **Save** to preserve your changes. You get a notification once the change is saved.
6. After the notification, **restart** the server to apply these changes. To learn how to restart a server, see [Restart an Azure Database for PostgreSQL server](how-to-restart-server-portal.md). - You can now enable TimescaleDB in your Postgres database. Connect to the database and issue the following command: ```sql CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE; ``` > [!TIP]
-> If you see an error, confirm that you [restarted your server](how-to-restart-server-portal.md) after saving shared_preload_libraries.
+> If you see an error, confirm that you [restarted your server](how-to-restart-server-portal.md) after saving shared_preload_libraries.
You can now create a TimescaleDB hypertable [from scratch](https://docs.timescale.com/getting-started/creating-hypertables) or migrate [existing time-series data in PostgreSQL](https://docs.timescale.com/getting-started/migrating-data). ### Restoring a Timescale database using pg_dump and pg_restore+ To restore a Timescale database using pg_dump and pg_restore, you need to run two helper procedures in the destination database: `timescaledb_pre_restore()` and `timescaledb_post restore()`. First prepare the destination database:
SELECT timescaledb_post_restore();
``` For more details on the restore method with a Timescale-enabled database, see the [Timescale documentation](https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restore-your-entire-database-from-backup) - ### Restoring a Timescale database using timescaledb-backup
- While running `SELECT timescaledb_post_restore()` procedure listed above you may get permissions denied error updating timescaledb.restoring flag. This is due to limited ALTER DATABASE permission in Cloud PaaS database services. In this case you can perform alternative method using `timescaledb-backup` tool to backup and restore Timescale database. Timescaledb-backup is a program for making dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant.
- To do so you should do following
+While running the `SELECT timescaledb_post_restore()` procedure listed above, you may get a permissions denied error updating the timescaledb.restoring flag. This is due to the limited ALTER DATABASE permission in cloud PaaS database services. In this case you can use the alternative `timescaledb-backup` tool to back up and restore the Timescale database. Timescaledb-backup is a program that makes dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant.
+To do so, follow these steps:
1. Install tools as detailed [here](https://github.com/timescale/timescaledb-backup#installing-timescaledb-backup) 2. Create target Azure Database for PostgreSQL server and database 3. Enable Timescale extension as shown above 4. Grant azure_pg_admin [role](https://www.postgresql.org/docs/11/database-roles.html) to user that will be used by [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) 5. Run [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) to restore database
- More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup).
+More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup).
> [!NOTE]
-> When using `timescale-backup` utilities to restore to Azure is that since database user names for non-flexible Azure Database for PostgresQL must use the `<user@db-name>` format, you need to replace `@` with `%40` character encoding.
+> When using `timescale-backup` utilities to restore to Azure, note that because database user names for non-flexible Azure Database for PostgreSQL must use the `<user@db-name>` format, you need to replace `@` with the `%40` character encoding.
## Next steps
-If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
+If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-firewall-rules.md
Previously updated : 07/17/2020 Last updated : 06/24/2022 # Firewall rules in Azure Database for PostgreSQL - Single Server
To configure your firewall, you create firewall rules that specify ranges of acc
**Firewall rules:** These rules enable clients to access your entire Azure Database for PostgreSQL Server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or using Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor. ## Firewall overview+ All access to your Azure Database for PostgreSQL server is blocked by the firewall by default. To access your server from another computer/client or application, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify allowed public IP address ranges. Access to the Azure portal website itself is not impacted by the firewall rules. Connection attempts from the internet and Azure must first pass through the firewall before they can reach your PostgreSQL Database, as shown in the following diagram: :::image type="content" source="../media/concepts-firewall-rules/1-firewall-concept.png" alt-text="Example flow of how the firewall works"::: ## Connecting from the Internet+ Server-level firewall rules apply to all databases on the same Azure Database for PostgreSQL server. If the source IP address of the request is within one of the ranges specified in the server-level firewall rules, the connection is granted otherwise it is rejected. For example, if your application connects with JDBC driver for PostgreSQL, you may encounter this error attempting to connect when the firewall is blocking the connection. > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > org.postgresql.util.PSQLException: FATAL: no pg\_hba.conf entry for host "123.45.67.890", user "adminuser", database "postgresql", SSL ## Connecting from Azure
-It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
+
+It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
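For example, allowing a single outbound IP address with the Azure CLI might look like the following sketch; the IP address and names are placeholders:

```bash
# Allow one specific outbound IP address through the server firewall (values are placeholders)
az postgres server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowMyAppOutboundIP \
  --start-ip-address 203.0.113.5 \
  --end-ip-address 203.0.113.5
```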
If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow access to Azure services** option to **ON** from the **Connection security** pane and hitting **Save**. From the Azure CLI, a firewall rule setting with starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt is rejected by firewall rules, it does not reach the Azure Database for PostgreSQL server. > [!IMPORTANT] > The **Allow access to Azure services** option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
+>
:::image type="content" source="../media/concepts-firewall-rules/allow-azure-services.png" alt-text="Configure Allow access to Azure services in the portal"::: ## Connecting from a VNet
-To connect securely to your Azure Database for PostgreSQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md).
+
+To connect securely to your Azure Database for PostgreSQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md).
## Programmatically managing firewall rules+ In addition to the Azure portal, firewall rules can be managed programmatically using Azure CLI. See also [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule) ## Troubleshooting firewall issues+ Consider the following points when access to the Microsoft Azure Database for PostgreSQL Server service does not behave as you expect: * **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for PostgreSQL Server firewall configuration to take effect.
Consider the following points when access to the Microsoft Azure Database for Po
* **Firewall rule is not available for IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, it will show the validation error. - ## Next steps+ * [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](how-to-manage-firewall-using-portal.md) * [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule) * [VNet service endpoints in Azure Database for PostgreSQL](./concepts-data-access-and-security-vnet.md)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-high-availability.md
Previously updated : 6/15/2020 Last updated : 06/24/2022 # High availability in Azure Database for PostgreSQL – Single Server
Last updated 6/15/2020
The Azure Database for PostgreSQL – Single Server service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/postgresql) uptime. Azure Database for PostgreSQL provides high availability during planned events such as user-initiated scale compute operation, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for PostgreSQL can quickly recover from most critical circumstances, ensuring virtually no application down time when using this service.
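For example, a user-initiated scale compute operation can be issued from the Azure CLI. This is a minimal sketch with placeholder resource names and an assumed General Purpose target SKU:

```azurecli-interactive
# Scale the server to 8 vCores in the General Purpose tier; the service handles the brief switchover
az postgres server update \
    --resource-group mygroup \
    --name myserver \
    --sku-name GP_Gen5_8
```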
-Azure Database for PostgreSQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
+Azure Database for PostgreSQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
## Components in Azure Database for PostgreSQL – Single Server
Azure Database for PostgreSQL is suitable for running mission critical databases
| <b>Gateway | The Gateway acts as a database proxy, routes all client connections to the database server. | ## Planned downtime mitigation
-Azure Database for PostgreSQL is architected to provide high availability during planned downtime operations.
+
+Azure Database for PostgreSQL is architected to provide high availability during planned downtime operations.
:::image type="content" source="./media/concepts-high-availability/azure-postgresql-elastic-scaling.png" alt-text="view of Elastic Scaling in Azure PostgreSQL":::
Here are some planned maintenance scenarios:
| <b>New Software Deployment (Azure) | New features rollout or bug fixes automatically happen as part of serviceΓÇÖs planned maintenance. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).| | <b>Minor version upgrades | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. This would incur a short downtime in terms of seconds, and the database server is automatically restarted with the new minor version. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).| - ## Unplanned downtime mitigation
-Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. PostgreSQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for PostgreSQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
-
+Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. PostgreSQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for PostgreSQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
:::image type="content" source="./media/concepts-high-availability/azure-postgresql-built-in-high-availability.png" alt-text="view of High Availability in Azure PostgreSQL":::
Unplanned downtime can occur as a result of unforeseen failures, including under
2. Gateway that acts as a proxy to route client connections to the proper database server 3. Azure storage with three copies for reliability, availability, and redundancy. 4. Remote storage also enables fast detach/re-attach after the server failover.
-
+ ### Unplanned downtime: failure scenarios and service recovery+ Here are some failure scenarios and how Azure Database for PostgreSQL automatically recovers: | **Scenario** | **Automatic recovery** |
Here are some failure scenarios that require user action to recover:
| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](./concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/11/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/11/app-pgrestore.html) to restore those tables into your database. | - ## Summary Azure Database for PostgreSQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for PostgreSQL protects your databases from most common outages, and offers an industry leading, finance-backed [99.99% of uptime SLA](https://azure.microsoft.com/support/legal/sla/postgresql). All these availability and reliability capabilities enable Azure to be the ideal platform to run your mission-critical applications. ## Next steps+ - Learn about [Azure regions](../../availability-zones/az-overview.md) - Learn about [handling transient connectivity errors](concepts-connectivity.md)-- Learn how to [replicate your data with read replicas](how-to-read-replicas-portal.md)
+- Learn how to [replicate your data with read replicas](how-to-read-replicas-portal.md)
postgresql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-infrastructure-double-encryption.md
Previously updated : 6/30/2020 Last updated : 06/24/2022 # Azure Database for PostgreSQL Infrastructure double encryption
Infrastructure double encryption adds a second layer of encryption using service
> [!NOTE] > This feature is only supported for "General Purpose" and "Memory Optimized" pricing tiers in Azure Database for PostgreSQL.
-Infrastructure Layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for PostgreSQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it is very close to hardware that stores the data at rest. You can still optionally enable data encryption at rest using [customer managed key](concepts-data-encryption-postgresql.md) for the provisioned PostgreSQL server.
+Infrastructure Layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for PostgreSQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it is very close to hardware that stores the data at rest. You can still optionally enable data encryption at rest using [customer managed key](concepts-data-encryption-postgresql.md) for the provisioned PostgreSQL server.
-Implementation at the infrastructure layers also supports a diversity of keys. Infrastructure must be aware of different clusters of machine and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and a variety of hardware and network failures.
+Implementation at the infrastructure layers also supports a diversity of keys. Infrastructure must be aware of different clusters of machine and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and a variety of hardware and network failures.
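Infrastructure double encryption is selected when the server is provisioned. A minimal Azure CLI sketch, with placeholder names and assuming the `--infrastructure-encryption` flag of `az postgres server create`:

```azurecli-interactive
# Create a General Purpose server with infrastructure double encryption enabled
az postgres server create \
    --resource-group mygroup \
    --name mydoubleencryptedserver \
    --location westus \
    --admin-user myadmin \
    --admin-password <secure-password> \
    --sku-name GP_Gen5_2 \
    --infrastructure-encryption Enabled
```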
> [!NOTE] > Using Infrastructure double encryption will have performance impact on the Azure Database for PostgreSQL server due to the additional encryption process.
The encryption capabilities that are provided by Azure Database for PostgreSQL c
| 2 | *Yes* | *Yes* | *No* | | 3 | *Yes* | *No* | *Yes* | | 4 | *Yes* | *Yes* | *Yes* |
-| | | | |
> [!Important] > - Scenario 2 and 4 will have performance impact on the Azure Database for PostgreSQL server due to the additional layer of infrastructure encryption.
postgresql Concepts Known Issues Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-known-issues-limitations.md
Previously updated : 11/30/2021 Last updated : 06/24/2022 # Azure Database for PostgreSQL - Known issues and limitations
Applicable to Azure Database for PostgreSQL - Single Server.
| Applicable | Cause | Remediation| | -- | | - |
-| PostgreSQL 9.6, 10, 11 | Turning on the server parameter `pg_qs.replace_parameter_placeholders` might lead to a server shutdown in some rare scenarios. | Through Azure Portal, Server Parameters section, turn the parameter `pg_qs.replace_parameter_placeholders` value to `OFF` and save. |
-
+| PostgreSQL 9.6, 10, 11 | Turning on the server parameter `pg_qs.replace_parameter_placeholders` might lead to a server shutdown in some rare scenarios. | Through Azure Portal, Server Parameters section, turn the parameter `pg_qs.replace_parameter_placeholders` value to `OFF` and save. |
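The same remediation can be scripted with the Azure CLI. This sketch uses placeholder resource names and assumes the parameter is exposed through the standard configuration commands:

```azurecli-interactive
# Turn the problematic parameter off without going through the portal
az postgres server configuration set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name pg_qs.replace_parameter_placeholders \
    --value OFF
```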
## Next steps+ - See Query Store [best practices](./concepts-query-store-best-practices.md)
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-limits.md
Previously updated : 01/28/2020 Last updated : 06/24/2022 + # Limits in Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
The following sections describe capacity and functional limits in the database s
## Maximum connections
-The maximum number of connections per pricing tier and vCores are shown below. The Azure system requires five connections to monitor the Azure Database for PostgreSQL server.
+The maximum number of connections per pricing tier and vCores is shown below. The Azure system requires five connections to monitor the Azure Database for PostgreSQL server.
|**Pricing Tier**| **vCore(s)**| **Max Connections** | **Max User Connections** | |||||
When connections exceed the limit, you may receive the following error:
A PostgreSQL connection, even idle, can occupy about 10MB of memory. Also, creating new connections takes time. Most applications request many short-lived connections, which compounds this situation. The result is fewer resources available for your actual workload leading to decreased performance. A connection pooler that decreases idle connections and reuses existing connections will help avoid this. To learn more, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717). ## Functional limitations+ ### Scale operations+ - Dynamic scaling to and from the Basic pricing tiers is currently not supported. - Decreasing server storage size is currently not supported. ### Server version upgrades+ - Automated migration between major database engine versions is currently not supported. If you would like to upgrade to the next major version, take a [dump and restore](./how-to-migrate-using-dump-and-restore.md) it to a server that was created with the new engine version. > Note that prior to PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number (for example, 9.5 to 9.6 was considered a _major_ version upgrade). > As of version 10, only a change in the first number is considered a major version upgrade (for example, 10.0 to 10.1 is a _minor_ version upgrade, and 10 to 11 is a _major_ version upgrade). ### VNet service endpoints+ - Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. ### Restoring a server+ - When using the PITR feature, the new server is created with the same pricing tier configurations as the server it is based on. - The new server created during a restore does not have the firewall rules that existed on the original server. Firewall rules need to be set up separately for this new server. - Restoring a deleted server is not supported. ### UTF-8 characters on Windows+ - In some scenarios, UTF-8 characters are not supported fully in open source PostgreSQL on Windows, which affects Azure Database for PostgreSQL. Please see the thread on [Bug #15476 in the postgresql-archive](https://www.postgresql.org/message-id/2101.1541220270%40sss.pgh.pa.us) for more information. ### GSS error+ If you see an error related to **GSS**, you are likely using a newer client/driver version which Azure Postgres Single Server does not yet fully support. This error is known to affect [JDBC driver versions 42.2.15 and 42.2.16](https://github.com/pgjdbc/pgjdbc/issues/1868). - We plan to complete the update by the end of November. Consider using a working driver version in the meantime. - Or, consider disabling the GSS request. Use a connection parameter like `gssEncMode=disable`. ### Storage size reduction+ Storage size cannot be reduced. You have to create a new server with desired storage size, perform manual [dump and restore](./how-to-migrate-using-dump-and-restore.md) and migrate your database(s) to the new server. ## Next steps+ - Understand [whatΓÇÖs available in each pricing tier](concepts-pricing-tiers.md) - Learn about [Supported PostgreSQL Database Versions](concepts-supported-versions.md) - Review [how to back up and restore a server in Azure Database for PostgreSQL using the Azure portal](how-to-restore-server-portal.md)
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-logical.md
Previously updated : 12/09/2020 Last updated : 06/24/2022 # Logical decoding
Last updated 12/09/2020
Logical decoding uses an output plugin to convert Postgres's write ahead log (WAL) into a readable format. Azure Database for PostgreSQL provides the output plugins [wal2json](https://github.com/eulerto/wal2json), [test_decoding](https://www.postgresql.org/docs/current/test-decoding.html) and pgoutput. pgoutput is made available by PostgreSQL from PostgreSQL version 10 and up.
-For an overview of how Postgres logical decoding works, [visit our blog](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/change-data-capture-in-postgres-how-to-use-logical-decoding-and/ba-p/1396421).
+For an overview of how Postgres logical decoding works, [visit our blog](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/change-data-capture-in-postgres-how-to-use-logical-decoding-and/ba-p/1396421).
> [!NOTE] > Logical replication using PostgreSQL publication/subscription is not supported with Azure Database for PostgreSQL - Single Server. - ## Set up your server + Logical decoding and [read replicas](concepts-read-replicas.md) both depend on the Postgres write ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas. To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
To configure the right level of logging, use the Azure replication support param
* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers. * **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting. - ### Using Azure CLI 1. Set azure.replication_support to `logical`. ```azurecli-interactive az postgres server configuration set --resource-group mygroup --server-name myserver --name azure.replication_support --value logical
- ```
+ ```
2. Restart the server to apply the change. ```azurecli-interactive az postgres server restart --resource-group mygroup --name myserver ```
-3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. To create a new firewall rule on the server, run the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command.
+3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. To create a new firewall rule on the server, run the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command.
### Using Azure portal
To configure the right level of logging, use the Azure replication support param
:::image type="content" source="./media/concepts-logical/confirm-restart.png" alt-text="Azure Database for PostgreSQL - Replication - Confirm restart":::
-3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. Then click **Save**.
+3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. Then select **Save**.
:::image type="content" source="./media/concepts-logical/client-replrule-firewall.png" alt-text="Azure Database for PostgreSQL - Replication - Add firewall rule":::
To configure the right level of logging, use the Azure replication support param
Logical decoding can be consumed via streaming protocol or SQL interface. Both methods use [replication slots](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS). A slot represents a stream of changes from a single database.
-Using a replication slot requires Postgres's replication privileges. At this time, the replication privilege is only available for the server's admin user.
+Using a replication slot requires Postgres's replication privileges. At this time, the replication privilege is only available for the server's admin user.
### Streaming protocol
-Consuming changes using the streaming protocol is often preferable. You can create your own consumer / connector, or use a tool like [Debezium](https://debezium.io/).
-Visit the wal2json documentation for [an example using the streaming protocol with pg_recvlogical](https://github.com/eulerto/wal2json#pg_recvlogical).
+Consuming changes using the streaming protocol is often preferable. You can create your own consumer / connector, or use a tool like [Debezium](https://debezium.io/).
+Visit the wal2json documentation for [an example using the streaming protocol with pg_recvlogical](https://github.com/eulerto/wal2json#pg_recvlogical).
### SQL interface+ In the example below, we use the SQL interface with the wal2json plugin.
-
+ 1. Create a slot. ```SQL SELECT * FROM pg_create_logical_replication_slot('test_slot', 'wal2json'); ```
-
+ 2. Issue SQL commands. For example: ```SQL CREATE TABLE a_table (
In the example below, we use the SQL interface with the wal2json plugin.
item varchar(40), PRIMARY KEY (id) );
-
+ INSERT INTO a_table (id, item) VALUES ('id1', 'item1'); DELETE FROM a_table WHERE id='id1'; ```
In the example below, we use the SQL interface with the wal2json plugin.
SELECT pg_drop_replication_slot('test_slot'); ``` - ## Monitoring slots You must monitor logical decoding. Any unused replication slot must be dropped. Slots hold on to Postgres WAL logs and relevant system catalogs until changes have been read by a consumer. If your consumer fails or has not been properly configured, the unconsumed logs will pile up and fill your storage. Also, unconsumed logs increase the risk of transaction ID wraparound. Both situations can cause the server to become unavailable. Therefore, it is critical that logical replication slots are consumed continuously. If a logical replication slot is no longer used, drop it immediately.
The 'active' column in the pg_replication_slots view will indicate whether there
SELECT * FROM pg_replication_slots; ```
-[Set alerts](how-to-alert-on-metric.md) on *Storage used* and *Max lag across replicas* metrics to notify you when the values increase past normal thresholds.
+[Set alerts](how-to-alert-on-metric.md) on *Storage used* and *Max lag across replicas* metrics to notify you when the values increase past normal thresholds.
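As one way to script such an alert, the sketch below uses placeholder resource names; the metric name is an assumption to verify against the metrics exposed by your server:

```azurecli-interactive
# Alert when storage consumption climbs past 85 percent
az monitor metrics alert create \
    --name storage-used-alert \
    --resource-group mygroup \
    --scopes $(az postgres server show --resource-group mygroup --name myserver --query id --output tsv) \
    --condition "avg storage_percent > 85" \
    --description "Storage is filling up; check for unconsumed replication slots"
```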
> [!IMPORTANT] > You must drop unused replication slots. Failing to do so can lead to server unavailability. ## How to drop a slot+ If you are not actively consuming a replication slot you should drop it. To drop a replication slot called `test_slot` using SQL:
SELECT pg_drop_replication_slot('test_slot');
``` > [!IMPORTANT]
-> If you stop using logical decoding, change azure.replication_support back to `replica` or `off`. The WAL details retained by `logical` are more verbose, and should be disabled when logical decoding is not in use.
+> If you stop using logical decoding, change azure.replication_support back to `replica` or `off`. The WAL details retained by `logical` are more verbose, and should be disabled when logical decoding is not in use.
-
## Next steps * Visit the Postgres documentation to [learn more about logical decoding](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html). * Reach out to [our team](mailto:AskAzureDBforPostgreSQL@service.microsoft.com) if you have questions about logical decoding. * Learn more about [read replicas](concepts-read-replicas.md).-
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-monitoring.md
Previously updated : 10/21/2020 Last updated : 06/24/2022 # Monitor and tune Azure Database for PostgreSQL - Single Server
Last updated 10/21/2020
Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for PostgreSQL provides various monitoring options to provide insight into the behavior of your server. ## Metrics+ Azure Database for PostgreSQL provides various metrics that give insight into the behavior of the resources supporting the PostgreSQL server. Each metric is emitted at a one-minute frequency, and has up to [93 days of history](../../azure-monitor/essentials/data-platform-metrics.md#retention-of-metrics). You can configure alerts on the metrics. For step by step guidance, see [How to set up alerts](how-to-alert-on-metric.md). Other tasks include setting up automated actions, performing advanced analytics, and archiving history. For more information, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md). ### List of metrics+ These metrics are available for Azure Database for PostgreSQL: |Metric|Metric Display Name|Unit|Description|
These metrics are available for Azure Database for PostgreSQL:
|pg_replica_log_delay_in_seconds|Replica Lag|Seconds|The time since the last replayed transaction. This metric is available for replica servers only.| ## Server logs+ You can enable logging on your server. These resource logs can be sent to [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md), Event Hubs, and a Storage Account. To learn more about logging, visit the [server logs](concepts-server-logs.md) page. ## Query Store+ [Query Store](concepts-query-store.md) keeps track of query performance over time including query runtime statistics and wait events. The feature persists query runtime performance information in a system database named **azure_sys** under the query_store schema. You can control the collection and storage of data via various configuration knobs. ## Query Performance Insight+ [Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible from the **Support + troubleshooting** section of your Azure Database for PostgreSQL server's portal page. ## Performance Recommendations
-The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes.
+
+The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes.
## Planned maintenance notification
The [Performance Recommendations](concepts-performance-recommendations.md) featu
Learn more about how to set up notifications in the [planned maintenance notifications](./concepts-planned-maintenance-notification.md) document. ## Next steps+ - See [how to set up alerts](how-to-alert-on-metric.md) for guidance on creating an alert on a metric. - For more information on how to access and export metrics using the Azure portal, REST API, or CLI, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md) - Read our blog on [best practices for monitoring your server](https://azure.microsoft.com/blog/best-practices-for-alerting-on-metrics-with-azure-database-for-postgresql-monitoring/).-- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for PostgreSQL - Single Server.
+- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for PostgreSQL - Single Server.
postgresql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-performance-recommendations.md
Previously updated : 08/21/2019 Last updated : 06/24/2022 + # Performance Recommendations in Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)] **Applies to:** Azure Database for PostgreSQL - Single Server versions 9.6, 10, 11
-The Performance Recommendations feature analyses your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics including schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes.
+The Performance Recommendations feature analyzes your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics including schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes.
## Permissions+ **Owner** or **Contributor** permissions required to run analysis using the Performance Recommendations feature. ## Performance recommendations+ The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance. Open **Performance Recommendations** from the **Intelligent Performance** section of the menu bar on the Azure portal page for your PostgreSQL server. :::image type="content" source="./media/concepts-performance-recommendations/performance-recommendations-page.png" alt-text="Performance Recommendations landing page":::
-Select **Analyze** and choose a database, which will begin the analysis. Depending on your workload, th analysis may take several minutes to complete. Once the analysis is done, there will be a notification in the portal. Analysis performs a deep examination of your database. We recommend you perform analysis during off-peak periods.
+Select **Analyze** and choose a database, which will begin the analysis. Depending on your workload, the analysis may take several minutes to complete. Once the analysis is done, there will be a notification in the portal. Analysis performs a deep examination of your database. We recommend you perform analysis during off-peak periods.
The **Recommendations** window will show a list of recommendations if any were found. :::image type="content" source="./media/concepts-performance-recommendations/performance-recommendations-result.png" alt-text="Performance Recommendations new page":::
-Recommendations are not automatically applied. To apply the recommendation, copy the query text and run it from your client of choice. Remember to test and monitor to evaluate the recommendation.
+Recommendations are not automatically applied. To apply the recommendation, copy the query text and run it from your client of choice. Remember to test and monitor to evaluate the recommendation.
## Recommendation types Currently, two types of recommendations are supported: *Create Index* and *Drop Index*. ### Create Index recommendations+ *Create Index* recommendations suggest new indexes to speed up the most frequently run or time-consuming queries in the workload. This recommendation type requires [Query Store](concepts-query-store.md) to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation. ### Drop Index recommendations+ Besides detecting missing indexes, Azure Database for PostgreSQL analyzes the performance of existing indexes. If an index is either rarely used or redundant, the analyzer recommends dropping it. ## Considerations+ * Performance Recommendations is not available for [read replicas](concepts-read-replicas.md). ## Next steps-- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
postgresql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-planned-maintenance-notification.md
Previously updated : 2/17/2022 Last updated : 06/24/2022 # Planned maintenance notification in Azure Database for PostgreSQL - Single Server
You can utilize the planned maintenance notifications feature to receive alerts
### Planned maintenance notification - **Planned maintenance notifications** allow you to receive alerts for upcoming planned maintenance event to your Azure Database for PostgreSQL. These notifications are integrated with [Service Health's](../../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. It also helps to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 calendar hours before the event. We will make every attempt to provide **Planned maintenance notification** 72 hours notice for all events. However, in cases of critical or security patches, notifications might be sent closer to the event or be omitted.
-You can either check the planned maintenance notification on Azure portal or configure alerts to receive notification.
+You can either check the planned maintenance notification on Azure portal or configure alerts to receive notification.
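One way to script the alert route is an activity log alert on Service Health events. This is a sketch with placeholder names; the condition and action group values are assumptions to adapt to your subscription:

```azurecli-interactive
# Route Service Health events (including planned maintenance) to an existing action group
az monitor activity-log alert create \
    --name planned-maintenance-alert \
    --resource-group mygroup \
    --condition category=ServiceHealth \
    --action-group /subscriptions/<subscription-id>/resourceGroups/mygroup/providers/microsoft.insights/actionGroups/myactiongroup \
    --description "Notify on upcoming planned maintenance"
```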
### Check planned maintenance notification from Azure portal 1. In the [Azure portal](https://portal.azure.com), select **Service Health**. 2. Select **Planned Maintenance** tab
-3. Select **Subscription**, **Region, and **Service** for which you want to check the planned maintenance notification.
-
+3. Select **Subscription**, **Region**, and **Service** for which you want to check the planned maintenance notification.
+ ### To receive planned maintenance notification 1. In the [portal](https://portal.azure.com), select **Service Health**.
No, all the Azure regions are patched during the deployment-wise window timings.
A transient error, also known as a transient fault, is an error that will resolve itself. [Transient errors](./concepts-connectivity.md#transient-errors) can occur during maintenance. Most of these events are automatically mitigated by the system in less than 60 seconds. Transient errors should be handled using [retry logic](./concepts-connectivity.md#handling-transient-errors). - ## Next steps - For any questions or suggestions you might have about working with Azure Database for PostgreSQL, send an email to the Azure Database for PostgreSQL Team at AskAzureDBforPostgreSQL@service.microsoft.com
postgresql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-pricing-tiers.md
Previously updated : 10/14/2020 Last updated : 06/24/2022
Servers with less than equal to 100 GB provisioned storage are marked read-only
For example, if you have provisioned 110 GB of storage, and the actual utilization goes over 105 GB, the server is marked read-only. Alternatively, if you have provisioned 5 GB of storage, the server is marked read-only when the free storage reaches less than 512 MB.
-When the server is set to read-only, all existing sessions are disconnected and uncommitted transactions are rolled back. Any subsequent write operations and transaction commits fail. All subsequent read queries will work uninterrupted.
+When the server is set to read-only, all existing sessions are disconnected and uncommitted transactions are rolled back. Any subsequent write operations and transaction commits fail. All subsequent read queries will work uninterrupted.
You can either increase the amount of provisioned storage to your server or start a new session in read-write mode and drop data to reclaim free storage. Running `SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;` sets the current session to read-write mode. In order to avoid data corruption, do not perform any write operations when the server is still in read-only status.
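A sketch of the first option with the Azure CLI, using placeholder names; for Single Server the storage size is expressed in megabytes:

```azurecli-interactive
# Grow provisioned storage to 150 GB (153600 MB) to move the server out of read-only mode
az postgres server update \
    --resource-group mygroup \
    --name myserver \
    --storage-size 153600
```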
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-performance-insight.md
Previously updated : 08/21/2019 Last updated : 06/24/2022
-# Query Performance Insight
+# Query Performance Insight
[!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
Last updated 08/21/2019
Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them. ## Prerequisites+ For Query Performance Insight to function, data must exist in the [Query Store](concepts-query-store.md). ## Viewing performance insights
-The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
+
+The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
In the portal page of your Azure Database for PostgreSQL server, select **Query performance Insight** under the **Intelligent Performance** section of the menu bar. **Query Text is no longer supported** is shown. However, the query text can still be viewed by connecting to azure_sys and querying 'query_store.query_texts_view'.
In the portal page of your Azure Database for PostgreSQL server, select **Query
The **Long running queries** tab shows the top five queries by average duration per execution, aggregated in 15-minute intervals. You can view more queries by selecting from the **Number of Queries** drop down. The chart colors may change for a specific Query ID when you do this.
-You can click and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger period of time respectively.
+You can select and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger period of time respectively.
The table below the chart gives more details about the long-running queries in that time window.
Select the **Wait Statistics** tab to view the corresponding visualizations on w
:::image type="content" source="./media/concepts-query-performance-insight/query-performance-insight-wait-statistics.png" alt-text="Query Performance Insight waits statistics"::: ## Considerations+ * Query Performance Insight is not available for [read replicas](concepts-read-replicas.md). ## Next steps-- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.-
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
postgresql Concepts Query Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store-best-practices.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Best practices for Query Store
This article outlines best practices for using Query Store in Azure Database for
## Set the optimal query capture mode
-Let Query Store capture the data that matters to you.
+Let Query Store capture the data that matters to you.
|**pg_qs.query_capture_mode** | **Scenario**| |||
Let Query Store capture the data that matters to you.
|_Top_ |Focus your attention on top queries - those issued by clients. |_None_ |You've already captured a query set and time window that you want to investigate and you want to eliminate the distractions that other queries may introduce. _None_ is suitable for testing and bench-marking environments. _None_ should be used with caution as you might miss the opportunity to track and optimize important new queries. You can't recover data on those past time windows. |
-Query Store also includes a store for wait statistics. There is an additional capture mode query that governs wait statistics: **pgms_wait_sampling.query_capture_mode** can be set to _none_ or _all_.
+Query Store also includes a store for wait statistics. There is an additional query capture mode that governs wait statistics: **pgms_wait_sampling.query_capture_mode** can be set to _none_ or _all_.
> [!NOTE]
-> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is _none_, the pgms_wait_sampling.query_capture_mode setting has no effect.
-
+> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is _none_, the pgms_wait_sampling.query_capture_mode setting has no effect.
## Keep the data you need
-The **pg_qs.retention_period_in_days** parameter specifies in days the data retention period for Query Store. Older query and statistics data is deleted. By default, Query Store is configured to retain the data for 7 days. Avoid keeping historical data you do not plan to use. Increase the value if you need to keep data longer.
+The **pg_qs.retention_period_in_days** parameter specifies in days the data retention period for Query Store. Older query and statistics data is deleted. By default, Query Store is configured to retain the data for 7 days. Avoid keeping historical data you do not plan to use. Increase the value if you need to keep data longer.
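For example, a longer retention window can be set with the Azure CLI; the resource names below are placeholders:

```azurecli-interactive
# Keep Query Store data for 14 days instead of the default 7
az postgres server configuration set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name pg_qs.retention_period_in_days \
    --value 14
```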
## Set the frequency of wait stats sampling
-The **pgms_wait_sampling.history_period** parameter specifies how often (in milliseconds) wait events are sampled. The shorter the period, the more frequent the sampling. More information is retrieved, but that comes with the cost of greater resource consumption. Increase this period if the server is under load or you don't need the granularity
+The **pgms_wait_sampling.history_period** parameter specifies how often (in milliseconds) wait events are sampled. The shorter the period, the more frequent the sampling. More information is retrieved, but that comes with the cost of greater resource consumption. Increase this period if the server is under load or you don't need the granularity.
## Get quick insights into Query Store+ You can use [Query Performance Insight](concepts-query-performance-insight.md) in the Azure portal to get quick insights into the data in Query Store. The visualizations surface the longest running queries and longest wait events over time. ## Next steps-- Learn how to get or set parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).+
+- Learn how to get or set parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
postgresql Concepts Query Store Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store-scenarios.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 + # Usage scenarios for Query Store [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
You can use Query Store in a wide variety of scenarios in which tracking and mai
- Identifying and tuning top expensive queries - A/B testing - Keeping performance stable during upgrades -- Identifying and improving ad hoc workloads
+- Identifying and improving ad hoc workloads
-## Identify and tune expensive queries
+## Identify and tune expensive queries
### Identify longest running queries
-Use the [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal to quickly identify the longest running queries. These queries typically tend to consume a significant amount resources. Optimizing your longest running questions can improve performance by freeing up resources for use by other queries running on your system.
+
+Use the [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal to quickly identify the longest running queries. These queries typically tend to consume a significant amount of resources. Optimizing your longest running queries can improve performance by freeing up resources for use by other queries running on your system.
### Target queries with performance deltas + Query Store slices the performance data into time windows, so you can track a query's performance over time. This helps you identify exactly which queries are contributing to an increase in overall time spent. As a result you can do targeted troubleshooting of your workload. ### Tuning expensive queries + When you identify a query with suboptimal performance, the action you take depends on the nature of the problem: - Use [Performance Recommendations](concepts-performance-recommendations.md) to determine if there are any suggested indexes. If yes, create the index, and then use Query Store to evaluate query performance after creating the index. - Make sure that the statistics are up-to-date for the underlying tables used by the query.-- Consider rewriting expensive queries. For example, take advantage of query parameterization and reduce use of dynamic SQL. Implement optimal logic when reading data like applying data filtering on database side, not on application side. -
+- Consider rewriting expensive queries. For example, take advantage of query parameterization and reduce use of dynamic SQL. Implement optimal logic when reading data like applying data filtering on database side, not on application side.
## A/B testing + Use Query Store to compare workload performance before and after an application change you plan to introduce. Examples of scenarios for using Query Store to assess the impact of the environment or application change to workload performance: - Rolling out a new version of an application. - Adding additional resources to the server. -- Creating missing indexes on tables referenced by expensive queries.
-
+- Creating missing indexes on tables referenced by expensive queries.
+ In any of these scenarios, apply the following workflow: 1. Run your workload with Query Store before the planned change to generate a performance baseline. 2. Apply application change(s) at the controlled moment in time. 3. Continue running the workload long enough to generate performance image of the system after the change. 4. Compare results from before and after the change.
-5. Decide whether to keep the change or rollback.
-
+5. Decide whether to keep the change or rollback.
## Identify and improve ad hoc workloads
-Some workloads do not have dominant queries that you can tune to improve overall application performance. Those workloads are typically characterized with a relatively large number of unique queries, each of them consuming a portion of system resources. Each unique query is executed infrequently, so individually their runtime consumption is not critical. On the other hand, given that the application is generating new queries all the time, a significant portion of system resources is spent on query compilation, which is not optimal. Usually, this situation happens if your application generates queries (instead of using stored procedures or parameterized queries) or if it relies on object-relational mapping frameworks that generate queries by default.
-
-If you are in control of the application code, you may consider rewriting the data access layer to use stored procedures or parameterized queries. However, this situation can be also improved without application changes by forcing query parameterization for the entire database (all queries) or for the individual query templates with the same query hash.
+
+Some workloads do not have dominant queries that you can tune to improve overall application performance. Those workloads are typically characterized with a relatively large number of unique queries, each of them consuming a portion of system resources. Each unique query is executed infrequently, so individually their runtime consumption is not critical. On the other hand, given that the application is generating new queries all the time, a significant portion of system resources is spent on query compilation, which is not optimal. Usually, this situation happens if your application generates queries (instead of using stored procedures or parameterized queries) or if it relies on object-relational mapping frameworks that generate queries by default.
+
+If you are in control of the application code, you may consider rewriting the data access layer to use stored procedures or parameterized queries. However, this situation can be also improved without application changes by forcing query parameterization for the entire database (all queries) or for the individual query templates with the same query hash.
## Next steps-- Learn more about the [best practices for using Query Store](concepts-query-store-best-practices.md)+
+- Learn more about the [best practices for using Query Store](concepts-query-store-best-practices.md)
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store.md
Previously updated : 07/01/2020 Last updated : 06/24/2022 # Monitor performance with the Query Store
The Query Store feature in Azure Database for PostgreSQL provides a way to track
> Do not modify the **azure_sys** database or its schemas. Doing so will prevent Query Store and related performance features from functioning correctly. ## Enabling Query Store+ Query Store is an opt-in feature, so it isn't active by default on a server. The store is enabled or disabled globally for all the databases on a given server and cannot be turned on or off per database. ### Enable Query Store using the Azure portal+ 1. Sign in to the Azure portal and select your Azure Database for PostgreSQL server. 2. Select **Server Parameters** in the **Settings** section of the menu. 3. Search for the `pg_qs.query_capture_mode` parameter.
To enable wait statistics in your Query Store:
1. Search for the `pgms_wait_sampling.query_capture_mode` parameter. 1. Set the value to `ALL` and **Save**. - Alternatively you can set these parameters using the Azure CLI. ```azurecli-interactive az postgres server configuration set --name pg_qs.query_capture_mode --resource-group myresourcegroup --server mydemoserver --value TOP
az postgres server configuration set --name pgms_wait_sampling.query_capture_mod
Allow up to 20 minutes for the first batch of data to persist in the azure_sys database. ## Information in Query Store+ Query Store has two stores: - A runtime stats store for persisting the query execution statistics information. - A wait stats store for persisting wait statistics information.
To minimize space usage, the runtime execution statistics in the runtime stats s
## Access Query Store information
-Query Store data is stored in the azure_sys database on your Postgres server.
+Query Store data is stored in the azure_sys database on your Postgres server.
The following query returns information about queries in Query Store: ```sql SELECT * FROM query_store.qs_view;
-```
+```
Or this query for wait stats: ```sql
SELECT * FROM query_store.pgms_wait_sampling_view;
``` ## Finding wait queries+ Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics. Here are some examples of how you can gain more insights into your workload using the wait statistics in Query Store:
Here are some examples of how you can gain more insights into your workload usin
| High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries.| ## Configuration options
-When Query Store is enabled it saves data in 15-minute aggregation windows, up to 500 distinct queries per window.
+
+When Query Store is enabled it saves data in 15-minute aggregation windows, up to 500 distinct queries per window.
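To review the current values before changing anything, the configuration list can be filtered client-side. This sketch uses placeholder names and a JMESPath filter:

```azurecli-interactive
# Show all Query Store related parameters and their current values
az postgres server configuration list \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --query "[?contains(name, 'pg_qs') || contains(name, 'pgms_wait_sampling')].{name:name, value:value}" \
    --output table
```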
The following options are available for configuring Query Store parameters.
The following options apply specifically to wait statistics.
> [!NOTE] > **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is NONE, the pgms_wait_sampling.query_capture_mode setting has no effect. - Use the [Azure portal](how-to-configure-server-parameters-using-portal.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md) to get or set a different value for a parameter. ## Views and functions+ View and manage Query Store using the following views and functions. Anyone in the PostgreSQL public role can use these views to see the data in Query Store. These views are only available in the **azure_sys** database. Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same hash. ### query_store.qs_view+ This view returns query text data in Query Store. There is one row for each distinct query_text. The data isn't available via the Intelligent Performance section in the portal, APIs, or the CLI - but It can be found by connecting to azure_sys and querying 'query_store.query_texts_view'. |**Name** |**Type** | **References** | **Description**|
This view returns query text data in Query Store. There is one row for each dist
|temp_blks_written| bigint || Total number of temp blocks written by the statement| |blk_read_time |double precision || Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)| |blk_write_time |double precision || Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)|
-
+ ### query_store.query_texts_view+ This view returns query text data in Query Store. There is one row for each distinct query_text. | **Name** | **Type** | **Description** |
This view returns query text data in Query Store. There is one row for each dist
| query_sql_text | Varchar(10000) | Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster. | ### query_store.pgms_wait_sampling_view+ This view returns query text data in Query Store. There is one row for each distinct query_text. The data isn't available via the Intelligent Performance section in the portal, APIs, or the CLI - but It can be found by connecting to azure_sys and querying 'query_store.query_texts_view'. | **Name** | **Type** | **References** | **Description** |
Query_store.staging_data_reset() returns void
`staging_data_reset` discards all statistics gathered in memory by Query Store (that is, the data in memory that has not been flushed yet to the database). This function can only be executed by the server admin role. - ## Azure Monitor+ Azure Database for PostgreSQL is integrated with [Azure Monitor diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md). Diagnostic settings allows you to send your Postgres logs in JSON format to [Azure Monitor Logs](../../azure-monitor/logs/log-query-overview.md) for analytics and alerting, Event Hubs for streaming, and Azure Storage for archiving. >[!IMPORTANT] > This diagnostic feature for is only available in the General Purpose and Memory Optimized pricing tiers. ### Configure diagnostic settings
-You can enable diagnostic settings for your Postgres server using the Azure portal, CLI, REST API, and PowerShell. The log categories to configure are **QueryStoreRuntimeStatistics** and **QueryStoreWaitStatistics**.
+
+You can enable diagnostic settings for your Postgres server using the Azure portal, CLI, REST API, and PowerShell. The log categories to configure are **QueryStoreRuntimeStatistics** and **QueryStoreWaitStatistics**.
To enable resource logs using the Azure portal:
To enable resource logs using the Azure portal:
To enable this setting using PowerShell, CLI, or REST API, visit the [diagnostic settings article](../../azure-monitor/essentials/diagnostic-settings.md). ### JSON log format+ The following tables describe the fields for the two log types. Depending on the output endpoint you choose, the fields included and the order in which they appear may vary. #### QueryStoreRuntimeStatistics+ |**Field** | **Description** | ||| | TimeGenerated [UTC] | Time stamp when the log was recorded in UTC |
The following tables describes the fields for the two log types. Depending on th
| SubscriptionId | Your subscription ID | | ResourceProvider | `Microsoft.DBForPostgreSQL` | | Resource | Postgres server name |
-| ResourceType | `Servers` |
-
+| ResourceType | `Servers` |
#### QueryStoreWaitStatistics+ |**Field** | **Description** | ||| | TimeGenerated [UTC] | Time stamp when the log was recorded in UTC |
The following tables describes the fields for the two log types. Depending on th
| SubscriptionId | Your subscription ID | | ResourceProvider | `Microsoft.DBForPostgreSQL` | | Resource | Postgres server name |
-| ResourceType | `Servers` |
+| ResourceType | `Servers` |
## Limitations and known issues+ - If a PostgreSQL server has the parameter default_transaction_read_only on, Query Store cannot capture data. - Query Store functionality can be interrupted if it encounters long Unicode queries (>= 6000 bytes). - [Read replicas](concepts-read-replicas.md) replicate Query Store data from the primary server. This means that a read replica's Query Store does not provide statistics about queries run on the read replica. - ## Next steps+ - Learn more about [scenarios where Query Store can be especially helpful](concepts-query-store-scenarios.md). - Learn more about [best practices for using Query Store](concepts-query-store-best-practices.md).
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-read-replicas.md
Previously updated : 05/29/2021 Last updated : 06/24/2022 # Read replicas in Azure Database for PostgreSQL - Single Server
Replicas are new servers that you manage similar to regular Azure Database for P
Learn how to [create and manage replicas](how-to-read-replicas-portal.md). ## When to use a read replica+ The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the primary. Read replicas can also be deployed on a different region and can be promoted to be a read/write server in the event of a disaster recovery. A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting.
A common scenario is to have BI and analytical workloads use the read replica as
Because replicas are read-only, they don't directly reduce write-capacity burdens on the primary. ### Considerations
-The feature is meant for scenarios where the lag is acceptable and meant for offloading queries. It isn't meant for synchronous replication scenarios where the replica data is expected to be up-to-date. There will be a measurable delay between the primary and the replica. This can be in minutes or even hours depending on the workload and the latency between the primary and the replica. The data on the replica eventually becomes consistent with the data on the primary. Use this feature for workloads that can accommodate this delay.
+
+The feature is meant for scenarios where the lag is acceptable and is intended for offloading queries. It isn't meant for synchronous replication scenarios where the replica data is expected to be up-to-date. There will be a measurable delay between the primary and the replica, which can range from minutes to hours depending on the workload and the latency between the primary and the replica. The data on the replica eventually becomes consistent with the data on the primary. Use this feature for workloads that can accommodate this delay.
> [!NOTE] > For most workloads, read replicas offer near-real-time updates from the primary. However, with persistent heavy write-intensive primary workloads, the replication lag could continue to grow and may never be able to catch up with the primary. This may also increase storage usage at the primary because the WAL files are not deleted until they are received at the replica. If this situation persists, deleting and recreating the read replica after the write-intensive workload completes is an option to bring the replica back to a good state with respect to lag.
The feature is meant for scenarios where the lag is acceptable and meant for off
> Automatic backups are performed for replica servers that are configured with up to 4TB storage configuration. ## Cross-region replication+ You can create a read replica in a different region from your primary server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users. >[!NOTE]
You can have a primary server in any [Azure Database for PostgreSQL region](http
[ :::image type="content" source="media/concepts-read-replica/read-replica-regions.png" alt-text="Read replica regions":::](media/concepts-read-replica/read-replica-regions.png#lightbox) ### Universal replica regions+ You can always create a read replica in any of the following regions, regardless of where your primary server is located. These are the universal replica regions: Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central US, East Asia, East US, East US 2, Japan East, Japan West, Korea Central, Korea South, North Central US, North Europe, South Central US, Southeast Asia, UK South, UK West, West Europe, West US, West US 2, West Central US. ### Paired regions+ In addition to the universal replica regions, you can create a read replica in the Azure paired region of your primary server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article](../../availability-zones/cross-region-replication-azure.md).
-If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
+If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.
+
+There are limitations to consider:
-There are limitations to consider:
-
* Uni-directional pairs: Some Azure regions are paired in one direction only. These regions include West India and Brazil South. This means that a primary server in West India can create a replica in South India. However, a primary server in South India cannot create a replica in West India. This is because West India's secondary region is South India, but South India's secondary region is not West India. - ## Create a replica+ When you start the create replica workflow, a blank Azure Database for PostgreSQL server is created. The new server is filled with the data that was on the primary server. The creation time depends on the amount of data on the primary and the time since the last weekly full backup. The time can range from a few minutes to several hours. Every replica is enabled for storage [auto-grow](concepts-pricing-tiers.md#storage-auto-grow). The auto-grow feature allows the replica to keep up with the data replicated to it, and prevent a break in replication caused by out-of-storage errors.
Learn how to [create a read replica in the Azure portal](how-to-read-replicas-po
If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations. ## Connect to a replica+ When you create a replica, it doesn't inherit the firewall rules or VNet service endpoint of the primary server. These rules must be set up independently for the replica. The replica inherits the admin account from the primary server. All user accounts on the primary server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the primary server.
psql -h myreplica.postgres.database.azure.com -U myadmin@myreplica -d postgres
At the prompt, enter the password for the user account. ## Monitor replication+ Azure Database for PostgreSQL provides two metrics for monitoring replication. The two metrics are **Max Lag Across Replicas** and **Replica Lag**. To learn how to view these metrics, see the **Monitor a replica** section of the [read replica how-to article](how-to-read-replicas-portal.md). The **Max Lag Across Replicas** metric shows the lag in bytes between the primary and the most-lagging replica. This metric is applicable and available on the primary server only, and will be available only if at least one of the read replicas is connected to the primary and the primary is in streaming replication mode. The lag information does not show details when the replica is in the process of catching up with the primary using the archived logs of the primary in a file-shipping replication mode.
The **Replica Lag** metric shows the time since the last replayed transaction. I
SELECT EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp()); ```
-Set an alert to inform you when the replica lag reaches a value that isn't acceptable for your workload.
+Set an alert to inform you when the replica lag reaches a value that isn't acceptable for your workload.
For additional insight, query the primary server directly to get the replication lag in bytes on all replicas.
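For example, on PostgreSQL 10 and later you can run a query like the following on the primary server. This is a minimal sketch that uses the standard `pg_stat_replication` view; on PostgreSQL 9.6 the function and column names differ (`pg_current_xlog_location`, `replay_location`).

```sql
-- Run on the primary server.
-- Shows, per connected replica, how many bytes of WAL the primary has generated
-- beyond what the replica has replayed.
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```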
For additional insight, query the primary server directly to get the replication
> If a primary server or read replica restarts, the time it takes to restart and catch up is reflected in the Replica Lag metric. ## Stop replication / Promote replica
-You can stop the replication between a primary and a replica at any time. The stop action causes the replica to restart and promotes the replica to be an independent, standalone read-writeable server. The data in the standalone server is the data that was available on the replica server at the time the replication is stopped. Any subsequent updates at the primary are not propagated to the replica. However, replica server may have accumulated logs that are not applied yet. As part of the restart process, the replica applies all the pending logs before accepting client connections.
+
+You can stop the replication between a primary and a replica at any time. The stop action causes the replica to restart and promotes the replica to be an independent, standalone read-writeable server. The data in the standalone server is the data that was available on the replica server at the time the replication is stopped. Any subsequent updates at the primary are not propagated to the replica. However, the replica server may have accumulated logs that are not yet applied. As part of the restart process, the replica applies all the pending logs before accepting client connections.
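As a quick check after promotion, you can confirm that the server has finished applying pending logs and is accepting writes. This minimal sketch uses the standard `pg_is_in_recovery()` function:

```sql
-- Run against the promoted server.
-- Returns false once the server is a standalone, read-writeable server;
-- true means it is still acting as a replica (in recovery).
SELECT pg_is_in_recovery();
```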
>[!NOTE] > Resetting the admin password on a replica server is currently not supported. Additionally, updating the admin password along with the promote replica operation in the same request is also not supported. If you wish to do this, you must first promote the replica server and then update the password on the newly promoted server separately. ### Considerations+ - Before you stop replication on a read replica, check the replication lag to ensure the replica has all the data that you require. - As the read replica has to apply all pending logs before it can be made a standalone server, RTO can be higher for write-heavy workloads when replication is stopped, because there could be a significant delay on the replica. Pay attention to this when planning to promote a replica. - The promoted replica server cannot be made into a replica again.
Learn how to [stop replication to a replica](how-to-read-replicas-portal.md).
## Failover to replica
-In the event of a primary server failure, it is **not** automatically failed over to the read replica.
+In the event of a primary server failure, the service does **not** automatically fail over to the read replica.
Since replication is asynchronous, there could be a considerable lag between the primary and the replica. The amount of lag is influenced by a number of factors such as the type of workload running on the primary server and the latency between the primary and the replica server. In typical cases with a nominal write workload, replica lag is expected to be between a few seconds and a few minutes. However, in cases where the primary runs a very heavy write-intensive workload and the replica is not catching up fast enough, the lag can be much higher. You can track the replication lag for each replica using the metric *Replica Lag*. This metric shows the time since the last replayed transaction at the replica. We recommend that you identify the average lag by observing the replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you will be notified to take action. > [!Tip] > If you fail over to the replica, the lag at the time you delink the replica from the primary will indicate how much data is lost.
-Once you have decided you want to failover to a replica,
+Once you have decided you want to fail over to a replica,
1. Stop replication to the replica<br/> This step is necessary to make the replica server become a standalone server and be able to accept writes. As part of this process, the replica server will restart and be delinked from the primary. Once you initiate stop replication, the backend process typically takes a few minutes to apply any residual logs that were not yet applied and to open the database as a read-writeable server. See the [stop replication](#stop-replication--promote-replica) section of this article to understand the implications of this action.
-
+ 2. Point your application to the (former) replica<br/> Each server has a unique connection string. Update your application connection string to point to the (former) replica instead of the primary.
-
+ Once your application is successfully processing reads and writes, you have completed the failover. The amount of downtime your application experiences will depend on when you detect an issue and complete steps 1 and 2 above. ### Disaster recovery
-When there is a major disaster event such as availability zone-level or regional failures, you can perform disaster recovery operation by promoting your read replica. From the UI portal, you can navigate to the read replica server. Then click the replication tab, and you can stop the replica to promote it to be an independent server. Alternatively, you can use the [Azure CLI](/cli/azure/postgres/server/replica#az-postgres-server-replica-stop) to stop and promote the replica server.
+When there is a major disaster event such as an availability zone-level or regional failure, you can perform a disaster recovery operation by promoting your read replica. From the Azure portal, you can navigate to the read replica server. Then select the replication tab, and you can stop the replica to promote it to be an independent server. Alternatively, you can use the [Azure CLI](/cli/azure/postgres/server/replica#az-postgres-server-replica-stop) to stop and promote the replica server.
## Considerations This section summarizes considerations about the read replica feature. ### Prerequisites+ Read replicas and [logical decoding](concepts-logical.md) both depend on the Postgres write ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas. To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
To configure the right level of logging, use the Azure replication support param
* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers. * **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting. - ### New replicas+ A read replica is created as a new Azure Database for PostgreSQL server. An existing server can't be made into a replica. You can't create a replica of another read replica. ### Replica configuration+ A replica is created by using the same compute and storage settings as the primary. After a replica is created, several settings can be changed including storage and backup retention period. Firewall rules, virtual network rules, and parameter settings are not inherited from the primary server to the replica when the replica is created or afterwards. ### Scaling+ Scaling vCores or between General Purpose and Memory Optimized: * PostgreSQL requires the `max_connections` setting on a secondary server to be [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html), otherwise the secondary will not start. * In Azure Database for PostgreSQL, the maximum allowed connections for each server is fixed to the compute sku since connections occupy memory. You can learn more about the [mapping between max_connections and compute skus](concepts-limits.md).
Scaling storage:
* All replicas have storage auto-grow enabled to prevent replication issues from a storage-full replica. This setting cannot be disabled. * You can also manually scale up storage, as you would do on any other server - ### Basic tier+ Basic tier servers only support same-region replication. ### max_prepared_transactions+ [PostgreSQL requires](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-MAX-PREPARED-TRANSACTIONS) the value of the `max_prepared_transactions` parameter on the read replica to be greater than or equal to the primary value; otherwise, the replica won't start. If you want to change `max_prepared_transactions` on the primary, first change it on the replicas. ### Stopped replicas+ If you stop replication between a primary server and a read replica, the replica restarts to apply the change. The stopped replica becomes a standalone server that accepts both reads and writes. The standalone server can't be made into a replica again. ### Deleted primary and standalone servers+ When a primary server is deleted, all of its read replicas become standalone servers. The replicas are restarted to reflect this change. ## Next steps+ * Learn how to [create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md).
-* Learn how to [create and manage read replicas in the Azure CLI and REST API](how-to-read-replicas-cli.md).
+* Learn how to [create and manage read replicas in the Azure CLI and REST API](how-to-read-replicas-cli.md).
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-security.md
Previously updated : 11/22/2019 Last updated : 06/24/2022 # Security in Azure Database for PostgreSQL - Single Server
There are multiple layers of security that are available to protect the data on
## Information protection and encryption ### In-transit+ Azure Database for PostgreSQL secures your data by encrypting data in-transit with Transport Layer Security. Encryption (SSL/TLS) is enforced by default. ### At-rest
-The Azure Database for PostgreSQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, are encrypted on disk, including the temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled.
+The Azure Database for PostgreSQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, are encrypted on disk, including the temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. Storage encryption is always on and can't be disabled.
## Network security
-Connections to an Azure Database for PostgreSQL server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
-A newly created Azure Database for PostgreSQL server has a firewall that blocks all external connections. Though they reach the gateway, they are not allowed to connect to the server.
+Connections to an Azure Database for PostgreSQL server are first routed through a regional gateway. The gateway has a publicly accessible IP, while the server IP addresses are protected. For more information about the gateway, visit the [connectivity architecture article](concepts-connectivity-architecture.md).
+
+A newly created Azure Database for PostgreSQL server has a firewall that blocks all external connections. Though they reach the gateway, they are not allowed to connect to the server.
### IP firewall rules+ IP firewall rules grant access to servers based on the originating IP address of each request. See the [firewall rules overview](concepts-firewall-rules.md) for more information. ### Virtual network firewall rules+ Virtual network service endpoints extend your virtual network connectivity over the Azure backbone. Using virtual network rules you can enable your Azure Database for PostgreSQL server to allow connections from selected subnets in a virtual network. For more information, see the [virtual network service endpoint overview](concepts-data-access-and-security-vnet.md). ### Private IP
-Private Link allows you to connect to your Azure Database for PostgreSQL Single server in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. For more information,see the [private link overview](concepts-data-access-and-security-private-link.md)
+Private Link allows you to connect to your Azure Database for PostgreSQL Single Server in Azure via a private endpoint. Azure Private Link essentially brings Azure services inside your private Virtual Network (VNet). The PaaS resources can be accessed using the private IP address just like any other resource in the VNet. For more information, see the [private link overview](concepts-data-access-and-security-private-link.md).
## Access management
While creating the Azure Database for PostgreSQL server, you provide credentials
You can also connect to the server using [Azure Active Directory authentication](concepts-azure-ad-authentication.md). - ## Threat protection You can opt in to [Advanced Threat Protection](../../defender-for-cloud/defender-for-databases-introduction.md) which detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit servers.
-[Audit logging](concepts-audit.md) is available to track activity in your databases.
+[Audit logging](concepts-audit.md) is available to track activity in your databases.
## Migrating from Oracle Oracle supports Transparent Data Encryption (TDE) to encrypt table and tablespace data. In Azure for PostgreSQL, the data is automatically encrypted at various layers. See the "At-rest" section in this page and also refer to various Security topics, including [customer managed keys](./concepts-data-encryption-postgresql.md) and [Infrastructure double encryption](./concepts-infrastructure-double-encryption.md). You may also consider using [pgcrypto](https://www.postgresql.org/docs/11/pgcrypto.html) extension which is supported in [Azure for PostgreSQL](./concepts-extensions.md). ## Next steps+ - Enable firewall rules for [IPs](concepts-firewall-rules.md) or [virtual networks](concepts-data-access-and-security-vnet.md)-- Learn about [Azure Active Directory authentication](concepts-azure-ad-authentication.md) in Azure Database for PostgreSQL
+- Learn about [Azure Active Directory authentication](concepts-azure-ad-authentication.md) in Azure Database for PostgreSQL
postgresql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-server-logs.md
Previously updated : 06/25/2020 Last updated : 06/24/2022 # Logs in Azure Database for PostgreSQL - Single Server
Azure Database for PostgreSQL allows you to configure and access Postgres's stan
Audit logging is made available through a PostgreSQL extension, pgaudit. To learn more, visit the [auditing concepts](concepts-audit.md) article. - ## Configure logging
-You can configure Postgres standard logging on your server using the logging server parameters. On each Azure Database for PostgreSQL server, `log_checkpoints` and `log_connections` are on by default. There are additional parameters you can adjust to suit your logging needs:
+
+You can configure Postgres standard logging on your server using the logging server parameters. On each Azure Database for PostgreSQL server, `log_checkpoints` and `log_connections` are on by default. There are additional parameters you can adjust to suit your logging needs:
:::image type="content" source="./media/concepts-server-logs/log-parameters.png" alt-text="Azure Database for PostgreSQL - Logging parameters"::: To learn more about Postgres log parameters, visit the [When To Log](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN) and [What To Log](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT) sections of the Postgres documentation. Most, but not all, Postgres logging parameters are available to configure in Azure Database for PostgreSQL.
-To learn how to configure parameters in Azure Database for PostgreSQL, see the [portal documentation](how-to-configure-server-parameters-using-portal.md) or the [CLI documentation](how-to-configure-server-parameters-using-cli.md).
+To learn how to configure parameters in Azure Database for PostgreSQL, see the [portal documentation](how-to-configure-server-parameters-using-portal.md) or the [CLI documentation](how-to-configure-server-parameters-using-cli.md).
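You can also inspect the current values of logging parameters from any client session. The following minimal sketch queries the standard `pg_settings` view; the parameter names shown are only examples.

```sql
-- Check the current values of a few common logging parameters.
SELECT name, setting
FROM pg_settings
WHERE name IN ('log_checkpoints', 'log_connections', 'log_min_duration_statement');
```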
> [!NOTE]
-> Configuring a high volume of logs, for example statement logging, can add significant performance overhead.
+> Configuring a high volume of logs, for example statement logging, can add significant performance overhead.
## Access .log files+ The default log format in Azure Database for PostgreSQL is .log. A sample line from this log looks like: ``` 2019-10-14 17:00:03 UTC-5d773cc3.3c-LOG: connection received: host=101.0.0.6 port=34331 pid=16216 ```
-Azure Database for PostgreSQL provides a short-term storage location for the .log files. A new file begins every 1 hour or 100 MB, whichever comes first. Logs are appended to the current file as they are emitted from Postgres.
+Azure Database for PostgreSQL provides a short-term storage location for the .log files. A new file begins every 1 hour or 100 MB, whichever comes first. Logs are appended to the current file as they are emitted from Postgres.
-You can set the retention period for this short-term log storage using the `log_retention_period` parameter. The default value is 3 days; the maximum value is 7 days. The short-term storage location can hold up to 1 GB of log files. After 1 GB, the oldest files, regardless of retention period, will be deleted to make room for new logs.
+You can set the retention period for this short-term log storage using the `log_retention_period` parameter. The default value is 3 days; the maximum value is 7 days. The short-term storage location can hold up to 1 GB of log files. After 1 GB, the oldest files, regardless of retention period, will be deleted to make room for new logs.
-For longer-term retention of logs and log analysis, you can download the .log files and move them to a third-party service. You can download the files using the [Azure portal](how-to-configure-server-logs-in-portal.md), [Azure CLI](how-to-configure-server-logs-using-cli.md). Alternatively, you can configure Azure Monitor diagnostic settings which automatically emits your logs (in JSON format) to longer-term locations. Learn more about this option in the section below.
+For longer-term retention of logs and log analysis, you can download the .log files and move them to a third-party service. You can download the files using the [Azure portal](how-to-configure-server-logs-in-portal.md) or the [Azure CLI](how-to-configure-server-logs-using-cli.md). Alternatively, you can configure Azure Monitor diagnostic settings, which automatically emit your logs (in JSON format) to longer-term locations. Learn more about this option in the section below.
You can stop generating .log files by setting the parameter `logging_collector` to OFF. Turning off .log file generation is recommended if you are using Azure Monitor diagnostic settings. This configuration will reduce the performance impact of additional logging. > [!NOTE]
You can stop generating .log files by setting the parameter `logging_collector`
## Resource logs
-Azure Database for PostgreSQL is integrated with Azure Monitor diagnostic settings. Diagnostic settings allows you to send your Postgres logs in JSON format to Azure Monitor Logs for analytics and alerting, Event Hubs for streaming, and Azure Storage for archiving.
+Azure Database for PostgreSQL is integrated with Azure Monitor diagnostic settings. Diagnostic settings allows you to send your Postgres logs in JSON format to Azure Monitor Logs for analytics and alerting, Event Hubs for streaming, and Azure Storage for archiving.
> [!IMPORTANT] > This diagnostic feature for server logs is only available in the General Purpose and Memory Optimized [pricing tiers](concepts-pricing-tiers.md). - ### Configure diagnostic settings You can enable diagnostic settings for your Postgres server using the Azure portal, CLI, REST API, and PowerShell. The log category to select is **PostgreSQLLogs**. (There are other logs you can configure if you are using [Query Store](concepts-query-store.md).)
The query above will show results over the last 6 hours for any Postgres server
### Log format
-The following table describes the fields for the **PostgreSQLLogs** type. Depending on the output endpoint you choose, the fields included and the order in which they appear may vary.
+The following table describes the fields for the **PostgreSQLLogs** type. Depending on the output endpoint you choose, the fields included and the order in which they appear may vary.
|**Field** | **Description** | |||
The following table describes the fields for the **PostgreSQLLogs** type. Depend
| _ResourceId | Resource URI | | Prefix | Log line's prefix | - ## Next steps+ - Learn more about accessing logs from the [Azure portal](how-to-configure-server-logs-in-portal.md) or [Azure CLI](how-to-configure-server-logs-using-cli.md). - Learn more about [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). - Learn more about [audit logs](concepts-audit.md)
postgresql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-servers.md
-
+ Title: Servers - Azure Database for PostgreSQL - Single Server description: This article provides considerations and guidelines for configuring and managing Azure Database for PostgreSQL - Single Server.
Previously updated : 5/6/2019 Last updated : 06/24/2022 + # Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
Last updated 5/6/2019
This article provides considerations and guidelines for working with Azure Database for PostgreSQL - Single Server. ## What is an Azure Database for PostgreSQL server?+ A server in the Azure Database for PostgreSQL - Single Server deployment option is a central administrative point for multiple databases. It is the same PostgreSQL server construct that you may be familiar with in the on-premises world. Specifically, the PostgreSQL service is managed, provides performance guarantees, exposes access and features at the server-level. An Azure Database for PostgreSQL server:
An Azure Database for PostgreSQL server:
Within an Azure Database for PostgreSQL server, you can create one or multiple databases. You can opt to create a single database per server to utilize all the resources, or create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see [Pricing tiers](./concepts-pricing-tiers.md). ## How do I connect and authenticate to an Azure Database for PostgreSQL server?+ The following elements help ensure safe access to your database: |Security concept|Description|
The following elements help ensure safe access to your database:
| **Firewall** | To help protect your data, a firewall rule prevents all access to your server and to its databases, until you specify which computers have permission. See [Azure Database for PostgreSQL Server firewall rules](concepts-firewall-rules.md). | ## Managing your server+ You can manage Azure Database for PostgreSQL servers by using the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/postgres).
-While creating a server, you set up the credentials for your admin user. The admin user is the highest privilege user you have on the server. It belongs to the role azure_pg_admin. This role does not have full superuser permissions.
+While creating a server, you set up the credentials for your admin user. The admin user is the highest privilege user you have on the server. It belongs to the role azure_pg_admin. This role does not have full superuser permissions.
The PostgreSQL superuser attribute is assigned to the azure_superuser, which belongs to the managed service. You do not have access to this role.
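To see which roles exist on your server and whether your login is a member of azure_pg_admin, you can run a query like the following minimal sketch against the standard `pg_roles` catalog (the azure_pg_admin role is assumed to exist, as described above):

```sql
-- List roles defined on the server.
SELECT rolname, rolcanlogin, rolcreaterole, rolcreatedb
FROM pg_roles
ORDER BY rolname;

-- Check whether the current login is a member of azure_pg_admin.
SELECT pg_has_role(current_user, 'azure_pg_admin', 'MEMBER') AS is_azure_pg_admin;
```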
An Azure Database for PostgreSQL server has default databases:
- **azure_maintenance** - This database is used to separate the processes that provide the managed service from user actions. You do not have access to this database. - **azure_sys** - A database for the Query Store. This database does not accumulate data when Query Store is off; this is the default setting. For more information, see the [Query Store overview](concepts-query-store.md). - ## Server parameters
-The PostgreSQL server parameters determine the configuration of the server. In Azure Database for PostgreSQL, the list of parameters can be viewed and edited using the Azure portal or the Azure CLI.
-As a managed service for Postgres, the configurable parameters in Azure Database for PostgreSQL are a subset of the parameters in a local Postgres instance (For more information on Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/runtime-config.html)). Your Azure Database for PostgreSQL server is enabled with default values for each parameter on creation. Some parameters that would require a server restart or superuser access for changes to take effect cannot be configured by the user.
+The PostgreSQL server parameters determine the configuration of the server. In Azure Database for PostgreSQL, the list of parameters can be viewed and edited using the Azure portal or the Azure CLI.
+As a managed service for Postgres, the configurable parameters in Azure Database for PostgreSQL are a subset of the parameters in a local Postgres instance (For more information on Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/runtime-config.html)). Your Azure Database for PostgreSQL server is enabled with default values for each parameter on creation. Some parameters that would require a server restart or superuser access for changes to take effect cannot be configured by the user.
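For example, the following minimal sketch uses the standard `pg_settings` view to show a parameter's current value and its context; a context of `postmaster` indicates a restart is required, so such parameters are managed by the service. The parameter names are only examples.

```sql
-- Inspect current value, default, and change context for a few parameters.
SELECT name, setting, boot_val, context
FROM pg_settings
WHERE name IN ('max_connections', 'shared_buffers', 'work_mem');
```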
## Next steps+ - For an overview of the service, see [Azure Database for PostgreSQL Overview](overview.md). - For information about specific resource quotas and limitations based on your **service tier**, see [Service tiers](concepts-pricing-tiers.md). - For information on connecting to the service, see [Connection libraries for Azure Database for PostgreSQL](concepts-connection-libraries.md).
postgresql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-ssl-connection-security.md
Previously updated : 07/08/2020 Last updated : 06/24/2022 + # Configure TLS connectivity in Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
By default, the PostgreSQL database service is configured to require TLS connect
> [!IMPORTANT] > SSL root certificate is set to expire starting February 15, 2021 (02/15/2021). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more , see [planned certificate updates](concepts-certificate-rotation.md) - ## Enforcing TLS connections
-For all Azure Database for PostgreSQL servers provisioned through the Azure portal and CLI, enforcement of TLS connections is enabled by default.
+For all Azure Database for PostgreSQL servers provisioned through the Azure portal and CLI, enforcement of TLS connections is enabled by default.
Likewise, connection strings that are pre-defined in the "Connection Strings" settings under your server in the Azure portal include the required parameters for common languages to connect to your database server using TLS. The TLS parameter varies based on the connector, for example "ssl=true" or "sslmode=require" or "sslmode=required" and other variations.
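To verify that your own connection is actually using TLS, and which protocol version was negotiated, you can query the standard `pg_stat_ssl` view. This is a minimal sketch:

```sql
-- Shows whether the current session uses TLS, plus the negotiated version and cipher.
SELECT ssl, version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```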
You can optionally disable enforcing TLS connectivity. Microsoft Azure recommend
### Using the Azure portal
-Visit your Azure Database for PostgreSQL server and click **Connection security**. Use the toggle button to enable or disable the **Enforce SSL connection** setting. Then, click **Save**.
+Visit your Azure Database for PostgreSQL server and select **Connection security**. Use the toggle button to enable or disable the **Enforce SSL connection** setting. Then, select **Save**.
:::image type="content" source="./media/concepts-ssl-connection-security/1-disable-ssl.png" alt-text="Connection Security - Disable Enforce TLS/SSL":::
Some application frameworks that use PostgreSQL for their database services do n
## Applications that require certificate verification for TLS connectivity
-In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. The certificate to connect to an Azure Database for PostgreSQL server is located at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem. Download the certificate file and save it to your preferred location.
+In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. The certificate to connect to an Azure Database for PostgreSQL server is located at https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem. Download the certificate file and save it to your preferred location.
See the following links for certificates for servers in sovereign clouds: [Azure Government](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
Azure Database for PostgreSQL single server provides the ability to enforce the
| TLS1_1 | TLS 1.1, TLS 1.2 and higher | | TLS1_2 | TLS version 1.2 and higher | - For example, setting the minimum TLS version to TLS 1.0 means your server will allow connections from clients using TLS 1.0, 1.1, and 1.2+. Alternatively, setting this to 1.2 means that you only allow connections from clients using TLS 1.2+ and all connections with TLS 1.0 and TLS 1.1 will be rejected. > [!Note]
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-supported-versions.md
Previously updated : 03/10/2022 Last updated : 06/24/2022 adobe-target: true + # Supported PostgreSQL major versions [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
Please see [Azure Database for PostgreSQL versioning policy](concepts-version-po
Azure Database for PostgreSQL currently supports the following major versions: ## PostgreSQL version 11+ The current minor release is 11.12. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-12.html) to learn more about improvements and fixes in this minor release. ## PostgreSQL version 10+ The current minor release is 10.17. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/10/static/release-10-17.html) to learn more about improvements and fixes in this minor release. ## PostgreSQL version 9.6 (retired)+ Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.6 as of November 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience. ## PostgreSQL version 9.5 (retired)+ Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.5 as of February 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience. ## Managing upgrades
-The PostgreSQL project regularly issues minor releases to fix reported bugs. Azure Database for PostgreSQL automatically patches servers with minor releases during the service's monthly deployments.
+
+The PostgreSQL project regularly issues minor releases to fix reported bugs. Azure Database for PostgreSQL automatically patches servers with minor releases during the service's monthly deployments.
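To check which minor release your server is currently running, you can query the version from any client session, for example:

```sql
-- Both return the full server version, including the minor release number.
SELECT version();
SHOW server_version;
```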
Automatic in-place upgrades for major versions are not supported. To upgrade to a higher major version, you can * Use one of the methods documented in [major version upgrades using dump and restore](./how-to-upgrade-using-dump-and-restore.md).
Automatic in-place upgrades for major versions are not supported. To upgrade to
* Use [Azure Database Migration service](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) for doing online upgrades. ### Version syntax+ Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number. For example, 9.5 to 9.6 was considered a _major_ version upgrade. As of version 10, only a change in the first number is considered a major version upgrade. For example, 10.0 to 10.1 is a _minor_ release upgrade. Version 10 to 11 is a _major_ version upgrade. ## Next steps+ For information on supported PostgreSQL extensions, see [the extensions document](concepts-extensions.md).
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-version-policy.md
Previously updated : 12/14/2021 Last updated : 06/24/2022 + # Azure Database for PostgreSQL versioning policy [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
Azure Database for PostgreSQL supports the following database versions.
| *PostgreSQL 9.5 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql) | | | ## Major version support+ Each major version of PostgreSQL will be supported by Azure Database for PostgreSQL from the date on which Azure begins supporting the version until the version is retired by the PostgreSQL community, as provided in the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/). ## Minor version support
-Azure Database for PostgreSQL automatically performs minor version upgrades to the Azure preferred PostgreSQL version as part of periodic maintenance.
+
+Azure Database for PostgreSQL automatically performs minor version upgrades to the Azure preferred PostgreSQL version as part of periodic maintenance.
## Major version retirement policy+ The table below provides the retirement details for PostgreSQL major versions. The dates follow the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/). | Version | What's New | Azure support start date | Retirement date|
You may continue to run the retired version in Azure Database for PostgreSQL. Ho
- In the extreme event of a serious threat to the service caused by the PostgreSQL database engine vulnerability identified in the retired database version, Azure may choose to stop your database server to secure the service. In such case, you will be notified to upgrade the server before bringing the server online. ## PostgreSQL version syntax+ Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number. For example, 9.5 to 9.6 was considered a _major_ version upgrade. As of version 10, only a change in the first number is considered a major version upgrade. For example, 10.0 to 10.1 is a _minor_ release upgrade. Version 10 to 11 is a _major_ version upgrade. ## Next steps+ - See Azure Database for PostgreSQL - Single Server [supported versions](./concepts-supported-versions.md) - See Azure Database for PostgreSQL - Flexible Server [supported versions](../flexible-server/concepts-supported-versions.md) - See Azure Database for PostgreSQL - Hyperscale (Citus) [supported versions](../hyperscale/reference-versions.md)
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-csharp.md
ms.devlang: csharp Previously updated : 10/18/2020 Last updated : 06/24/2022 # Quickstart: Use .NET (C#) to connect and query data in Azure Database for PostgreSQL - Single Server
Last updated 10/18/2020
This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using C#, and that you are new to working with Azure Database for PostgreSQL. ## Prerequisites+ For this quickstart you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
For this quickstart you need:
- Install [Npgsql](https://www.nuget.org/packages/Npgsql/) NuGet package in Visual Studio. ## Get connection information+ Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials. 1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
+2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Select the server name.
4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-csharp/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
## Step 1: Connect and insert data+ Use the following code to connect and load the data using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database. - [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) sets the CommandText property.
namespace Driver
``` ## Step 2: Read data+ Use the following code to connect and read the data using a **SELECT** SQL statement. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL. - [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) and [ExecuteReader()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteReader) to run the database commands.
namespace Driver
``` ## Step 3: Update data+ Use the following code to connect and update the data using an **UPDATE** SQL statement. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL. - [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand), sets the CommandText property.
namespace Driver
``` ## Step 4: Delete data+ Use the following code to connect and delete data using a **DELETE** SQL statement. The code uses NpgsqlCommand class with method [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database. Then, the code uses the [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) method, sets the CommandText property, and calls the method [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
az group delete \
``` ## Next steps+ > [!div class="nextstepaction"] > [Manage Azure Database for PostgreSQL server using Portal](./how-to-create-manage-server-portal.md)<br/> > [!div class="nextstepaction"] > [Manage Azure Database for PostgreSQL server using CLI](./how-to-manage-server-cli.md)-
postgresql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-go.md
ms.devlang: golang Previously updated : 5/6/2019 Last updated : 06/24/2022 # Quickstart: Use Go language to connect and query data in Azure Database for PostgreSQL - Single Server
Last updated 5/6/2019
This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using code written in the [Go](https://go.dev/) language (golang). It shows how to use SQL statements to query, insert, update, and delete data in the database. This article assumes you are familiar with development using Go, but that you are new to working with Azure Database for PostgreSQL. ## Prerequisites+ This quickstart uses the resources created in either of these guides as a starting point: - [Create DB - Portal](quickstart-create-server-database-portal.md) - [Create DB - Azure CLI](quickstart-create-server-database-azure-cli.md) ## Install Go and pq connector+ Install [Go](https://go.dev/doc/install) and the [Pure Go Postgres driver (pq)](https://github.com/lib/pq) on your own machine. Depending on your platform, follow the appropriate steps: ### Windows+ 1. [Download](https://go.dev/dl/) and install Go for Microsoft Windows according to the [installation instructions](https://go.dev/doc/install). 2. Launch the command prompt from the start menu. 3. Make a folder for your project, such as `mkdir %USERPROFILE%\go\src\postgresqlgo`.
Install [Go](https://go.dev/doc/install) and the [Pure Go Postgres driver (pq)](
``` ### Linux (Ubuntu)+ 1. Launch the Bash shell. 2. Install Go by running `sudo apt-get install golang-go`. 3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/postgresqlgo/`.
Install [Go](https://go.dev/doc/install) and the [Pure Go Postgres driver (pq)](
``` ### Apple macOS+ 1. Download and install Go according to the [installation instructions](https://go.dev/doc/install) matching your platform. 2. Launch the Bash shell. 3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/postgresqlgo/`.
Install [Go](https://go.dev/doc/install) and the [Pure Go Postgres driver (pq)](
``` ## Get connection information+ Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials. 1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
+2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Select the server name.
4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-go/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
## Build and run Go code + 1. To write Golang code, you can use a plain text editor, such as Notepad in Microsoft Windows, [vi](https://manpages.ubuntu.com/manpages/xenial/man1/nvi.1.html#contenttoc5) or [Nano](https://www.nano-editor.org/) in Ubuntu, or TextEdit in macOS. If you prefer a richer Interactive Development Environment (IDE) try [GoLand](https://www.jetbrains.com/go/) by Jetbrains, [Visual Studio Code](https://code.visualstudio.com/) by Microsoft, or [Atom](https://atom.io/). 2. Paste the Golang code from the following sections into text files, and save into your project folder with file extension \*.go, such as Windows path `%USERPROFILE%\go\src\postgresqlgo\createtable.go` or Linux path `~/go/src/postgresqlgo/createtable.go`. 3. Locate the `HOST`, `DATABASE`, `USER`, and `PASSWORD` constants in the code, and replace the example values with your own values.
Get the connection information needed to connect to the Azure Database for Postg
6. Alternatively, to build the code into a native application, `go build createtable.go`, then launch `createtable.exe` to run the application. ## Connect and create a table+ Use the following code to connect and create a table using **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table. The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the PostgreSQL server, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line. The code calls method [sql.Open()](https://godoc.org/github.com/lib/pq#Open) to connect to Azure Database for PostgreSQL database, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method several times to run several SQL commands. Each time a custom checkError() method checks if an error occurred and panic to exit if an error does occur.
-Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
+Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
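As an orientation before the full quickstart listing, here is a minimal, self-contained sketch of that connect-and-create flow using the `lib/pq` driver. The host, database, user, and password values below are placeholders, not a real server; replace them with your own connection information.

```go
// Minimal sketch of the connect-and-create flow (placeholder values, not the article's full sample).
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq"
)

const (
	// Placeholder connection values; replace with your own.
	HOST     = "mydemoserver.postgres.database.azure.com"
	DATABASE = "mypgsqldb"
	USER     = "mylogin@mydemoserver"
	PASSWORD = "<server_admin_password>"
)

func checkError(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Build the connection string and open a database handle (a connection pool, not a single connection).
	connStr := fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
	db, err := sql.Open("postgres", connStr)
	checkError(err)
	defer db.Close()

	// Ping verifies that the server is actually reachable.
	checkError(db.Ping())

	// Exec runs DDL and DML statements; each result is checked.
	_, err = db.Exec("DROP TABLE IF EXISTS inventory;")
	checkError(err)
	_, err = db.Exec("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
	checkError(err)
	_, err = db.Exec("INSERT INTO inventory (name, quantity) VALUES ($1, $2);", "banana", 150)
	checkError(err)

	fmt.Println("Finished creating table and inserting a row")
}
```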
```go package main
func main() {
```

## Read data
-Use the following code to connect and read the data using a **SELECT** SQL statement.
+
+Use the following code to connect and read the data using a **SELECT** SQL statement.
The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the PostgreSQL server, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line. The code calls method [sql.Open()](https://godoc.org/github.com/lib/pq#Open) to connect to the Azure Database for PostgreSQL database, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server.

The select query is run by calling method [db.Query()](https://go.dev/pkg/database/sql/#DB.Query), and the resulting rows are kept in a variable of type [rows](https://go.dev/pkg/database/sql/#Rows). The code reads the column data values in the current row using method [rows.Scan()](https://go.dev/pkg/database/sql/#Rows.Scan) and loops over the rows using the iterator [rows.Next()](https://go.dev/pkg/database/sql/#Rows.Next) until no more rows exist. Each row's column values are printed to the console. Each time, a custom checkError() method checks whether an error occurred and panics to exit if one did.
-Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
+Replace the `HOST`, `DATABASE`, `USER`, and `PASSWORD` parameters with your own values.
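Here is a similarly hedged, minimal sketch of the read flow, using the same placeholder connection values: the SELECT runs through `db.Query()`, `rows.Next()` advances the cursor, and `rows.Scan()` copies each column into Go variables.

```go
// Minimal sketch of the read flow (placeholder connection values).
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq"
)

func main() {
	connStr := "host=mydemoserver.postgres.database.azure.com user=mylogin@mydemoserver password=<server_admin_password> dbname=mypgsqldb sslmode=require"
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// db.Query returns a *sql.Rows result set.
	rows, err := db.Query("SELECT id, name, quantity FROM inventory;")
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	// rows.Next advances the cursor; rows.Scan copies the current row's columns into Go variables.
	for rows.Next() {
		var id, quantity int
		var name string
		if err := rows.Scan(&id, &name, &quantity); err != nil {
			panic(err)
		}
		fmt.Printf("Data row = (%d, %s, %d)\n", id, name, quantity)
	}
}
```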
```go package main
func main() {
```

## Update data

Use the following code to connect and update the data using an **UPDATE** SQL statement.

The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the Postgres server, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line.
func checkError(err error) {
} func main() {
-
+ // Initialize connection string. var connectionString string = fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
func main() {
```

## Delete data
-Use the following code to connect and delete the data using a **DELETE** SQL statement.
+
+Use the following code to connect and delete the data using a **DELETE** SQL statement.
The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [pq package](https://godoc.org/github.com/lib/pq) as a driver to communicate with the Postgres server, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line.
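The update and delete flows follow the same pattern; the minimal sketch below (again with placeholder connection values) runs both statements through `Exec()` with parameter placeholders and reports the affected row count via `RowsAffected()`.

```go
// Minimal sketch of the update and delete flows (placeholder connection values).
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq"
)

func main() {
	connStr := "host=mydemoserver.postgres.database.azure.com user=mylogin@mydemoserver password=<server_admin_password> dbname=mypgsqldb sslmode=require"
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// UPDATE a row by name; $1 and $2 are parameter placeholders.
	res, err := db.Exec("UPDATE inventory SET quantity = $1 WHERE name = $2;", 200, "banana")
	if err != nil {
		panic(err)
	}
	updated, _ := res.RowsAffected()
	fmt.Printf("Updated %d row(s)\n", updated)

	// DELETE a row by name.
	res, err = db.Exec("DELETE FROM inventory WHERE name = $1;", "orange")
	if err != nil {
		panic(err)
	}
	deleted, _ := res.RowsAffected()
	fmt.Printf("Deleted %d row(s)\n", deleted)
}
```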
func checkError(err error) {
} func main() {
-
+ // Initialize connection string. var connectionString string = fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=require", HOST, USER, PASSWORD, DATABASE)
az group delete \
```

## Next steps

> [!div class="nextstepaction"]
> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-java.md
ms.devlang: java Previously updated : 08/17/2020 Last updated : 06/24/2022 # Quickstart: Use Java and JDBC with Azure Database for PostgreSQL
az group delete \
```

## Next steps

> [!div class="nextstepaction"]
> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
postgresql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-nodejs.md
ms.devlang: javascript Previously updated : 5/6/2019 Last updated : 06/24/2022 # Quickstart: Use Node.js to connect and query data in Azure Database for PostgreSQL - Single Server
In this quickstart, you connect to an Azure Database for PostgreSQL using a Node
- [Node.js](https://nodejs.org)

## Install pg client

Install [pg](https://www.npmjs.com/package/pg), which is a PostgreSQL client for Node.js. To do so, run the node package manager (npm) for JavaScript from your command line to install the pg client.
npm list
```

## Get connection information

Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.

1. In the [Azure portal](https://portal.azure.com/), search for and select the server you have created (such as **mydemoserver**).
Get the connection information needed to connect to the Azure Database for Postg
:::image type="content" source="./media/connect-nodejs/server-details-azure-database-postgresql.png" alt-text="Azure Database for PostgreSQL connection string":::

## Running the JavaScript code in Node.js

You may launch Node.js from the Bash shell, Terminal, or Windows Command Prompt by typing `node`, then run the example JavaScript code interactively by copying and pasting it onto the prompt. Alternatively, you may save the JavaScript code into a text file and launch `node filename.js` with the file name as a parameter to run it.

## Connect, create table, and insert data

Use the following code to connect and load the data using **CREATE TABLE** and **INSERT INTO** SQL statements.
-The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against PostgreSQL database.
+The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against PostgreSQL database.
Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
function queryDatabase() {
```

## Read data
-Use the following code to connect and read the data using a **SELECT** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against PostgreSQL database.
-Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
+Use the following code to connect and read the data using a **SELECT** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against PostgreSQL database.
+
+Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
```javascript const pg = require('pg');
client.connect(err => {
}); function queryDatabase() {
-
+ console.log(`Running query to PostgreSQL server: ${config.host}`); const query = 'SELECT * FROM inventory;';
function queryDatabase() {
```

## Update data
-Use the following code to connect and read the data using a **UPDATE** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against PostgreSQL database.
-Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
+Use the following code to connect and update the data using an **UPDATE** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against PostgreSQL database.
+
+Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
```javascript const pg = require('pg');
function queryDatabase() {
```

## Delete data
-Use the following code to connect and read the data using a **DELETE** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against PostgreSQL database.
-Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
+Use the following code to connect and delete the data using a **DELETE** SQL statement. The [pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object is used to interface with the PostgreSQL server. The [pg.Client.connect()](https://github.com/brianc/node-postgres/wiki/Client#method-connect) function is used to establish the connection to the server. The [pg.Client.query()](https://github.com/brianc/node-postgres/wiki/Query) function is used to execute the SQL query against PostgreSQL database.
+
+Replace the host, dbname, user, and password parameters with the values that you specified when you created the server and database.
```javascript const pg = require('pg');
az group delete \
```

## Next steps

> [!div class="nextstepaction"]
> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
postgresql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-php.md
ms.devlang: php Previously updated : 2/28/2018 Last updated : 06/24/2022 # Quickstart: Use PHP to connect and query data in Azure Database for PostgreSQL - Single Server
Last updated 2/28/2018
This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a [PHP](https://secure.php.net/manual/intro-whatis.php) application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using PHP, and are new to working with Azure Database for PostgreSQL.

## Prerequisites

This quickstart uses the resources created in either of these guides as a starting point:

- [Create DB - Portal](quickstart-create-server-database-portal.md)
- [Create DB - Azure CLI](quickstart-create-server-database-azure-cli.md)

## Install PHP

Install PHP on your own server, or create an Azure [web app](../../app-service/overview.md) that includes PHP.

### Windows

- Download [PHP 7.1.4 non-thread safe (x64) version](https://windows.php.net/download#php-7.1)
- Install PHP and refer to the [PHP manual](https://secure.php.net/manual/install.windows.php) for further configuration
- The code uses the **pgsql** class (ext/php_pgsql.dll) that is included in the PHP installation.
- Enable the **pgsql** extension by editing the php.ini configuration file, typically located at `C:\Program Files\PHP\v7.1\php.ini`. The configuration file should contain a line with the text `extension=php_pgsql.dll`. If it is not shown, add the text and save the file. If the text is present, but commented with a semicolon prefix, uncomment the text by removing the semicolon.

### Linux (Ubuntu)

- Download [PHP 7.1.4 non-thread safe (x64) version](https://secure.php.net/downloads.php)
- Install PHP and refer to the [PHP manual](https://secure.php.net/manual/install.unix.php) for further configuration
- The code uses the **pgsql** class (php_pgsql.so). Install it by running `sudo apt-get install php-pgsql`.
- Enable the **pgsql** extension by editing the `/etc/php/7.0/mods-available/pgsql.ini` configuration file. The configuration file should contain a line with the text `extension=php_pgsql.so`. If it is not shown, add the text and save the file. If the text is present, but commented with a semicolon prefix, uncomment the text by removing the semicolon.

### macOS

- Download [PHP 7.1.4 version](https://secure.php.net/downloads.php)
- Install PHP and refer to the [PHP manual](https://secure.php.net/manual/install.macosx.php) for further configuration

## Get connection information

Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.

1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
+2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Select the server name.
4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-php/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
## Connect and create a table

Use the following code to connect and create a table using a **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.

The code calls method [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) to connect to Azure Database for PostgreSQL. Then it calls method [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) several times to run several commands, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred each time. Then it calls method [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) to close the connection.
-Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
+Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
```php <?php
Replace the `$host`, `$database`, `$user`, and `$password` parameters with your
```

## Read data
-Use the following code to connect and read the data using a **SELECT** SQL statement.
- The code call method [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) to connect to Azure Database for PostgreSQL. Then it calls method [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) to run the SELECT command, keeping the results in a result set, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. To read the result set, method [pg_fetch_row()](https://secure.php.net/manual/en/function.pg-fetch-row.php) is called in a loop, once per row, and the row data is retrieved in an array `$row`, with one data value per column in each array position. To free the result set, method [pg_free_result()](https://secure.php.net/manual/en/function.pg-free-result.php) is called. Then it calls method [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) to close the connection.
+Use the following code to connect and read the data using a **SELECT** SQL statement.
-Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
+The code calls method [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) to connect to Azure Database for PostgreSQL. Then it calls method [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) to run the SELECT command, keeping the results in a result set, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. To read the result set, method [pg_fetch_row()](https://secure.php.net/manual/en/function.pg-fetch-row.php) is called in a loop, once per row, and the row data is retrieved in an array `$row`, with one data value per column in each array position. To free the result set, method [pg_free_result()](https://secure.php.net/manual/en/function.pg-free-result.php) is called. Then it calls method [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) to close the connection.
+
+Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
```php <?php
Replace the `$host`, `$database`, `$user`, and `$password` parameters with your
$database = "mypgsqldb"; $user = "mylogin@mydemoserver"; $password = "<server_admin_password>";
-
+ // Initialize connection object. $connection = pg_connect("host=$host dbname=$database user=$user password=$password") or die("Failed to create connection to database: ". pg_last_error(). "<br/>");
Replace the `$host`, `$database`, `$user`, and `$password` parameters with your
```

## Update data

Use the following code to connect and update the data using an **UPDATE** SQL statement.

The code calls method [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) to connect to Azure Database for PostgreSQL. Then it calls method [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) to run a command, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. Then it calls method [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) to close the connection.
-Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
+Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
```php <?php
Replace the `$host`, `$database`, `$user`, and `$password` parameters with your
?>
```

## Delete data
-Use the following code to connect and read the data using a **DELETE** SQL statement.
- The code call method [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) to connect to Azure Database for PostgreSQL. Then it calls method [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) to run a command, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. Then it calls method [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) to close the connection.
+Use the following code to connect and delete the data using a **DELETE** SQL statement.
+
+The code calls method [pg_connect()](https://secure.php.net/manual/en/function.pg-connect.php) to connect to Azure Database for PostgreSQL. Then it calls method [pg_query()](https://secure.php.net/manual/en/function.pg-query.php) to run a command, and [pg_last_error()](https://secure.php.net/manual/en/function.pg-last-error.php) to check the details if an error occurred. Then it calls method [pg_close()](https://secure.php.net/manual/en/function.pg-close.php) to close the connection.
-Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
+Replace the `$host`, `$database`, `$user`, and `$password` parameters with your own values.
```php <?php
az group delete \
```

## Next steps

> [!div class="nextstepaction"]
> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-python.md
ms.devlang: python Previously updated : 10/28/2020 Last updated : 06/24/2022 # Quickstart: Use Python to connect and query data in Azure Database for PostgreSQL - Single Server
In this quickstart, you will learn how to connect to the database on Azure Datab
> [!TIP]
> If you are looking to build a Django application with PostgreSQL, check out the [Deploy a Django web app with PostgreSQL](../../app-service/tutorial-python-postgresql-app.md) tutorial.

## Prerequisites

For this quickstart you need:

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
For this quickstart you need:
- Install [psycopg2](https://pypi.python.org/pypi/psycopg2-binary/) using `pip install psycopg2-binary` in a terminal or command prompt window. For more information, see [how to install `psycopg2`](https://www.psycopg.org/docs/install.html).

## Get database connection information

Connecting to an Azure Database for PostgreSQL database requires the fully qualified server name and login credentials. You can get this information from the Azure portal.

1. In the [Azure portal](https://portal.azure.com/), search for and select your Azure Database for PostgreSQL server name.
Connecting to an Azure Database for PostgreSQL database requires the fully quali
> - `<database-name>` a default database named *postgres* was automatically created when you created your server. You can rename that database or [create a new database](https://www.postgresql.org/docs/current/sql-createdatabase.html) by using SQL commands.

## Step 1: Connect and insert data

The following code example connects to your Azure Database for PostgreSQL database using

- [psycopg2.connect](https://www.psycopg.org/docs/connection.html) function, and loads data with a SQL **INSERT** statement.
- [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) function executes the SQL query against the database.
import psycopg2
# Update connection string information
host = "<server-name>"
dbname = "<database-name>"
user = "<admin-username>"
sslmode = "require"

# Construct connection string
conn_string = "host={0} user={1} dbname={2} password={3} sslmode={4}".format(host, user, dbname, password, sslmode)
conn = psycopg2.connect(conn_string)
print("Connection established")
cursor = conn.cursor()

# Drop previous table of same name if one exists
cursor.execute("DROP TABLE IF EXISTS inventory;")
print("Finished dropping table (if existed)")

# Create a table
cursor.execute("CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);")
print("Finished creating table")

# Insert some data into the table
cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("banana", 150))
cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("orange", 154))
cursor.execute("INSERT INTO inventory (name, quantity) VALUES (%s, %s);", ("apple", 100))
print("Inserted 3 rows of data")

# Clean up
conn.commit()
cursor.close()
conn.close()
When the code runs successfully, it produces the following output:
:::image type="content" source="media/connect-python/2-example-python-output.png" alt-text="Command-line output":::

## Step 2: Read data

The following code example connects to your Azure Database for PostgreSQL database and uses

- [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **SELECT** statement to read data.
- [cursor.fetchall()](https://www.psycopg.org/docs/cursor.html#cursor.fetchall) accepts a query and returns a result set to iterate over by using
The following code example connects to your Azure Database for PostgreSQL databa
# Fetch all rows from table
cursor.execute("SELECT * FROM inventory;")
rows = cursor.fetchall()

# Print all rows
for row in rows:
    print("Data row = (%s, %s, %s)" %(str(row[0]), str(row[1]), str(row[2])))
```

## Step 3: Update data

The following code example uses [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **UPDATE** statement to update data.

```Python
# Update a data row in the table
cursor.execute("UPDATE inventory SET quantity = %s WHERE name = %s;", (200, "banana"))
print("Updated 1 row of data")
```

## Step 4: Delete data

The following code example runs [cursor.execute](https://www.psycopg.org/docs/cursor.html#execute) with the SQL **DELETE** statement to delete an inventory item that you previously inserted.

```Python
# Delete data row from table
cursor.execute("DELETE FROM inventory WHERE name = %s;", ("orange",))
print("Deleted 1 row of data")
```

## Clean up resources
az group delete \
```

## Next steps

> [!div class="nextstepaction"]
> [Manage Azure Database for PostgreSQL server using Portal](./how-to-create-manage-server-portal.md)<br/>

> [!div class="nextstepaction"]
> [Manage Azure Database for PostgreSQL server using CLI](./how-to-manage-server-cli.md)<br/>
postgresql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-ruby.md
ms.devlang: ruby Previously updated : 5/6/2019 Last updated : 06/24/2022 # Quickstart: Use Ruby to connect and query data in Azure Database for PostgreSQL - Single Server
Last updated 5/6/2019
This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a [Ruby](https://www.ruby-lang.org) application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using Ruby, and are new to working with Azure Database for PostgreSQL.

## Prerequisites

This quickstart uses the resources created in either of these guides as a starting point:

- [Create DB - Portal](quickstart-create-server-database-portal.md)
- [Create DB - Azure CLI](quickstart-create-server-database-azure-cli.md)
You also need to have installed:
- [Ruby pg](https://rubygems.org/gems/pg/), the PostgreSQL module for Ruby

## Get connection information

Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.

1. Log in to the [Azure portal](https://portal.azure.com/).
-2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**).
-3. Click the server name.
+2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you have created (such as **mydemoserver**).
+3. Select the server name.
4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-ruby/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
> [!NOTE]
> The `@` symbol in the Azure Postgres username has been URL-encoded as `%40` in all the connection strings.

## Connect and create a table

Use the following code to connect and create a table using a **CREATE TABLE** SQL statement, followed by **INSERT INTO** SQL statements to add rows into the table.

The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the DROP, CREATE TABLE, and INSERT INTO commands. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating. See Ruby Pg reference documentation for more information on these classes and methods.

Replace the `host`, `database`, `user`, and `password` strings with your own values.

```ruby
require 'pg'
end
```

## Read data

Use the following code to connect and read the data using a **SELECT** SQL statement.

The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the SELECT command, keeping the results in a result set. The result set collection is iterated over using the `resultSet.each do` loop, keeping the current row values in the `row` variable. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating. See Ruby Pg reference documentation for more information on these classes and methods.
end
```

## Update data

Use the following code to connect and update the data using an **UPDATE** SQL statement.

The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the UPDATE command. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating. See [Ruby Pg reference documentation](https://rubygems.org/gems/pg) for more information on these classes and methods.
ensure
end
```

## Delete data

Use the following code to connect and delete the data using a **DELETE** SQL statement.

The code uses a ```PG::Connection``` object with constructor ```new``` to connect to Azure Database for PostgreSQL. Then it calls method ```exec()``` to run the DELETE command. The code checks for errors using the ```PG::Error``` class. Then it calls method ```close()``` to close the connection before terminating.
postgresql Connect Rust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-rust.md
ms.devlang: rust Previously updated : 05/17/2022 Last updated : 06/24/2022 # Quickstart: Use Rust to interact with Azure Database for PostgreSQL - Single Server
For this quickstart, you need:
- [Git](https://git-scm.com/downloads) installed.

## Get database connection information

Connecting to an Azure Database for PostgreSQL database requires a fully qualified server name and login credentials. You can get this information from the Azure portal.

1. In the [Azure portal](https://portal.azure.com/), search for and select your Azure Database for PostgreSQL server name.
The sample application uses a simple `inventory` table to demonstrate the CRUD (
CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER); ```
-The `drop_create_table` function initially tries to `DROP` the `inventory` table before creating a new one. This makes it easier for learning/experimentation, as you always start with a known (clean) state. The [execute](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.execute) method is used for create and drop operations.
+The `drop_create_table` function initially tries to `DROP` the `inventory` table before creating a new one. This makes it easier for learning/experimentation, as you always start with a known (clean) state. The [execute](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.execute) method is used for create and drop operations.
```rust const CREATE_QUERY: &str =
fn drop_create_table(pg_client: &mut postgres::Client) {
### Insert data
-`insert_data` adds entries to the `inventory` table. It creates a [prepared statement](https://docs.rs/postgres/0.19.0/postgres/struct.Statement.html) with [prepare](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.prepare) function.
-
+`insert_data` adds entries to the `inventory` table. It creates a [prepared statement](https://docs.rs/postgres/0.19.0/postgres/struct.Statement.html) with [prepare](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.prepare) function.
```rust const INSERT_QUERY: &str = "INSERT INTO inventory (name, quantity) VALUES ($1, $2) RETURNING id;";
Finally, a `for` loop is used to add `item-3`, `item-4` and, `item-5` with rando
### Query data
-`query_data` function demonstrates how to retrieve data from the `inventory` table. The [query_one](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.query_one) method is used to get an item by its `id`.
+`query_data` function demonstrates how to retrieve data from the `inventory` table. The [query_one](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.query_one) method is used to get an item by its `id`.
```rust const SELECT_ALL_QUERY: &str = "SELECT * FROM inventory;";
fn query_data(pg_client: &mut postgres::Client) {
} ```
-All rows in the inventory table are fetched using a `select * from` query with the [query](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.query) method. The returned rows are iterated over to extract the value for each column using [get](https://docs.rs/postgres/0.19.0/postgres/row/struct.Row.html#method.get).
+All rows in the inventory table are fetched using a `select * from` query with the [query](https://docs.rs/postgres/0.19.0/postgres/struct.Client.html#method.query) method. The returned rows are iterated over to extract the value for each column using [get](https://docs.rs/postgres/0.19.0/postgres/row/struct.Row.html#method.get).
> [!TIP] > Note how `get` makes it possible to specify the column either by its numeric index in the row, or by its column name.
fn update_data(pg_client: &mut postgres::Client) {
], ) .expect("update failed");
-
+ let quantity: i32 = row.get("quantity"); println!("updated item id {} to quantity = {}", id, quantity); }
fn update_data(pg_client: &mut postgres::Client) {
### Delete data
-Finally, the `delete` function demonstrates how to remove an item from the `inventory` table by its `id`. The `id` is chosen randomly - it's a random integer between `1` to `5` (`5` inclusive) since the `insert_data` function had added `5` rows to start with.
+Finally, the `delete` function demonstrates how to remove an item from the `inventory` table by its `id`. The `id` is chosen randomly - it's a random integer between `1` and `5` (`5` inclusive) since the `insert_data` function had added `5` rows to start with.
> [!TIP] > Note that we use `query` instead of `execute` since we intend to get back the info about the item we just deleted (using [RETURNING clause](https://www.postgresql.org/docs/current/dml-returning.html)).
az group delete \
```

## Next steps

> [!div class="nextstepaction"]
> [Manage Azure Database for PostgreSQL server using Portal](./how-to-create-manage-server-portal.md)<br/>
postgresql How To Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-alert-on-metric.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Single Server
Last updated 5/6/2019
This article shows you how to set up Azure Database for PostgreSQL alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services.
-The alert triggers when the value of a specified metric crosses a threshold you assign. The alert triggers both when the condition is first met, and then afterwards when that condition is no longer being met.
+The alert triggers when the value of a specified metric crosses a threshold you assign. The alert triggers both when the condition is first met, and then afterwards when that condition is no longer being met.
You can configure an alert to do the following actions when it triggers: * Send email notifications to the service administrator and co-administrators.
You can configure and get information about alert rules using:
* [Azure Monitor REST API](/rest/api/monitor/metricalerts)

## Create an alert rule on a metric from the Azure portal

1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for PostgreSQL server you want to monitor.
2. Under the **Monitoring** section of the sidebar, select **Alerts** as shown:
You can configure and get information about alert rules using:
5. Within the **Condition** section, select **Add condition**. 6. Select a metric from the list of signals to be alerted on. In this example, select "Storage percent".
-
+ :::image type="content" source="./media/how-to-alert-on-metric/6-configure-signal-logic.png" alt-text="Select metric"::: 7. Configure the alert logic including the **Condition** (ex. "Greater than"), **Threshold** (ex. 85 percent), **Time Aggregation**, **Period** of time the metric rule must be satisfied before the alert triggers (ex. "Over the last 30 minutes"), and **Frequency**.
-
+ Select **Done** when complete. :::image type="content" source="./media/how-to-alert-on-metric/7-set-threshold-time.png" alt-text="Screenshot that highlights the Alert logic section and the Done button.":::
You can configure and get information about alert rules using:
9. Fill out the "Add action group" form with a name, short name, subscription, and resource group. 10. Configure an **Email/SMS/Push/Voice** action type.
-
+ Choose "Email Azure Resource Manager Role" to select subscription Owners, Contributors, and Readers to receive notifications.
-
+ Optionally, provide a valid URI in the **Webhook** field if you want it called when the alert fires. Select **OK** when completed.
You can configure and get information about alert rules using:
11. Specify an Alert rule name, Description, and Severity.
- :::image type="content" source="./media/how-to-alert-on-metric/11-name-description-severity.png" alt-text="Action group":::
+ :::image type="content" source="./media/how-to-alert-on-metric/11-name-description-severity.png" alt-text="Action group":::
12. Select **Create alert rule** to create the alert. Within a few minutes, the alert is active and triggers as previously described.

## Manage your alerts

Once you have created an alert, you can select it and do the following actions:

* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert.
Once you have created an alert, you can select it and do the following actions:
* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications.

## Next steps

* Learn more about [configuring webhooks in alerts](../../azure-monitor/alerts/alerts-webhooks.md).
-* Get an [overview of metrics collection](../../azure-monitor/data-platform.md) to make sure your service is available and responsive.
+* Get an [overview of metrics collection](../../azure-monitor/data-platform.md) to make sure your service is available and responsive.
postgresql How To Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-cli.md
Previously updated : 8/7/2019 Last updated : 06/24/2022 + # Auto-grow Azure Database for PostgreSQL storage - Single Server using the Azure CLI [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
postgresql How To Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-portal.md
Previously updated : 5/29/2019 Last updated : 06/24/2022 + # Auto grow storage using the Azure portal in Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
This article describes how you can configure an Azure Database for PostgreSQL se
When a server reaches the allocated storage limit, the server is marked as read-only. However, if you enable storage auto grow, the server storage increases to accommodate the growing data. For servers with less than 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB as soon as the free storage is below the greater of 1 GB or 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified [here](./concepts-pricing-tiers.md#storage) apply.

## Prerequisites

To complete this how-to guide, you need:

- An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md)
-## Enable storage auto grow
+## Enable storage auto grow
Follow these steps to set PostgreSQL server storage auto grow: 1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL server.
-2. On the PostgreSQL server page, under **Settings**, click **Pricing tier** to open the pricing tier page.
+2. On the PostgreSQL server page, under **Settings**, select **Pricing tier** to open the pricing tier page.
3. In the **Auto-growth** section, select **Yes** to enable storage auto grow. :::image type="content" source="./media/how-to-auto-grow-storage-portal/3-auto-grow.png" alt-text="Azure Database for PostgreSQL - Settings_Pricing_tier - Auto-growth":::
-4. Click **OK** to save the changes.
+4. Select **OK** to save the changes.
5. A notification will confirm that auto grow was successfully enabled.
postgresql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-powershell.md
Previously updated : 05/17/2022 Last updated : 06/24/2022 + # Auto grow Azure Database for PostgreSQL storage using PowerShell [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
New-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup -Sk
## Next steps > [!div class="nextstepaction"]
-> [How to create and manage read replicas in Azure Database for PostgreSQL using PowerShell](how-to-read-replicas-powershell.md).
+> [How to create and manage read replicas in Azure Database for PostgreSQL using PowerShell](how-to-read-replicas-powershell.md).
postgresql How To Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-privatelink-cli.md
Previously updated : 01/09/2020 Last updated : 06/24/2022
az group create --name myResourceGroup --location westeurope
```

## Create a Virtual Network

Create a Virtual Network with [az network vnet create](/cli/azure/network/vnet). This example creates a default Virtual Network named *myVirtualNetwork* with one subnet named *mySubnet*:

```azurecli-interactive
az network vnet create \
- --name myVirtualNetwork \
- --resource-group myResourceGroup \
- --subnet-name mySubnet
+--name myVirtualNetwork \
+--resource-group myResourceGroup \
+--subnet-name mySubnet
```

## Disable subnet private endpoint policies

Azure deploys resources to a subnet within a virtual network, so you need to create or update the subnet to disable private endpoint [network policies](../../private-link/disable-private-endpoint-network-policy.md). Update a subnet configuration named *mySubnet* with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update):

```azurecli-interactive
az network vnet subnet update \
- --name mySubnet \
- --resource-group myResourceGroup \
- --vnet-name myVirtualNetwork \
- --disable-private-endpoint-network-policies true
+--name mySubnet \
+--resource-group myResourceGroup \
+--vnet-name myVirtualNetwork \
+--disable-private-endpoint-network-policies true
```

## Create the VM

Create a VM with az vm create. When prompted, provide a password to be used as the sign-in credentials for the VM. This example creates a VM named *myVm*:

```azurecli-interactive
az vm create \
az vm create \
--image Win2019Datacenter ```
- Note the public IP address of the VM. You will use this address to connect to the VM from the internet in the next step.
+Note the public IP address of the VM. You will use this address to connect to the VM from the internet in the next step.
## Create an Azure Database for PostgreSQL - Single server
-Create a Azure Database for PostgreSQL with the az postgres server create command. Remember that the name of your PostgreSQL Server must be unique across Azure, so replace the placeholder value with your own unique values that you used above:
+
+Create an Azure Database for PostgreSQL with the az postgres server create command. Remember that the name of your PostgreSQL Server must be unique across Azure, so replace the placeholder value with your own unique values that you used above:
```azurecli-interactive
-# Create a server in the resource group
+# Create a server in the resource group
[!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
az postgres server create \
```

## Create the Private Endpoint
-Create a private endpoint for the PostgreSQL server in your Virtual Network:
+
+Create a private endpoint for the PostgreSQL server in your Virtual Network:
```azurecli-interactive az network private-endpoint create \
az network private-endpoint create \
--private-connection-resource-id $(az resource show -g myResourcegroup -n mydemoserver --resource-type "Microsoft.DBforPostgreSQL/servers" --query "id" -o tsv) \ --group-id postgresqlServer \ --connection-name myConnection
- ```
+```
## Configure the Private DNS Zone
-Create a Private DNS Zone for PostgreSQL server domain and create an association link with the Virtual Network.
+
+Create a Private DNS Zone for PostgreSQL server domain and create an association link with the Virtual Network.
```azurecli-interactive az network private-dns zone create --resource-group myResourceGroup \
az network private-dns link vnet create --resource-group myResourceGroup \
--zone-name "privatelink.postgres.database.azure.com"\ --name MyDNSLink \ --virtual-network myVirtualNetwork \
- --registration-enabled false
+ --registration-enabled false
#Query for the network interface ID
networkInterfaceId=$(az network private-endpoint show --name myPrivateEndpoint --resource-group myResourceGroup --query 'networkInterfaces[0].id' -o tsv)
az resource show --ids $networkInterfaceId --api-version 2019-04-01 -o json
#Create DNS records
az network private-dns record-set a create --name myserver --zone-name privatelink.postgres.database.azure.com --resource-group myResourceGroup
az network private-dns record-set a add-record --record-set-name myserver --zone-name privatelink.postgres.database.azure.com --resource-group myResourceGroup -a <Private IP Address>
```
Connect to the VM *myVm* from the internet as follows:
1. You may receive a certificate warning during the sign-in process. If you receive a certificate warning, select **Yes** or **Continue**.
-1. Once the VM desktop appears, minimize it to go back to your local desktop.
+1. Once the VM desktop appears, minimize it to go back to your local desktop.
## Access the PostgreSQL server privately from the VM 1. In the Remote Desktop of *myVM*, open PowerShell.
-2. Enter `nslookup mydemopostgresserver.privatelink.postgres.database.azure.com`.
+2. Enter `nslookup mydemopostgresserver.privatelink.postgres.database.azure.com`.
You'll receive a message similar to this:
Connect to the VM *myVm* from the internet as follows:
| User name | Enter username as username@servername which is provided during the PostgreSQL server creation. | |Password |Enter a password provided during the PostgreSQL server creation. | |SSL|Select **Required**.|
- ||
5. Select Connect.
Connect to the VM *myVm* from the internet as follows:
8. Close the remote desktop connection to myVm. ## Clean up resources
-When no longer needed, you can use az group delete to remove the resource group and all the resources it has:
+
+When no longer needed, you can use az group delete to remove the resource group and all the resources it has:
```azurecli-interactive
az group delete --name myResourceGroup --yes
```

## Next steps

- Learn more about [What is Azure private endpoint](../../private-link/private-endpoint-overview.md)
postgresql How To Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-privatelink-portal.md
Previously updated : 01/09/2020 Last updated : 06/24/2022 # Create and manage Private Link for Azure Database for PostgreSQL - Single server using Portal
If you don't have an Azure subscription, create a [free account](https://azure.m
> The private link feature is only available for Azure Database for PostgreSQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers.

## Sign in to Azure

Sign in to the [Azure portal](https://portal.azure.com).

## Create an Azure VM
Sign in to the [Azure portal](https://portal.azure.com).
In this section, you will create a virtual network and the subnet to host the VM that is used to access your Private Link resource (a PostgreSQL server in Azure).

### Create the virtual network

In this section, you will create a Virtual Network and the subnet to host the VM that is used to access your Private Link resource.
In this section, you will create a Virtual Network and the subnet to host the VM
| Public inbound ports | Leave the default **None**. | | **SAVE MONEY** | | | Already have a Windows license? | Leave the default **No**. |
- |||
1. Select **Next: Disks**.
In this section, you will create a Virtual Network and the subnet to host the VM
| Public IP | Leave the default **(new) myVm-ip**. | | Public inbound ports | Select **Allow selected ports**. | | Select inbound ports | Select **HTTP** and **RDP**.|
- |||
- 1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
In this section, you will create a Virtual Network and the subnet to host the VM
## Create an Azure Database for PostgreSQL Single server
-In this section, you will create an Azure Database for PostgreSQL server in Azure.
+In this section, you will create an Azure Database for PostgreSQL server in Azure.
1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **Azure Database for PostgreSQL**.
In this section, you will create an Azure Database for PostgreSQL server in Azur
| Location | Select an Azure region where you want to want your PostgreSQL Server to reside. | |Version | Select the database version of the PostgreSQL server that is required.| | Compute + Storage| Select the pricing tier that is needed for the server based on the workload. |
- |||
-
+ 7. Select **OK**. 8. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. 9. When you see the Validation passed message, select **Create**.
-10. When you see the Validation passed message, select Create.
+10. When you see the Validation passed message, select Create.
## Create a private endpoint
-In this section, you will create a PostgreSQL server and add a private endpoint to it.
+In this section, you will create a PostgreSQL server and add a private endpoint to it.
1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Networking** > **Private Link**. 2. In **Private Link Center - Overview**, on the option to **Build a private connection to a service**, select **Start**.
In this section, you will create a PostgreSQL server and add a private endpoint
|**PRIVATE DNS INTEGRATION**|| |Integrate with private DNS zone |Select **Yes**. | |Private DNS Zone |Select *(New)privatelink.postgres.database.azure.com* |
- |||
> [!Note] > Use the predefined private DNS zone for your service or provide your preferred DNS zone name. Refer to the [Azure services DNS zone configuration](../../private-link/private-endpoint-dns.md) for details. 1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-2. When you see the **Validation passed** message, select **Create**.
+2. When you see the **Validation passed** message, select **Create**.
:::image type="content" source="media/concepts-data-access-and-security-private-link/show-postgres-private-link-1.png" alt-text="Private Link created":::
In this section, you will create a PostgreSQL server and add a private endpoint
## Connect to a VM using Remote Desktop (RDP) -
-After you've created **myVm**, connect to it from the internet as follows:
+After you've created **myVm**, connect to it from the internet as follows:
1. In the portal's search bar, enter *myVm*.
After you've created **myVm**, connect to it from the internet as follows:
1. In the Remote Desktop of *myVM*, open PowerShell.
-2. Enter `nslookup mydemopostgresserver.privatelink.postgres.database.azure.com`.
+2. Enter `nslookup mydemopostgresserver.privatelink.postgres.database.azure.com`.
You'll receive a message similar to this: ```azurepowershell
After you've created **myVm**, connect to it from the internet as follows:
| User name | Enter username as username@servername which is provided during the PostgreSQL server creation. | |Password |Enter a password provided during the PostgreSQL server creation. | |SSL|Select **Required**.|
- ||
5. Select Connect.
After you've created **myVm**, connect to it from the internet as follows:
8. Close the remote desktop connection to myVm.

## Clean up resources

When you're done using the private endpoint, PostgreSQL server, and the VM, delete the resource group and all of the resources it contains:

1. Enter *myResourceGroup* in the **Search** box at the top of the portal and select *myResourceGroup* from the search results.
When you're done using the private endpoint, PostgreSQL server, and the VM, dele
In this how-to, you created a VM on a virtual network, an Azure Database for PostgreSQL - Single server, and a private endpoint for private access. You connected to one VM from the internet and securely communicated to the PostgreSQL server using Private Link. To learn more about private endpoints, see [What is Azure private endpoint](../../private-link/private-endpoint-overview.md). <!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
postgresql How To Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-logs-in-portal.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Configure and access Azure Database for PostgreSQL - Single Server logs from the Azure portal
Last updated 5/6/2019
You can configure, list, and download the [Azure Database for PostgreSQL logs](concepts-server-logs.md) from the Azure portal.

## Prerequisites

The steps in this article require that you have an [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md).

## Configure logging
-Configure access to the query logs and error logs.
+
+Configure access to the query logs and error logs.
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Select your Azure Database for PostgreSQL server.
-3. Under the **Monitoring** section in the sidebar, select **Server logs**.
+3. Under the **Monitoring** section in the sidebar, select **Server logs**.
:::image type="content" source="./media/how-to-configure-server-logs-in-portal/1-select-server-logs-configure.png" alt-text="Screenshot of Server logs options":::
-4. To see the server parameters, select **Click here to enable logs and configure log parameters**.
+4. To see the server parameters, select **Click here to enable logs and configure log parameters**.
5. Change the parameters that you need to adjust. All changes you make in this session are highlighted in purple.
- After you have changed the parameters, select **Save**. Or, you can discard your changes.
+ After you have changed the parameters, select **Save**. Or, you can discard your changes.
:::image type="content" source="./media/how-to-configure-server-logs-in-portal/3-save-discard.png" alt-text="Screenshot of Server Parameters options"::: From the **Server Parameters** page, you can return to the list of logs by closing the page. ## View list and download logs
-After logging begins, you can view a list of available logs, and download individual log files.
+
+After logging begins, you can view a list of available logs, and download individual log files.
1. Open the Azure portal.
After logging begins, you can view a list of available logs, and download indivi
:::image type="content" source="./media/how-to-configure-server-logs-in-portal/6-download.png" alt-text="Screenshot of Server logs page, with down-arrow icon highlighted"::: ## Next steps+ - See [Access server logs in CLI](how-to-configure-server-logs-using-cli.md) to learn how to download logs programmatically. - Learn more about [server logs](concepts-server-logs.md) in Azure Database for PostgreSQL. - For more information about the parameter definitions and PostgreSQL logging, see the PostgreSQL documentation on [error reporting and logging](https://www.postgresql.org/docs/current/static/runtime-config-logging.html).-
postgresql How To Configure Server Logs Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-logs-using-cli.md
ms.devlang: azurecli Previously updated : 5/6/2019 Last updated : 06/24/2022 # Configure and access server logs by using Azure CLI [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
-You can download the PostgreSQL server error logs by using the command-line interface (Azure CLI). However, access to transaction logs isn't supported.
+You can download the PostgreSQL server error logs by using the command-line interface (Azure CLI). However, access to transaction logs isn't supported.
## Prerequisites+ To step through this how-to guide, you need: - [Azure Database for PostgreSQL server](quickstart-create-server-database-azure-cli.md) - The [Azure CLI](/cli/azure/install-azure-cli) command-line utility or Azure Cloud Shell in the browser ## Configure logging+ You can configure the server to access query logs and error logs. Error logs can have auto-vacuum, connection, and checkpoint information. 1. Turn on logging. 2. To enable query logging, update **log\_statement** and **log\_min\_duration\_statement**.
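As an illustration of step 2, here is a minimal Azure CLI sketch, assuming a server named `mydemoserver` in resource group `myresourcegroup` (both placeholders); it uses the same `az postgres server configuration set` command covered later in this article.

```azurecli-interactive
# Sketch only: enable statement logging and slow-query logging on a placeholder server.
az postgres server configuration set --resource-group myresourcegroup \
    --server-name mydemoserver --name log_statement --value all

# Log statements that run longer than 10 seconds (value is in milliseconds).
az postgres server configuration set --resource-group myresourcegroup \
    --server-name mydemoserver --name log_min_duration_statement --value 10000
```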
You can configure the server to access query logs and error logs. Error logs can
For more information, see [Customizing server configuration parameters](how-to-configure-server-parameters-using-cli.md). ## List logs+ To list the available log files for your server, run the [az postgres server-logs list](/cli/azure/postgres/server-logs) command. You can list the log files for server **mydemoserver.postgres.database.azure.com** under the resource group **myresourcegroup**. Then direct the list of log files to a text file called **log\_files\_list.txt**.
You can list the log files for server **mydemoserver.postgres.database.azure.com
az postgres server-logs list --resource-group myresourcegroup --server mydemoserver > log_files_list.txt ``` ## Download logs locally from the server
-With the [az postgres server-logs download](/cli/azure/postgres/server-logs) command, you can download individual log files for your server.
+
+With the [az postgres server-logs download](/cli/azure/postgres/server-logs) command, you can download individual log files for your server.
Use the following example to download the specific log file for the server **mydemoserver.postgres.database.azure.com** under the resource group **myresourcegroup** to your local environment. ```azurecli-interactive az postgres server-logs download --name 20170414-mydemoserver-postgresql.log --resource-group myresourcegroup --server mydemoserver ``` ## Next steps+ - To learn more about server logs, see [Server logs in Azure Database for PostgreSQL](concepts-server-logs.md). - For more information about server parameters, see [Customize server configuration parameters using Azure CLI](how-to-configure-server-parameters-using-cli.md).
postgresql How To Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-cli.md
ms.devlang: azurecli Previously updated : 06/19/2019 Last updated : 06/24/2022 + # Customize server configuration parameters for Azure Database for PostgreSQL - Single Server using Azure CLI [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
-You can list, show, and update configuration parameters for an Azure PostgreSQL server using the Command Line Interface (Azure CLI). A subset of engine configurations is exposed at server-level and can be modified.
+You can list, show, and update configuration parameters for an Azure PostgreSQL server using the Command Line Interface (Azure CLI). A subset of engine configurations is exposed at server-level and can be modified.
## Prerequisites+ To step through this how-to guide, you need: - Create an Azure Database for PostgreSQL server and database by following [Create an Azure Database for PostgreSQL](quickstart-create-server-database-azure-cli.md) - Install [Azure CLI](/cli/azure/install-azure-cli) command-line interface on your machine or use the [Azure Cloud Shell](../../cloud-shell/overview.md) in the Azure portal using your browser. ## List server configuration parameters for Azure Database for PostgreSQL server+ To list all modifiable parameters in a server and their values, run the [az postgres server configuration list](/cli/azure/postgres/server/configuration) command. You can list the server configuration parameters for the server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup**.
You can list the server configuration parameters for the server **mydemoserver.p
az postgres server configuration list --resource-group myresourcegroup --server mydemoserver ``` ## Show server configuration parameter details+ To show details about a particular configuration parameter for a server, run the [az postgres server configuration show](/cli/azure/postgres/server/configuration) command. This example shows details of the **log\_min\_messages** server configuration parameter for server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup.**
This example shows details of the **log\_min\_messages** server configuration pa
az postgres server configuration show --name log_min_messages --resource-group myresourcegroup --server mydemoserver ``` ## Modify server configuration parameter value
-You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the PostgreSQL server engine. To update the configuration, use the [az postgres server configuration set](/cli/azure/postgres/server/configuration) command.
+
+You can also modify the value of a certain server configuration parameter, which updates the underlying configuration value for the PostgreSQL server engine. To update the configuration, use the [az postgres server configuration set](/cli/azure/postgres/server/configuration) command.
To update the **log\_min\_messages** server configuration parameter of server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup.** ```azurecli-interactive
az postgres server configuration set --name log_min_messages --resource-group my
This command resets the **log\_min\_messages** configuration to the default value **WARNING**. For more information on server configuration and permissible values, see PostgreSQL documentation on [Server Configuration](https://www.postgresql.org/docs/9.6/static/runtime-config.html). ## Next steps+ - [Learn how to restart a server](how-to-restart-server-cli.md) - To configure and access server logs, see [Server Logs in Azure Database for PostgreSQL](concepts-server-logs.md)
postgresql How To Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-portal.md
Previously updated : 02/28/2018 Last updated : 06/24/2022
-# Configure server parameters in Azure Database for PostgreSQL - Single Server via the Azure portal
+# Configure server parameters in Azure Database for PostgreSQL - Single Server via the Azure portal
[!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)] You can list, show, and update configuration parameters for an Azure Database for PostgreSQL server through the Azure portal. ## Prerequisites+ To step through this how-to guide you need: - [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md) ## Viewing and editing parameters+ 1. Open the [Azure portal](https://portal.azure.com). 2. Select your Azure Database for PostgreSQL server.
To step through this how-to guide you need:
:::image type="content" source="./media/how-to-configure-server-parameters-in-portal/7-reset-to-default-button.png" alt-text="Reset all to default"::: ## Next steps+ Learn about: - [Overview of server parameters in Azure Database for PostgreSQL](concepts-servers.md) - [Configuring parameters using the Azure CLI](how-to-configure-server-parameters-using-cli.md)
postgresql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-powershell.md
ms.devlang: azurepowershell Previously updated : 06/08/2020 Last updated : 06/24/2022 # Customize Azure Database for PostgreSQL server parameters using PowerShell
Update-AzPostgreSqlConfiguration -Name slow_query_log -ResourceGroupName myresou
## Next steps > [!div class="nextstepaction"]
-> [Auto grow storage in Azure Database for PostgreSQL server using PowerShell](how-to-auto-grow-storage-powershell.md).
+> [Auto grow storage in Azure Database for PostgreSQL server using PowerShell](how-to-auto-grow-storage-powershell.md).
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-sign-in-azure-ad-authentication.md
Previously updated : 05/26/2021 Last updated : 06/24/2022 # Use Azure Active Directory for authentication with PostgreSQL
To set the Azure AD administrator (you can use a user or a group), please follow
> When setting the administrator, a new user is added to the Azure Database for PostgreSQL server with full administrator permissions. > The Azure AD Admin user in Azure Database for PostgreSQL will have the role `azure_ad_admin`. > Only one Azure AD admin can be created per PostgreSQL server and selection of another one will overwrite the existing Azure AD admin configured for the server.
-> You can specify an Azure AD group instead of an individual user to have multiple administrators.
+> You can specify an Azure AD group instead of an individual user to have multiple administrators.
Only one Azure AD admin can be created per PostgreSQL server and selection of another one will overwrite the existing Azure AD admin configured for the server. You can specify an Azure AD group instead of an individual user to have multiple administrators. Note that you will then sign in with the group name for administration purposes.
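If you would rather script this step than use the portal, the following is a hedged Azure CLI sketch; the group display name, object ID, resource group, and server name are all placeholders, and it assumes the `az postgres server ad-admin create` command for Single Server.

```azurecli-interactive
# Sketch only: set an Azure AD group (or user) as the server's Azure AD admin.
# All names and the object ID below are placeholders.
az postgres server ad-admin create \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --display-name "DBAdmins" \
    --object-id 11111111-2222-3333-4444-555555555555
```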
After authentication is successful, Azure AD will return an access token:
The token is a Base 64 string that encodes all the information about the authenticated user, and which is targeted to the Azure Database for PostgreSQL service. - ### Step 3: Use token as password for logging in with client psql When connecting you need to use the access token as the PostgreSQL user password.
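As a minimal bash sketch of this step (assuming you are signed in with the Azure CLI and substituting your own server and user names), you can fetch the token and hand it to psql through the `PGPASSWORD` environment variable:

```azurecli-interactive
# Sketch only: get an Azure AD access token for Azure Database for PostgreSQL
# and use it as the psql password. Server and user names are placeholders.
export PGPASSWORD=$(az account get-access-token \
    --resource https://ossrdbms-aad.database.windows.net \
    --query accessToken --output tsv)

psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=user@tenant.onmicrosoft.com@mydemoserver sslmode=require"
```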
psql "host=mydb.postgres... user=user@tenant.onmicrosoft.com@mydb dbname=postgre
To connect using Azure AD token with pgAdmin you need to follow the next steps: 1. Uncheck the connect now option at server creation. 2. Enter your server details in the connection tab and save.
-3. From the browser menu, click connect to the Azure Database for PostgreSQL server
+3. From the browser menu, select connect to the Azure Database for PostgreSQL server.
4. Enter the AD token password when prompted. - Important considerations when connecting: * `user@tenant.onmicrosoft.com` is the name of the Azure AD user
Important considerations when connecting as a group member:
* When connecting as a group, use only the group name (e.g. GroupName@mydb) and not the alias of a group member. * If the name contains spaces, use \ before each space to escape it. * The access token validity is anywhere between 5 minutes to 60 minutes. We recommend you get the access token just before initiating the login to Azure Database for PostgreSQL.
-
-You are now authenticated to your PostgreSQL server using Azure AD authentication.
+You are now authenticated to your PostgreSQL server using Azure AD authentication.
## Creating Azure AD users in Azure Database for PostgreSQL
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-connect-query-guide.md
Previously updated : 09/21/2020 Last updated : 06/24/2022 # Connect and query overview for Azure database for PostgreSQL- Single Server
postgresql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-connect-with-managed-identity.md
Previously updated : 05/19/2020 Last updated : 06/24/2022 # Connect with Managed Identity to Azure Database for PostgreSQL
postgresql How To Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-connection-string-powershell.md
Previously updated : 8/6/2020 Last updated : 06/24/2022 # How to generate an Azure Database for PostgreSQL connection string with PowerShell
postgresql How To Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-create-manage-server-portal.md
Previously updated : 11/20/2019 Last updated : 06/24/2022 # Manage an Azure Database for PostgreSQL server using the Azure portal
You can change the administrator role's password using the Azure portal.
## Delete a server
-You can delete your server if you no longer need it.
+You can delete your server if you no longer need it.
1. Select your server in the Azure portal. In the **Overview** window select **Delete**.
postgresql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-create-users.md
Previously updated : 09/22/2019 Last updated : 06/24/2022 # Create users in Azure Database for PostgreSQL - Single Server
The server admin user account can be used to create additional users and grant t
2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql. If you are unsure of how to connect, see [the quickstart](./quickstart-create-server-database-portal.md)
-3. Edit and run the following SQL code. Replace your new user name for the placeholder value <new_user>, and replace the placeholder password with your own strong password.
+3. Edit and run the following SQL code. Replace the placeholder value <new_user> with your new user name, and replace the placeholder password with your own strong password.
```sql CREATE ROLE <new_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>';
The server admin user account can be used to create additional users and grant t
```sql CREATE DATABASE <newdb>;
-
+ CREATE ROLE <db_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB NOCREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>';
-
+ GRANT CONNECT ON DATABASE <newdb> TO <db_user>; ```
The server admin user account can be used to create additional users and grant t
If a user creates a table "role," the table belongs to that user. If another user needs access to the table, you must grant privileges to the other user on the table level.
- For example:
+ For example:
```sql GRANT SELECT ON ALL TABLES IN SCHEMA <schema_name> TO <db_user>;
postgresql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-cli.md
Previously updated : 03/30/2020 Last updated : 06/24/2022
Use one of the pre-created Azure Resource Manager templates to provision the ser
This Azure Resource Manager template creates an Azure Database for PostgreSQL Single server and uses the **KeyVault** and **Key** passed as parameters to enable data encryption on the server. ### For an existing server+ Additionally, you can use Azure Resource Manager templates to enable data encryption on your existing Azure Database for PostgreSQL Single servers. * Pass the Resource ID of the Azure Key Vault key that you copied earlier under the `Uri` property in the properties object.
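If you prefer a direct CLI call over a template for an existing server, here is a hedged sketch; it assumes the `az postgres server key create` command and uses placeholder vault, key, and server names, so verify the exact key identifier URL before running anything like it.

```azurecli-interactive
# Sketch only: register a Key Vault key (by its full key identifier URL) with an
# existing server to enable customer-managed-key data encryption. Names are placeholders.
az postgres server key create \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --kid "https://mykeyvault.vault.azure.net/keys/mykey/0123456789abcdef0123456789abcdef"
```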
postgresql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-portal.md
Previously updated : 01/13/2020 Last updated : 06/24/2022
Learn how to use the Azure portal to set up and manage data encryption for your
:::image type="content" source="media/concepts-data-access-and-security-data-encryption/show-access-policy-overview.png" alt-text="Screenshot of Key Vault, with Access policies and Add Access Policy highlighted":::
-2. Select **Key permissions**, and select **Get**, **Wrap**, **Unwrap**, and the **Principal**, which is the name of the PostgreSQL server. If your server principal can't be found in the list of existing principals, you need to register it. You're prompted to register your server principal when you attempt to set up data encryption for the first time, and it fails.
+2. Select **Key permissions**, and select **Get**, **Wrap**, **Unwrap**, and the **Principal**, which is the name of the PostgreSQL server. If your server principal can't be found in the list of existing principals, you need to register it. You're prompted to register your server principal when you attempt to set up data encryption for the first time, and it fails.
:::image type="content" source="media/concepts-data-access-and-security-data-encryption/access-policy-wrap-unwrap.png" alt-text="Access policy overview":::
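The same access policy can be granted from the command line; the sketch below is illustrative only, with a placeholder vault name and a placeholder object ID for the server's principal.

```azurecli-interactive
# Sketch only: grant the PostgreSQL server's principal the Get, Wrap, and Unwrap
# key permissions on a placeholder Key Vault.
az keyvault set-policy --name mykeyvault \
    --object-id 11111111-2222-3333-4444-555555555555 \
    --key-permissions get wrapKey unwrapKey
```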
postgresql How To Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-troubleshoot.md
Previously updated : 02/13/2020 Last updated : 06/24/2022 # Troubleshoot data encryption in Azure Database for PostgreSQL - Single Server
postgresql How To Data Encryption Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-validation.md
Previously updated : 04/28/2020 Last updated : 06/24/2022 # Validating data encryption for Azure Database for PostgreSQL
This article helps you validate that data encryption using customer managed key
* In the Azure portal, navigate to the **Azure Key Vault** -> **Keys** * Select the key used for server encryption. * Set the status of the key **Enabled** to **No**.
-
+ After some time (**~15 min**), the Azure Database for PostgreSQL server **Status** should be **Inaccessible**. Any I/O operation done against the server will fail, which validates that the server is indeed encrypted with the customer's key and the key is currently not valid.
-
- In order to make the server **Available** against, you can revalidate the key.
-
+
+ In order to make the server **Available** again, you can revalidate the key.
+ * Set the status of the key in the Key Vault to **Yes**. * On the server **Data Encryption**, select **Revalidate key**. * After the revalidation of the key is successful, the server **Status** changes to **Available**
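If you would like to script the key toggle used in this validation instead of flipping it in the portal, a hedged sketch with placeholder vault and key names follows; the revalidation itself still happens from the server's **Data Encryption** page.

```azurecli-interactive
# Sketch only: disable the customer-managed key (server should become Inaccessible),
# then re-enable it before revalidating. Vault and key names are placeholders.
az keyvault key set-attributes --vault-name mykeyvault --name mykey --enabled false

# After validating, re-enable the key so the server can be revalidated:
az keyvault key set-attributes --vault-name mykeyvault --name mykey --enabled true
```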
This article helps you validate that data encryption using customer managed key
## Next steps
-To learn more about data encryption, see [Azure Database for PostgreSQL Single server data encryption with customer-managed key](concepts-data-encryption-postgresql.md).
+To learn more about data encryption, see [Azure Database for PostgreSQL Single server data encryption with customer-managed key](concepts-data-encryption-postgresql.md).
postgresql How To Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-deny-public-network-access.md
Previously updated : 03/10/2020 Last updated : 06/24/2022 # Deny Public Network Access in Azure Database for PostgreSQL Single server using Azure portal
Follow these steps to set PostgreSQL Single server Deny Public Network Access:
1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL Single server.
-1. On the PostgreSQL Single server page, under **Settings**, click **Connection security** to open the connection security configuration page.
+1. On the PostgreSQL Single server page, under **Settings**, select **Connection security** to open the connection security configuration page.
1. In **Deny Public Network Access**, select **Yes** to enable deny public access for your PostgreSQL Single server. :::image type="content" source="./media/how-to-deny-public-network-access/deny-public-network-access.PNG" alt-text="Azure Database for PostgreSQL Single server Deny network access":::
-1. Click **Save** to save the changes.
+1. Select **Save** to save the changes.
1. A notification will confirm that connection security setting was successfully enabled.
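The same setting can also be applied from the command line; this is a hedged sketch with placeholder names rather than the article's prescribed path.

```azurecli-interactive
# Sketch only: deny public network access on a placeholder Single Server.
az postgres server update --resource-group myresourcegroup \
    --name mydemoserver --public-network-access Disabled
```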
postgresql How To Deploy Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-deploy-github-action.md
Previously updated : 10/12/2020 Last updated : 06/24/2022 # Quickstart: Use GitHub Actions to connect to Azure PostgreSQL [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
-**APPLIES TO:** :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for PostgreSQL - Single Server :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for PostgreSQL - Flexible Server
+**APPLIES TO:** :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for PostgreSQL - Single Server :::image type="icon" source="./media/applies-to/yes.png" border="false":::Azure Database for PostgreSQL - Flexible Server
Get started with [GitHub Actions](https://docs.github.com/en/actions) by using a workflow to deploy database updates to [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/).
You will use the connection string as a GitHub secret.
1. Paste the connection string value into the secret's value field. Give the secret the name `AZURE_POSTGRESQL_CONNECTION_STRING`. - ## Add your workflow 1. Go to **Actions** for your GitHub repository.
postgresql How To Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-double-encryption.md
Previously updated : 03/14/2021 Last updated : 06/24/2022 # Infrastructure double encryption for Azure Database for PostgreSQL
az postgres server create --resource-group myresourcegroup --name mydemoserver
## Next steps To learn more about data encryption, see [Azure Database for PostgreSQL data Infrastructure double encryption](concepts-Infrastructure-double-encryption.md).-
postgresql How To Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-firewall-using-portal.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Create and manage firewall rules for Azure Database for PostgreSQL - Single Server using the Azure portal
Server-level firewall rules can be used to manage access to an Azure Database fo
Virtual Network (VNet) rules can also be used to secure access to your server. Learn more about [creating and managing Virtual Network service endpoints and rules using the Azure portal](how-to-manage-vnet-using-portal.md). ## Prerequisites+ To step through this how-to guide, you need: - A server [Create an Azure Database for PostgreSQL](quickstart-create-server-database-portal.md) ## Create a server-level firewall rule in the Azure portal
-1. On the PostgreSQL server page, under Settings heading, click **Connection security** to open the Connection security page for the Azure Database for PostgreSQL.
- :::image type="content" source="./media/how-to-manage-firewall-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection Security":::
+1. On the PostgreSQL server page, under Settings heading, select **Connection security** to open the Connection security page for the Azure Database for PostgreSQL.
+
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/1-connection-security.png" alt-text="Azure portal - select Connection Security":::
-2. Click **Add client IP** on the toolbar. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system.
+2. Select **Add client IP** on the toolbar. This automatically creates a firewall rule with the public IP address of your computer, as perceived by the Azure system.
- :::image type="content" source="./media/how-to-manage-firewall-using-portal/2-add-my-ip.png" alt-text="Azure portal - click Add My IP":::
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/2-add-my-ip.png" alt-text="Azure portal - select Add My IP":::
3. Verify your IP address before saving the configuration. In some situations, the IP address observed by Azure portal differs from the IP address used when accessing the internet and Azure servers. Therefore, you may need to change the Start IP and End IP to make the rule function as expected. Use a search engine or other online tool to check your own IP address. For example, search for "what is my IP."
To step through this how-to guide, you need:
:::image type="content" source="./media/how-to-manage-firewall-using-portal/4-specify-addresses.png" alt-text="Azure portal - firewall rules":::
-5. Click **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules was successful.
+5. Select **Save** on the toolbar to save this server-level firewall rule. Wait for the confirmation that the update to the firewall rules was successful.
- :::image type="content" source="./media/how-to-manage-firewall-using-portal/5-save-firewall-rule.png" alt-text="Azure portal - click Save":::
+ :::image type="content" source="./media/how-to-manage-firewall-using-portal/5-save-firewall-rule.png" alt-text="Azure portal - select Save":::
## Connecting from Azure+ To allow applications from Azure to connect to your Azure Database for PostgreSQL server, Azure connections must be enabled. For example, to host an Azure Web Apps application, or an application that runs in an Azure VM, or to connect from an Azure Data Factory data management gateway. The resources do not need to be in the same Virtual Network (VNet) or Resource Group for the firewall rule to enable those connections. When an application from Azure attempts to connect to your database server, the firewall verifies that Azure connections are allowed. There are a couple of methods to enable these types of connections. A firewall setting with starting and ending address equal to 0.0.0.0 indicates these connections are allowed. Alternatively, you can set the **Allow access to Azure services** option to **ON** in the portal from the **Connection security** pane and hit **Save**. If the connection attempt is not allowed, the request does not reach the Azure Database for PostgreSQL server. > [!IMPORTANT] > This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
+>
## Manage existing server-level firewall rules through the Azure portal+ Repeat the steps to manage the firewall rules.
-* To add the current computer, click the button to + **Add My IP**. Click **Save** to save the changes.
-* To add additional IP addresses, type in the Rule Name, Start IP Address, and End IP Address. Click **Save** to save the changes.
-* To modify an existing rule, click any of the fields in the rule and modify. Click **Save** to save the changes.
-* To delete an existing rule, click the ellipsis […] and click **Delete** to remove the rule. Click **Save** to save the changes.
+* To add the current computer, select the button to + **Add My IP**. Select **Save** to save the changes.
+* To add additional IP addresses, type in the Rule Name, Start IP Address, and End IP Address. Select **Save** to save the changes.
+* To modify an existing rule, select any of the fields in the rule and modify. Select **Save** to save the changes.
+* To delete an existing rule, select the ellipsis […] and select **Delete** to remove the rule. Select **Save** to save the changes.
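For comparison with the portal steps above, here is a hedged Azure CLI sketch of the same rule management; the rule name, server, resource group, and IP addresses are placeholders, and the Next steps link covers the full CLI walkthrough.

```azurecli-interactive
# Sketch only: create, list, and delete a server-level firewall rule on a placeholder server.
az postgres server firewall-rule create --resource-group myresourcegroup \
    --server-name mydemoserver --name AllowMyIP \
    --start-ip-address 203.0.113.5 --end-ip-address 203.0.113.5

az postgres server firewall-rule list --resource-group myresourcegroup \
    --server-name mydemoserver

az postgres server firewall-rule delete --resource-group myresourcegroup \
    --server-name mydemoserver --name AllowMyIP
```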
## Next steps+ - Similarly, you can script to [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule). - Further secure access to your server by [creating and managing Virtual Network service endpoints and rules using the Azure portal](how-to-manage-vnet-using-portal.md). - For help in connecting to an Azure Database for PostgreSQL server, see [Connection libraries for Azure Database for PostgreSQL](concepts-connection-libraries.md).
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-server-cli.md
Previously updated : 9/22/2020 Last updated : 06/24/2022 # Manage an Azure Database for PostgreSQL Single server using the Azure CLI
storage-size | 6144 | The storage capacity of the server (unit is megabytes). Mi
> - Storage can be scaled up (however, you cannot scale storage down) > - Scaling up from Basic to General purpose or Memory optimized pricing tier is not supported. You can manually scale up with either [using a bash script](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/upgrade-from-basic-to-general-purpose-or-memory-optimized-tiers/ba-p/830404) or [using PostgreSQL Workbench](https://techcommunity.microsoft.com/t5/azure-database-support-blog/how-to-scale-up-azure-database-for-mysql-from-basic-tier-to/ba-p/369134) - ## Manage PostgreSQL databases on a server.+ You can use any of these commands to create, delete, list, and view database properties of a database on your server | Cmdlet | Usage| Description |
You can use any of these commands to create, delete, list, and view database pro
|[az postgres db show](/cli/azure/sql/db#az-mysql-db-show)|```az postgres db show -g myresourcegroup -s mydemoserver -n mydatabasename```|Shows more details of the database| ## Update admin password+ You can change the administrator role's password with this command ```azurecli-interactive az postgres server update --resource-group myresourcegroup --name mydemoserver --admin-password <new-password>
az postgres server update --resource-group myresourcegroup --name mydemoserver -
> Password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters. ## Delete a server+ If you would just like to delete the PostgreSQL Single server, you can run [az postgres server delete](/cli/azure/mysql/server#az-mysql-server-delete) command. ```azurecli-interactive
az postgres server delete --resource-group myresourcegroup --name mydemoserver
``` ## Next steps+ - [Restart a server](how-to-restart-server-cli.md) - [Restore a server in a bad state](how-to-restore-server-cli.md) - [Monitor and tune the server](concepts-monitoring.md)
postgresql How To Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-vnet-using-cli.md
ms.devlang: azurecli Previously updated : 01/26/2022 Last updated : 06/24/2022 # Create and manage VNet service endpoints for Azure Database for PostgreSQL - Single Server using Azure CLI
postgresql How To Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-vnet-using-portal.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 + # Create and manage VNet service endpoints and VNet rules in Azure Database for PostgreSQL - Single Server by using the Azure portal [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
Virtual Network (VNet) services endpoints and rules extend the private address s
> Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. > In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for PostgreSQL server. - ## Create a VNet rule and enable service endpoints in the Azure portal
-1. On the PostgreSQL server page, under the Settings heading, click **Connection Security** to open the Connection Security pane for Azure Database for PostgreSQL.
+1. On the PostgreSQL server page, under the Settings heading, select **Connection Security** to open the Connection Security pane for Azure Database for PostgreSQL.
2. Ensure that the Allow access to Azure services control is set to **OFF**. > [!Important] > If you leave the control set to ON, your Azure PostgreSQL Database server accepts communication from any subnet. Leaving the control set to ON might be excessive access from a security point of view. The Microsoft Azure Virtual Network service endpoint feature, in coordination with the virtual network rule feature of Azure Database for PostgreSQL, together can reduce your security surface area.
-3. Next, click on **+ Adding existing virtual network**. If you do not have an existing VNet you can click **+ Create new virtual network** to create one. See [Quickstart: Create a virtual network using the Azure portal](../../virtual-network/quick-create-portal.md)
+3. Next, select **+ Adding existing virtual network**. If you do not have an existing VNet, you can select **+ Create new virtual network** to create one. See [Quickstart: Create a virtual network using the Azure portal](../../virtual-network/quick-create-portal.md)
- :::image type="content" source="./media/how-to-manage-vnet-using-portal/1-connection-security.png" alt-text="Azure portal - click Connection security":::
+ :::image type="content" source="./media/how-to-manage-vnet-using-portal/1-connection-security.png" alt-text="Azure portal - select Connection security":::
-4. Enter a VNet rule name, select the subscription, Virtual network and Subnet name and then click **Enable**. This automatically enables VNet service endpoints on the subnet using the **Microsoft.SQL** service tag.
+4. Enter a VNet rule name, select the subscription, Virtual network and Subnet name and then select **Enable**. This automatically enables VNet service endpoints on the subnet using the **Microsoft.SQL** service tag.
:::image type="content" source="./media/how-to-manage-vnet-using-portal/2-configure-vnet.png" alt-text="Azure portal - configure VNet"::: The account must have the necessary permissions to create a virtual network and service endpoint. Service endpoints can be configured on virtual networks independently, by a user with write access to the virtual network.
-
+ To secure Azure service resources to a VNet, the user must have permission to "Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/" for the subnets being added. This permission is included in the built-in service administrator roles, by default and can be modified by creating custom roles.
-
+ Learn more about [built-in roles](../../role-based-access-control/built-in-roles.md) and assigning specific permissions to [custom roles](../../role-based-access-control/custom-roles.md).
-
+ VNets and Azure service resources can be in the same or different subscriptions. If the VNet and Azure service resources are in different subscriptions, the resources should be under the same Active Directory (AD) tenant. Ensure that both the subscriptions have the **Microsoft.Sql** resource provider registered. For more information refer [resource-manager-registration][resource-manager-portal] > [!IMPORTANT] > It is highly recommended to read this article about service endpoint configurations and considerations before configuring service endpoints. **Virtual Network service endpoint:** A [Virtual Network service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) is a subnet whose property values include one or more formal Azure service type names. VNet services endpoints use the service type name **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure SQL Database, Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it configures service endpoint traffic for all Azure Database services, including Azure SQL Database, Azure Database for PostgreSQL and Azure Database for MySQL servers on the subnet.
- >
+ >
-5. Once enabled, click **OK** and you will see that VNet service endpoints are enabled along with a VNet rule.
+5. Once enabled, select **OK** and you will see that VNet service endpoints are enabled along with a VNet rule.
:::image type="content" source="./media/how-to-manage-vnet-using-portal/3-vnet-service-endpoints-enabled-vnet-rule-created.png" alt-text="VNet service endpoints enabled and VNet rule created"::: ## Next steps+ - Similarly, you can script to [Enable VNet service endpoints and create a VNET rule for Azure Database for PostgreSQL using Azure CLI](how-to-manage-vnet-using-cli.md). - For help in connecting to an Azure Database for PostgreSQL server, see [Connection libraries for Azure Database for PostgreSQL](./concepts-connection-libraries.md) <!-- Link references, to text, Within this same GitHub repo. -->
-[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
postgresql How To Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-move-regions-portal.md
Previously updated : 06/29/2020 Last updated : 06/24/2022 #Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region
Last updated 06/29/2020
There are various scenarios for moving an existing Azure Database for PostgreSQL server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning.
-You can use an Azure Database for PostgreSQL [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic.
+You can use an Azure Database for PostgreSQL [cross-region read replica](concepts-read-replicas.md#cross-region-replication) to complete the move to another region. To do so, first create a read replica in the target region. Next, stop replication to the read replica server to make it a standalone server that accepts both read and write traffic.
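Although this article uses the portal, the replica-based move can be sketched with the Azure CLI as well; the server names, resource group, and target region below are placeholders, and this is an illustration rather than the article's prescribed steps.

```azurecli-interactive
# Sketch only: create a cross-region read replica in the target region, then stop
# replication to make it a standalone, writable server. Names are placeholders.
az postgres server replica create --name mydemoserver-westus2 \
    --source-server mydemoserver --resource-group myresourcegroup \
    --location westus2

az postgres server replica stop --name mydemoserver-westus2 \
    --resource-group myresourcegroup
```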
> [!NOTE]
-> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../../azure-resource-manager/management/move-resource-group-and-subscription.md) article.
+> This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../../azure-resource-manager/management/move-resource-group-and-subscription.md) article.
## Prerequisites
You can use an Azure Database for PostgreSQL [cross-region read replica](concept
## Prepare to move
-To prepare the source server for replication using the Azure portal, use the following steps:
+To prepare the source server for replication using the Azure portal, use the following steps:
1. Sign into the [Azure portal](https://portal.azure.com/). 1. Select the existing Azure Database for PostgreSQL server that you want to use as the source server. This action opens the **Overview** page.
To stop replication to the replica from the Azure portal, use the following step
1. Select **Replication** from the menu, under **SETTINGS**. 1. Select the replica server. 1. Select **Stop replication**.
-1. Confirm you want to stop replication by clicking **OK**.
+1. Confirm you want to stop replication by selecting **OK**.
## Clean up source server
You may want to delete the source Azure Database for PostgreSQL server. To do so
## Next steps
-In this tutorial, you moved an Azure Database for PostgreSQL server from one region to another by using the Azure portal and then cleaned up the unneeded source resources.
+In this tutorial, you moved an Azure Database for PostgreSQL server from one region to another by using the Azure portal and then cleaned up the unneeded source resources.
- Learn more about [read replicas](concepts-read-replicas.md) - Learn more about [managing read replicas in the Azure portal](how-to-read-replicas-portal.md)-- Learn more about [business continuity](concepts-business-continuity.md) options
+- Learn more about [business continuity](concepts-business-continuity.md) options
postgresql How To Optimize Autovacuum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-autovacuum.md
Previously updated : 07/09/2020 Last updated : 06/24/2022 # Optimize autovacuum on an Azure Database for PostgreSQL - Single Server
postgresql How To Optimize Bulk Inserts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-bulk-inserts.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Optimize bulk inserts and use transient data on an Azure Database for PostgreSQL - Single Server
This article describes how you can optimize bulk insert operations and use trans
If you have workload operations that involve transient data or that insert large datasets in bulk, consider using unlogged tables.
-Unlogged tables is a PostgreSQL feature that can be used effectively to optimize bulk inserts. PostgreSQL uses Write-Ahead Logging (WAL). It provides atomicity and durability, by default. Atomicity, consistency, isolation, and durability make up the ACID properties.
+Unlogged tables are a PostgreSQL feature that can be used effectively to optimize bulk inserts. PostgreSQL uses Write-Ahead Logging (WAL). It provides atomicity and durability, by default. Atomicity, consistency, isolation, and durability make up the ACID properties.
Inserting into an unlogged table means that PostgreSQL does inserts without writing into the transaction log, which itself is an I/O operation. As a result, these tables are considerably faster than ordinary tables. Use the following options to create an unlogged table: - Create a new unlogged table by using the syntax `CREATE UNLOGGED TABLE <tableName>`.-- Convert an existing logged table to an unlogged table by using the syntax `ALTER TABLE <tableName> SET UNLOGGED`.
+- Convert an existing logged table to an unlogged table by using the syntax `ALTER TABLE <tableName> SET UNLOGGED`.
To reverse the process, use the syntax `ALTER TABLE <tableName> SET LOGGED`. ## Unlogged table tradeoff+ Unlogged tables aren't crash-safe. An unlogged table is automatically truncated after a crash or subject to an unclean shutdown. The contents of an unlogged table also aren't replicated to standby servers. Any indexes created on an unlogged table are automatically unlogged as well. After the insert operation completes, convert the table to logged so that the insert is durable. Some customer workloads have experienced approximately a 15 percent to 20 percent performance improvement when unlogged tables were used. ## Next steps+ Review your workload for uses of transient data and large bulk inserts. See the following PostgreSQL documentation:
-
+ - [Create Table SQL commands](https://www.postgresql.org/docs/current/static/sql-createtable.html)
postgresql How To Optimize Query Stats Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-query-stats-collection.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Optimize query statistics collection on an Azure Database for PostgreSQL - Single Server
postgresql How To Optimize Query Time With Toast Table Storage Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-query-time-with-toast-table-storage-strategy.md
Previously updated : 5/6/2019 Last updated : 06/24/2022
-# Optimize query time with the TOAST table storage strategy
+# Optimize query time with the TOAST table storage strategy
[!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)] This article describes how to optimize query times with the oversized-attribute storage technique (TOAST) table storage strategy. ## TOAST table storage strategies+ Four different strategies are used to store columns on disk that can use TOAST. They represent various combinations between compression and out-of-line storage. The strategy can be set at the level of data type and at the column level. - **Plain** prevents either compression or out-of-line storage. It disables the use of single-byte headers for varlena types. Plain is the only possible strategy for columns of data types that can't use TOAST. - **Extended** allows both compression and out-of-line storage. Extended is the default for most data types that can use TOAST. Compression is attempted first. Out-of-line storage is attempted if the row is still too large.
Four different strategies are used to store columns on disk that can use TOAST.
- **Main** allows compression but not out-of-line storage. Out-of-line storage is still performed for such columns, but only as a last resort. It occurs when there's no other way to make the row small enough to fit on a page. ## Use TOAST table storage strategies+ If your queries access data types that can use TOAST, consider using the Main strategy instead of the default Extended option to reduce query times. Main doesn't rule out out-of-line storage. If your queries don't access data types that can use TOAST, it might be beneficial to keep the Extended option. A larger portion of the rows of the main table fit in the shared buffer cache, which helps performance. If you have a workload that uses a schema with wide tables and high character counts, consider using PostgreSQL TOAST tables. An example customer table had greater than 350 columns with several columns that spanned 255 characters. After it was converted to the TOAST table Main strategy, their benchmark query time reduced from 4203 seconds to 467 seconds. That's an 89 percent improvement. ## Next steps
-Review your workload for the previous characteristics.
+
+Review your workload for the previous characteristics.
Review the following PostgreSQL documentation: -- [Chapter 68, Database physical storage](https://www.postgresql.org/docs/current/storage-toast.html)
+- [Chapter 68, Database physical storage](https://www.postgresql.org/docs/current/storage-toast.html)
postgresql How To Read Replicas Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-read-replicas-cli.md
Previously updated : 12/17/2020 Last updated : 06/24/2022
In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL by using the Azure CLI and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md). ## Azure replication support+ [Read replicas](concepts-read-replicas.md) and [logical decoding](concepts-logical.md) both depend on the Postgres write ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas. To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
To configure the right level of logging, use the Azure replication support param
* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers. * **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting. - > [!NOTE] > When deploying read replicas for persistent heavy write-intensive primary workloads, the replication lag could continue to grow and may never be able to catch-up with the primary. This may also increase storage usage at the primary as the WAL files are not deleted until they are received at the replica. ## Azure CLI+ You can create and manage read replicas using the Azure CLI. ### Prerequisites
You can create and manage read replicas using the Azure CLI.
- [Install Azure CLI 2.0](/cli/azure/install-azure-cli) - An [Azure Database for PostgreSQL server](quickstart-create-server-up-azure-cli.md) to be the primary server. - ### Prepare the primary server 1. Check the primary server's `azure.replication_support` value. It should be at least REPLICA for read replicas to work.
You can create and manage read replicas using the Azure CLI.
az postgres server configuration show --resource-group myresourcegroup --server-name mydemoserver --name azure.replication_support ```
-2. If `azure.replication_support` is not at least REPLICA, set it.
+2. If `azure.replication_support` is not at least REPLICA, set it.
```azurecli-interactive az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name azure.replication_support --value REPLICA
az postgres server replica create --name mydemoserver-replica --source-server my
``` > [!NOTE]
-> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
If you haven't set the `azure.replication_support` parameter to **REPLICA** on a General Purpose or Memory Optimized primary server and restarted the server, you receive an error. Complete those two steps before you create a replica.
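As a hedged recap of those two prerequisite steps plus the replica creation itself (server and replica names are placeholders):

```azurecli-interactive
# Sketch only: set the replication support level, restart so the change takes effect,
# then create the read replica. All names are placeholders.
az postgres server configuration set --resource-group myresourcegroup \
    --server-name mydemoserver --name azure.replication_support --value REPLICA

az postgres server restart --resource-group myresourcegroup --name mydemoserver

az postgres server replica create --name mydemoserver-replica \
    --source-server mydemoserver --resource-group myresourcegroup
```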
If you haven't set the `azure.replication_support` parameter to **REPLICA** on a
> Before a primary server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master. ### List replicas+ You can view the list of replicas of a primary server by using [az postgres server replica list](/cli/azure/postgres/server/replica#az-postgres-server-replica-list) command. ```azurecli-interactive
az postgres server replica list --server-name mydemoserver --resource-group myre
``` ### Stop replication to a replica server+ You can stop replication between a primary server and a read replica by using [az postgres server replica stop](/cli/azure/postgres/server/replica#az-postgres-server-replica-stop) command. After you stop replication to a primary server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
az postgres server replica stop --name mydemoserver-replica --resource-group myr
``` ### Delete a primary or replica server+ To delete a primary or replica server, you use the [az postgres server delete](/cli/azure/postgres/server#az-postgres-server-delete) command. When you delete a primary server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
az postgres server delete --name myserver --resource-group myresourcegroup
``` ## REST API+ You can create and manage read replicas using the [Azure REST API](/rest/api/azure/). ### Prepare the primary server
You can create and manage read replicas using the [Azure REST API](/rest/api/azu
``` ### Create a read replica+ You can create a read replica by using the [create API](/rest/api/postgresql/singleserver/servers/create): ```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
``` > [!NOTE]
-> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+> To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
If you haven't set the `azure.replication_support` parameter to **REPLICA** on a General Purpose or Memory Optimized primary server and restarted the server, you receive an error. Complete those two steps before you create a replica. A replica is created by using the same compute and storage settings as the master. After a replica is created, several settings can be changed independently from the primary server: compute generation, vCores, storage, and back-up retention period. The pricing tier can also be changed independently, except to or from the Basic tier. - > [!IMPORTANT] > Before a primary server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master. ### List replicas+ You can view the list of replicas of a primary server using the [replica list API](/rest/api/postgresql/singleserver/replicas/listbyserver): ```http
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
``` ### Stop replication to a replica server+ You can stop replication between a primary server and a read replica by using the [update API](/rest/api/postgresql/singleserver/servers/update). After you stop replication to a primary server and a read replica, it can't be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups
``` ### Delete a primary or replica server+ To delete a primary or replica server, you use the [delete API](/rest/api/postgresql/singleserver/servers/delete): When you delete a primary server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroup
``` ## Next steps+ * Learn more about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md). * Learn how to [create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md).
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-read-replicas-portal.md
Previously updated : 11/05/2020 Last updated : 06/24/2022 # Create and manage read replicas in Azure Database for PostgreSQL - Single Server from the Azure portal
Last updated 11/05/2020
In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL from the Azure portal. To learn more about read replicas, see the [overview](concepts-read-replicas.md). - ## Prerequisites+ An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md) to be the primary server. ## Azure replication support
To configure the right level of logging, use the Azure replication support param
* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers. * **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting. - > [!NOTE] > When deploying read replicas for persistent heavy write-intensive primary workloads, the replication lag could continue to grow and may never be able to catch-up with the primary. This may also increase storage usage at the primary as the WAL files are not deleted until they are received at the replica.
To configure the right level of logging, use the Azure replication support param
1. In the Azure portal, select an existing Azure Database for PostgreSQL server to use as a master.
-2. From the server's menu, select **Replication**. If Azure replication support is set to at least **Replica**, you can create read replicas.
+2. From the server's menu, select **Replication**. If Azure replication support is set to at least **Replica**, you can create read replicas.
3. If Azure replication support is not set to at least **Replica**, set it. Select **Save**.
To configure the right level of logging, use the Azure replication support param
:::image type="content" source="./media/how-to-read-replicas-portal/success-notifications.png" alt-text="Success notifications"::: 6. Refresh the Azure portal page to update the Replication toolbar. You can now create read replicas for this server.
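If you prefer to script this prerequisite rather than set it in the portal, the same parameter change can be made with the Azure CLI. This is only a sketch with placeholder resource names; the server still needs a restart for the change to take effect.

```azurecli-interactive
# Set the Azure replication support parameter to REPLICA (placeholder resource names)
az postgres server configuration set \
    --resource-group myresourcegroup --server-name mydemoserver \
    --name azure.replication_support --value REPLICA

# Restart the primary server so the new setting takes effect
az postgres server restart --resource-group myresourcegroup --name mydemoserver
```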
-
## Create a read replica+ To create a read replica, follow these steps:
-1. Select an existing Azure Database for PostgreSQL server to use as the primary server.
+1. Select an existing Azure Database for PostgreSQL server to use as the primary server.
2. On the server sidebar, under **SETTINGS**, select **Replication**.
To create a read replica, follow these steps:
:::image type="content" source="./media/how-to-read-replicas-portal/add-replica.png" alt-text="Add a replica":::
-4. Enter a name for the read replica.
+4. Enter a name for the read replica.
:::image type="content" source="./media/how-to-read-replicas-portal/name-replica.png" alt-text="Name the replica":::
To create a read replica, follow these steps:
:::image type="content" source="./media/how-to-read-replicas-portal/location-replica.png" alt-text="Select a location"::: > [!NOTE]
- > To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+ > To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
6. Select **OK** to confirm the creation of the replica. After the read replica is created, it can be viewed from the **Replication** window: :::image type="content" source="./media/how-to-read-replicas-portal/list-replica.png" alt-text="View the new replica in the Replication window":::
-
> [!IMPORTANT] > Review the [considerations section of the Read Replica overview](concepts-read-replicas.md#considerations).
After the read replica is created, it can be viewed from the **Replication** win
> Before a primary server setting is updated to a new value, update the replica setting to an equal or greater value. This action helps the replica keep up with any changes made to the master. ## Stop replication+ You can stop replication between a primary server and a read replica. > [!IMPORTANT]
To stop replication between a primary server and a read replica from the Azure p
3. Select the replica server for which to stop replication. :::image type="content" source="./media/how-to-read-replicas-portal/select-replica.png" alt-text="Select the replica":::
-
+ 4. Select **Stop replication**. :::image type="content" source="./media/how-to-read-replicas-portal/select-stop-replication.png" alt-text="Select stop replication":::
-
+ 5. Select **OK** to stop replication. :::image type="content" source="./media/how-to-read-replicas-portal/confirm-stop-replication.png" alt-text="Confirm to stop replication":::
-
## Delete a primary server
-To delete a primary server, you use the same steps as to delete a standalone Azure Database for PostgreSQL server.
+
+To delete a primary server, you use the same steps as to delete a standalone Azure Database for PostgreSQL server.
> [!IMPORTANT] > When you delete a primary server, replication to all read replicas is stopped. The read replicas become standalone servers that now support both reads and writes.
To delete a server from the Azure portal, follow these steps:
2. Open the **Overview** page for the server. Select **Delete**. :::image type="content" source="./media/how-to-read-replicas-portal/delete-server.png" alt-text="On the server Overview page, select to delete the primary server":::
-
+ 3. Enter the name of the primary server to delete. Select **Delete** to confirm deletion of the primary server. :::image type="content" source="./media/how-to-read-replicas-portal/confirm-delete.png" alt-text="Confirm to delete the primary server":::
-
## Delete a replica+ You can delete a read replica similar to how you delete a primary server. - In the Azure portal, open the **Overview** page for the read replica. Select **Delete**. :::image type="content" source="./media/how-to-read-replicas-portal/delete-replica.png" alt-text="On the replica Overview page, select to delete the replica":::
-
+ You can also delete the read replica from the **Replication** window by following these steps: 1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
You can also delete the read replica from the **Replication** window by followin
3. Select the read replica to delete. :::image type="content" source="./media/how-to-read-replicas-portal/select-replica.png" alt-text="Select the replica to delete":::
-
+ 4. Select **Delete replica**. :::image type="content" source="./media/how-to-read-replicas-portal/select-delete-replica.png" alt-text="Select delete replica":::
-
+ 5. Enter the name of the replica to delete. Select **Delete** to confirm deletion of the replica. :::image type="content" source="./media/how-to-read-replicas-portal/confirm-delete-replica.png" alt-text="Confirm to delete the replica":::
-
## Monitor a replica+ Two metrics are available to monitor read replicas. ### Max Lag Across Replicas metric
-The **Max Lag Across Replicas** metric shows the lag in bytes between the primary server and the most-lagging replica.
+
+The **Max Lag Across Replicas** metric shows the lag in bytes between the primary server and the most-lagging replica.
1. In the Azure portal, select the primary Azure Database for PostgreSQL server. 2. Select **Metrics**. In the **Metrics** window, select **Max Lag Across Replicas**. :::image type="content" source="./media/how-to-read-replicas-portal/select-max-lag.png" alt-text="Monitor the max lag across replicas":::
-
-3. For your **Aggregation**, select **Max**.
+3. For your **Aggregation**, select **Max**.
### Replica Lag metric+ The **Replica Lag** metric shows the time since the last replayed transaction on a replica. If there are no transactions occurring on your master, the metric reflects this time lag. 1. In the Azure portal, select the Azure Database for PostgreSQL read replica.
The **Replica Lag** metric shows the time since the last replayed transaction on
2. Select **Metrics**. In the **Metrics** window, select **Replica Lag**. :::image type="content" source="./media/how-to-read-replicas-portal/select-replica-lag.png" alt-text="Monitor the replica lag":::
-
-3. For your **Aggregation**, select **Max**.
-
+
+3. For your **Aggregation**, select **Max**.
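If you'd rather pull these metrics programmatically, the Azure CLI can query them as well. The sketch below uses placeholder resource IDs, and the metric names `pg_replica_log_delay_in_bytes` and `pg_replica_log_delay_in_seconds` are assumptions; confirm them in the portal's metric picker.

```azurecli-interactive
# Max Lag Across Replicas (bytes), emitted by the primary server
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforPostgreSQL/servers/<primary-server>" \
    --metric pg_replica_log_delay_in_bytes --aggregation Maximum

# Replica Lag (seconds), emitted by each read replica
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforPostgreSQL/servers/<replica-server>" \
    --metric pg_replica_log_delay_in_seconds --aggregation Maximum
```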
+ ## Next steps+ * Learn more about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
-* Learn how to [create and manage read replicas in the Azure CLI and REST API](how-to-read-replicas-cli.md).
+* Learn how to [create and manage read replicas in the Azure CLI and REST API](how-to-read-replicas-cli.md).
postgresql How To Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-read-replicas-powershell.md
Previously updated : 06/08/2020 Last updated : 06/24/2022
postgresql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-cli.md
Previously updated : 5/6/2019 Last updated : 06/24/2022
This topic describes how you can restart an Azure Database for PostgreSQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation. The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
-
> [!NOTE] > The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. You may also want to increase the checkpoint frequency. You can also tune checkpoint-related parameter values, including `max_wal_size`. It is also recommended to run the `CHECKPOINT` command prior to restarting the server. ## Prerequisites+ To complete this how-to guide: - Create an [Azure Database for PostgreSQL server](quickstart-create-server-up-azure-cli.md).
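With the prerequisites in place, the restart itself is a single CLI call; a minimal sketch with placeholder resource names:

```azurecli-interactive
# Restart the server; the operation causes a short outage while it completes
az postgres server restart --resource-group myresourcegroup --name mydemoserver
```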
postgresql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-portal.md
Previously updated : 12/20/2020 Last updated : 06/24/2022 # Restart Azure Database for PostgreSQL - Single Server using the Azure portal
Last updated 12/20/2020
This topic describes how you can restart an Azure Database for PostgreSQL server. You may need to restart your server for maintenance reasons, which causes a short outage as the server performs the operation. The server restart will be blocked if the service is busy. For example, the service may be processing a previously requested operation such as scaling vCores.
-
+ > [!NOTE]
-> The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. You may also want to increase the checkpoint frequency. You can also tune checkpoint related parameter values including `max_wal_size`. It is also recommended to run `CHECKPOINT` command prior to restarting the server to enable faster recovery time. If the `CHECKPOINT` command is not performed prior to restarting the server then it may lead to longer recovery time.
+> The time required to complete a restart depends on the PostgreSQL recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. You may also want to increase the checkpoint frequency and tune checkpoint-related parameter values, including `max_wal_size`. We also recommend running the `CHECKPOINT` command prior to restarting the server to enable a faster recovery; if the `CHECKPOINT` command is not run before the restart, recovery may take longer.
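For example, a manual checkpoint can be issued with psql just before you select **Restart**; this is only a sketch with placeholder server, user, and database values:

```bash
# Force a checkpoint so less WAL needs to be replayed during recovery after the restart
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require" -c "CHECKPOINT;"
```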
## Prerequisites+ To complete this how-to guide, you need: - An [Azure Database for PostgreSQL server](quickstart-create-server-database-portal.md)
The following steps restart the PostgreSQL server:
1. In the [Azure portal](https://portal.azure.com/), select your Azure Database for PostgreSQL server.
-2. In the toolbar of the server's **Overview** page, click **Restart**.
+2. In the toolbar of the server's **Overview** page, select **Restart**.
:::image type="content" source="./media/how-to-restart-server-portal/2-server.png" alt-text="Azure Database for PostgreSQL - Overview - Restart button":::
-3. Click **Yes** to confirm restarting the server.
+3. Select **Yes** to confirm restarting the server.
:::image type="content" source="./media/how-to-restart-server-portal/3-restart-confirm.png" alt-text="Azure Database for PostgreSQL - Restart confirm":::
postgresql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-powershell.md
Previously updated : 05/17/2022 Last updated : 06/24/2022
Restart-AzPostgreSqlServer -Name mydemoserver -ResourceGroupName myresourcegroup
## Next steps > [!div class="nextstepaction"]
-> [Create an Azure Database for PostgreSQL server using PowerShell](quickstart-create-postgresql-server-database-using-azure-powershell.md)
+> [Create an Azure Database for PostgreSQL server using PowerShell](quickstart-create-postgresql-server-database-using-azure-powershell.md)
postgresql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-dropped-server.md
Previously updated : 04/26/2021 Last updated : 06/24/2022 + # Restore a dropped Azure Database for PostgreSQL server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
-When a server is dropped, the database server backup can be retained up to five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. The following recommended steps can be followed to recover a dropped PostgreSQL server resource within 5 days from the time of server deletion. The recommended steps will work only if the backup for the server is still available and not deleted from the system.
+When a server is dropped, the database server backup is retained in the service for up to five days. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow the recommended steps below to recover a dropped PostgreSQL server resource within five days of server deletion. These steps work only if the backup for the server is still available and hasn't been deleted from the system.
## Prerequisites+ To restore a dropped Azure Database for PostgreSQL server, you need the following: - Azure subscription name hosting the original server - Location where the server was created
To restore a dropped Azure Database for PostgreSQL server, you need following:
1. Browse to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_ActivityLog/ActivityLogBlade). Select the **Azure Monitor** service, then select **Activity Log**.
-2. In Activity Log, click on **Add filter** as shown and set following filters for the following
+2. In Activity Log, select **Add filter** as shown, and set the following filters:
- **Subscription** = Your Subscription hosting the deleted server - **Resource Type** = Azure Database for PostgreSQL servers (Microsoft.DBforPostgreSQL/servers) - **Operation** = Delete PostgreSQL Server (Microsoft.DBforPostgreSQL/servers/delete)
-
+ ![Activity log filtered for delete PostgreSQL server operation](./media/how-to-restore-dropped-server/activity-log-azure.png) 3. Select the **Delete PostgreSQL Server** event, then select the **JSON tab**. Copy the `resourceId` and `submissionTimestamp` attributes in JSON output. The resourceId is in the following format: `/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBforPostgreSQL/servers/deletedserver`.
+1. Browse to the PostgreSQL [Create Server REST API Page](/rest/api/postgresql/singleserver/servers/create) and select the **Try It** tab highlighted in green. Sign in with your Azure account.
- 1. Browse to the PostgreSQL [Create Server REST API Page](/rest/api/postgresql/singleserver/servers/create) and select the **Try It** tab highlighted in green. Sign in with your Azure account.
-
- 2. Provide the **resourceGroupName**, **serverName** (deleted server name), **subscriptionId** properties, based on the resourceId attribute JSON value captured in the preceding step 3. The api-version property is pre-populated and can be left as-is, as shown in the following image.
+2. Provide the **resourceGroupName**, **serverName** (deleted server name), and **subscriptionId** properties, based on the resourceId attribute value that you captured in step 3 above. The api-version property is pre-populated and can be left as-is, as shown in the following image.
![Create server using REST API](./media/how-to-restore-dropped-server/create-server-from-rest-api-azure.png)
-
- 3. Scroll below on Request Body section and paste the following replacing the "Dropped server Location"(e.g. CentralUS, EastUS etc.), "submissionTimestamp", and "resourceId". For "restorePointInTime", specify a value of "submissionTimestamp" minus **15 minutes** to ensure the command does not error out.
-
+
+3. Scroll down to the Request Body section and paste the following, replacing the "Dropped server Location" (for example, CentralUS or EastUS), "submissionTimestamp", and "resourceId" values. For "restorePointInTime", specify the "submissionTimestamp" value minus **15 minutes** to ensure the command does not error out.
+ ```json { "location": "Dropped Server Location",
To restore a dropped Azure Database for PostgreSQL server, you need following:
> [!Important] > There is a time limit of five days after the server was dropped. After five days, an error is expected since the backup file cannot be found.
-
-4. If you see Response Code 201 or 202, the restore request is successfully submitted.
+
+4. If you see Response Code 201 or 202, the restore request is successfully submitted.
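If you prefer to submit the same create request from the command line instead of the **Try It** page, a sketch using `az rest` is shown below. All values are placeholders, and the API version and property names are assumptions to confirm against the Create Server REST API reference.

```azurecli-interactive
az rest --method put \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforPostgreSQL/servers/<deleted-server-name>?api-version=2017-12-01" \
    --body '{
        "location": "<dropped-server-location>",
        "properties": {
            "createMode": "PointInTimeRestore",
            "restorePointInTime": "<submissionTimestamp minus 15 minutes>",
            "sourceServerId": "<resourceId from the delete event>"
        }
    }'
```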
The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from Activity log by filtering for - **Subscription** = Your Subscription
To restore a dropped Azure Database for PostgreSQL server, you need following:
- **Operation** = Update PostgreSQL Server Create ## Next steps+ - If you are trying to restore a server within five days, and still receive an error after accurately following the steps discussed earlier, open a support incident for assistance. If you are trying to restore a dropped server after five days, an error is expected since the backup file cannot be found. Do not open a support ticket in this scenario. The support team cannot provide any assistance if the backup is deleted from the system. -- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-PostgreSQL/preventing-the-disaster-of-accidental-deletion-for-your-PostgreSQL/ba-p/825222).
+- To prevent accidental deletion of servers, we highly recommend using [Resource Locks](https://techcommunity.microsoft.com/t5/azure-database-for-PostgreSQL/preventing-the-disaster-of-accidental-deletion-for-your-PostgreSQL/ba-p/825222).
postgresql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-cli.md
ms.devlang: azurecli Previously updated : 10/25/2019 Last updated : 06/24/2022 # How to back up and restore a server in Azure Database for PostgreSQL - Single Server using the Azure CLI
Last updated 10/25/2019
Azure Database for PostgreSQL servers are backed up periodically to enable Restore features. Using this feature you may restore the server and all its databases to an earlier point-in-time, on a new server. ## Prerequisites+ To complete this how-to guide: - You need an [Azure Database for PostgreSQL server and database](quickstart-create-server-database-azure-cli.md). [!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment-no-header.md)]
+- This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
## Set backup configuration
-You make the choice between configuring your server for either locally redundant backups or geographically redundant backups at server creation.
+At server creation, you choose whether to configure your server for locally redundant or geographically redundant backups.
> [!NOTE] > After a server is created, the kind of redundancy it has, geographically redundant vs locally redundant, can't be switched. >
-While creating a server via the `az postgres server create` command, the `--geo-redundant-backup` parameter decides your Backup Redundancy Option. If `Enabled`, geo redundant backups are taken. Or if `Disabled` locally redundant backups are taken.
+When you create a server via the `az postgres server create` command, the `--geo-redundant-backup` parameter determines your backup redundancy option. If set to `Enabled`, geo-redundant backups are taken; if `Disabled`, locally redundant backups are taken.
-The backup retention period is set by the parameter `--backup-retention-days`.
+The backup retention period is set by the parameter `--backup-retention-days`.
For more information about setting these values during create, see the [Azure Database for PostgreSQL server CLI Quickstart](quickstart-create-server-database-azure-cli.md).
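For instance, a create call that sets both options might look like the following sketch. Resource names and the password are placeholders, and because the retention flag has been renamed across CLI versions, verify it with `az postgres server create --help`.

```azurecli-interactive
az postgres server create \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --location westus \
    --admin-user myadmin \
    --admin-password <server_admin_password> \
    --sku-name GP_Gen5_2 \
    --geo-redundant-backup Enabled \
    --backup-retention 10   # may appear as --backup-retention-days on older CLI versions
```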
The preceding example changes the backup retention period of mydemoserver to 10
The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the next section. ## Server point-in-time restore
-You can restore the server to a previous point in time. The restored data is copied to a new server, and the existing server is left as is. For example, if a table is accidentally dropped at noon today, you can restore to the time just before noon. Then, you can retrieve the missing table and data from the restored copy of the server.
+
+You can restore the server to a previous point in time. The restored data is copied to a new server, and the existing server is left as is. For example, if a table is accidentally dropped at noon today, you can restore to the time just before noon. Then, you can retrieve the missing table and data from the restored copy of the server.
To restore the server, use the Azure CLI [az postgres server restore](/cli/azure/postgres/server) command.
The `az postgres server restore` command requires the following parameters:
When you restore a server to an earlier point in time, a new server is created. The original server and its databases from the specified point in time are copied to the new server.
-The location and pricing tier values for the restored server remain the same as the original server.
+The location and pricing tier values for the restored server remain the same as the original server.
After the restore process finishes, locate the new server and verify that the data is restored as expected. The new server has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's **Overview** page. The new server created during a restore does not have the firewall rules or VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. ## Geo restore
-If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region that Azure Database for PostgreSQL is available.
+
+If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region that Azure Database for PostgreSQL is available.
To create a server using a geo redundant backup, use the Azure CLI `az postgres server georestore` command.
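A minimal sketch of the geo-restore call, using placeholder names:

```azurecli-interactive
az postgres server georestore \
    --resource-group myresourcegroup \
    --name mygeorestoredserver \
    --source-server mysourceserver \
    --location eastus \
    --sku-name GP_Gen5_2
```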
After the restore process finishes, locate the new server and verify that the da
The new server created during a restore does not have the firewall rules or VNet service endpoints that existed on the original server. These rules need to be set up separately for this new server. ## Next steps+ - Learn more about the service's [backups](concepts-backup.md) - Learn about [replicas](concepts-read-replicas.md) - Learn more about [business continuity](concepts-business-continuity.md) options
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-portal.md
Previously updated : 6/30/2020 Last updated : 06/24/2022 # How to backup and restore a server in Azure Database for PostgreSQL - Single Server using the Azure portal
Last updated 6/30/2020
[!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)] ## Backup happens automatically+ Azure Database for PostgreSQL servers are backed up periodically to enable Restore features. Using this feature you may restore the server and all its databases to an earlier point-in-time, on a new server. ## Set backup configuration
The backup retention period of a server can be changed through the following ste
In the screenshot below it has been increased to 34 days. :::image type="content" source="./media/how-to-restore-server-portal/3-increase-backup-days.png" alt-text="Backup retention period increased":::
-4. Click **OK** to confirm the change.
+4. Select **OK** to confirm the change.
-The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the following section.
+The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. Point-in-time restore is described further in the following section.
## Point-in-time restore+ Azure Database for PostgreSQL allows you to restore the server back to a point in time, into a new copy of the server. You can use this new server to recover your data, or have your client applications point to this new server. For example, if a table was accidentally dropped at noon today, you could restore to the time just before noon and retrieve the missing table and data from that new copy of the server. Point-in-time restore is at the server level, not at the database level. The following steps restore the sample server to a point-in-time:
-1. In the Azure portal, select your Azure Database for PostgreSQL server.
+1. In the Azure portal, select your Azure Database for PostgreSQL server.
2. In the toolbar of the server's **Overview** page, select **Restore**.
The following steps restore the sample server to a point-in-time:
- **Restore point**: Select the point-in-time you want to restore to. - **Target server**: Provide a name for the new server. - **Location**: You cannot select the region. By default it is same as the source server.
- - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. It is same as the source server.
+ - **Pricing tier**: You cannot change these parameters when doing a point-in-time restore. It is same as the source server.
-4. Click **OK** to restore the server to restore to a point-in-time.
+4. Select **OK** to restore the server to restore to a point-in-time.
5. Once the restore finishes, locate the new server that is created to verify the data was restored as expected.
If your source PostgreSQL server is encrypted with customer-managed keys, please
## Geo restore
-If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region that Azure Database for PostgreSQL is available.
+If you configured your server for geographically redundant backups, a new server can be created from the backup of that existing server. This new server can be created in any region that Azure Database for PostgreSQL is available.
1. Select the **Create a resource** button (+) in the upper-left corner of the portal. Select **Databases** > **Azure Database for PostgreSQL**.
If you configured your server for geographically redundant backups, a new server
2. Select the **Single server** deployment option. :::image type="content" source="./media/how-to-restore-server-portal/2-select-deployment-option.png" alt-text="Select Azure Database for PostgreSQL - Single server deployment option.":::
-
-3. Provide the subscription, resource group, and name of the new server.
+
+3. Provide the subscription, resource group, and name of the new server.
4. Select **Backup** as the **Data source**. This action loads a dropdown that provides a list of servers that have geo redundant backups enabled.
-
+ :::image type="content" source="./media/how-to-restore-server-portal/4-geo-restore.png" alt-text="Select data source.":::
-
+ > [!NOTE] > When a server is first created it may not be immediately available for geo restore. It may take a few hours for the necessary metadata to be populated. > 5. Select the **Backup** dropdown.
-
+ :::image type="content" source="./media/how-to-restore-server-portal/5-geo-restore-backup.png" alt-text="Select backup dropdown."::: 6. Select the source server to restore from.
-
+ :::image type="content" source="./media/how-to-restore-server-portal/6-select-backup.png" alt-text="Select backup.":::
-7. The server will default to values for number of **vCores**, **Backup Retention Period**, **Backup Redundancy Option**, **Engine version**, and **Admin credentials**. Select **Continue**.
-
+7. The server will default to values for number of **vCores**, **Backup Retention Period**, **Backup Redundancy Option**, **Engine version**, and **Admin credentials**. Select **Continue**.
+ :::image type="content" source="./media/how-to-restore-server-portal/7-accept-backup.png" alt-text="Continue with backup."::: 8. Fill out the rest of the form with your preferences. You can select any **Location**. After selecting the location, you can select **Configure server** to update the **Compute Generation** (if available in the region you have chosen), number of **vCores**, **Backup Retention Period**, and **Backup Redundancy Option**. Changing **Pricing Tier** (Basic, General Purpose, or Memory Optimized) or **Storage** size during restore is not supported.
- :::image type="content" source="./media/how-to-restore-server-portal/8-create.png" alt-text="Fill form.":::
+ :::image type="content" source="./media/how-to-restore-server-portal/8-create.png" alt-text="Fill form.":::
-9. Select **Review + create** to review your selections.
+9. Select **Review + create** to review your selections.
10. Select **Create** to provision the server. This operation may take a few minutes.
The new server created during a restore does not have the firewall rules or VNet
If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations. - ## Next steps+ - Learn more about the service's [backups](concepts-backup.md).-- Learn more about [business continuity](concepts-business-continuity.md) options.
+- Learn more about [business continuity](concepts-business-continuity.md) options.
postgresql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-powershell.md
ms.devlang: azurepowershell Previously updated : 06/08/2020 Last updated : 06/24/2022 + # How to back up and restore an Azure Database for PostgreSQL server using PowerShell [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
original server are restored.
## Next steps > [!div class="nextstepaction"]
-> [How to generate an Azure Database for PostgreSQL connection string with PowerShell](how-to-connection-string-powershell.md)
+> [How to generate an Azure Database for PostgreSQL connection string with PowerShell](how-to-connection-string-powershell.md)
postgresql How To Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-tls-configurations.md
Previously updated : 06/02/2020 Last updated : 06/24/2022 # Configuring TLS settings in Azure Database for PostgreSQL Single - server using Azure portal
Follow these steps to set PostgreSQL minimum TLS version:
1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL.
-1. On the Azure Database for PostgreSQL - Single server page, under **Settings**, click **Connection security** to open the connection security configuration page.
+1. On the Azure Database for PostgreSQL - Single server page, under **Settings**, select **Connection security** to open the connection security configuration page.
1. In **Minimum TLS version**, select **1.2** to deny connections with TLS version less than TLS 1.2 for your PostgreSQL Single server. :::image type="content" source="./media/how-to-tls-configurations/setting-tls-value.png" alt-text="Azure Database for PostgreSQL Single - server TLS configuration":::
-1. Click **Save** to save the changes.
+1. Select **Save** to save the changes.
1. A notification will confirm that the connection security setting was successfully enabled.
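If you manage many servers, the same minimum TLS setting can also be applied from the Azure CLI. This is a sketch with placeholder resource names; verify the parameter with `az postgres server update --help`.

```azurecli-interactive
# Deny connections that use a TLS version lower than 1.2 (placeholder names)
az postgres server update \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --minimal-tls-version TLS1_2
```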
Follow these steps to set PostgreSQL minimum TLS version:
## Next steps
-Learn about [how to create alerts on metrics](how-to-alert-on-metric.md)
+Learn about [how to create alerts on metrics](how-to-alert-on-metric.md)
postgresql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-troubleshoot-common-connection-issues.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Troubleshoot connection issues to Azure Database for PostgreSQL - Single Server
If the application persistently fails to connect to Azure Database for PostgreSQ
* Client firewall configuration: The firewall on your client must allow connections to your database server. The IP addresses and ports of the server that you can't connect to must be allowed, and in some firewalls, so must application names such as PostgreSQL. * User error: You might have mistyped connection parameters, such as the server name in the connection string or a missing *\@servername* suffix in the user name. * If you see the error _Server isn't configured to allow ipv6 connections_, note that the Basic tier doesn't support VNet service endpoints. You have to remove the Microsoft.Sql endpoint from the subnet that is attempting to connect to the Basic server.
-* If you see the connection error _sslmode value "***" invalid when SSL support is not compiled in_ error, it means your PostgreSQL client doesn't support SSL. Most probably, the client-side libpq hasn't been compiled with the "--with-openssl" flag. Try connecting with a PostgreSQL client that has SSL support.
+* If you see the connection error _sslmode value "***" invalid when SSL support is not compiled in_, it means your PostgreSQL client doesn't support SSL. Most probably, the client-side libpq hasn't been compiled with the "--with-openssl" flag. Try connecting with a PostgreSQL client that has SSL support.
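As a quick check that your client stack can negotiate TLS at all, try a direct psql connection that forces it; the server and user names below are placeholders:

```bash
# If libpq was built without SSL support, this fails with the "sslmode value ... invalid" error
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require" -c "SELECT version();"
```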
### Steps to resolve persistent connectivity issues
postgresql How To Upgrade Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-upgrade-using-dump-and-restore.md
Previously updated : 11/30/2021 Last updated : 06/24/2022 # Upgrade your PostgreSQL database using dump and restore
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)] >[!NOTE]
-> The concepts explained in this documentation are applicable to both Azure Database for PostgreSQL - Single Server and Azure Database for PostgreSQL - Flexible Server.
+> The concepts explained in this documentation are applicable to both Azure Database for PostgreSQL - Single Server and Azure Database for PostgreSQL - Flexible Server.
You can upgrade your PostgreSQL server deployed in Azure Database for PostgreSQL by migrating your databases to a higher major version server using the following methods. * **Offline** method using PostgreSQL [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html), which incurs downtime for migrating the data. This document addresses this method of upgrade/migration.
-* **Online** method using [Database Migration Service](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) (DMS). This method provides a reduced downtime migration and keeps the target database in-sync with the source and you can choose when to cut-over. However, there are few prerequisites and restrictions to be addressed for using DMS. For details, see the [DMS documentation](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md).
+* **Online** method using [Database Migration Service](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) (DMS). This method provides a reduced-downtime migration, keeps the target database in sync with the source, and lets you choose when to cut over. However, there are a few prerequisites and restrictions to address before using DMS. For details, see the [DMS documentation](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md).
- The following table provides some recommendations based on database sizes and scenarios.
+The following table provides some recommendations based on database sizes and scenarios.
| **Database/Scenario** | **Dump/restore (Offline)** | **DMS (Online)** | | | :: | :--: |
You can upgrade your PostgreSQL server deployed in Azure Database for PostgreSQL
This guide provides few offline migration methodologies and examples to show how you can migrate from your source server to the target server that runs a higher version of PostgreSQL. > [!NOTE]
-> PostgreSQL dump and restore can be performed in many ways. You may choose to migrate using one of the methods provided in this guide or choose any alternate ways to suit your needs. For detailed dump and restore syntax with additional parameters, see the articles [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html).
-
+> PostgreSQL dump and restore can be performed in many ways. You may choose to migrate using one of the methods provided in this guide or choose any alternate ways to suit your needs. For detailed dump and restore syntax with additional parameters, see the articles [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html).
## Prerequisites for using dump and restore with Azure Database for PostgreSQL
-
+ To step through this how-to guide, you need: - A **source** PostgreSQL database server running a lower version of the engine that you want to upgrade. - A **target** PostgreSQL database server with the desired major version [Azure Database for PostgreSQL server - Single Server](quickstart-create-server-database-portal.md) or [Azure Database for PostgreSQL - Flexible Server](../flexible-server/quickstart-create-server-portal.md). - A PostgreSQL client system to run the dump and restore commands. It is recommended to use the higher database version. For example, if you are upgrading from PostgreSQL version 9.6 to 11, please use the PostgreSQL version 11 client. - It can be a Linux or Windows client that has PostgreSQL installed and that has the [pg_dump](https://www.postgresql.org/docs/current/static/app-pgdump.html) and [pg_restore](https://www.postgresql.org/docs/current/static/app-pgrestore.html) command-line utilities installed.
- - Alternatively, you can use [Azure Cloud Shell](https://shell.azure.com) or by clicking the Azure Cloud Shell on the menu bar at the upper right in the [Azure portal](https://portal.azure.com). You will have to login to your account `az login` before running the dump and restore commands.
-- Your PostgreSQL client preferably running in the same region as the source and target servers. -
+ - Alternatively, you can use [Azure Cloud Shell](https://shell.azure.com) or by selecting the Azure Cloud Shell on the menu bar at the upper right in the [Azure portal](https://portal.azure.com). You will have to login to your account `az login` before running the dump and restore commands.
+- Your PostgreSQL client should preferably run in the same region as the source and target servers.
## Additional details and considerations-- You can find the connection string to the source and target databases by clicking the "Connection Strings" from the portal. +
+- You can find the connection string to the source and target databases by selecting **Connection Strings** in the portal.
- You may be running more than one database in your server. You can find the list of databases by connecting to your source server and running `\l`. - Create corresponding databases in the target database server or add `-C` option to the `pg_dump` command which creates the databases. - You must not upgrade `azure_maintenance` or template databases. If you have made any changes to template databases, you may choose to migrate the changes or make those changes in the target database. - Refer to the tables above to determine the database is suitable for this mode of migration.-- If you want to use Azure Cloud Shell, please note that the session times out after 20 minutes. If your database size is < 10 GB, you may be able to complete the upgrade without the session timing out. Otherwise, you may have to keep the session open by other means, such as pressing any key once in 10-15 minutes. -
+- If you want to use Azure Cloud Shell, please note that the session times out after 20 minutes. If your database size is < 10 GB, you may be able to complete the upgrade without the session timing out. Otherwise, you may have to keep the session open by other means, such as pressing any key once every 10-15 minutes.
## Example database used in this guide In this guide, the following source and target servers and database names are used to illustrate with examples.
- | **Description** | **Value** |
- | - | - |
- | Source server (v9.5) | pg-95.postgres.database.azure.com |
- | Source database | bench5gb |
- | Source database size | 5 GB |
- | Source user name | pg@pg-95 |
- | Target server (v11) | pg-11.postgres.database.azure.com |
- | Target database | bench5gb |
- | Target user name | pg@pg-11 |
+| **Description** | **Value** |
+| - | - |
+| Source server (v9.5) | pg-95.postgres.database.azure.com |
+| Source database | bench5gb |
+| Source database size | 5 GB |
+| Source user name | pg@pg-95 |
+| Target server (v11) | pg-11.postgres.database.azure.com |
+| Target database | bench5gb |
+| Target user name | pg@pg-11 |
>[!NOTE] > Flexible server supports PostgreSQL version 11 onwards. Also, flexible server user name does not require @dbservername. ## Upgrade your databases using offline migration methods+ You may choose to use one of the methods described in this section for your upgrades. You can use the following tips while performing the tasks. - If you are using the same password for source and the target database, you can set the `PGPASSWORD=yourPassword` environment variable. Then you donΓÇÖt have to provide password every time you run commands like psql, pg_dump, and pg_restore. Similarly you can setup additional variables like `PGUSER`, `PGSSLMODE` etc. see to [PostgreSQL environment variables](https://www.postgresql.org/docs/11/libpq-envars.html).
-
+ - If your PostgreSQL server requires TLS/SSL connections (on by default in Azure Database for PostgreSQL servers), set an environment variable `PGSSLMODE=require` so that the pg_restore tool connects with TLS. Without TLS, the error may read `FATAL: SSL connection is required. Please specify SSL options and retry.` - In the Windows command line, run the command `SET PGSSLMODE=require` before running the pg_restore command. In Linux or Bash run the command `export PGSSLMODE=require` before running the pg_restore command.
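A combined Bash sketch of these environment variables, followed by a dump, using the placeholder values from the example table above:

```bash
# Set connection-related variables once so psql, pg_dump, and pg_restore pick them up
export PGUSER=pg@pg-95
export PGPASSWORD=yourPassword
export PGSSLMODE=require

# Dump the source database in the custom format
pg_dump -Fc --host=pg-95.postgres.database.azure.com --dbname=bench5gb --file=bench5gb.dump
```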
psql -f roles.sql --host=myTargetServer --port=5432 --username=myUser --dbname=p
The dump script should not be expected to run completely without errors. In particular, because the script will issue CREATE ROLE for every role existing in the source cluster, it is certain to get a "role already exists" error for the bootstrap superuser like azure_pg_admin or azure_superuser. This error is harmless and can be ignored. Use of the `--clean` option is likely to produce additional harmless error messages about non-existent objects, although you can minimize those by adding `--if-exists`. - ### Method 1: Using pg_dump and psql This method involves two steps. First is to dump a SQL file from the source server using `pg_dump`. The second step is to import the file to the target server using `psql`. Please see the [Migrate using export and import](how-to-migrate-using-export-and-import.md) documentation for details. ### Method 2: Using pg_dump and pg_restore
-In this method of upgrade, you first create a dump from the source server using `pg_dump`. Then you restore that dump file to the target server using `pg_restore`. Please see the [Migrate using dump and restore](how-to-migrate-using-dump-and-restore.md) documentation for details.
+In this method of upgrade, you first create a dump from the source server using `pg_dump`. Then you restore that dump file to the target server using `pg_restore`. Please see the [Migrate using dump and restore](how-to-migrate-using-dump-and-restore.md) documentation for details.
### Method 3: Using streaming the dump data to the target database
-If you do not have a PostgreSQL client or you want to use Azure Cloud Shell, then you can use this method. The database dump is streamed directly to the target database server and does not store the dump in the client. Hence, this can be used with a client with limited storage and even can be run from the Azure Cloud Shell.
+If you do not have a PostgreSQL client or you want to use Azure Cloud Shell, then you can use this method. The database dump is streamed directly to the target database server and is not stored on the client. Hence, this method can be used with a client that has limited storage, and it can even be run from the Azure Cloud Shell.
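The core of this method is a single pipeline, along the lines of the following sketch; connection values are placeholders from the example table above, and passwords are assumed to come from `PGPASSWORD` or a `~/.pgpass` file. The numbered steps below walk through the full procedure.

```bash
# Dump the source database and stream it straight into the target, without writing a local file
pg_dump -Fc --host=pg-95.postgres.database.azure.com --username=pg@pg-95 --dbname=bench5gb \
    | pg_restore --host=pg-11.postgres.database.azure.com --username=pg@pg-11 --dbname=bench5gb --no-owner
```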
1. Make sure the database exists in the target server using the `\l` command. If the database does not exist, then create the database. ```azurecli-interactive
If you do not have a PostgreSQL client or you want to use Azure Cloud Shell, the
3. Once the upgrade (migration) process completes, you can test your application with the target server. 4. Repeat this process for all the databases within the server.
- As an example, the following table illustrates the time it took to migrate using streaming dump method. The sample data is populated using [pgbench](https://www.postgresql.org/docs/10/pgbench.html). As your database can have different number of objects with varied sizes than pgbench generated tables and indexes, it is highly recommended to test dump and restore of your database to understand the actual time it takes to upgrade your database.
+As an example, the following table illustrates the time it took to migrate using the streaming dump method. The sample data is populated using [pgbench](https://www.postgresql.org/docs/10/pgbench.html). Because your database can have a different number of objects, with sizes that differ from the pgbench-generated tables and indexes, it is highly recommended to test a dump and restore of your database to understand the actual time it takes to upgrade your database.
| **Database Size** | **Approx. time taken** | | -- | |
If you do not have a PostgreSQL client or you want to use Azure Cloud Shell, the
| 10 GB | 15-20 minutes | | 50 GB | 1-1.5 hours | | 100 GB | 2.5-3 hours|
-
-### Method 4: Using parallel dump and restore
+
+### Method 4: Using parallel dump and restore
You can consider this method if you have a few large tables in your database and you want to parallelize the dump and restore process for that database. You also need enough storage in your client system to accommodate the backup dumps. This parallel dump and restore process reduces the time needed to complete the whole migration. For example, the 50 GB pgbench database that took 1-1.5 hours to migrate using Methods 1 and 2 completed in less than 30 minutes using this method.
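A sketch of the directory-format, multi-job variant follows; connection values are placeholders from the example table above, and `-j` should be tuned to the cores available on your client:

```bash
# Dump into a directory-format archive using 4 parallel jobs
pg_dump -Fd -j 4 --host=pg-95.postgres.database.azure.com --username=pg@pg-95 --dbname=bench5gb --file=bench5gb_dir

# Restore the archive into the target database, again with 4 parallel jobs
pg_restore -Fd -j 4 --host=pg-11.postgres.database.azure.com --username=pg@pg-11 --dbname=bench5gb --no-owner bench5gb_dir
```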
You can consider this method if you have few larger tables in your database and
> The process mentioned in this document can also be used to upgrade your Azure Database for PostgreSQL - Flexible Server. The main difference is that the connection string for the flexible server target doesn't include the `@servername` suffix. For example, if the user name is `pg`, the single server's username in the connection string will be `pg@pg-95`, while with flexible server, you can simply use `pg`. ## Post upgrade/migrate+ After the major version upgrade is complete, we recommend running the `ANALYZE` command in each database to refresh the `pg_statistic` table. Otherwise, you may run into performance issues. ```SQL
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview-postgres-choose-server-options.md
Previously updated : 12/01/2021 Last updated : 06/24/2022 + # Choose the right PostgreSQL server option in Azure [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
postgresql Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview-single-server.md
Previously updated : 11/30/2021 Last updated : 06/24/2022 + # Azure Database for PostgreSQL Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
During planned or unplanned failover events, if the server goes down, the servic
2. The storage with data files is mapped to the new container. 3. The PostgreSQL database engine is brought online on the new compute container. 4. The gateway service ensures a transparent failover, so that no application-side changes are required.
-
- :::image type="content" source="./media/overview/overview-azure-postgres-single-server.png" alt-text="Azure Database for PostgreSQL Single Server":::
+ The typical failover time ranges from 60 to 120 seconds. The cloud-native design of the single server service allows it to support 99.99% availability while eliminating the cost of a passive hot standby.
The service runs community version of PostgreSQL. This allows full application c
## Frequently Asked Questions
- Will Flexible Server replace Single Server or Will Single Server be retired soon?
+Will Flexible Server replace Single Server, or will Single Server be retired soon?
We continue to support Single Server and encourage you to adopt Flexible Server, which has richer capabilities such as zone-resilient HA, predictable performance, maximum control, custom maintenance windows, cost optimization controls, and a simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API or SKU, you will receive advance notice including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle). - ## Contacts For any questions or suggestions you might have about working with Azure Database for PostgreSQL, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). This email address is not a technical support alias.
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview.md
Previously updated : 01/24/2022 Last updated : 06/24/2022 adobe-target: true
Azure Database for PostgreSQL is a relational database service in the Microsoft
- Monitoring and automation to simplify management and monitoring for large-scale deployments. - Industry-leading support experience.
- :::image type="content" source="./media/overview/overview-what-is-azure-postgres.png" alt-text="Azure Database for PostgreSQL":::
These capabilities require almost no administration, and all are provided at no additional cost. They allow you to focus on rapid application development and accelerating your time to market rather than allocating precious time and resources to managing virtual machines and infrastructure. In addition, you can continue to develop your application with the open-source tools and platform of your choice to deliver with the speed and efficiency your business demands, all without having to learn new skills.
Flexible servers are best suited for
- Cost optimization controls with ability to stop/start server - Zone redundant high availability - Managed maintenance windows
-
+ For a detailed overview of flexible server deployment mode, see [flexible server overview](../flexible-server/overview.md). ### Azure Database for PostgreSQL ΓÇô Hyperscale (Citus)
The Hyperscale (Citus) deployment option delivers:
- Horizontal scaling across multiple machines using sharding - Query parallelization across these servers for faster responses on large datasets - Excellent support for multi-tenant applications, real-time operational analytics, and high-throughput transactional workloads
-
+ Applications built for PostgreSQL can run distributed queries on Hyperscale (Citus) with standard [connection libraries](./concepts-connection-libraries.md) and minimal changes. ## Next steps
postgresql Partners Migration Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/partners-migration-postgresql.md
-
+ Title: Azure Database for PostgreSQL migration partners description: Lists of third-party migration partners with solutions that support Azure Database for PostgreSQL.
Previously updated : 08/07/2018 Last updated : 06/24/2022 # Azure Database for PostgreSQL migration partners
Last updated 08/07/2018
To broadly support your Azure Database for PostgreSQL solution, you can choose from a wide variety of industry-leading partners and tools. This article highlights Microsoft partners with migration solutions that support Azure Database for PostgreSQL. ## Migration partners+ | Partner | Description | Links | Videos | | | | | | | ![SNP Technologies][1] |**SNP Technologies**<br>SNP Technologies is a cloud-only service provider, building secure and reliable solutions for businesses of the future. The company believes in generating real value for your business. From thought to execution, SNP Technologies shares a common purpose with clients, to turn their investment into an advantage.|[Website][snp_website]<br>[Twitter][snp_twitter]<br>[Contact][snp_contact] | |
To broadly support your Azure Database for PostgreSQL solution, you can choose f
| ![Pactera][6] |**Pactera**<br>Pactera is a global company offering consulting, digital, technology, and operations services to the world's leading enterprises. From their roots in engineering to the latest in digital transformation, they give customers a competitive edge. Their proven methodologies and tools ensure your data is secure, authentic, and accurate.|[Website][pactera_website]<br>[Twitter][pactera_twitter]<br>[Contact][pactera_contact] | | ## Next steps+ To learn more about some of Microsoft's other partners, see the [Microsoft Partner site](https://partner.microsoft.com/). <!--Image references-->
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 05/11/2022 Last updated : 06/24/2022 + # Azure Policy built-in definitions for Azure Database for PostgreSQL [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
postgresql Quickstart Create Postgresql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-arm-template.md
Previously updated : 02/11/2021 Last updated : 06/24/2022 # Quickstart: Use an ARM template to create an Azure Database for PostgreSQL - single server
read -p "Press [ENTER] to continue: "
## Exporting ARM template from the portal+ You can [export an ARM template](../../azure-resource-manager/templates/export-template-portal.md) from the Azure portal. There are two ways to export a template: - [Export from resource group or resource](../../azure-resource-manager/templates/export-template-portal.md#export-template-from-a-resource). This option generates a new template from existing resources. The exported template is a "snapshot" of the current state of the resource group. You can export an entire resource group or specific resources within that resource group.
When exporting the template, in the ```"properties":{ }``` section of the Postg
``` - ## Clean up resources When it's no longer needed, delete the resource group, which deletes the resources in the resource group.
postgresql Quickstart Create Postgresql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-azure-powershell.md
ms.devlang: azurepowershell ms.tool: azure-powershell Previously updated : 06/08/2020 Last updated : 06/24/2022 # Quickstart: Create an Azure Database for PostgreSQL - Single Server using PowerShell
what is used in this Quickstart. Read the pgAdmin documentation if you need addi
1. Select **Save**. 1. In the **Browser** pane on the left, expand the **Servers** node. Select your server, for
- example, **mydemoserver**. Click to connect to it.
+ example, **mydemoserver**. Select to connect to it.
1. Expand the server node, and then expand **Databases** under it. The list should include your existing *postgres* database and any other databases you've created. You can create multiple
postgresql Quickstart Create Postgresql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-bicep.md
Previously updated : 04/29/2022 Last updated : 06/24/2022 # Quickstart: Use Bicep to create an Azure Database for PostgreSQL - single server
postgresql Quickstart Create Server Database Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-azure-cli.md
ms.devlang: azurecli Previously updated : 01/26/2022 Last updated : 06/24/2022 # Quickstart: Create an Azure Database for PostgreSQL server by using the Azure CLI
postgresql Quickstart Create Server Database Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-portal.md
Previously updated : 10/18/2020 Last updated : 06/24/2022 # Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal
Last updated 10/18/2020
Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This quickstart shows you how to create a single Azure Database for PostgreSQL server and connect to it. ## Prerequisites+ An Azure subscription is required. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. ## Create an Azure Database for PostgreSQL server+ Go to the [Azure portal](https://portal.azure.com/) to create an Azure Database for PostgreSQL Single Server database. Search for and select *Azure Database for PostgreSQL servers*. >[!div class="mx-imgBorder"]
Go to the [Azure portal](https://portal.azure.com/) to create an Azure Database
|Version|The latest major version| The latest PostgreSQL major version, unless you have specific requirements otherwise.| |Compute + storage | *use the defaults*| The default pricing tier is **General Purpose** with **4 vCores** and **100 GB** storage. Backup retention is set to **7 days** with **Geographically Redundant** backup option.<br/>Learn about the [pricing](https://azure.microsoft.com/pricing/details/postgresql/server/) and update the defaults if needed.| - > [!NOTE] > Consider using the Basic pricing tier if light compute and I/O are adequate for your workload. Note that servers created in the Basic pricing tier can't later be scaled to General Purpose or Memory Optimized.
Go to the [Azure portal](https://portal.azure.com/) to create an Azure Database
[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback) ## Configure a firewall rule+ By default, the server that you create is not publicly accessible. You need to give permissions to your IP address. Go to your server resource in the Azure portal and select **Connection security** from left-side menu for your server resource. If you're not sure how to find your resource, see [Open resources](../../azure-resource-manager/management/manage-resources-portal.md#open-resources). > [!div class="mx-imgBorder"]
You can use [psql](http://postgresguide.com/utilities/psql.html) or [pgAdmin](ht
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/quickstart-create-database-portal/overview-new.png" alt-text="get connection information."::: - 2. Open Azure Cloud Shell in the portal by selecting the icon on the upper-left side. > [!NOTE]
You can use [psql](http://postgresguide.com/utilities/psql.html) or [pgAdmin](ht
[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback) ## Clean up resources+ You've successfully created an Azure Database for PostgreSQL server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting either the resource group or the PostgreSQL server. To delete the resource group:
To delete the server, select the **Delete** button on the **Overview** page of y
[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback) ## Next steps+ > [!div class="nextstepaction"] > [Migrate your database using export and import](./how-to-migrate-using-export-and-import.md) <br/>
postgresql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-up-azure-cli.md
ms.devlang: azurecli Previously updated : 01/25/2022 Last updated : 06/24/2022 + # Quickstart: Use the az postgres up command to create an Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
postgresql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/sample-scripts-azure-cli.md
ms.devlang: azurecli Previously updated : 09/17/2021 Last updated : 06/24/2022 + # Azure CLI samples for Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
The following table includes links to sample Azure CLI scripts for Azure Databas
| [Restore a server](../scripts/sample-point-in-time-restore.md) | Azure CLI script that restores an Azure Database for PostgreSQL server to a previous point in time. | |**Download server logs**|| | [Enable and download server logs](../scripts/sample-server-logs.md) | Azure CLI script that enables and downloads server logs of an Azure Database for PostgreSQL server. |
-|||
-
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md
Previously updated : 06/16/2022 Last updated : 06/24/2022 # Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL
postgresql Tutorial Design Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-azure-cli.md
ms.devlang: azurecli Previously updated : 01/26/2022 Last updated : 06/24/2022 # Tutorial: Design an Azure Database for PostgreSQL - Single Server using Azure CLI
In this tutorial, you learned how to use Azure CLI (command-line interface) and
> * Restore data > [!div class="nextstepaction"]
-> [Design your first Azure Database for PostgreSQL using the Azure portal](tutorial-design-database-using-azure-portal.md)
+> [Design your first Azure Database for PostgreSQL using the Azure portal](tutorial-design-database-using-azure-portal.md)
postgresql Tutorial Design Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-azure-portal.md
Previously updated : 06/25/2019 Last updated : 06/24/2022 + # Tutorial: Design an Azure Database for PostgreSQL - Single Server using the Azure portal [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
In this tutorial, you use the Azure portal to learn how to:
> * Restore data ## Prerequisites+ If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. ## Create an Azure Database for PostgreSQL
If you don't have an Azure subscription, create a [free](https://azure.microsoft
An Azure Database for PostgreSQL server is created with a defined set of [compute and storage resources](./concepts-pricing-tiers.md). The server is created within an [Azure resource group](../../azure-resource-manager/management/overview.md). Follow these steps to create an Azure Database for PostgreSQL server:
-1. Click **Create a resource** in the upper left-hand corner of the Azure portal.
+1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
2. Select **Databases** from the **New** page, and select **Azure Database for PostgreSQL** from the **Databases** page. :::image type="content" source="./media/tutorial-design-database-using-azure-portal/1-create-database.png" alt-text="Azure Database for PostgreSQL - Create the database":::
Follow these steps to create an Azure Database for PostgreSQL server:
> [!NOTE] > Consider using the Basic pricing tier if light compute and I/O are adequate for your workload. Note that servers created in the Basic pricing tier cannot later be scaled to General Purpose or Memory Optimized. See the [pricing page](https://azure.microsoft.com/pricing/details/postgresql/) for more information.
- >
+ >
:::image type="content" source="./media/quickstart-create-database-portal/2-pricing-tier.png" alt-text="The Pricing tier pane":::
Follow these steps to create an Azure Database for PostgreSQL server:
6. On the toolbar, select the **Notifications** icon (a bell) to monitor the deployment process. Once the deployment is done, you can select **Pin to dashboard**, which creates a tile for this server on your Azure portal dashboard as a shortcut to the server's **Overview** page. Selecting **Go to resource** opens the server's **Overview** page. :::image type="content" source="./media/quickstart-create-database-portal/3-notifications.png" alt-text="The Notifications pane":::
-
- By default, a **postgres** database is created under your server. The [postgres](https://www.postgresql.org/docs/9.6/static/app-initdb.html) database is a default database that's meant for use by users, utilities, and third-party applications. (The other default database is **azure_maintenance**. Its function is to separate the managed service processes from user actions. You cannot access this database.)
+ By default, a **postgres** database is created under your server. The [postgres](https://www.postgresql.org/docs/9.6/static/app-initdb.html) database is a default database that's meant for use by users, utilities, and third-party applications. (The other default database is **azure_maintenance**. Its function is to separate the managed service processes from user actions. You cannot access this database.)
## Configure a server-level firewall rule
-The Azure Database for PostgreSQL service uses a firewall at the server-level. By default, this firewall prevents all external applications and tools from connecting to the server and any databases on the server unless a firewall rule is created to open the firewall for a specific IP address range.
+The Azure Database for PostgreSQL service uses a firewall at the server-level. By default, this firewall prevents all external applications and tools from connecting to the server and any databases on the server unless a firewall rule is created to open the firewall for a specific IP address range.
-1. After the deployment completes, click **All Resources** from the left-hand menu and type in the name **mydemoserver** to search for your newly created server. Click the server name listed in the search result. The **Overview** page for your server opens and provides options for further configuration.
+1. After the deployment completes, select **All Resources** from the left-hand menu and type in the name **mydemoserver** to search for your newly created server. Select the server name listed in the search result. The **Overview** page for your server opens and provides options for further configuration.
:::image type="content" source="./media/tutorial-design-database-using-azure-portal/4-locate.png" alt-text="Azure Database for PostgreSQL - Search for server":::
-2. In the server page, select **Connection security**.
+2. In the server page, select **Connection security**.
-3. Click in the text box under **Rule Name,** and add a new firewall rule to specify the IP range for connectivity. Enter your IP range. Click **Save**.
+3. Select in the text box under **Rule Name,** and add a new firewall rule to specify the IP range for connectivity. Enter your IP range. Select **Save**.
:::image type="content" source="./media/tutorial-design-database-using-azure-portal/5-firewall-2.png" alt-text="Azure Database for PostgreSQL - Create Firewall Rule":::
-4. Click **Save** and then click the **X** to close the **Connections security** page.
+4. Select **Save** and then select the **X** to close the **Connections security** page.
> [!NOTE] > Azure PostgreSQL server communicates over port 5432. If you are trying to connect from within a corporate network, outbound traffic over port 5432 may not be allowed by your network's firewall. If so, you cannot connect to your Azure Database for PostgreSQL server unless your IT department opens port 5432.
The Azure Database for PostgreSQL service uses a firewall at the server-level. B
When you created the Azure Database for PostgreSQL server, the default **postgres** database was also created. To connect to your database server, you need to provide host information and access credentials.
-1. From the left-hand menu in the Azure portal, click **All resources** and search for the server you just created.
+1. From the left-hand menu in the Azure portal, select **All resources** and search for the server you just created.
:::image type="content" source="./media/tutorial-design-database-using-azure-portal/4-locate.png" alt-text="Azure Database for PostgreSQL - Search for server":::
-2. Click the server name **mydemoserver**.
+2. Select the server name **mydemoserver**.
3. Select the server's **Overview** page. Make a note of the **Server name** and **Server admin login name**. :::image type="content" source="./media/tutorial-design-database-using-azure-portal/6-server-name.png" alt-text="Azure Database for PostgreSQL - Server Admin Login"::: - ## Connect to PostgreSQL database using psql+ If your client computer has PostgreSQL installed, you can use a local instance of [psql](https://www.postgresql.org/docs/9.6/static/app-psql.html), or the Azure Cloud Console to connect to an Azure PostgreSQL server. Let's now use the psql command-line utility to connect to the Azure Database for PostgreSQL server. 1. Run the following psql command to connect to an Azure Database for PostgreSQL database:
If your client computer has PostgreSQL installed, you can use a local instance o
``` For example, the following command connects to the default database called **postgres** on your PostgreSQL server **mydemoserver.postgres.database.azure.com** using access credentials. Enter the `<server_admin_password>` you chose when prompted for password.
-
+ ``` psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres ```
If your client computer has PostgreSQL installed, you can use a local instance o
``` ## Create tables in the database+ Now that you know how to connect to the Azure Database for PostgreSQL, you can complete some basic tasks: First, create a table and load it with some data. Let's create a table that tracks inventory information using this SQL code:
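A minimal sketch of such a table, consistent with the `inventory` INSERT statements shown later in this tutorial (the column types here are assumptions, not the tutorial's exact definition):

```sql
-- Hypothetical definition of the inventory table used by the later INSERT examples.
CREATE TABLE inventory (
    id integer PRIMARY KEY,
    name varchar(50),
    quantity integer
);
```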
You can see the newly created table in the list of tables now by typing:
``` ## Load data into the tables+ Now that you have a table, insert some data into it. At the open command prompt window, run the following query to insert some rows of data. ```sql INSERT INTO inventory (id, name, quantity) VALUES (1, 'banana', 150);
INSERT INTO inventory (id, name, quantity) VALUES (2, 'orange', 154);
You now have two rows of sample data in the inventory table you created earlier. ## Query and update the data in the tables+ Execute the following query to retrieve information from the inventory database table. ```sql SELECT * FROM inventory;
SELECT * FROM inventory;
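-- The section heading above also covers updating rows. A hedged example of an UPDATE
-- followed by a re-query (the new value and the WHERE filter are illustrative assumptions):
UPDATE inventory SET quantity = 200 WHERE name = 'banana';
SELECT * FROM inventory WHERE name = 'banana';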
``` ## Restore data to a previous point in time+ Imagine you have accidentally deleted this table. This situation is something you cannot easily recover from. Azure Database for PostgreSQL allows you to go back to any point-in-time for which your server has backups (determined by the backup retention period you configured) and restore this point-in-time to a new server. You can use this new server to recover your deleted data. The following steps restore the **mydemoserver** server to a point before the inventory table was added.
-1. On the Azure Database for PostgreSQL **Overview** page for your server, click **Restore** on the toolbar. The **Restore** page opens.
+1. On the Azure Database for PostgreSQL **Overview** page for your server, select **Restore** on the toolbar. The **Restore** page opens.
:::image type="content" source="./media/tutorial-design-database-using-azure-portal/9-azure-portal-restore.png" alt-text="Screenshot that shows the Azure Database for PostgreSQL **Overview** page for your server and highlights the Restore button.":::
Imagine you have accidentally deleted this table. This situation is something yo
- **Target server**: Provide a new server name you want to restore to - **Location**: You cannot select the region; by default, it is the same as the source server - **Pricing tier**: You cannot change this value when restoring a server. It is the same as the source server.
-3. Click **OK** to [restore the server to a point-in-time](./how-to-restore-server-portal.md) before the table was deleted. Restoring a server to a different point in time creates a duplicate new server as the original server as of the point in time you specify, provided that it is within the retention period for your [pricing tier](./concepts-pricing-tiers.md).
+3. Select **OK** to [restore the server to a point-in-time](./how-to-restore-server-portal.md) before the table was deleted. Restoring a server to a different point in time creates a new server that's a duplicate of the original server as of the point in time you specify, provided that it is within the retention period for your [pricing tier](./concepts-pricing-tiers.md).
## Clean up resources
-In the preceding steps, you created Azure resources in a server group. If you don't expect to need these resources in the future, delete the server group. Press the *Delete* button in the *Overview* page for your server group. When prompted on a pop-up page, confirm the name of the server group and click the final *Delete* button.
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group. Press the *Delete* button in the *Overview* page for your resource group. When prompted on a pop-up page, confirm the name of the resource group and select the final *Delete* button.
## Next steps+ In this tutorial, you learned how to use the Azure portal and other utilities to: > [!div class="checklist"] > * Create an Azure Database for PostgreSQL server
In this tutorial, you learned how to use the Azure portal and other utilities to
> * Restore data > [!div class="nextstepaction"]
->[Design your first Azure Database for PostgreSQL using Azure CLI](tutorial-design-database-using-azure-cli.md)
+>[Design your first Azure Database for PostgreSQL using Azure CLI](tutorial-design-database-using-azure-cli.md)
postgresql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-powershell.md
ms.devlang: azurepowershell Previously updated : 06/08/2020 Last updated : 06/24/2022 # Tutorial: Design an Azure Database for PostgreSQL - Single Server using PowerShell
original server are restored.
## Clean up resources
-In the preceding steps, you created Azure resources in a server group. If you don't expect to need these resources in the future, delete the server group. Press the *Delete* button in the *Overview* page for your server group. When prompted on a pop-up page, confirm the name of the server group and click the final *Delete* button.
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group. Press the *Delete* button in the *Overview* page for your resource group. When prompted on a pop-up page, confirm the name of the resource group and select the final *Delete* button.
## Next steps
postgresql Tutorial Monitor And Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-monitor-and-tune.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Tutorial: Monitor and tune Azure Database for PostgreSQL - Single Server
Azure Database for PostgreSQL has features that help you understand and improve
> * Apply performance recommendations ## Prerequisites+ You need an Azure Database for PostgreSQL server with PostgreSQL version 9.6 or 10. You can follow the steps in the [Create tutorial](tutorial-design-database-using-azure-portal.md) to create a server. > [!IMPORTANT] > **Query Store**, **Query Performance Insight**, and **Performance Recommendation** are in Public Preview. ## Enabling data collection+ The [Query Store](concepts-query-store.md) captures a history of queries and wait statistics on your server and stores it in the **azure_sys** database on your server. It is an opt-in feature. To enable it: 1. Open the Azure portal.
The [Query Store](concepts-query-store.md) captures a history of queries and wai
3. Select **Server parameters** which is in the **Settings** section of the menu on the left. 4. Set **pg_qs.query_capture_mode** to **TOP** to start collecting query performance data. Set **pgms_wait_sampling.query_capture_mode** to **ALL** to start collecting wait statistics. Save.
-
+ :::image type="content" source="./media/tutorial-performance-intelligence/query-store-parameters.png" alt-text="Query Store server parameters"::: 5. Allow up to 20 minutes for the first batch of data to persist in the **azure_sys** database. - ## Performance insights
-The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
+
+The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
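Query Performance Insight is built on the data that Query Store collects in the **azure_sys** database, and you can also inspect that data directly with SQL. A minimal sketch, assuming the `query_store.qs_view` and `query_store.pgms_wait_sampling_view` views described in the Query Store concepts article (the column names used here are assumptions):

```sql
-- Connect to the azure_sys database on the server first (for example, with psql).
-- Top queries by mean execution time (view and column names assumed from the Query Store docs).
SELECT query_id, calls, mean_time
FROM query_store.qs_view
ORDER BY mean_time DESC
LIMIT 5;

-- Most common wait events captured by wait sampling (view name assumed).
SELECT event_type, event, SUM(calls) AS total_calls
FROM query_store.pgms_wait_sampling_view
GROUP BY event_type, event
ORDER BY total_calls DESC;
```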
1. In the portal page of your Azure Database for PostgreSQL server, select **Query performance Insight** under the **Support + troubleshooting** section of the menu on the left.
-2. The **Long running queries** tab shows the top 5 queries by average duration per execution, aggregated in 15 minute intervals.
-
+2. The **Long running queries** tab shows the top 5 queries by average duration per execution, aggregated in 15 minute intervals.
+ :::image type="content" source="./media/tutorial-performance-intelligence/query-performance-insight-landing-page.png" alt-text="Query Performance Insight landing page"::: You can view more queries by selecting from the **Number of Queries** drop down. The chart colors may change for a specific Query ID when you do this.
-3. You can click and drag in the chart to narrow down to a specific time window.
+3. You can select and drag in the chart to narrow down to a specific time window.
4. Use the zoom in and out icons to view a smaller or larger period of time respectively. 5. View the table below the chart to learn more details about the long-running queries in that time window. 6. Select the **Wait Statistics** tab to view the corresponding visualizations on waits in the server.
-
- :::image type="content" source="./media/tutorial-performance-intelligence/query-performance-insight-wait-statistics.png" alt-text="Query Performance Insight wait statistics":::
+ :::image type="content" source="./media/tutorial-performance-intelligence/query-performance-insight-wait-statistics.png" alt-text="Query Performance Insight wait statistics":::
## Performance recommendations+ The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance. 1. Open **Performance Recommendations** from the **Support + troubleshooting** section of the menu bar on the Azure portal page for your PostgreSQL server.
-
+ :::image type="content" source="./media/tutorial-performance-intelligence/performance-recommendations-landing-page-1.png" alt-text="Performance Recommendations landing page"::: 2. Select **Analyze** and choose a database. This will begin the analysis. 3. Depending on your workload, this may take several minutes to complete. Once the analysis is done, there will be a notification in the portal.
-4. The **Performance Recommendations** window will show a list of recommendations if any were found.
+4. The **Performance Recommendations** window will show a list of recommendations if any were found.
5. A recommendation will show information about the relevant **Database**, **Table**, **Column**, and **Index Size**.
The [Performance Recommendations](concepts-performance-recommendations.md) featu
6. To implement the recommendation, copy the query text and run it from your client of choice. ### Permissions+ **Owner** or **Contributor** permissions are required to run analysis using the Performance Recommendations feature. ## Clean up resources
-In the preceding steps, you created Azure resources in a server group. If you don't expect to need these resources in the future, delete the server group. Press the *Delete* button in the *Overview* page for your server group. When prompted on a pop-up page, confirm the name of the server group and click the final *Delete* button.
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group. Press the *Delete* button in the *Overview* page for your resource group. When prompted on a pop-up page, confirm the name of the resource group and select the final *Delete* button.
## Next steps > [!div class="nextstepaction"]
-> Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
+> Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
purview Concept Best Practices Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-classification.md
Title: Microsoft Purview classification best practices
-description: This article provides best practices for classification in Microsoft Purview.
+ Title: Classification best practices for the Microsoft Purview governance portal
+description: This article provides best practices for classification in the Microsoft Purview governance portal so you can effectively identify sensitive data across your environment.
Last updated 11/18/2021
-# Microsoft Purview classification best practices
+# Classification best practices in the Microsoft Purview governance portal
-Data classification, in the context of Microsoft Purview, is a way of categorizing data assets by assigning unique logical labels or classes to the data assets. Classification is based on the business context of the data. For example, you might classify assets by *Passport Number*, *Driver's License Number*, *Credit Card Number*, *SWIFT Code*, *Person's Name*, and so on.
+Data classification in the Microsoft Purview governance portal is a way of categorizing data assets by assigning unique logical labels or classes to the data assets. Classification is based on the business context of the data. For example, you might classify assets by *Passport Number*, *Driver's License Number*, *Credit Card Number*, *SWIFT Code*, *Person's Name*, and so on. To learn more about classification itself, see our [classification article](concept-classification.md).
-To learn more about classification, see [Classification](concept-classification.md).
+This article describes best practices to adopt when you're classifying data assets, so that your scans will be more effective and you have the most complete information possible about your entire data estate.
-## Classification best practices
-
-This section describes best practices to adopt when you're classifying data assets.
-
-### Scan rule set
+## Scan rule set
By using a *scan rule set*, you can configure the relevant classifications that should be applied to the particular scan for the data source. Select the relevant system classifications, or select custom classifications if you've created one for the data you're scanning.
For example, in the following image, only the specific selected system and custo
:::image type="content" source="./media/concept-best-practices/classification-select-classification-rules-example-3.png" alt-text="Screenshot that shows a selected classification rule." lightbox="./media/concept-best-practices/classification-select-classification-rules-example-3.png":::
-### Annotation management
+## Annotation management
While you're deciding on which classifications to apply, we recommend that you:
While you're deciding on which classifications to apply, we recommend that you:
:::image type="content" source="./media/concept-best-practices/classification-classification-rules-example-2.png" alt-text="Screenshot that shows the 'Classification rules' pane." lightbox="./media/concept-best-practices/classification-classification-rules-example-2.png":::
-### Custom classifications
+## Custom classifications
Create custom classifications only if the available system classifications don't meet your needs.
When you create and configure the classification rules for a custom classificati
* Select the appropriate classification name for which the classification rule is to be created.
-* Microsoft Purview supports the following two methods for creating custom classification rules:
+* The Microsoft Purview governance portal supports the following two methods for creating custom classification rules:
* Use the **Regular expression** (regex) method if you can consistently express the data element by using a regular expression pattern or you can generate the pattern by using a data file. Ensure that the sample data reflects the population. * Use the **Dictionary** method only if the list of values in the dictionary file represents all possible values of data to be classified and is expected to conform to a given set of data (considering future values as well).
When you create and configure the classification rules for a custom classificati
* This method supports .csv and .tsv files, with a file size limit of 30 megabytes (MB).
-### Custom classification archetypes
+## Custom classification archetypes
-**How the "threshold" parameter works in the regular expression**
+### How the "threshold" parameter works in the regular expression
* Consider the sample source data in the following image. There are five columns, and the custom classification rule should be applied to columns **Sample_col1**, **Sample_col2**, and **Sample_col3** for the data pattern *N{Digit}{Digit}{Digit}AN*.
When you create and configure the classification rules for a custom classificati
:::image type="content" source="./media/concept-best-practices/classification-custom-classification-rule-threshold-11.png" alt-text="Screenshot that shows thresholds of a custom classification rule." lightbox="./media/concept-best-practices/classification-custom-classification-rule-threshold-11.png":::
- If you have a threshold of 55%, only columns **Sample_col1** and **Sample_col2** will be classified. **Sample_col3** will not be classified, because it doesn't meet the 55% threshold criterion.
+ If you have a threshold of 55%, only columns **Sample_col1** and **Sample_col2** will be classified. **Sample_col3** won't be classified, because it doesn't meet the 55% threshold criterion.
:::image type="content" source="./media/concept-best-practices/classification-test-custom-classification-rule-12.png" alt-text="Screenshot that shows the result of a high-threshold criterion." lightbox="./media/concept-best-practices/classification-test-custom-classification-rule-12.png":::
-**How to use both data and column patterns**
+### How to use both data and column patterns
* For the given sample data, where both column **B** and column **C** have similar data patterns, you can classify on column **B** based on the data pattern "^P[0-9]{3}[A-Z]{2}$".
When you create and configure the classification rules for a custom classificati
:::image type="content" source="./media/concept-best-practices/classification-custom-classification-rule-column-pattern-15.png" alt-text="Screenshot that shows a column pattern." lightbox="./media/concept-best-practices/classification-custom-classification-rule-column-pattern-15.png":::
-**How to use multiple column patterns**
+### How to use multiple column patterns
If there are multiple column patterns to be classified for the same classification rule, use pipe (|) character-separated column names. For example, for columns **Product ID**, **Product_ID**, **ProductID**, and so on, write the column pattern as shown in the following image:
Here are some considerations to bear in mind as you're defining classifications:
* Set priorities and develop a plan to achieve the security and compliance needs of an organization. * Describe the phases in the data preparation processes (raw zone, landing zone, and so on) and assign the classifications to specific assets to mark the phase in the process.
-* With Microsoft Purview, you can assign classifications at the asset or column level automatically by including relevant classifications in the scan rule, or you can assign them manually after you ingest the metadata into Microsoft Purview.
-* For automatic assignment, see [Supported data stores in Microsoft Purview](./azure-purview-connector-overview.md).
-* Before you scan your data sources in Microsoft Purview, it is important to understand your data and configure the appropriate scan rule set for it (for example, by selecting relevant system classification, custom classifications, or a combination of both), because it could affect your scan performance. For more information, see [Supported classifications in Microsoft Purview](./supported-classifications.md).
-* The Microsoft Purview scanner applies data sampling rules for deep scans (subject to classification) for both system and custom classifications. The sampling rule is based on the type of data sources. For more information, see the "Sampling within a file" section in [Supported data sources and file types in Microsoft Purview](./sources-and-scans.md#sampling-within-a-file).
+* You can assign classifications at the asset or column level automatically by including relevant classifications in the scan rule, or you can assign them manually after you ingest the metadata into the Microsoft Purview Data Map.
+* For automatic assignment, see [supported data stores in the Microsoft Purview governance portal](./azure-purview-connector-overview.md).
+* Before you scan your data sources in the Microsoft Purview Data Map, it's important to understand your data and configure the appropriate scan rule set for it (for example, by selecting relevant system classification, custom classifications, or a combination of both), because it could affect your scan performance. For more information, see [supported classifications in the Microsoft Purview governance portal](./supported-classifications.md).
+* The Microsoft Purview scanner applies data sampling rules for deep scans (subject to classification) for both system and custom classifications. The sampling rule is based on the type of data sources. For more information, see the "Sampling within a file" section in [Supported data sources and file types in Microsoft Purview](./sources-and-scans.md#sampling-within-a-file).
> [!Note] > **Distinct data threshold**: This is the total number of distinct data values that need to be found in a column before the scanner runs the data pattern on it. Distinct data threshold has nothing to do with pattern matching but it is a pre-requisite for pattern matching. System classification rules require there to be at least 8 distinct values in each column to subject them to classification. The system requires this value to make sure that the column contains enough data for the scanner to accurately classify it. For example, a column that contains multiple rows that all contain the value 1 won't be classified. Columns that contain one row with a value and the rest of the rows have null values also won't get classified. If you specify multiple patterns, this value applies to each of them.
-* The sampling rules apply to resource sets as well. For more information, see the "Resource set file sampling" section in [Supported data sources and file types in Microsoft Purview](./sources-and-scans.md#resource-set-file-sampling).
+* The sampling rules apply to resource sets as well. For more information, see the "Resource set file sampling" section in [supported data sources and file types in the Microsoft Purview governance portal](./sources-and-scans.md#resource-set-file-sampling).
* Custom classifications can't be applied on document type assets using custom classification rules. Classifications for such types can be applied manually only. * Custom classifications aren't included in any default scan rules. Therefore, if automatic assignment of custom classifications is expected, you must deploy and use a custom scan rule that includes the custom classification to run the scan. * If you apply classifications manually from the Microsoft Purview governance portal, such classifications are retained in subsequent scans.
Here are some considerations to bear in mind as you're defining classifications:
## Next steps+ - [Apply system classification](./apply-classifications.md) - [Create custom classification](./create-a-custom-classification-and-classification-rule.md)
purview Concept Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-classification.md
Title: Understand data classification feature in Microsoft Purview
-description: This article explains the concept of data classification in Microsoft Purview.
+ Title: Understand data classification in the Microsoft Purview governance portal
+description: This article explains the concepts behind data classification in the Microsoft Purview governance portal.
Last updated 01/04/2022
-# Data Classification in Microsoft Purview
+# Data classification in the Microsoft Purview governance portal
-Data classification, in the context of Microsoft Purview, is a way of categorizing data assets by assigning unique logical tags or classes to the data assets. Classification is based on the business context of the data. For example, you might classify assets by *Passport Number*, *Driver's License Number*, *Credit Card Number*, *SWIFT Code*, *Person's Name*, and so on.
+Data classification in the Microsoft Purview governance portal is a way of categorizing data assets by assigning unique logical tags or classes to the data assets. Classification is based on the business context of the data. For example, you might classify assets by *Passport Number*, *Driver's License Number*, *Credit Card Number*, *SWIFT Code*, *Person's Name*, and so on.
When you classify data assets, you make them easier to understand, search, and govern. Classifying data assets also helps you understand the risks associated with them. This in turn can help you implement measures to protect sensitive or important data from ungoverned proliferation and unauthorized access across the data estate.
-Microsoft Purview provides an automated classification capability while you scan your data sources. You get more than 200+ built-in system classifications and the ability to create custom classifications for your data. You can classify assets automatically when they're configured as part of a scan, or you can edit them manually in the Microsoft Purview governance portal after they're scanned and ingested.
+The Microsoft Purview Data Map provides an automated classification capability while you scan your data sources. You get 200+ built-in system classifications and the ability to create custom classifications for your data. You can classify assets automatically when they're configured as part of a scan, or you can edit them manually in the Microsoft Purview governance portal after they're scanned and ingested.
## Use of classification
-Classification is the process of organizing data into *logical categories* that make the data easy to retrieve, sort, and identify for future use. This can be particularly important for data governance. Among other reasons, classifying data assets is important because it helps you:
+Classification is the process of organizing data into *logical categories* that make the data easy to retrieve, sort, and identify for future use. This can be important for data governance. Among other reasons, classifying data assets is important because it helps you:
* Narrow down the search for data assets that you're interested in. * Organize and understand the variety of data classes that are important in your organization and where they're stored.
As shown in the following image, it's possible to apply classifications at both
## Types of classification
-Microsoft Purview supports both system and custom classifications.
+The Microsoft Purview governance portal supports both system and custom classifications.
-* **System classifications**: Microsoft Purview supports 200+ system classifications out of the box. For the entire list of available system classifications, see [Supported classifications in Microsoft Purview](./supported-classifications.md).
+* **System classifications**: 200+ system classifications supported out of the box. For the entire list of available system classifications, see [supported classifications in the Microsoft Purview governance portal](./supported-classifications.md).
In the example in the preceding image, *PersonΓÇÖs Name* is a system classification.
Custom classification rules can be based on a *regular expression* pattern or *d
Let's say that the *Employee ID* column follows the EMPLOYEE{GUID} pattern (for example, EMPLOYEE9c55c474-9996-420c-a285-0d0fc23f1f55). You can create your own custom classification by using a regular expression, such as `\^Employee\[A-Za-z0-9\]{8}-\[A-Za-z0-9\]{4}-\[A-Za-z0-9\]{4}-\[A-Za-z0-9\]{4}-\[A-Za-z0-9\]{12}\$`. > [!NOTE]
-> Sensitivity labels are different from classifications. Sensitivity labels categorize assets in the context of data security and privacy, such as *Highly Confidential*, *Restricted*, *Public*, and so on. To use sensitivity labels in the Microsoft Purview data map, you'll need at least one Microsoft 365 license or account within the same Azure Active Directory (Azure AD) tenant as your Microsoft Purview data map. For more information about the differences between sensitivity labels and classifications, see [Sensitivity labels in Microsoft Purview FAQ](sensitivity-labels-frequently-asked-questions.yml#what-is-the-difference-between-classifications-and-sensitivity-labels).
+> Sensitivity labels are different from classifications. Sensitivity labels categorize assets in the context of data security and privacy, such as *Highly Confidential*, *Restricted*, *Public*, and so on. To use sensitivity labels in the Microsoft Purview Data Map, you'll need at least one Microsoft 365 license or account within the same Azure Active Directory (Azure AD) tenant as your Microsoft Purview Data Map. For more information about the differences between sensitivity labels and classifications, see [sensitivity labels in the Microsoft Purview governance portal FAQ](sensitivity-labels-frequently-asked-questions.yml#what-is-the-difference-between-classifications-and-sensitivity-labels).
## Next steps
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
This guide will show you how to create and manage self-service access workflows
1. Approval connector that specifies a user or group that will be contacted to approve the request. 1. Condition to check approval status - If approved:
- 1. Condition to check if data source is registered for [data use governance](how-to-enable-data-use-governance.md) (policy)
+ 1. Condition to check if data source is registered for [data use management](how-to-enable-data-use-governance.md) (policy)
1. If a data source is registered with policy: 1. Create a [self-service policy](concept-self-service-data-access-policy.md) 1. Send email to requestor that access is provided
purview Tutorial Purview Audit Logs Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-purview-audit-logs-diagnostics.md
Title: Enable and capture Microsoft Purview audit logs and time series activity history via Azure Diagnostics event hubs
-description: This tutorial lists the step-by-step configuration required to enable and capture Microsoft Purview audit logs and time series activity history via Azure Diagnostics event hubs.
+ Title: Enable and capture audit logs and time series activity history for applications in the Microsoft Purview governance portal
+description: This tutorial lists the step-by-step configuration required to enable and capture audit logs for applications in the Microsoft Purview governance portal and time series activity history via Azure Diagnostics event hubs.
Last updated 02/10/2022
-# Microsoft Purview: Audit logs, diagnostics, and activity history
+# Audit logs, diagnostics, and activity history
-This tutorial lists the step-by-step configuration required to enable and capture Microsoft Purview audit and diagnostics logs via Azure Event Hubs.
+This tutorial lists the step-by-step configuration required to enable and capture audit and diagnostics logs for applications in the Microsoft Purview governance portal via Azure Event Hubs.
-An Microsoft Purview administrator or Microsoft Purview data-source admin needs the ability to monitor audit and diagnostics logs captured from [Microsoft Purview](https://azure.microsoft.com/services/purview/#get-started). Audit and diagnostics information consists of the timestamped history of actions taken and changes made to a Microsoft Purview account by every user. Captured activity history includes actions in the [Microsoft Purview portal](https://ms.web.purview.azure.com) and outside the portal. Actions outside the portal include calling [Microsoft Purview REST APIs](/rest/api/purview/) to perform write operations.
+A Microsoft Purview administrator or Microsoft Purview data-source admin needs the ability to monitor audit and diagnostics logs captured from [applications in the Microsoft Purview governance portal](https://azure.microsoft.com/services/purview/#get-started). Audit and diagnostics information consists of the timestamped history of actions taken and changes made to a Microsoft Purview account by every user. Captured activity history includes actions in the [Microsoft Purview governance portal](https://ms.web.purview.azure.com) and outside the portal. Actions outside the portal include calling [Microsoft Purview REST APIs](/rest/api/purview/) to perform write operations.
-This tutorial takes you through the steps to enable audit logging on Microsoft Purview. It also shows you how to configure and capture streaming audit events from Microsoft Purview via Azure Diagnostics event hubs.
+This tutorial takes you through the steps to enable audit logging. It also shows you how to configure and capture streaming audit events from the Microsoft Purview governance portal via Azure Diagnostics event hubs.
-## Microsoft Purview audit events categories
+## Audit events categories
-Some of the important categories of Microsoft Purview audit events that are currently available for capture and analysis are listed in the table.
+Some of the important categories of Microsoft Purview governance portal audit events that are currently available for capture and analysis are listed in the table.
-More types and categories of activity audit events will be added to Microsoft Purview.
+More types and categories of activity audit events will be added.
| Category | Activity | Operation | |||--|
More types and categories of activity audit events will be added to Microsoft Pu
| Management | Data source | Update | | Management | Data source | Delete |
-## Enable Microsoft Purview audit and diagnostics
+## Enable audit and diagnostics
-The following sections walk you through the process of enabling Microsoft Purview audit and diagnostics.
+The following sections walk you through the process of enabling audit and diagnostics.
### Configure Event Hubs
For step-by-step explanations and manual setup:
### Connect a Microsoft Purview account to Diagnostics event hubs
-Now that Event Hubs is deployed and created, connect Microsoft Purview diagnostics audit logging to Event Hubs.
+Now that Event Hubs is deployed and created, connect your Microsoft Purview account diagnostics audit logging to Event Hubs.
-1. Go to your Microsoft Purview account home page. This page is where the overview information is displayed. It's not the Microsoft Purview governance portal home page.
+1. Go to your Microsoft Purview account home page. This page is where the overview information is displayed in the [Azure portal](https://portal.azure.com). It's not the Microsoft Purview governance portal home page.
1. On the left menu, select **Monitoring** > **Diagnostic settings**.
Now that Event Hubs is deployed and created, connect Microsoft Purview diagnosti
:::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-f.png" alt-text="Screenshot that shows the Add or Edit Diagnostic settings screen." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-f.png":::
-1. Select the **audit** and **allLogs** checkboxes to enable collection of Microsoft Purview audit logs. Optionally, select **AllMetrics** if you also want to capture Data Map capacity units and Data Map size metrics of the Microsoft Purview account.
+1. Select the **audit** and **allLogs** checkboxes to enable collection of audit logs. Optionally, select **AllMetrics** if you also want to capture Data Map capacity units and Data Map size metrics of the account.
:::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-g.png" alt-text="Screenshot that shows configuring Microsoft Purview Diagnostic settings and selecting diagnostic types." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-g.png"::: Diagnostics configuration on the Microsoft Purview account is complete.
-Now that Microsoft Purview diagnostics audit logging configuration is complete, configure the data capture and data retention settings for Event Hubs.
+Now that diagnostics audit logging configuration is complete, configure the data capture and data retention settings for Event Hubs.
1. Go to the [Azure portal](https://portal.azure.com) home page, and search for the name of the Event Hubs namespace you created earlier.
Now that Microsoft Purview diagnostics audit logging configuration is complete,
:::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-i.png" alt-text="Screenshot that shows Event Hubs Properties message retention period." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-i.png":::
-1. At this stage, the Event Hubs configuration is complete. Microsoft Purview will start streaming all its audit history and diagnostics data to this event hub. You can now proceed to read, extract, and perform further analytics and operations on the captured diagnostics and audit events.
+1. At this stage, the Event Hubs configuration is complete. The Microsoft Purview governance portal will start streaming all its audit history and diagnostics data to this event hub. You can now proceed to read, extract, and perform further analytics and operations on the captured diagnostics and audit events.
### Read captured audit events
-To analyze the captured audit and diagnostics log data from Microsoft Purview:
+To analyze the captured audit and diagnostics log data:
-1. Go to **Process data** on the Event Hubs page to see a preview of the captured Microsoft Purview audit logs and diagnostics.
+1. Go to **Process data** on the Event Hubs page to see a preview of the captured audit logs and diagnostics.
:::image type="content" source="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-d.png" alt-text="Screenshot that shows configuring Event Hubs Process data." lightbox="./media/tutorial-purview-audit-logs-diagnostics/azure-purview-diagnostics-audit-eventhub-d.png":::
search Index Add Custom Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-custom-analyzers.md
- Previously updated : 09/08/2021+ Last updated : 06/24/2022 # Add custom analyzers to string fields in an Azure Cognitive Search index
-A *custom analyzer* is a user-defined combination of tokenizer, one or more token filters, and one or more character filters specified in the search index, and then referenced on field definitions that require custom analysis. The tokenizer is responsible for breaking text into tokens, and the token filters for modifying tokens emitted by the tokenizer. Character filters prepare the input text before it is processed by the tokenizer. For concepts and examples, see [Analyzers in Azure Cognitive Search](search-analyzers.md).
+A *custom analyzer* is a user-defined combination of tokenizer, one or more token filters, and one or more character filters. A custom analyzer is specified within a search index, and then referenced on field definitions that require custom analysis. A custom analyzer is invoked on a per-field basis. Attributes on the field will determine whether it's used for indexing, queries, or both.
+
+In a custom analyzer, character filters prepare the input text before it's processed by the tokenizer (for example, removing markup). Next, the tokenizer breaks text into tokens. Finally, token filters modify the tokens emitted by the tokenizer. For concepts and examples, see [Analyzers in Azure Cognitive Search](search-analyzers.md).
+
+## Why use a custom analyzer?
A custom analyzer gives you control over the process of converting text into indexable and searchable tokens by allowing you to choose which types of analysis or filtering to invoke, and the order in which they occur.
Scenarios where custom analyzers can be helpful include:
- Phonetic search. Add a phonetic filter to enable searching based on how a word sounds, not how it's spelled. -- Disable lexical analysis. Use the Keyword analyzer to create searchable fields that are not analyzed.
+- Disable lexical analysis. Use the Keyword analyzer to create searchable fields that aren't analyzed.
- Fast prefix/suffix search. Add the Edge N-gram token filter to index prefixes of words to enable fast prefix matching. Combine it with the Reverse token filter to do suffix matching.
Scenarios where custom analyzers can be helpful include:
- ASCII folding. Add the Standard ASCII folding filter to normalize diacritics like ö or ê in search terms. > [!NOTE]
-> Custom analyzers are not exposed in the Azure portal. The only way to add a custom analyzer is through code that defines an index.
+> Custom analyzers aren't exposed in the Azure portal. The only way to add a custom analyzer is through code that defines an index.
## Create a custom analyzer
To create a custom analyzer, specify it in the "analyzers" section of an index a
An analyzer definition includes a name, type, one or more character filters, a maximum of one tokenizer, and one or more token filters for post-tokenization processing. Character filters are applied before tokenization. Token filters and character filters are applied from left to right. -- Names in a custom analyzer must be unique and cannot be the same as any of the built-in analyzers, tokenizers, token filters, or characters filters. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.
+- Names in a custom analyzer must be unique and can't be the same as any of the built-in analyzers, tokenizers, token filters, or character filters. A name must contain only letters, digits, spaces, dashes, or underscores, can start and end only with alphanumeric characters, and is limited to 128 characters.
- Type must be #Microsoft.Azure.Search.CustomAnalyzer.
Within an index definition, you can place this section anywhere in the body of a
} ```
-The analyzer definition is a part of the larger index. Definitions for char filters, tokenizers, and token filters are added to the index only if you are setting custom options. To use an existing filter or tokenizer as-is, specify it by name in the analyzer definition. For more information, see [Create Index (REST)](/rest/api/searchservice/create-index). For more examples, see [Add analyzers in Azure Cognitive Search](search-analyzers.md#examples).
+The analyzer definition is a part of the larger index. Definitions for char filters, tokenizers, and token filters are added to the index only if you're setting custom options. To use an existing filter or tokenizer as-is, specify it by name in the analyzer definition. For more information, see [Create Index (REST)](/rest/api/searchservice/create-index). For more examples, see [Add analyzers in Azure Cognitive Search](search-analyzers.md#examples).
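As a rough sketch of that pattern (the `my_prefix_analyzer` and `my_edge_ngram` names are illustrative, not from this article), the analyzer below reuses the built-in `standard_v2` tokenizer and `lowercase` filter by name, and only defines the one token filter that needs custom options:

```json
"analyzers": [
  {
    "name": "my_prefix_analyzer",
    "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
    "charFilters": [],
    "tokenizer": "standard_v2",
    "tokenFilters": [ "lowercase", "my_edge_ngram" ]
  }
],
"tokenFilters": [
  {
    "name": "my_edge_ngram",
    "@odata.type": "#Microsoft.Azure.Search.EdgeNGramTokenFilterV2",
    "minGram": 2,
    "maxGram": 20
  }
]
```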
## Test custom analyzers
You can use the [Test Analyzer (REST)](/rest/api/searchservice/test-analyzer) to
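For example, a request shaped like the following sketch (placeholders throughout, and `my_prefix_analyzer` is the illustrative analyzer from the earlier sketch) returns the tokens an analyzer produces for a sample string:

```http
POST https://[search service name].search.windows.net/indexes/[index name]/analyze?api-version=[api-version]

{
  "text": "The quick brown fox",
  "analyzer": "my_prefix_analyzer"
}
```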
## Update custom analyzers
-Once an analyzer, a tokenizer, a token filter, or a character filter is defined, it cannot be modified. New ones can be added to an existing index only if the `allowIndexDowntime` flag is set to true in the index update request:
+Once an analyzer, a tokenizer, a token filter, or a character filter is defined, it can't be modified. New ones can be added to an existing index only if the `allowIndexDowntime` flag is set to true in the index update request:
```http PUT https://[search service name].search.windows.net/indexes/[index name]?api-version=[api-version]&allowIndexDowntime=true
This operation takes your index offline for at least a few seconds, causing your
## Built-in analyzers
-If you want to use a built-in analyzer with custom options, creating a custom analyzer is the mechanism by which you specify those options. In contrast, to use a built-in analyzer as-is, you simply need to [reference it by name](search-analyzers.md#how-to-specify-analyzers) in the field definition.
+If you want to use a built-in analyzer with custom options, creating a custom analyzer is the mechanism by which you specify those options. In contrast, to use a built-in analyzer as-is, you simply need to [reference it by name](search-analyzers.md) in the field definition.
|**analyzer_name**|**analyzer_type** <sup>1</sup>|**Description and Options**| |--|--|--|
If you want to use a built-in analyzer with custom options, creating a custom an
<sup>1</sup> Analyzer Types are always prefixed in code with "#Microsoft.Azure.Search" such that "PatternAnalyzer" would actually be specified as "#Microsoft.Azure.Search.PatternAnalyzer". We removed the prefix for brevity, but the prefix is required in your code.
-The analyzer_type is only provided for analyzers that can be customized. If there are no options, as is the case with the keyword analyzer, there is no associated #Microsoft.Azure.Search type.
+The analyzer_type is only provided for analyzers that can be customized. If there are no options, as is the case with the keyword analyzer, there's no associated #Microsoft.Azure.Search type.
<a name="CharFilter"></a>
Cognitive Search supports character filters in the following list. More informat
|[mapping](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/charfilter/MappingCharFilter.html)|MappingCharFilter|A char filter that applies mappings defined with the mappings option. Matching is greedy (longest pattern matching at a given point wins). Replacement is allowed to be the empty string. </br></br>**Options** </br></br> mappings (type: string array) - A list of mappings of the following format: "a=>b" (all occurrences of the character "a" are replaced with character "b"). Required.| |[pattern_replace](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/pattern/PatternReplaceCharFilter.html)|PatternReplaceCharFilter|A char filter that replaces characters in the input string. It uses a regular expression to identify character sequences to preserve and a replacement pattern to identify characters to replace. For example, input text = "aa bb aa bb", pattern="(aa)\\\s+(bb)" replacement="$1#$2", result = "aa#bb aa#bb". </br></br>**Options** </br></br>pattern (type: string) - Required. </br></br>replacement (type: string) - Required.|
- <sup>1</sup> Char Filter Types are always prefixed in code with "#Microsoft.Azure.Search" such that "MappingCharFilter" would actually be specified as "#Microsoft.Azure.Search.MappingCharFilter. We removed the prefix to reduce the width of the table, but please remember to include it in your code. Notice that char_filter_type is only provided for filters that can be customized. If there are no options, as is the case with html_strip, there is no associated #Microsoft.Azure.Search type.
+ <sup>1</sup> Char Filter Types are always prefixed in code with "#Microsoft.Azure.Search" such that "MappingCharFilter" would actually be specified as "#Microsoft.Azure.Search.MappingCharFilter". We removed the prefix to reduce the width of the table, but please remember to include it in your code. Notice that char_filter_type is only provided for filters that can be customized. If there are no options, as is the case with html_strip, there's no associated #Microsoft.Azure.Search type.
<a name="tokenizers"></a>
Cognitive Search supports tokenizers in the following list. More information abo
|[letter](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LetterTokenizer.html)|(type applies only when options are available) |Divides text at non-letters. Tokens that are longer than 255 characters are split.| |[lowercase](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LowerCaseTokenizer.html)|(type applies only when options are available) |Divides text at non-letters and converts them to lower case. Tokens that are longer than 255 characters are split.| | microsoft_language_tokenizer| MicrosoftLanguageTokenizer| Divides text using language-specific rules. </br></br>**Options** </br></br>maxTokenLength (type: int) - The maximum token length, default: 255, maximum: 300. Tokens longer than the maximum length are split. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the maxTokenLength set. </br></br>isSearchTokenizer (type: bool) - Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. </br></br>language (type: string) - Language to use, default "english". Allowed values include: </br>"bangla", "bulgarian", "catalan", "chineseSimplified", "chineseTraditional", "croatian", "czech", "danish", "dutch", "english", "french", "german", "greek", "gujarati", "hindi", "icelandic", "indonesian", "italian", "japanese", "kannada", "korean", "malay", "malayalam", "marathi", "norwegianBokmaal", "polish", "portuguese", "portugueseBrazilian", "punjabi", "romanian", "russian", "serbianCyrillic", "serbianLatin", "slovenian", "spanish", "swedish", "tamil", "telugu", "thai", "ukrainian", "urdu", "vietnamese" |
-| microsoft_language_stemming_tokenizer | MicrosoftLanguageStemmingTokenizer| Divides text using language-specific rules and reduces words to their base forms </br></br>**Options** </br></br>maxTokenLength (type: int) - The maximum token length, default: 255, maximum: 300. Tokens longer than the maximum length are split. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the maxTokenLength set. </br></br> isSearchTokenizer (type: bool) - Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. </br></br>language (type: string) - Language to use, default "english". Allowed values include: </br>"arabic", "bangla", "bulgarian", "catalan", "croatian", "czech", "danish", "dutch", "english", "estonian", "finnish", "french", "german", "greek", "gujarati", "hebrew", "hindi", "hungarian", "icelandic", "indonesian", "italian", "kannada", "latvian", "lithuanian", "malay", "malayalam", "marathi", "norwegianBokmaal", "polish", "portuguese", "portugueseBrazilian", "punjabi", "romanian", "russian", "serbianCyrillic", "serbianLatin", "slovak", "slovenian", "spanish", "swedish", "tamil", "telugu", "turkish", "ukrainian", "urdu" |
+| microsoft_language_stemming_tokenizer | MicrosoftLanguageStemmingTokenizer| Divides text using language-specific rules and reduces words to their base forms. This tokenizer performs lemmatization. </br></br>**Options** </br></br>maxTokenLength (type: int) - The maximum token length, default: 255, maximum: 300. Tokens longer than the maximum length are split. Tokens longer than 300 characters are first split into tokens of length 300 and then each of those tokens is split based on the maxTokenLength set. </br></br> isSearchTokenizer (type: bool) - Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. </br></br>language (type: string) - Language to use, default "english". Allowed values include: </br>"arabic", "bangla", "bulgarian", "catalan", "croatian", "czech", "danish", "dutch", "english", "estonian", "finnish", "french", "german", "greek", "gujarati", "hebrew", "hindi", "hungarian", "icelandic", "indonesian", "italian", "kannada", "latvian", "lithuanian", "malay", "malayalam", "marathi", "norwegianBokmaal", "polish", "portuguese", "portugueseBrazilian", "punjabi", "romanian", "russian", "serbianCyrillic", "serbianLatin", "slovak", "slovenian", "spanish", "swedish", "tamil", "telugu", "turkish", "ukrainian", "urdu" |
|[nGram](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ngram/NGramTokenizer.html)|NGramTokenizer|Tokenizes the input into n-grams of the given size(s). </br></br>**Options** </br></br>minGram (type: int) - Default: 1, maximum: 300. </br></br>maxGram (type: int) - Default: 2, maximum: 300. Must be greater than minGram. </br></br>tokenChars (type: string array) - Character classes to keep in the tokens. Allowed values: "letter", "digit", "whitespace", "punctuation", "symbol". Defaults to an empty array - keeps all characters. | |[path_hierarchy_v2](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html)|PathHierarchyTokenizerV2|Tokenizer for path-like hierarchies. **Options** </br></br>delimiter (type: string) - Default: '/'. </br></br>replacement (type: string) - If set, replaces the delimiter character. Default same as the value of delimiter. </br></br>maxTokenLength (type: int) - The maximum token length. Default: 300, maximum: 300. Paths longer than maxTokenLength are ignored. </br></br>reverse (type: bool) - If true, generates token in reverse order. Default: false. </br></br>skip (type: int) - Initial tokens to skip. The default is 0.| |[pattern](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/pattern/PatternTokenizer.html)|PatternTokenizer|This tokenizer uses regex pattern matching to construct distinct tokens. </br></br>**Options** </br></br> [pattern](https://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html) (type: string) - Regular expression pattern to match token separators. The default is `\W+`, which matches non-word characters. </br></br>[flags](https://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html#field_summary) (type: string) - Regular expression flags. The default is an empty string. Allowed values: CANON_EQ, CASE_INSENSITIVE, COMMENTS, DOTALL, LITERAL, MULTILINE, UNICODE_CASE, UNIX_LINES </br></br>group (type: int) - Which group to extract into tokens. The default is -1 (split).|
Cognitive Search supports tokenizers in the following list. More information abo
|[uax_url_email](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/standard/UAX29URLEmailTokenizer.html)|UaxUrlEmailTokenizer|Tokenizes urls and emails as one token. </br></br>**Options** </br></br> maxTokenLength (type: int) - The maximum token length. Default: 255, maximum: 300. Tokens longer than the maximum length are split.| |[whitespace](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/WhitespaceTokenizer.html)|(type applies only when options are available) |Divides text at whitespace. Tokens that are longer than 255 characters are split.|
- <sup>1</sup> Tokenizer Types are always prefixed in code with "#Microsoft.Azure.Search" such that "ClassicTokenizer" would actually be specified as "#Microsoft.Azure.Search.ClassicTokenizer". We removed the prefix to reduce the width of the table, but please remember to include it in your code. Notice that tokenizer_type is only provided for tokenizers that can be customized. If there are no options, as is the case with the letter tokenizer, there is no associated #Microsoft.Azure.Search type.
+ <sup>1</sup> Tokenizer Types are always prefixed in code with "#Microsoft.Azure.Search" such that "ClassicTokenizer" would actually be specified as "#Microsoft.Azure.Search.ClassicTokenizer". We removed the prefix to reduce the width of the table, but please remember to include it in your code. Notice that tokenizer_type is only provided for tokenizers that can be customized. If there are no options, as is the case with the letter tokenizer, there's no associated #Microsoft.Azure.Search type.
<a name="TokenFilters"></a> ## Token filters
-A token filter is used to filter out or modify the tokens generated by a tokenizer. For example, you can specify a lowercase filter that converts all characters to lowercase. You can have multiple token filters in a custom analyzer. Token filters run in the order in which they are listed.
+A token filter is used to filter out or modify the tokens generated by a tokenizer. For example, you can specify a lowercase filter that converts all characters to lowercase. You can have multiple token filters in a custom analyzer. Token filters run in the order in which they're listed.
In the table below, the token filters that are implemented using Apache Lucene are linked to the Lucene API documentation.
In the table below, the token filters that are implemented using Apache Lucene a
|-|-|-| |[arabic_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html)|(type applies only when options are available) |A token filter that applies the Arabic normalizer to normalize the orthography.| |[apostrophe](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/tr/ApostropheFilter.html)|(type applies only when options are available) |Strips all characters after an apostrophe (including the apostrophe itself). |
-|[asciifolding](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html)|AsciiFoldingTokenFilter|Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists.<br /><br /> **Options**<br /><br /> preserveOriginal (type: bool) - If true, the original token is kept. The default is false.|
+|[asciifolding](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html)|AsciiFoldingTokenFilter|Converts alphabetic, numeric, and symbolic Unicode characters which aren't in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists.<br /><br /> **Options**<br /><br /> preserveOriginal (type: bool) - If true, the original token is kept. The default is false.|
|[cjk_bigram](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/cjk/CJKBigramFilter.html)|CjkBigramTokenFilter|Forms bigrams of CJK terms that are generated from StandardTokenizer.<br /><br /> **Options**<br /><br /> ignoreScripts (type: string array) - Scripts to ignore. Allowed values include: "han", "hiragana", "katakana", "hangul". The default is an empty list.<br /><br /> outputUnigrams (type: bool) - Set to true if you always want to output both unigrams and bigrams. The default is false.| |[cjk_width](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html)|(type applies only when options are available) |Normalizes CJK width differences. Folds full width ASCII variants into the equivalent basic latin and half-width Katakana variants into the equivalent kana. | |[classic](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/standard/ClassicFilter.html)|(type applies only when options are available) |Removes the English possessives, and dots from acronyms. | |[common_grams](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html)|CommonGramTokenFilter|Construct bigrams for frequently occurring terms while indexing. Single terms are still indexed too, with bigrams overlaid.<br /><br /> **Options**<br /><br /> commonWords (type: string array) - The set of common words. The default is an empty list. Required.<br /><br /> ignoreCase (type: bool) - If true, matching is case insensitive. The default is false.<br /><br /> queryMode (type: bool) - Generates bigrams then removes common words and single terms followed by a common word. The default is false.| |[dictionary_decompounder](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/compound/DictionaryCompoundWordTokenFilter.html)|DictionaryDecompounderTokenFilter|Decomposes compound words found in many Germanic languages.<br /><br /> **Options**<br /><br /> wordList (type: string array) - The list of words to match against. The default is an empty list. Required.<br /><br /> minWordSize (type: int) - Only words longer than this get processed. The default is 5.<br /><br /> minSubwordSize (type: int) - Only subwords longer than this are outputted. The default is 2.<br /><br /> maxSubwordSize (type: int) - Only subwords shorter than this are outputted. The default is 15.<br /><br /> onlyLongestMatch (type: bool) - Add only the longest matching subword to output. The default is false.| |[edgeNGram_v2](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.html)|EdgeNGramTokenFilterV2|Generates n-grams of the given size(s) from starting from the front or the back of an input token.<br /><br /> **Options**<br /><br /> minGram (type: int) - Default: 1, maximum: 300.<br /><br /> maxGram (type: int) - Default: 2, maximum 300. Must be greater than minGram.<br /><br /> side (type: string) - Specifies which side of the input the n-gram should be generated from. Allowed values: "front", "back" |
-|[elision](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html)|ElisionTokenFilter|Removes elisions. For example, "l'avion" (the plane) is converted to "avion" (plane).<br /><br /> **Options**<br /><br /> articles (type: string array) - A set of articles to remove. The default is an empty list. If there is no list of articles set, by default all French articles are removed.|
+|[elision](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html)|ElisionTokenFilter|Removes elisions. For example, "l'avion" (the plane) is converted to "avion" (plane).<br /><br /> **Options**<br /><br /> articles (type: string array) - A set of articles to remove. The default is an empty list. If there's no list of articles set, by default all French articles are removed.|
|[german_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html)|(type applies only when options are available) |Normalizes German characters according to the heuristics of the [German2 snowball algorithm](https://snowballstem.org/algorithms/german2/stemmer.html) .| |[hindi_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizationFilter.html)|(type applies only when options are available) |Normalizes text in Hindi to remove some differences in spelling variations. | |[indic_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizationFilter.html)|IndicNormalizationTokenFilter|Normalizes the Unicode representation of text in Indian languages.
In the table below, the token filters that are implemented using Apache Lucene a
|[reverse](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/reverse/ReverseStringFilter.html)|(type applies only when options are available) |Reverses the token string. | |[scandinavian_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html)|(type applies only when options are available) |Normalizes use of the interchangeable Scandinavian characters. | |[scandinavian_folding](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html)|(type applies only when options are available) |Folds Scandinavian characters åÅäæÄÆ->a and öÖøØ->o. It also discriminates against use of double vowels aa, ae, ao, oe and oo, leaving just the first one. |
-|[shingle](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html)|ShingleTokenFilter|Creates combinations of tokens as a single token.<br /><br /> **Options**<br /><br /> maxShingleSize (type: int) - Defaults to 2.<br /><br /> minShingleSize (type: int) - Defaults to 2.<br /><br /> outputUnigrams (type: bool) - if true, the output stream contains the input tokens (unigrams) as well as shingles. The default is true.<br /><br /> outputUnigramsIfNoShingles (type: bool) - If true, override the behavior of outputUnigrams==false for those times when no shingles are available. The default is false.<br /><br /> tokenSeparator (type: string) - The string to use when joining adjacent tokens to form a shingle. The default is " ".<br /><br /> filterToken (type: string) - The string to insert for each position at which there is no token. The default is "_".|
+|[shingle](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html)|ShingleTokenFilter|Creates combinations of tokens as a single token.<br /><br /> **Options**<br /><br /> maxShingleSize (type: int) - Defaults to 2.<br /><br /> minShingleSize (type: int) - Defaults to 2.<br /><br /> outputUnigrams (type: bool) - if true, the output stream contains the input tokens (unigrams) as well as shingles. The default is true.<br /><br /> outputUnigramsIfNoShingles (type: bool) - If true, override the behavior of outputUnigrams==false for those times when no shingles are available. The default is false.<br /><br /> tokenSeparator (type: string) - The string to use when joining adjacent tokens to form a shingle. The default is " ".<br /><br /> filterToken (type: string) - The string to insert for each position for which there is no token. The default is "_".|
|[snowball](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/snowball/SnowballFilter.html)|SnowballTokenFilter|Snowball Token Filter.<br /><br /> **Options**<br /><br /> language (type: string) - Allowed values include: "armenian", "basque", "catalan", "danish", "dutch", "english", "finnish", "french", "german", "german2", "hungarian", "italian", "kp", "lovins", "norwegian", "porter", "portuguese", "romanian", "russian", "spanish", "swedish", "turkish"| |[sorani_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizationFilter.html)|SoraniNormalizationTokenFilter|Normalizes the Unicode representation of Sorani text.<br /><br /> **Options**<br /><br /> None.| |stemmer|StemmerTokenFilter|Language-specific stemming filter.<br /><br /> **Options**<br /><br /> language (type: string) - Allowed values include: <br /> - ["arabic"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ar/ArabicStemmer.html)<br />- ["armenian"](https://snowballstem.org/algorithms/armenian/stemmer.html)<br />- ["basque"](https://snowballstem.org/algorithms/basque/stemmer.html)<br />- ["brazilian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/br/BrazilianStemmer.html)<br />- "bulgarian"<br />- ["catalan"](https://snowballstem.org/algorithms/catalan/stemmer.html)<br />- ["czech"](https://portal.acm.org/citation.cfm?id=1598600)<br />- ["danish"](https://snowballstem.org/algorithms/danish/stemmer.html)<br />- ["dutch"](https://snowballstem.org/algorithms/dutch/stemmer.html)<br />- ["dutchKp"](https://snowballstem.org/algorithms/kraaij_pohlmann/stemmer.html)<br />- ["english"](https://snowballstem.org/algorithms/porter/stemmer.html)<br />- ["lightEnglish"](https://ciir.cs.umass.edu/pubfiles/ir-35.pdf)<br />- ["minimalEnglish"](https://www.researchgate.net/publication/220433848_How_effective_is_suffixing)<br />- ["possessiveEnglish"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/en/EnglishPossessiveFilter.html)<br />- ["porter2"](https://snowballstem.org/algorithms/english/stemmer.html)<br />- ["lovins"](https://snowballstem.org/algorithms/lovins/stemmer.html)<br />- ["finnish"](https://snowballstem.org/algorithms/finnish/stemmer.html)<br />- "lightFinnish"<br />- ["french"](https://snowballstem.org/algorithms/french/stemmer.html)<br />- ["lightFrench"](https://dl.acm.org/citation.cfm?id=1141523)<br />- ["minimalFrench"](https://dl.acm.org/citation.cfm?id=318984)<br />- "galician"<br />- "minimalGalician"<br />- ["german"](https://snowballstem.org/algorithms/german/stemmer.html)<br />- ["german2"](https://snowballstem.org/algorithms/german2/stemmer.html)<br />- ["lightGerman"](https://dl.acm.org/citation.cfm?id=1141523)<br />- "minimalGerman"<br />- ["greek"](https://sais.se/mthprize/2007/ntais2007.pdf)<br />- "hindi"<br />- ["hungarian"](https://snowballstem.org/algorithms/hungarian/stemmer.html)<br />- ["lightHungarian"](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)<br />- ["indonesian"](https://eprints.illc.uva.nl/741/2/MoL-2003-03.text.pdf)<br />- ["irish"](https://snowballstem.org/algorithms/irish/stemmer.html)<br />- ["italian"](https://snowballstem.org/algorithms/italian/stemmer.html)<br />- ["lightItalian"](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf)<br />- 
["sorani"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ckb/SoraniStemmer.html)<br />- ["latvian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/lv/LatvianStemmer.html)<br />- ["norwegian"](https://snowballstem.org/algorithms/norwegian/stemmer.html)<br />- ["lightNorwegian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html)<br />- ["minimalNorwegian"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html)<br />- ["lightNynorsk"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html)<br />- ["minimalNynorsk"](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html)<br />- ["portuguese"](https://snowballstem.org/algorithms/portuguese/stemmer.html)<br />- ["lightPortuguese"](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)<br />- ["minimalPortuguese"](https://www.inf.ufrgs.br/~buriol/papers/Orengo_CLEF07.pdf)<br />- ["portugueseRslp"](https://www.inf.ufrgs.br//~viviane/rslp/index.htm)<br />- ["romanian"](https://snowballstem.org/otherapps/romanian/)<br />- ["russian"](https://snowballstem.org/algorithms/russian/stemmer.html)<br />- ["lightRussian"](https://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf)<br />- ["spanish"](https://snowballstem.org/algorithms/spanish/stemmer.html)<br />- ["lightSpanish"](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf)<br />- ["swedish"](https://snowballstem.org/algorithms/swedish/stemmer.html)<br />- "lightSwedish"<br />- ["turkish"](https://snowballstem.org/algorithms/turkish/stemmer.html)| |[stemmer_override](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/StemmerOverrideFilter.html)|StemmerOverrideTokenFilter|Any dictionary-Stemmed terms are marked as keywords, which prevents stemming down the chain. Must be placed before any stemming filters.<br /><br /> **Options**<br /><br /> rules (type: string array) - Stemming rules in the following format "word => stem" for example "ran => run". The default is an empty list. Required.|
-|[stopwords](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html)|StopwordsTokenFilter|Removes stop words from a token stream. By default, the filter uses a predefined stop word list for English.<br /><br /> **Options**<br /><br /> stopwords (type: string array) - A list of stopwords. Cannot be specified if a stopwordsList is specified.<br /><br /> stopwordsList (type: string) - A predefined list of stopwords. Cannot be specified if stopwords is specified. Allowed values include:"arabic", "armenian", "basque", "brazilian", "bulgarian", "catalan", "czech", "danish", "dutch", "english", "finnish", "french", "galician", "german", "greek", "hindi", "hungarian", "indonesian", "irish", "italian", "latvian", "norwegian", "persian", "portuguese", "romanian", "russian", "sorani", "spanish", "swedish", "thai", "turkish", default: "english". Cannot be specified if stopwords is specified. <br /><br /> ignoreCase (type: bool) - If true, all words are lower cased first. The default is false.<br /><br /> removeTrailing (type: bool) - If true, ignore the last search term if it's a stop word. The default is true.
+|[stopwords](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html)|StopwordsTokenFilter|Removes stop words from a token stream. By default, the filter uses a predefined stop word list for English.<br /><br /> **Options**<br /><br /> stopwords (type: string array) - A list of stopwords. Can't be specified if a stopwordsList is specified.<br /><br /> stopwordsList (type: string) - A predefined list of stopwords. Can't be specified if "stopwords" is specified. Allowed values include: "arabic", "armenian", "basque", "brazilian", "bulgarian", "catalan", "czech", "danish", "dutch", "english", "finnish", "french", "galician", "german", "greek", "hindi", "hungarian", "indonesian", "irish", "italian", "latvian", "norwegian", "persian", "portuguese", "romanian", "russian", "sorani", "spanish", "swedish", "thai", "turkish". The default is "english".<br /><br /> ignoreCase (type: bool) - If true, all words are lower cased first. The default is false.<br /><br /> removeTrailing (type: bool) - If true, ignore the last search term if it's a stop word. The default is true.|
|[synonym](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/synonym/SynonymFilter.html)|SynonymTokenFilter|Matches single or multi word synonyms in a token stream.<br /><br /> **Options**<br /><br /> synonyms (type: string array) - Required. List of synonyms in one of the following two formats:<br /><br /> -incredible, unbelievable, fabulous => amazing - all terms on the left side of => symbol are replaced with all terms on its right side.<br /><br /> -incredible, unbelievable, fabulous, amazing - A comma-separated list of equivalent words. Set the expand option to change how this list is interpreted.<br /><br /> ignoreCase (type: bool) - Case-folds input for matching. The default is false.<br /><br /> expand (type: bool) - If true, all words in the list of synonyms (if => notation is not used) map to one another. <br />The following list: incredible, unbelievable, fabulous, amazing is equivalent to: incredible, unbelievable, fabulous, amazing => incredible, unbelievable, fabulous, amazing<br /><br />- If false, the following list: incredible, unbelievable, fabulous, amazing are equivalent to: incredible, unbelievable, fabulous, amazing => incredible.| |[trim](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html)|(type applies only when options are available) |Trims leading and trailing whitespace from tokens. | |[truncate](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilter.html)|TruncateTokenFilter|Truncates the terms into a specific length.<br /><br /> **Options**<br /><br /> length (type: int) - Default: 300, maximum: 300. Required.|
search Index Add Language Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-language-analyzers.md
- Previously updated : 09/08/2021+ Last updated : 06/24/2022 # Add language analyzers to string fields in an Azure Cognitive Search index
You should consider a language analyzer when awareness of word or sentence struc
You should also consider language analyzers when content consists of non-Western language strings. While the [default analyzer (Standard Lucene)](search-analyzers.md#default-analyzer) is language-agnostic, the concept of using spaces and special characters (hyphens and slashes) to separate strings is more applicable to Western languages than non-Western ones.
-For example, in Chinese, Japanese, Korean (CJK), and other Asian languages, a space is not necessarily a word delimiter. Consider the following Japanese string. Because it has no spaces, a language-agnostic analyzer would likely analyze the entire string as one token, when in fact the string is actually a phrase.
+For example, in Chinese, Japanese, Korean (CJK), and other Asian languages, a space isn't necessarily a word delimiter. Consider the following Japanese string. Because it has no spaces, a language-agnostic analyzer would likely analyze the entire string as one token, when in fact the string is actually a phrase.
``` これは私たちの銀河系の中ではもっとも重く明るいクラスの球状星団です。
Azure Cognitive Search supports 35 language analyzers backed by Lucene, and 50 l
Some developers might prefer the more familiar, simple, open-source solution of Lucene. Lucene language analyzers are faster, but the Microsoft analyzers have advanced capabilities, such as lemmatization, word decompounding (in languages like German, Danish, Dutch, Swedish, Norwegian, Estonian, Finnish, Hungarian, Slovak) and entity recognition (URLs, emails, dates, numbers). If possible, you should run comparisons of both the Microsoft and Lucene analyzers to decide which one is a better fit. You can use the [Analyze API](/rest/api/searchservice/test-analyzer) to see the tokens generated from a given text using a specific analyzer.
-Indexing with Microsoft analyzers is on average two to three times slower than their Lucene equivalents, depending on the language. Search performance should not be significantly affected for average size queries.
+Indexing with Microsoft analyzers is on average two to three times slower than their Lucene equivalents, depending on the language. Search performance shouldn't be significantly affected for average size queries.
### English analyzers
The default analyzer is Standard Lucene, which works well for English, but perha
+ Lucene's English analyzer extends the Standard analyzer. It removes possessives (trailing 's) from words, applies stemming as per Porter Stemming algorithm, and removes English stop words.
-+ Microsoft's English analyzer performs lemmatization instead of stemming. This means it can handle inflected and irregular word forms much better which results in more relevant search results
++ Microsoft's English analyzer performs lemmatization instead of stemming. This means it can handle inflected and irregular word forms much better, which results in more relevant search results. ## How to specify a language analyzer
-Set the analyzer during index creation, before it's loaded with data.
+Set the analyzer during index creation before it's loaded with data.
1. In the field definition, make sure the field is attributed as "searchable" and is of type Edm.String. 1. Set the "analyzer" property to one of the language analyzers from the [supported analyzers list](#language-analyzer-list).
- The "analyzer" property is the only property that will accept a language analyzer, and it's used for both indexing and queries. Other analyzer-related properties ("searchAnalyzer" and "indexAnalyzer") will not accept a language analyzer.
+ The "analyzer" property is the only property that will accept a language analyzer, and it's used for both indexing and queries. Other analyzer-related properties ("searchAnalyzer" and "indexAnalyzer") won't accept a language analyzer.
-Language analyzers cannot be customized. If an analyzer isn't meeting your requirements, you can try creating a [custom analyzer](cognitive-search-working-with-skillsets.md) with the microsoft_language_tokenizer or microsoft_language_stemming_tokenizer, and add filters for pre- and post-tokenization processing.
+Language analyzers can't be customized. If an analyzer isn't meeting your requirements, create a [custom analyzer](index-add-custom-analyzers.md) with the microsoft_language_tokenizer or microsoft_language_stemming_tokenizer, and then add filters for pre- and post-tokenization processing.
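A minimal sketch of that workaround, assuming English content and using illustrative names, might pair the stemming tokenizer with a lowercase filter:

```json
"analyzers": [
  {
    "name": "my_english_analyzer",
    "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
    "tokenizer": "my_english_stemming_tokenizer",
    "tokenFilters": [ "lowercase" ]
  }
],
"tokenizers": [
  {
    "name": "my_english_stemming_tokenizer",
    "@odata.type": "#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer",
    "language": "english",
    "isSearchTokenizer": false
  }
]
```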
The following example illustrates a language analyzer specification in an index:
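A representative sketch (the index and field names are placeholders rather than the article's own sample) assigns a Microsoft French analyzer to a searchable string field:

```json
{
  "name": "hotels-sample-index",
  "fields": [
    {
      "name": "Description_fr",
      "type": "Edm.String",
      "searchable": true,
      "analyzer": "fr.microsoft"
    }
  ]
}
```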
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-ranking-similarity.md
Title: Configure BM25 similarity algorithm
+ Title: Configure scoring algorithm
description: Enable Okapi BM25 ranking to upgrade the search ranking and relevance behavior on older Azure Search services.
Last updated 06/22/2022
-# Configure the similarity ranking algorithm in Azure Cognitive Search
+# Configure the scoring algorithm in Azure Cognitive Search
-Depending on the age of your search service, Azure Cognitive Search supports two [similarity ranking algorithms](index-similarity-and-scoring.md) for scoring relevance on full text search results:
+Depending on the age of your search service, Azure Cognitive Search supports two [scoring algorithms](index-similarity-and-scoring.md) for assigning relevance to results in a full text search query:
+ An *Okapi BM25* algorithm, used in all search services created after July 15, 2020 + A *classic similarity* algorithm, used by all search services created before July 15, 2020
-BM25 ranking is the default because it tends to produce search rankings that align better with user expectations. It includes [parameters](#set-bm25-parameters) for tuning results based on factors such as document size. For search services created after July 2020, BM25 is the sole similarity algorithm. If you try to set "similarity" to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
+BM25 ranking is the default because it tends to produce search rankings that align better with user expectations. It includes [parameters](#set-bm25-parameters) for tuning results based on factors such as document size. For search services created after July 2020, BM25 is the sole scoring algorithm. If you try to set "similarity" to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
For older services, classic similarity remains the default algorithm. Older services can [upgrade to BM25](#enable-bm25-scoring-on-older-services) on a per-index basis. When switching from classic to BM25, you can expect to see some differences in how search results are ordered.
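As a hedged sketch, the algorithm and its optional tuning parameters are declared through the index's "similarity" property; the `k1` and `b` values below are illustrative, not recommendations:

```json
"similarity": {
  "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
  "k1": 1.3,
  "b": 0.5
}
```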
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Title: Similarity and scoring
+ Title: Relevance and scoring
-description: Explains the concepts of similarity and scoring in Azure Cognitive Search, and what a developer can do to customize the scoring result.
+description: Explains the concepts of relevance and scoring in Azure Cognitive Search, and what a developer can do to customize the scoring result.
Last updated 06/22/2022
-# Similarity and scoring in Azure Cognitive Search
+# Relevance and scoring in Azure Cognitive Search
-This article describes relevance scoring and the similarity ranking algorithms used to compute search scores in Azure Cognitive Search. A relevance score applies to matches returned in [full text search](search-lucene-query-architecture.md), where the most relevant matches appear first. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries are not scored or ranked for relevance.
+This article describes relevance and the scoring algorithms used to compute search scores in Azure Cognitive Search. A relevance score applies to matches returned in [full text search](search-lucene-query-architecture.md), where the most relevant matches appear first. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries are not scored or ranked for relevance.
In Azure Cognitive Search, you can tune search relevance and boost search scores through these mechanisms:
-+ Similarity ranking configuration
++ Scoring algorithm configuration + Semantic ranking (in preview, described in [this article](semantic-ranking.md)) + Scoring profiles + Custom scoring logic enabled through the *featuresMode* parameter
If you want to break the tie among repeating scores, you can add an **$orderby**
> [!NOTE] > A `@search.score = 1` indicates an un-scored or un-ranked result set. The score is uniform across all results. Un-scored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`, sometimes paired with filters, where the filter is the primary means for returning a match).
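To illustrate tie-breaking with **$orderby**, a query sketch such as the following adds a secondary sort after the relevance score (`Rating` is an assumed sortable field, not one defined in this article):

```http
POST https://[search service name].search.windows.net/indexes/[index name]/docs/search?api-version=[api-version]

{
  "search": "beach access",
  "orderby": "search.score() desc, Rating desc"
}
```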
-## Similarity ranking algorithms
+## Scoring algorithms in Search
Azure Cognitive Search provides the `BM25Similarity` ranking algorithm. On older search services, you might be using `ClassicSimilarity`. Both BM25 and Classic are TF-IDF-like retrieval functions that use the term frequency (TF) and the inverse document frequency (IDF) as variables to calculate relevance scores for each document-query pair, which is then used for ranking results. While conceptually similar to classic, BM25 is rooted in probabilistic information retrieval that produces more intuitive matches, as measured by user research.
-BM25 offers advanced customization options, such as allowing the user to decide how the relevance score scales with the term frequency of matched terms. For more information, see [Configure the similarity ranking algorithm](index-ranking-similarity.md).
+BM25 offers advanced customization options, such as allowing the user to decide how the relevance score scales with the term frequency of matched terms. For more information, see [Configure the scoring algorithm](index-ranking-similarity.md).
> [!NOTE]
-> If you're using a search service that was created before July 2020, the similarity algorithm is most likely the previous default, `ClassicSimilarity`, which you an upgrade on a per-index basis. See [Enable BM25 scoring on older services](index-ranking-similarity.md#enable-bm25-scoring-on-older-services) for details.
+> If you're using a search service that was created before July 2020, the scoring algorithm is most likely the previous default, `ClassicSimilarity`, which you can upgrade on a per-index basis. See [Enable BM25 scoring on older services](index-ranking-similarity.md#enable-bm25-scoring-on-older-services) for details.
The following video segment fast-forwards to an explanation of the generally available ranking algorithms used in Azure Cognitive Search. You can watch the full video for more background.
search Search Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-analyzers.md
Previously updated : 05/11/2022 Last updated : 06/24/2022
Analysis applies to `Edm.String` fields that are marked as "searchable", which i
For fields of this configuration, analysis occurs during indexing when tokens are created, and then again during query execution when queries are parsed and the engine scans for matching tokens. A match is more likely to occur when the same analyzer is used for both indexing and queries, but you can set the analyzer for each workload independently, depending on your requirements.
-Query types that are *not* full text search, such as filters or fuzzy search, do not go through the analysis phase on the query side. Instead, the parser sends those strings directly to the search engine, using the pattern that you provide as the basis for the match. Typically, these query forms require whole-string tokens to make pattern matching work. To ensure whole terms tokens during indexing, you might need [custom analyzers](index-add-custom-analyzers.md). For more information about when and why query terms are analyzed, see [Full text search in Azure Cognitive Search](search-lucene-query-architecture.md).
+Query types that are *not* full text search, such as filters or fuzzy search, don't go through the analysis phase on the query side. Instead, the parser sends those strings directly to the search engine, using the pattern that you provide as the basis for the match. Typically, these query forms require whole-string tokens to make pattern matching work. To ensure whole-term tokens during indexing, you might need [custom analyzers](index-add-custom-analyzers.md). For more information about when and why query terms are analyzed, see [Full text search in Azure Cognitive Search](search-lucene-query-architecture.md).
For more background on lexical analysis, listen to the following video clip for a brief explanation.
The following list describes which analyzers are available in Azure Cognitive Se
A few built-in analyzers, such as **Pattern** or **Stop**, support a limited set of configuration options. To set these options, create a custom analyzer, consisting of the built-in analyzer and one of the alternative options documented in [Built-in analyzers](index-add-custom-analyzers.md#built-in-analyzers). As with any custom configuration, provide your new configuration with a name, such as *myPatternAnalyzer* to distinguish it from the Lucene Pattern analyzer.
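For instance, a hedged sketch of a customized Pattern analyzer (the delimiter pattern is illustrative only) could look like this:

```json
"analyzers": [
  {
    "name": "myPatternAnalyzer",
    "@odata.type": "#Microsoft.Azure.Search.PatternAnalyzer",
    "lowercase": true,
    "pattern": "[;,]",
    "stopwords": []
  }
]
```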
-## How to specify analyzers
+## Specifying analyzers
Setting an analyzer is optional. As a general rule, try using the default standard Lucene analyzer first to see how it performs. If queries fail to return the expected results, switching to a different analyzer is often the right solution.
-1. When creating a field definition in the [index](/rest/api/searchservice/create-index), set the "analyzer" property to one of the following: a [built-in analyzer](index-add-custom-analyzers.md#built-in-analyzers) such as **keyword**, a [language analyzer](index-add-language-analyzers.md) such as `en.microsoft`, or a custom analyzer (defined in the same index schema).
+1. If you're using a custom analyzer, add it to the search index under the "analyzer" section. For more information, see [Create Index](/rest/api/searchservice/create-index) and also [Add custom analyzers](index-add-custom-analyzers.md).
+
+1. When defining a field, set its "analyzer" property to one of the following: a [built-in analyzer](index-add-custom-analyzers.md#built-in-analyzers) such as **keyword**, a [language analyzer](index-add-language-analyzers.md) such as `en.microsoft`, or a custom analyzer (defined in the same index schema).
```json "fields": [
Setting an analyzer is optional. As a general rule, try using the default standa
}, ```
- If you are using a [language analyzer](index-add-language-analyzers.md), you must use the "analyzer" property to specify it. The "searchAnalyzer" and "indexAnalyzer" properties do not apply to language analyzers.
+1. If you're using a [language analyzer](index-add-language-analyzers.md), you must use the "analyzer" property to specify it. The "searchAnalyzer" and "indexAnalyzer" properties don't apply to language analyzers.
-1. Alternatively, set "indexAnalyzer" and "searchAnalyzer" to vary the analyzer for each workload. These properties are set together and replace the "analyzer" property, which must be null. You might use different analyzers for indexing and queries if one of those activities required a specific transformation not needed by the other.
+1. Alternatively, set "indexAnalyzer" and "searchAnalyzer" to vary the analyzer for each workload. These properties work together as a substitute for the "analyzer" property, which must be null. You might use different analyzers for indexing and queries if one of those activities required a specific transformation not needed by the other.
```json "fields": [
Setting an analyzer is optional. As a general rule, try using the default standa
}, ```
-1. For custom analyzers only, create an entry in the **[analyzers]** section of the index, and then assign your custom analyzer to the field definition per either of the previous two steps. For more information, see [Create Index](/rest/api/searchservice/create-index) and also [Add custom analyzers](index-add-custom-analyzers.md).
- ## When to add analyzers The best time to add and assign analyzers is during active development, when dropping and recreating indexes is routine.
-Because analyzers are used to tokenize terms, you should assign an analyzer when the field is created. In fact, assigning an analyzer or indexAnalyzer to a field that has already been physically created is not allowed (although you can change the searchAnalyzer property at any time with no impact to the index).
+Because analyzers are used to tokenize terms, you should assign an analyzer when the field is created. In fact, assigning an analyzer or indexAnalyzer to a field that has already been physically created isn't allowed (although you can change the searchAnalyzer property at any time with no impact to the index).
-To change the analyzer of an existing field, you'll have to drop and recreate the entire index (you cannot rebuild individual fields). For indexes in production, you can defer a rebuild by creating a new field with the new analyzer assignment, and start using it in place of the old one. Use [Update Index](/rest/api/searchservice/update-index) to incorporate the new field and [mergeOrUpload](/rest/api/searchservice/addupdate-or-delete-documents) to populate it. Later, as part of planned index servicing, you can clean up the index to remove obsolete fields.
+To change the analyzer of an existing field, you'll have to drop and recreate the entire index (you can't rebuild individual fields). For indexes in production, you can defer a rebuild by creating a new field with the new analyzer assignment, and start using it in place of the old one. Use [Update Index](/rest/api/searchservice/update-index) to incorporate the new field and [mergeOrUpload](/rest/api/searchservice/addupdate-or-delete-documents) to populate it. Later, as part of planned index servicing, you can clean up the index to remove obsolete fields.
To add a new field to an existing index, call [Update Index](/rest/api/searchservice/update-index) to add the field, and [mergeOrUpload](/rest/api/searchservice/addupdate-or-delete-documents) to populate it.
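For instance, once the new field exists, you can merge content into it with a request along these lines (the `HotelId` key and `Description_v2` field are assumptions for illustration):

```http
POST https://[search service name].search.windows.net/indexes/[index name]/docs/index?api-version=[api-version]

{
  "value": [
    {
      "@search.action": "mergeOrUpload",
      "HotelId": "1",
      "Description_v2": "Content re-indexed into the field that has the new analyzer"
    }
  ]
}
```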
This section offers advice on how to work with analyzers.
Azure Cognitive Search lets you specify different analyzers for indexing and search through the "indexAnalyzer" and "searchAnalyzer" field properties. If unspecified, the analyzer set with the analyzer property is used for both indexing and searching. If the analyzer is unspecified, the default Standard Lucene analyzer is used.
-A general rule is to use the same analyzer for both indexing and querying, unless specific requirements dictate otherwise. Be sure to test thoroughly. When text processing differs at search and indexing time, you run the risk of mismatch between query terms and indexed terms when the search and indexing analyzer configurations are not aligned.
+A general rule is to use the same analyzer for both indexing and querying, unless specific requirements dictate otherwise. Be sure to test thoroughly. When text processing differs at search and indexing time, you run the risk of mismatch between query terms and indexed terms when the search and indexing analyzer configurations aren't aligned.
### Test during active development
Walking through this example:
### Per-field analyzer assignment example
-The Standard analyzer is the default. Suppose you want to replace the default with a different predefined analyzer, such as the pattern analyzer. If you are not setting custom options, you only need to specify it by name in the field definition.
+The Standard analyzer is the default. Suppose you want to replace the default with a different predefined analyzer, such as the pattern analyzer. If you aren't setting custom options, you only need to specify it by name in the field definition.
The "analyzer" element overrides the Standard analyzer on a field-by-field basis. There is no global override. In this example, `text1` uses the pattern analyzer and `text2`, which doesn't specify an analyzer, uses the default.
The APIs include index attributes for specifying different analyzers for indexin
### Language analyzer example
-Fields containing strings in different languages can use a language analyzer, while other fields retain the default (or use some other predefined or custom analyzer). If you use a language analyzer, it must be used for both indexing and search operations. Fields that use a language analyzer cannot have different analyzers for indexing and search.
+Fields containing strings in different languages can use a language analyzer, while other fields retain the default (or use some other predefined or custom analyzer). If you use a language analyzer, it must be used for both indexing and search operations. Fields that use a language analyzer can't have different analyzers for indexing and search.
```json {
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
Title: Create an index alias
-description: Create an Azure Cognitive Search index alias to define a static secondary name that can be used for referencing a search index for querying and indexing.
+description: Create an alias to define a secondary name that can be used to refer to an index for querying, indexing, and other operations.
To create an alias in Visual Studio Code:
## Send requests to an index alias
-Once you've created your alias, you're ready to start using it. Aliases can be used for [querying](/rest/api/searchservice/search-documents) and [indexing](/rest/api/searchservice/addupdate-or-delete-documents).
+Once you've created your alias, you're ready to start using it. Aliases can be used for all [document operations](/rest/api/searchservice/document-operations) including querying, indexing, suggestions, and autocomplete.
In the query below, instead of sending the request to `hotel-samples-index`, you can send it to `my-alias` and the request will be routed accordingly.
POST /indexes/my-alias/docs/search?api-version=2021-04-30-preview
If you expect that you may need to make updates to your index definition for your production indexes, you should use an alias rather than the index name for requests in your client-side application. Scenarios that require you to create a new index are outlined under these [rebuild conditions](search-howto-reindex.md#rebuild-conditions). > [!NOTE]
-> You can only use an alias for [querying](/rest/api/searchservice/search-documents) and [indexing](/rest/api/searchservice/addupdate-or-delete-documents). Aliases can't be used to get or update an index definition, can't be used with the Analyze Text API, and can't be used as the `targetIndexName` on an indexer.
+> You can only use an alias with [document operations](/rest/api/searchservice/document-operations). Aliases can't be used to get or update an index definition, can't be used with the Analyze Text API, and can't be used as the `targetIndexName` on an indexer.
## Swap indexes
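Swapping amounts to updating the alias so that it points at the rebuilt index. As a hedged sketch (the alias and index names are illustrative), the update could be a `PUT /aliases/my-alias?api-version=2021-04-30-preview` request with a body like this:

```json
{
  "name": "my-alias",
  "indexes": [ "hotel-samples-index-v2" ]
}
```

Once the update takes effect, requests that target `my-alias` are served by the new index with no change to the client application.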
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
Previously updated : 01/19/2022 Last updated : 06/24/2022 # Create an index in Azure Cognitive Search In Azure Cognitive Search, query requests target the searchable text in a [**search index**](search-what-is-an-index.md).
-In this article, learn the steps for defining and publishing a search index. Once the index exists, [**data import**](search-what-is-data-import.md) follows as a separate task.
+In this article, learn the steps for defining and publishing a search index. Creating an index establishes the physical data structure (folders and files) on your search service. Once the index definition exists, [**loading the index**](search-what-is-data-import.md) follows as a separate task.
## Prerequisites
Use this checklist to assist the design decisions for your search index.
1. Review [supported data types](/rest/api/searchservice/supported-data-types). The data type will impact how the field is used. For example, numeric content is filterable but not full text searchable. The most common data type is `Edm.String` for searchable text, which is tokenized and queried using the full text search engine.
-1. Identify one string field in the source data that contains unique values, allowing it to function as the [document key](#document-keys) in your index. For example, if you're indexing from Blob Storage, the metadata storage path is often used as the document key.
+1. Identify a [document key](#document-keys). A document key is an index requirement. It's a single string field and it will be populated from a source data field that contains unique values. For example, if you're indexing from Blob Storage, the metadata storage path is often used as the document key because it uniquely identifies each blob in the container.
1. Identify the fields in your data source that will contribute searchable content in the index. Searchable content includes short or long strings that are queried using the full text search engine. If the content is verbose (small phrases or bigger chunks), experiment with different analyzers to see how the text is tokenized.
Use this checklist to assist the design decisions for your search index.
+ Filterable fields are returned in arbitrary order, so consider making them sortable as well.
+1. Determine whether you'll use the default analyzer (`"analyzer": null`) or a different analyzer. [Analyzers](search-analyzers.md) are used to tokenize text fields during indexing and query execution. If strings are descriptive and semantically rich, or if you have translated strings, consider overriding the default with a [language analyzer](index-add-language-analyzers.md).
+
+> [!NOTE]
+> Full text search is conducted over terms that are tokenized during indexing. If your queries fail to return the results you expect, [test for tokenization](/rest/api/searchservice/test-analyzer) to verify the string actually exists. You can try different analyzers on strings to see how tokens are produced for various analyzers.
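As a hedged sketch of such a tokenization test, an Analyze Text request posted to `/indexes/[index name]/analyze` might carry a body like the following; the sample text and analyzer name are illustrative.

```json
{
  "text": "Ocean-view suites near the airport",
  "analyzer": "standard.lucene"
}
```

The response lists the tokens the analyzer produces, which you can compare against the terms in your queries.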
+ ## Create an index When you're ready to create the index, use a search client that can send the request. You can use the Azure portal or REST APIs for early development and proof-of-concept testing.
The following screenshot highlights where **Add index** and **Import data** appe
:::image type="content" source="media/search-what-is-an-index/add-index.png" alt-text="Add index command" border="true":::
-> [!Tip]
+> [!TIP]
> After creating an index in the portal, you can copy the JSON representation and add it to your application code. ### [**REST**](#tab/index-rest)
POST https://[servicename].search.windows.net/indexes?api-version=[api-version]
"suggesters": [ ], "scoringProfiles": [ ], "analyzers":(optional)[ ... ]
- }
} ```
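To make that skeleton concrete, here's a minimal hedged sketch of a request body; the index and field names are illustrative only.

```json
{
  "name": "hotels-sample",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true, "filterable": true },
    { "name": "HotelName", "type": "Edm.String", "searchable": true, "sortable": true },
    { "name": "Description", "type": "Edm.String", "searchable": true, "analyzer": "en.lucene" }
  ],
  "suggesters": [],
  "scoringProfiles": []
}
```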
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md
Semantic search is a collection of features that improve the quality of search r
*Semantic ranking* looks for context and relatedness among terms, elevating matches that make more sense given the query. Language understanding finds summarizations or *captions* and *answers* within your content and includes them in the response, which can then be rendered on a search results page for a more productive search experience.
-State-of-the-art pretrained models are used for summarization and ranking. To maintain the fast performance that users expect from search, semantic summarization and ranking are applied to just the top 50 results, as scored by the [default similarity scoring algorithm](index-similarity-and-scoring.md#similarity-ranking-algorithms). Using those results as the document corpus, semantic ranking re-scores those results based on the semantic strength of the match.
+State-of-the-art pretrained models are used for summarization and ranking. To maintain the fast performance that users expect from search, semantic summarization and ranking are applied to just the top 50 results, as scored by the [default scoring algorithm](index-similarity-and-scoring.md). Using those results as the document corpus, semantic ranking re-scores those results based on the semantic strength of the match.
The underlying technology is from Bing and Microsoft Research, and integrated into the Cognitive Search infrastructure as an add-on feature. For more information about the research and AI investments backing semantic search, see [How AI from Bing is powering Azure Cognitive Search (Microsoft Research Blog)](https://www.microsoft.com/research/blog/the-science-behind-semantic-search-how-ai-from-bing-is-powering-azure-cognitive-search/).
Components of semantic search extend the existing query execution pipeline in bo
:::image type="content" source="media/semantic-search-overview/semantic-workflow.png" alt-text="Semantic components in query execution" border="true":::
-Query execution proceeds as usual, with term parsing, analysis, and scans over the inverted indexes. The engine retrieves documents using token matching, and scores the results using the [default similarity scoring algorithm](index-similarity-and-scoring.md#similarity-ranking-algorithms). Scores are calculated based on the degree of linguistic similarity between query terms and matching terms in the index. If you defined them, scoring profiles are also applied at this stage. Results are then passed to the semantic search subsystem.
+Query execution proceeds as usual, with term parsing, analysis, and scans over the inverted indexes. The engine retrieves documents using token matching, and scores the results using the [default scoring algorithm](index-similarity-and-scoring.md). Scores are calculated based on the degree of linguistic similarity between query terms and matching terms in the index. If you defined them, scoring profiles are also applied at this stage. Results are then passed to the semantic search subsystem.
In the preparation step, the document corpus returned from the initial result set is analyzed at the sentence and paragraph level to find passages that summarize each document. In contrast with keyword search, this step uses machine reading and comprehension to evaluate the content. Through this stage of content processing, a semantic query returns [captions](semantic-how-to-query-request.md) and [answers](semantic-answers.md). To formulate them, semantic search uses language representation to extract and highlight key passages that best summarize a result. If the search query is a question - and answers are requested - the response will also include a text passage that best answers the question, as expressed by the search query.
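As a hedged sketch, a semantic query request might look like the following; the parameter names reflect a preview Search Documents REST API and the semantic configuration name is illustrative.

```json
{
  "search": "which hotels are closest to the beach",
  "queryType": "semantic",
  "queryLanguage": "en-us",
  "semanticConfiguration": "my-semantic-config",
  "answers": "extractive|count-3",
  "captions": "extractive|highlight-true"
}
```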
search Tutorial Create Custom Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-create-custom-analyzer.md
All of search comes down to searching for the terms stored in the inverted index
1. The query is parsed and the query terms are analyzed. 1. The inverted index is then scanned for documents with matching terms.
-1. Finally, the retrieved documents are ranked by the [similarity algorithm](index-ranking-similarity.md).
+1. Finally, the retrieved documents are ranked by the [scoring algorithm](index-ranking-similarity.md).
:::image type="content" source="media/tutorial-create-custom-analyzer/query-architecture-explained.png" alt-text="Diagram of Analyzer process ranking similarity":::
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following tables display the current Microsoft Sentinel feature availability
<sup><a name="footnote1"></a>1</sup> SSH and RDP detections are not supported for sovereign clouds because the Databricks ML platform is not available.
-### Microsoft 365 data connectors
+### Microsoft Purview Data Connectors
Office 365 GCC is paired with Azure Active Directory (Azure AD) in Azure. Office 365 GCC High and Office 365 DoD are paired with Azure AD in Azure Government.
For more information, see Azure Attestation [public documentation](../../attesta
- Understand the [shared responsibility](shared-responsibility.md) model and which security tasks are handled by the cloud provider and which tasks are handled by you. - Understand the [Azure Government Cloud](../../azure-government/documentation-government-welcome.md) capabilities and the trustworthy design and security used to support compliance applicable to federal, state, and local government organizations and their partners. - Understand the [Office 365 Government plan](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/office-365-us-government#about-office-365-government-environments).-- Understand [compliance in Azure](../../compliance/index.yml) for legal and regulatory standards.
+- Understand [compliance in Azure](../../compliance/index.yml) for legal and regulatory standards.
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
Therefore, we recommend also taking the following actions:
- Make sure that you've applied the [Azure security benchmark documentation](/security/benchmark/azure/), and are monitoring compliance via [Microsoft Defender for Cloud](../../security-center/index.yml). -- Incorporate threat intelligence feeds into your SIEM, such as by configuring Microsoft 365 data connectors in [Microsoft Sentinel](../../sentinel/understand-threat-intelligence.md).
+- Incorporate threat intelligence feeds into your SIEM, such as by configuring Microsoft Purview Data Connectors in [Microsoft Sentinel](../../sentinel/understand-threat-intelligence.md).
For more information, see Microsoft's security documentation:
The following table describes more methods for using Azure Active Directory logs
|**Analyze risky sign-in events** | Azure Active Directory and its Identity Protection platform may generate risk events associated with the use of attacker-generated SAML tokens. <br><br>These events might be labeled as *unfamiliar properties*, *anonymous IP address*, *impossible travel*, and so on. <br><br>We recommend that you closely analyze all risk events associated with accounts that have administrative privileges, including any that may have been automatically dismissed or remediated. For example, a risk event or an anonymous IP address might be automatically remediated because the user passed MFA. <br><br>Make sure to use [ADFS Connect Health](../../active-directory/hybrid/how-to-connect-health-adfs.md) so that all authentication events are visible in Azure AD. | |**Detect domain authentication properties** | Any attempt by the attacker to manipulate domain authentication policies will be recorded in the Azure Active Directory Audit logs, and reflected in the Unified Audit log. <br><br> For example, review any events associated with **Set domain authentication** in the Unified Audit Log, Azure AD Audit logs, and / or your SIEM environment to verify that all activities listed were expected and planned. | |**Detect credentials for OAuth applications** | Attackers who have gained control of a privileged account may search for an application with the ability to access any user's email in the organization, and then add attacker-controlled credentials to that application. <br><br>For example, you may want to search for any of the following activities, which would be consistent with attacker behavior: <br>- Adding or updating service principal credentials <br>- Updating application certificates and secrets <br>- Adding an app role assignment grant to a user <br>- Adding Oauth2PermissionGrant |
-|**Detect e-mail access by applications** | Search for access to email by applications in your environment. For example, use the [Microsoft 365 Advanced Auditing features](/microsoft-365/compliance/mailitemsaccessed-forensics-investigations) to investigate compromised accounts. |
+|**Detect e-mail access by applications** | Search for access to email by applications in your environment. For example, use the [Microsoft Purview Audit (Premium) features](/microsoft-365/compliance/mailitemsaccessed-forensics-investigations) to investigate compromised accounts. |
|**Detect non-interactive sign-ins to service principals** | The Azure Active Directory sign-in reports provide details about any non-interactive sign-ins that used service principal credentials. For example, you can use the sign-in reports to find valuable data for your investigation, such as an IP address used by the attacker to access email applications. | | | |
sentinel Connect Azure Windows Microsoft Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-windows-microsoft-services.md
See below how to create data collection rules.
1. In the **Resources** tab, select **+Add resource(s)** to add machines to which the Data Collection Rule will apply. The **Select a scope** dialog will open, and you will see a list of available subscriptions. Expand a subscription to see its resource groups, and expand a resource group to see the available machines. You will see Azure virtual machines and Azure Arc-enabled servers in the list. You can mark the check boxes of subscriptions or resource groups to select all the machines they contain, or you can select individual machines. Select **Apply** when you've chosen all your machines. At the end of this process, the Azure Monitor Agent will be installed on any selected machines that don't already have it installed.
-1. On the **Collect** tab, choose the events you would like to collect: select **All events** or **Custom** to specify other logs or to filter events using [XPath queries](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) (see note below). Enter expressions in the box that evaluate to specific XML criteria for events to collect, then select **Add**. You can enter up to 20 expressions in a single box, and up to 100 boxes in a rule.
+1. On the **Collect** tab, choose the events you would like to collect: select **All events** or **Custom** to specify other logs or to filter events using [XPath queries](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) (see note below). Enter expressions in the box that evaluate to specific XML criteria for events to collect, then select **Add**. You can enter up to 20 expressions in a single box, and up to 100 boxes in a rule.
Learn more about [data collection rules](../azure-monitor/essentials/data-collection-rule-overview.md) from the Azure Monitor documentation.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Supported by** | Microsoft |
-## Microsoft 365 Insider Risk Management (IRM) (Preview)
-
+## Microsoft Purview Insider Risk Management (IRM) (Preview)
+<a id="microsoft-365-insider-risk-management-irm-preview"></a>
+
| Connector attribute | Description | | | |
-| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)**<br><br>Also available in the [Microsoft 365 Insider Risk Management solution](sentinel-solutions-catalog.md#domain-solutions) |
-| **License and other prerequisites** | <ul><li>Valid subscription for Microsoft 365 E5/A5/G5, or their accompanying Compliance or IRM add-ons.<li>[Microsoft 365 Insider risk management](/microsoft-365/compliance/insider-risk-management) fully onboarded, and [IRM policies](/microsoft-365/compliance/insider-risk-management-policies) defined and producing alerts.<li>[Microsoft 365 IRM configured](/microsoft-365/compliance/insider-risk-management-settings#export-alerts-preview) to enable the export of IRM alerts to the Office 365 Management Activity API in order to receive the alerts through the Microsoft Sentinel connector.)
+| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)**<br><br>Also available in the [Microsoft Purview Insider Risk Management solution](sentinel-solutions-catalog.md#domain-solutions) |
+| **License and other prerequisites** | <ul><li>Valid subscription for Microsoft 365 E5/A5/G5, or their accompanying Compliance or IRM add-ons.<li>[Microsoft Purview Insider Risk Management](/microsoft-365/compliance/insider-risk-management) fully onboarded, and [IRM policies](/microsoft-365/compliance/insider-risk-management-policies) defined and producing alerts.<li>[Microsoft 365 IRM configured](/microsoft-365/compliance/insider-risk-management-settings#export-alerts-preview) to enable the export of IRM alerts to the Office 365 Management Activity API in order to receive the alerts through the Microsoft Sentinel connector.)
| **Log Analytics table(s)** | SecurityAlert |
-| **Data query filter** | `SecurityAlert`<br>`| where ProductName == "Microsoft 365 Insider Risk Management"` |
+| **Data query filter** | `SecurityAlert`<br>`| where ProductName == "Microsoft Purview Insider Risk Management"` |
| **Supported by** | Microsoft |
sentinel Windows Security Event Id Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/windows-security-event-id-reference.md
When ingesting security events from Windows devices using the [Windows Security
The **Common** event set may contain some types of events that aren't so common. This is because the main point of the **Common** set is to reduce the volume of events to a more manageable level, while still maintaining full audit trail capability. -- **Minimal** - A small set of events that might indicate potential threats. This set does not contain a full audit trail. It covers only events that might indicate a successful breach, and other important events that have very low rates of occurrence. For example, it contains successful and failed user logons (event IDs 4624, 4625), but it doesn't contain sign-out information (4634) which, while important for auditing, is not meaningful for breach detection and has relatively high volume. Most of the data volume of this set is comprised of sign-in events and process creation events (event ID 4688).
+- **Minimal** - A small set of events that might indicate potential threats. This set does not contain a full audit trail. It covers only events that might indicate a successful breach, and other important events that have very low rates of occurrence. For example, it contains successful and failed user logons (event IDs 4624, 4625), but it doesn't contain sign-out information (4634) which, while important for auditing, is not meaningful for breach detection and has relatively high volume. Most of the data volume of this set consists of sign-in events and process creation events (event ID 4688).
-- **Custom** - A set of events determined by you, the user, and defined in a data collection rule using XPath queries. [Learn more about data collection rules](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries).
+- **Custom** - A set of events determined by you, the user, and defined in a data collection rule using XPath queries. [Learn more about data collection rules](../azure-monitor/agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries).
## Event ID reference
service-fabric Quickstart Cluster Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-cluster-bicep.md
+
+ Title: Create a Service Fabric cluster using Bicep
+description: In this quickstart, you will create an Azure Service Fabric test cluster using Bicep.
++ Last updated : 06/22/2022+++++
+# Quickstart: Create a Service Fabric cluster using Bicep
+
+Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. A Service Fabric *cluster* is a network-connected set of virtual machines into which your microservices are deployed and managed. This article describes how to deploy a Service Fabric test cluster in Azure using Bicep.
++
+This five-node Windows cluster is secured with a self-signed certificate and thus only intended for instructional purposes (rather than production workloads). We'll use Azure PowerShell to deploy the Bicep file.
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+
+### Install Service Fabric SDK and PowerShell modules
+
+To complete this quickstart, you'll need to install the [Service Fabric SDK and PowerShell module](service-fabric-get-started.md).
+
+### Download the sample Bicep file and certificate helper script
+
+Clone or download the [Azure Resource Manager Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) repo. Alternatively, copy the following files that we'll be using from the *service-fabric-secure-cluster-5-node-1-nodetype* folder to your local machine:
+
+* [New-ServiceFabricClusterCertificate.ps1](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.servicefabric/service-fabric-secure-cluster-5-node-1-nodetype/scripts/New-ServiceFabricClusterCertificate.ps1)
+* [main.bicep](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.servicefabric/service-fabric-secure-cluster-5-node-1-nodetype/main.bicep)
+* [azuredeploy.parameters.json](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.servicefabric/service-fabric-secure-cluster-5-node-1-nodetype/azuredeploy.parameters.json)
+
+### Sign in to Azure
+
+Sign in to Azure and designate the subscription to use for creating your Service Fabric cluster.
+
+```powershell
+# Sign in to your Azure account
+Login-AzAccount -SubscriptionId "<subscription ID>"
+```
+
+### Create a self-signed certificate stored in Key Vault
+
+Service Fabric uses X.509 certificates to [secure a cluster](./service-fabric-cluster-security.md) and provide application security features, and [Key Vault](../key-vault/general/overview.md) to manage those certificates. Successful cluster creation requires a cluster certificate to enable node-to-node communication. For the purpose of creating this quickstart test cluster, we'll create a self-signed certificate for cluster authentication. Production workloads require certificates created using a correctly configured Windows Server certificate service or one from an approved certificate authority (CA).
+
+```powershell
+# Designate unique (within cloudapp.azure.com) names for your resources
+$resourceGroupName = "SFQuickstartRG"
+$keyVaultName = "SFQuickstartKV"
+
+# Create a new resource group for your Key Vault and Service Fabric cluster
+New-AzResourceGroup -Name $resourceGroupName -Location SouthCentralUS
+
+# Create a Key Vault enabled for deployment
+New-AzKeyVault -VaultName $keyVaultName -ResourceGroupName $resourceGroupName -Location SouthCentralUS -EnabledForDeployment
+
+# Generate a certificate and upload it to Key Vault
+.\scripts\New-ServiceFabricClusterCertificate.ps1
+```
+
+The script will prompt you for the following (be sure to modify *CertDNSName* and *KeyVaultName* from the example values below):
+
+* **Password:** Password!1
+* **CertDNSName:** *sfquickstart*.southcentralus.cloudapp.azure.com
+* **KeyVaultName:** *SFQuickstartKV*
+* **KeyVaultSecretName:** clustercert
+
+Upon completion, the script will provide the parameter values needed for deployment. Be sure to store these in the following variables, as they will be needed for deployment:
+
+```powershell
+$sourceVaultId = "<Source Vault Resource Id>"
+$certUrlValue = "<Certificate URL>"
+$certThumbprint = "<Certificate Thumbprint>"
+```
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/service-fabric-secure-cluster-5-node-1-nodetype/).
++
+Multiple Azure resources are defined in the Bicep file:
+
+* [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts)
+* [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks)
+* [Microsoft.Network/publicIPAddresses](/azure/templates/microsoft.network/publicipaddresses)
+* [Microsoft.Network/loadBalancers](/azure/templates/microsoft.network/loadbalancers)
+* [Microsoft.Compute/virtualMachineScaleSets](/azure/templates/microsoft.compute/virtualmachinescalesets)
+* [Microsoft.ServiceFabric/clusters](/azure/templates/microsoft.servicefabric/clusters)
+
+### Customize the parameters file
+
+Open *azuredeploy.parameters.json* and edit the parameter values so that:
+
+* **clusterName** matches the value you supplied for *CertDNSName* when creating your cluster certificate
+* **adminUserName** is some value other than the default *GEN-UNIQUE* token
+* **adminPassword** is some value other than the default *GEN-PASSWORD* token
+* **certificateThumbprint**, **sourceVaultResourceId**, and **certificateUrlValue** are all empty string (`""`)
+
+For example:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "sfquickstart"
+ },
+ "adminUsername": {
+ "value": "testadm"
+ },
+ "adminPassword": {
+ "value": "Password#1234"
+ },
+ "certificateThumbprint": {
+ "value": ""
+ },
+ "sourceVaultResourceId": {
+ "value": ""
+ },
+ "certificateUrlValue": {
+ "value": ""
+ }
+ }
+}
+```
+
+## Deploy the Bicep file
+
+Store the paths of your Bicep file and parameter file in variables, then deploy the Bicep file.
+
+```powershell
+$templateFilePath = "<full path to main.bicep>"
+$parameterFilePath = "<full path to azuredeploy.parameters.json>"
+
+New-AzResourceGroupDeployment `
+ -ResourceGroupName $resourceGroupName `
+ -TemplateFile $templateFilePath `
+ -TemplateParameterFile $parameterFilePath `
+ -CertificateThumbprint $certThumbprint `
+ -CertificateUrlValue $certUrlValue `
+ -SourceVaultResourceId $sourceVaultId `
+ -Verbose
+```
+
+## Review deployed resources
+
+Once the deployment completes, find the `managementEndpoint` value in the output and open the address in a web browser to view your cluster in [Service Fabric Explorer](./service-fabric-visualizing-your-cluster.md).
+
+![Screenshot of the Service Fabric Explorer showing new cluster.](./media/quickstart-cluster-template/service-fabric-explorer.png)
+
+You can also find the Service Fabric Explorer endpoint on your Service Fabric cluster resource blade in the Azure portal.
+
+![Screenshot of the Service Fabric resource blade showing Service Fabric Explorer endpoint.](./media/quickstart-cluster-template/service-fabric-explorer-endpoint-azure-portal.png)
+
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name SFQuickstartRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name $resourceGroupName
+```
+++
+Next, remove the cluster certificate from your local store. List installed certificates to find the thumbprint for your cluster:
+
+```powershell
+Get-ChildItem Cert:\CurrentUser\My\
+```
+
+Then remove the certificate:
+
+```powershell
+Get-ChildItem Cert:\CurrentUser\My\{THUMBPRINT} | Remove-Item
+```
+
+## Next steps
+
+To learn how to create Bicep files with Visual Studio Code, see:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
spring-cloud How To Set Up Sso With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-set-up-sso-with-azure-ad.md
Title: How to set up Single Sign-on with Azure AD for Spring Cloud Gateway and API Portal for Tanzu
+ Title: How to set up single sign-on with Azure AD for Spring Cloud Gateway and API Portal for Tanzu
-description: How to set up Single Sign-on with Azure Active Directory for Spring Cloud Gateway and API Portal for Tanzu with Azure Spring Apps Enterprise Tier.
+description: How to set up single sign-on with Azure Active Directory for Spring Cloud Gateway and API Portal for Tanzu with Azure Spring Apps Enterprise Tier.
Last updated 05/20/2022
-# Set up Single Sign-on using Azure Active Directory for Spring Cloud Gateway and API Portal
+# Set up single sign-on using Azure Active Directory for Spring Cloud Gateway and API Portal
**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to configure Single Sign-on (SSO) for Spring Cloud Gateway or API Portal using the Azure Active Directory (Azure AD) as an OpenID identify provider.
+This article shows you how to configure single sign-on (SSO) for Spring Cloud Gateway or API Portal using Azure Active Directory (Azure AD) as an OpenID identity provider.
## Prerequisites - An Enterprise tier instance with Spring Cloud Gateway or API portal enabled. For more information, see [Quickstart: Provision an Azure Spring Apps service instance using the Enterprise tier](quickstart-provision-service-instance-enterprise.md). - Sufficient permissions to manage Azure AD applications. - To enable SSO for Spring Cloud Gateway or API Portal, you need the following four properties configured: | SSO Property | Azure AD Configuration |
spring-cloud How To Use Enterprise Api Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-enterprise-api-portal.md
This article shows you how to use API portal for VMware Tanzu® with Azure Spring Apps Enterprise Tier.
-[API portal](https://docs.vmware.com/en/API-portal-for-VMware-Tanzu/1.0/api-portal/GUID-https://docsupdatetracker.net/index.html) is one of the commercial VMware Tanzu components. API portal supports viewing API definitions from [Spring Cloud Gateway for VMware Tanzu®](./how-to-use-enterprise-spring-cloud-gateway.md) and testing of specific API routes from the browser. It also supports enabling Single Sign-on authentication via configuration.
+[API portal](https://docs.vmware.com/en/API-portal-for-VMware-Tanzu/1.0/api-portal/GUID-https://docsupdatetracker.net/index.html) is one of the commercial VMware Tanzu components. API portal supports viewing API definitions from [Spring Cloud Gateway for VMware Tanzu®](./how-to-use-enterprise-spring-cloud-gateway.md) and testing of specific API routes from the browser. It also supports enabling single sign-on (SSO) authentication via configuration.
## Prerequisites
This article shows you how to use API portal for VMware Tanzu® with Azure Sprin
The following sections describe configuration in API portal.
-### Configure single Sign-on (SSO)
+### Configure single sign-on (SSO)
-API portal supports authentication and authorization using single Sign-on (SSO) with an OpenID identity provider (IdP) that supports the OpenID Connect Discovery protocol.
+API portal supports authentication and authorization using single sign-on (SSO) with an OpenID identity provider (IdP) that supports the OpenID Connect Discovery protocol.
> [!NOTE] > Only authorization servers supporting the OpenID Connect Discovery protocol are supported. Be sure to configure the external authorization server to allow redirects back to the gateway. Refer to your authorization server's documentation and add `https://<gateway-external-url>/login/oauth2/code/sso` to the list of allowed redirect URIs.
API portal supports authentication and authorization using single Sign-on (SSO)
| clientSecret | Yes | The OpenID Connect client secret provided by your IdP | | scope | Yes | A list of scopes to include in JWT identity tokens. This list should be based on the scopes allowed by your identity provider |
-To set up SSO with Azure AD, see [How to set up Single Sign-on with Azure AD for Spring Cloud Gateway and API Portal for Tanzu](./how-to-set-up-sso-with-azure-ad.md).
+To set up SSO with Azure AD, see [How to set up single sign-on with Azure AD for Spring Cloud Gateway and API Portal for Tanzu](./how-to-set-up-sso-with-azure-ad.md).
> [!NOTE] > If you configure the wrong SSO property, such as the wrong password, you should remove the entire SSO property and re-add the correct configuration.
You can also use the Azure CLI to assign a public endpoint with the following co
az spring api-portal update --assign-endpoint ```
-## View the route information through API portal
+## Configure API routing with OpenAPI Spec on Spring Cloud Gateway for Tanzu
+
+This section describes how to view and try out APIs with schema definitions in API portal. Use the following steps to configure API routing with an OpenAPI spec URL on Spring Cloud Gateway for Tanzu.
+
+1. Create an app in Azure Spring Apps that the gateway will route traffic to.
+
+1. Generate the OpenAPI definition and get the URI to access it. The following two URI options are accepted:
+
+ - The first option is to use a publicly accessible endpoint like the URI `https://petstore3.swagger.io/api/v3/openapi.json`, which includes the OpenAPI specification.
+ - The second option is to put the OpenAPI definition in the relative path of the app in Azure Spring Apps, and construct the URI in the format `http://<app-name>/<relative-path-to-OpenAPI-spec>`. You can choose tools like `SpringDocs` to generate the OpenAPI specification automatically, so the URI can be like `http://<app-name>/v3/api-docs`.
+
+1. Use the following command to assign a public endpoint to the gateway to access it.
+
+ ```azurecli
+ az spring gateway update --assign-endpoint
+ ```
+
+1. Use the following command to configure Spring Cloud Gateway for Tanzu properties:
+
+ ```azurecli
+ az spring gateway update \
+ --api-description "<api-description>" \
+ --api-title "<api-title>" \
+ --api-version "v0.1" \
+ --server-url "<endpoint-in-the-previous-step>" \
+ --allowed-origins "*"
+ ```
+
+1. Configure routing rules to apps.
+
+ To create rules to access the app in Spring Cloud Gateway for Tanzu route configuration, save the following contents to the *sample.json* file.
+
+ ```json
+ {
+ "open_api": {
+ "uri": "https://petstore3.swagger.io/api/v3/openapi.json"
+ },
+ "routes": [
+ {
+ "title": "Petstore",
+ "description": "Route to application",
+ "predicates": [
+ "Path=/pet",
+ "Method=PUT"
+ ],
+ "filters": [
+          "StripPrefix=0"
+ ]
+ }
+ ]
+ }
+ ```
+
+ The `open_api.uri` value is the public endpoint or URI constructed in the second step above. You can add predicates and filters for paths defined in your OpenAPI specification.
+
+ Use the following command to apply the rule to the app created in the first step:
+
+ ```azurecli
+ az spring gateway route-config create \
+ --name sample \
+ --app-name <app-name> \
+ --routes-file sample.json
+ ```
+
+1. Check the response of the created routes. You can also view the routes in the portal.
+
+## View exposed APIs in API portal
> [!NOTE] > It takes several minutes to sync between Spring Cloud Gateway for Tanzu and API portal.
Select the `endpoint URL` to go to API portal. You'll see all the routes configu
:::image type="content" source="media/enterprise/how-to-use-enterprise-api-portal/api-portal.png" alt-text="Screenshot of A P I portal showing configured routes.":::
-## Try APIs using API portal
+## Try out APIs in API portal
-> [!NOTE]
-> Only `GET` operations are supported in the public preview.
+Use the following steps to try out APIs:
1. Select the API you would like to try.
-1. Select **EXECUTE** and the response will be shown.
+1. Select **EXECUTE**, and the response will be shown.
:::image type="content" source="media/enterprise/how-to-use-enterprise-api-portal/api-portal-tryout.png" alt-text="Screenshot of A P I portal.":::
spring-cloud How To Use Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-use-enterprise-spring-cloud-gateway.md
This article shows you how to use Spring Cloud Gateway for VMware Tanzu® with Azure Spring Apps Enterprise Tier.
-[Spring Cloud Gateway for Tanzu](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/https://docsupdatetracker.net/index.html) is one of the commercial VMware Tanzu components. It's based on the open-source Spring Cloud Gateway project. Spring Cloud Gateway for Tanzu handles cross-cutting concerns for API development teams, such as Single Sign-on (SSO), access control, rate-limiting, resiliency, security, and more. You can accelerate API delivery using modern cloud native patterns, and any programming language you choose for API development.
+[Spring Cloud Gateway for Tanzu](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/https://docsupdatetracker.net/index.html) is one of the commercial VMware Tanzu components. It's based on the open-source Spring Cloud Gateway project. Spring Cloud Gateway for Tanzu handles cross-cutting concerns for API development teams, such as single sign-on (SSO), access control, rate-limiting, resiliency, security, and more. You can accelerate API delivery using modern cloud native patterns, and any programming language you choose for API development.
-Spring Cloud Gateway for Tanzu also has other commercial API route filters for transporting authorized JSON Web Token (JWT) claims to application services, client certificate authorization, rate-limiting approaches, circuit breaker configuration, and support for accessing application services via HTTP Basic Authentication credentials.
+Spring Cloud Gateway for Tanzu also has the following features:
+
+- Other commercial API route filters for transporting authorized JSON Web Token (JWT) claims to application services.
+- Client certificate authorization.
+- Rate-limiting approaches.
+- Circuit breaker configuration.
+- Support for accessing application services via HTTP Basic Authentication credentials.
To integrate with [API portal for VMware Tanzu®](./how-to-use-enterprise-api-portal.md), Spring Cloud Gateway for Tanzu automatically generates OpenAPI version 3 documentation after the route configuration gets changed.
Spring Cloud Gateway for Tanzu is configured using the following sections and st
Spring Cloud Gateway for Tanzu metadata is used to automatically generate OpenAPI version 3 documentation so that the [API portal](./how-to-use-enterprise-api-portal.md) can gather information to show the route groups.
-| Property | Description |
-| - | - |
-| title | Title describing the context of the APIs available on the Gateway instance (default: `Spring Cloud Gateway for K8S`) |
-| description | Detailed description of the APIs available on the Gateway instance (default: `Generated OpenAPI 3 document that describes the API routes configured for '[Gateway instance name]' Spring Cloud Gateway instance deployed under '[namespace]' namespace.`) |
-| documentation | Location of more documentation for the APIs available on the Gateway instance |
-| version | Version of APIs available on this Gateway instance (default: `unspecified`) |
-| serverUrl | Base URL that API consumers will use to access APIs on the Gateway instance |
+| Property | Description |
+||-|
+| title | A title describing the context of the APIs available on the Gateway instance. The default value is *Spring Cloud Gateway for K8S*. |
+| description | A detailed description of the APIs available on the Gateway instance. The default value is *Generated OpenAPI 3 document that describes the API routes configured for '[Gateway instance name]' Spring Cloud Gateway instance deployed under '[namespace]' namespace.* |
+| documentation | The location of more documentation for the APIs available on the Gateway instance. |
+| version | The version of the APIs available on this Gateway instance. The default value is *unspecified*. |
+| serverUrl | The base URL that API consumers will use to access APIs on the Gateway instance. |
> [!NOTE] > `serverUrl` is mandatory if you want to integrate with [API portal](./how-to-use-enterprise-api-portal.md).
Spring Cloud Gateway for Tanzu metadata is used to automatically generate OpenAP
Cross-origin resource sharing (CORS) allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served.
-| Property | Description |
-| - | - |
-| allowedOrigins | Allowed origins to make cross-site requests |
-| allowedMethods | Allowed HTTP methods on cross-site requests |
-| allowedHeaders | Allowed headers in cross-site request |
-| maxAge | How long, in seconds, the response from a pre-flight request can be cached by clients |
-| allowCredentials | Whether user credentials are supported on cross-site requests |
-| exposedHeaders | HTTP response headers to expose for cross-site requests |
+| Property | Description |
+||-|
+| allowedOrigins | Allowed origins to make cross-site requests. |
+| allowedMethods | Allowed HTTP methods on cross-site requests. |
+| allowedHeaders | Allowed headers in cross-site request. |
+| maxAge | How long, in seconds, the response from a pre-flight request can be cached by clients. |
+| allowCredentials | A value that indicates whether user credentials are supported on cross-site requests. |
+| exposedHeaders | HTTP response headers to expose for cross-site requests. |
> [!NOTE] > Be sure you have the correct CORS configuration if you want to integrate with the [API portal](./how-to-use-enterprise-api-portal.md). For an example, see the [Create an example application](#create-an-example-application) section.
-### Configure single Sign-on (SSO)
+### Configure single sign-on (SSO)
-Spring Cloud Gateway for Tanzu supports authentication and authorization using Single Sign-on (SSO) with an OpenID identity provider (IdP) which supports OpenID Connect Discovery protocol.
+Spring Cloud Gateway for Tanzu supports authentication and authorization using single sign-on (SSO) with an OpenID identity provider (IdP) that supports OpenID Connect Discovery protocol.
-| Property | Required? | Description |
-| - | - | - |
-| issuerUri | Yes | The URI that is asserted as its Issuer Identifier. For example, if the issuer-uri provided is "https://example.com", then an OpenID Provider Configuration Request will be made to "https://example.com/.well-known/openid-configuration". The result is expected to be an OpenID Provider Configuration Response. |
-| clientId | Yes | The OpenID Connect client ID provided by your IdP |
-| clientSecret | Yes | The OpenID Connect client secret provided by your IdP |
-| scope | Yes | A list of scopes to include in JWT identity tokens. This list should be based on the scopes allowed by your identity provider |
+| Property | Required? | Description |
+|--|--|--|
+| issuerUri | Yes | The URI that is asserted as its Issuer Identifier. For example, if the issuer-uri provided is "https://example.com", then an OpenID Provider Configuration Request will be made to "https://example.com/.well-known/openid-configuration". The result is expected to be an OpenID Provider Configuration Response. |
+| clientId | Yes | The OpenID Connect client ID provided by your IdP. |
+| clientSecret | Yes | The OpenID Connect client secret provided by your IdP. |
+| scope | Yes | A list of scopes to include in JWT identity tokens. This list should be based on the scopes allowed by your identity provider. |
-To set up SSO with Azure AD, see [How to set up Single Sign-on with Azure AD for Spring Cloud Gateway and API Portal for Tanzu](./how-to-set up-sso-with-azure-ad.md).
+To set up SSO with Azure AD, see [How to set up single sign-on with Azure AD for Spring Cloud Gateway and API Portal for Tanzu](./how-to-set-up-sso-with-azure-ad.md).
> [!NOTE] > Only authorization servers supporting OpenID Connect Discovery protocol are supported. Also, be sure to configure the external authorization server to allow redirects back to the gateway. Refer to your authorization server's documentation and add `https://<gateway-external-url>/login/oauth2/code/sso` to the list of allowed redirect URIs.
This section describes how to add, update, and manage API routes for apps that u
### Define route config
-The route definition includes the following parts:
+The route config definition includes the following parts:
+
+- OpenAPI URI: The URI points to an OpenAPI specification. Both OpenAPI 2.0 and OpenAPI 3.0 specs are supported. The specification can be shown in API portal to try out. Two types of URI are accepted. The first type of URI is a public endpoint like `https://petstore3.swagger.io/api/v3/openapi.json`. The second type of URI is a constructed URL `http://<app-name>/{relative-path-to-OpenAPI-spec}`, where `app-name` is the name of an application in Azure Spring Apps that includes the API definition.
+- routes: A list of route rules about how the traffic goes to one app.
+
+Use the following command to create a route config. The `--app-name` value should be the name of an app hosted in Azure Spring Apps that requests will be routed to.
-- appResourceId: The full app resource ID to route traffic to-- routes: A list of route rules about how the traffic goes to one app
+```azurecli
+az spring gateway route-config create \
+ --name <route-config-name> \
+ --app-name <app-name> \
+ --routes-file <routes-file.json>
+```
+
+Here's a sample of the JSON file that is passed to the `--routes-file` parameter in the create command:
+
+```json
+{
+ "open_api": {
+ "uri": "<OpenAPI-URI>"
+ },
+ "routes": [
+ {
+ "title": "<title-of-route>",
+ "description": "<description-of-route>",
+ "predicates": [
+        "<predicate-of-route>"
+ ],
+ "ssoEnabled": true,
+ "filters": [
+        "<filter-of-route>"
+ ],
+ "tags": [
+ "<tag-of-route>"
+ ],
+ "order": 0
+ }
+ ]
+}
+```
The following tables list the route definitions. All the properties are optional.
-| Property | Description |
-| - | - |
-| title | A title, will be applied to methods in the generated OpenAPI documentation |
-| description | A description, will be applied to methods in the generated OpenAPI documentation |
-| uri | Full uri, will override `appResourceId` |
-| ssoEnabled | Enable SSO validation. See "Using Single Sign-on" |
-| tokenRelay | Pass currently authenticated user's identity token to application service |
-| predicates | A list of predicates. See [Available Predicates](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-configuring-routes.html#available-predicates) and [Commercial Route Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-route-predicates.html)|
-| filters | A list of filters. See [Available Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-configuring-routes.html#available-filters) and [Commercial Route Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-route-filters.html)|
-| order | Route processing order, same as Spring Cloud Gateway for Tanzu |
-| tags | Classification tags, will be applied to methods in the generated OpenAPI documentation |
+| Property | Description |
+|-|-|
+| title | A title to apply to methods in the generated OpenAPI documentation. |
+| description | A description to apply to methods in the generated OpenAPI documentation. |
+| uri | The full URI, which overrides the name of the app that requests are routed to. |
+| ssoEnabled | A value that indicates whether to enable SSO validation. See the [Configure single sign-on (SSO)](#configure-single-sign-on-sso) section. |
+| tokenRelay | Passes the currently authenticated user's identity token to the application service. |
+| predicates | A list of predicates. See [Available Predicates](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-configuring-routes.html#available-predicates) and [Commercial Route Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-route-predicates.html). |
+| filters | A list of filters. See [Available Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-configuring-routes.html#available-filters) and [Commercial Route Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-route-filters.html). |
+| order | The route processing order, which is the same as in Spring Cloud Gateway for Tanzu. |
+| tags | Classification tags, which will be applied to methods in the generated OpenAPI documentation. |
Not all the filters/predicates are supported in Azure Spring Apps because of security/compatible reasons. The following aren't supported:
Not all the filters/predicates are supported in Azure Spring Apps because of sec
Use the following steps to create an example application using Spring Cloud Gateway for Tanzu.
-1. To create an app in Azure Spring Apps which the Spring Cloud Gateway for Tanzu would route traffic to, follow the instructions in [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md). Select `customers-service` for this example.
+1. To create an app in Azure Spring Apps that the Spring Cloud Gateway for Tanzu would route traffic to, follow the instructions in [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md). Select `customers-service` for this example.
1. Assign a public endpoint to the gateway to access it.
Use the following steps to create an example application using Spring Cloud Gate
Save the following content to the *customers-service.json* file. ```json
- [
- {
- "title": "Customers service",
- "description": "Route to customer service",
- "predicates": [
- "Path=/api/customers-service/owners"
- ],
- "filters": [
- "StripPrefix=2"
- ],
- "tags": [
- "pet clinic"
- ]
- }
- ]
+ {
+ "routes": [
+ {
+ "title": "Customers service",
+ "description": "Route to customer service",
+ "predicates": [
+ "Path=/api/customers-service/owners",
+ "Method=GET"
+ ],
+ "filters": [
+          "StripPrefix=2"
+ ],
+ "tags": [
+ "pet clinic"
+ ]
+ }
+ ]
+ }
``` Use the following command to apply the rule to the app `customers-service`:
static-web-apps Key Vault Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/key-vault-secrets.md
When configuring custom authentication providers, you may want to store connecti
Security secrets require the following items to be in place. - Create a system-assigned identity in the Static Web Apps instance.-- Grant access a Key Vault secret access to the identity.
+- Grant the identity access to a Key Vault secret.
- Reference the Key Vault secret from the Static Web Apps application settings. This article demonstrates how to set up each of these items in production for [bring your own functions applications](./functions-bring-your-own.md).
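As a hedged sketch, the application setting's value uses a Key Vault reference; the setting name, vault name, and secret name below are hypothetical.

```json
{
  "MY_API_CONNECTION_STRING": "@Microsoft.KeyVault(SecretUri=https://my-vault-name.vault.azure.net/secrets/my-secret-name)"
}
```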
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
The following clients are known to be incompatible with SFTP for Azure Blob Stor
## Networking - To access the storage account using SFTP, your network must allow traffic on port 22.
+
+- Static IP addresses are not supported for storage accounts.
+
+- Internet routing is not supported. Use Microsoft network routing.
- There's a 2 minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
The following clients are known to be incompatible with SFTP for Azure Blob Stor
## Integrations - Change feed notifications aren't supported.-- Network File System (NFS) 3.0 and SFTP can't be enabled on the same storage account. ## Performance
For performance issues and considerations, see [SSH File Transfer Protocol (SFTP
- The container name is specified in the connection string for local users that have a home directory that doesn't exist.
+- To resolve the `Received disconnect from XX.XXX.XX.XXX port 22:11:` error when connecting, check that:
+
+ - Public network access is `Enabled from all networks` or `Enabled from selected virtual networks and IP addresses`.
+
+ - The client IP address is allowed by the firewall.
+
+ - Network Routing is set to `Microsoft network routing`.
+ ## See also - [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md)
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
When using custom domains the connection string is `myaccount.myuser@customdomai
> [!IMPORTANT] > Ensure your DNS provider does not proxy requests. Proxying may cause the connection attempt to time out.
+## Connect using a private endpoint
+
+When using a private endpoint, the connection string is `myaccount.myuser@myaccount.privatelink.blob.core.windows.net`. If a home directory hasn't been specified for the user, it's `myaccount.mycontainer.myuser@myaccount.privatelink.blob.core.windows.net`.
+
+> [!NOTE]
+> Ensure you change networking configuration to "Enabled from selected virtual networks and IP addresses" and select your private endpoint, otherwise the regular SFTP endpoint will still be publicly accessible.
+ ## See also - [SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-support.md)
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
Azure Files provides two distinct billing models: provisioned and pay-as-you-go.
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/m5_-GsKv4-o" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> :::column-end::: :::column:::
- This video is an interview that discusses the basics of the Azure Files billing model. It covers how to optimize Azure file shares to achieve the lowest costs possible and how to compare Azure Files to other file storage offerings on-premises and in the cloud.
+ This video is an interview that discusses the basics of the Azure Files billing model. It covers how to optimize Azure file shares to achieve the lowest costs possible, and how to compare Azure Files to other file storage offerings on-premises and in the cloud.
:::column-end::: :::row-end:::
stream-analytics Stream Analytics Cicd Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-cicd-api.md
Previously updated : 12/04/2018 Last updated : 06/24/2022 # Implement CI/CD for Stream Analytics on IoT Edge using APIs
curl -u { <username:password> } -H "Content-Type: application/json" -X { <metho
```bash wget -q -O- --{ <method> } -data="<request body>" --header=Content-Type:application/json --auth-no-challenge --http-user="<Admin>" --http-password="<password>" <url> ```
-
+ ### Windows
-For Windows, use PowerShell:
+For Windows, use PowerShell:
-```powershell
+```powershell
$user = "<username>" $pass = "<password>" $encodedCreds = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user,$pass)))
$content = "<request body>"
$response = Invoke-RestMethod <url> -Method <method> -Body $content -Headers $Headers echo $response ```
-
-## Create an ASA job on Edge
-
+
+## Create an ASA job on IoT Edge
+ To create a Stream Analytics job, call the PUT method using the Stream Analytics API. |Method|Request URL| ||--|
-|PUT|`https://management.azure.com/subscriptions/{\**subscription-id**}/resourcegroups/{**resource-group-name**}/providers/Microsoft.StreamAnalytics/streamingjobs/{**job-name**}?api-version=2017-04-01-preview`|
-
+|PUT|`https://management.azure.com/subscriptions/{subscription-id}/resourcegroups/{resource-group-name}/providers/Microsoft.StreamAnalytics/streamingjobs/{job-name}?api-version=2017-04-01-preview`|
+ Example of command using **curl**: ```curl curl -u { <username:password> } -H "Content-Type: application/json" -X { <method> } -d "{ <request body> }" https://management.azure.com/subscriptions/{subscription-id}/resourcegroups/{resource-group-name}/providers/Microsoft.StreamAnalytics/streamingjobs/{jobname}?api-version=2017-04-01-preview
-```
-
+```
+ Example of request body in JSON: ```json
Example of request body in JSON:
  } } ```
-
+ For more information, see the [API documentation](/rest/api/streamanalytics/).
-
-## Publish Edge package
-
-To publish a Stream Analytics job on IoT Edge, call the POST method using the Edge Package Publish API.
+
+## Publish IoT Edge package
+
+To publish a Stream Analytics job on IoT Edge, call the POST method using the IoT Edge Package Publish API.
|Method|Request URL| ||--|
-|POST|`https://management.azure.com/subscriptions/{\**subscriptionid**}/resourceGroups/{**resourcegroupname**}/providers/Microsoft.StreamAnalytics/streamingjobs/{**jobname**}/publishedgepackage?api-version=2017-04-01-preview`|
+|POST|`https://management.azure.com/subscriptions/{subscriptionid}/resourceGroups/{resourcegroupname}/providers/Microsoft.StreamAnalytics/streamingjobs/{jobname}/publishedgepackage?api-version=2017-04-01-preview`|
-This asynchronous operation returns a status of 202 until the job has been successfully published. The location response header contains the URI used to get the status of the process. While the process is running, a call to the URI in the location header returns a status of 202. When the process finishes, the URI in the location header returns a status of 200.
+The previous call to the IoT Edge Package Publish API triggers an asynchronous operation and returns a status of 202. The **location** response header contains the URI used to get the status of that asynchronous operation. A call to the URI in the **location** header returns a status of 202 to indicate the asynchronous operation is still in progress. When the operation is completed, the call to the URI in the **location** header returns a status of 200.
-Example of an Edge package publish call using **curl**:
+Example of an IoT Edge package publish call using **curl**:
```bash curl -d "" -X POST https://management.azure.com/subscriptions/{subscriptionid}/resourceGroups/{resourcegroupname}/providers/Microsoft.StreamAnalytics/streamingjobs/{jobname}/publishedgepackage?api-version=2017-04-01-preview ```
-
-After making the POST call, you should expect a response with an empty body. Look for the URL located in the HEAD of the response and record it for further use.
-
-Example of the URL from the HEAD of response:
+After making the POST call, you should expect a response with an empty body. Look for the URI in the **location** header of the response, and record it for further use.
+
+Example of the URI from the **location** header of response:
+
+```rest
+https://management.azure.com/subscriptions/{subscriptionid}/resourcegroups/{resourcegroupname}/providers/Microsoft.StreamAnalytics/StreamingJobs/{resourcename}/OperationResults/{guidAssignedToTheAsynchronousOperation}?api-version=2017-04-01-preview
```
-https://management.azure.com/subscriptions/{**subscriptionid**}/resourcegroups/{**resourcegroupname**}/providers/Microsoft.StreamAnalytics/StreamingJobs/{**resourcename**}/OperationResults/023a4d68-ffaf-4e16-8414-cb6f2e14fe23?api-version=2017-04-01-preview
-```
-A
-Wait for one to two minutes before running the following command to make an API call with the URL you found in the HEAD of the response. Retry the command if you do not get a 200 response.
-
+
+Wait from a few seconds to a couple of minutes, then call the URI that you found in the **location** header of the response to the IoT Edge Package Publish API. Repeat the cycle of waiting and retrying until you get a 200 response. A scripted sketch of this retry loop follows the example below.
+ Example of making API call with returned URL with **curl**: ```bash
-curl -d -X GET https://management.azure.com/subscriptions/{subscriptionid}/resourceGroups/{resourcegroupname}/providers/Microsoft.StreamAnalytics/streamingjobs/{resourcename}/publishedgepackage?api-version=2017-04-01-preview
+curl -X GET https://management.azure.com/subscriptions/{subscriptionid}/resourcegroups/{resourcegroupname}/providers/Microsoft.StreamAnalytics/StreamingJobs/{resourcename}/OperationResults/{guidAssignedToTheAsynchronousOperation}?api-version=2017-04-01-preview
```
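As a rough sketch, the wait-and-retry cycle can be scripted as follows. `STATUS_URI` is assumed to hold the URI from the **location** header, and credentials are passed the same way as in the earlier examples.

```bash
STATUS_URI="<URI from the location header>"

# Poll the asynchronous operation until it returns 200 (publish finished).
while true; do
  status=$(curl -s -o /dev/null -w "%{http_code}" -u "<username:password>" "$STATUS_URI")
  if [ "$status" = "200" ]; then
    echo "IoT Edge package published."
    break
  fi
  echo "Operation still in progress (HTTP $status). Retrying in 30 seconds..."
  sleep 30
done
```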
-The response includes the information you need to add to the Edge deployment script. The examples below show what information you need to collect and where to add it in the deployment manifest.
-
+The response includes the information you need to add to the IoT Edge deployment script. The examples below show what information you need to collect and where to add it in the deployment manifest.
+ Sample response body after publishing successfully: ```json
Sample response body after publishing successfully:
} ```
-Sample of Deployment Manifest:
+Sample of Deployment Manifest:
```json {
Sample of Deployment Manifest:
} ```
-After the configuration of the deployment manifest, refer to [Deploy Azure IoT Edge modules with Azure CLI](../iot-edge/how-to-deploy-modules-cli.md) for deployment.
+After you configure the deployment manifest, refer to [Deploy Azure IoT Edge modules with Azure CLI](../iot-edge/how-to-deploy-modules-cli.md) for deployment.
+## Next steps
-## Next steps
-
* [Azure Stream Analytics on IoT Edge](stream-analytics-edge.md) * [ASA on IoT Edge tutorial](../iot-edge/tutorial-deploy-stream-analytics.md)
-* [Develop Stream Analytics Edge jobs using Visual Studio tools](stream-analytics-tools-for-visual-studio-edge-jobs.md)
+* [Develop Stream Analytics IoT Edge jobs using Visual Studio tools](stream-analytics-tools-for-visual-studio-edge-jobs.md)
synapse-analytics Best Practices Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-serverless-sql-pool.md
When throttling is detected, serverless SQL pool has built-in handling to resolv
> [!TIP] > For optimal query execution, don't stress the storage account with other workloads during query execution.
-### Azure AD Pass-through Authentication performance
-
-Serverless SQL pool allows you to access files in storage by using Azure Active Directory (Azure AD) Pass-through Authentication or shared access signature credentials. You might experience slower performance with Azure AD Pass-through Authentication than you would with shared access signatures.
-
-If you need better performance, try using shared access signature credentials to access storage.
- ### Prepare files for querying If possible, you can prepare files for better performance:
virtual-desktop Diagnostics Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/diagnostics-log-analytics.md
WVDErrors
>[!NOTE]
->- When a user opens Full Desktop, their app usage in the session isn't tracked as checkpoints in the WVDCheckpoints table.
->- The ResourcesAlias column in the WVDConnections table shows whether a user has connected to a full desktop or a published app. The column only shows the first app they open during the connection. Any published apps the user opens are tracked in WVDCheckpoints.
->- The WVDErrors table shows you management errors, host registration issues, and other issues that happen while the user subscribes to a list of apps or desktops.
>- WVDErrors helps you to identify issues that can be resolved by admin tasks. The value on ServiceError always says "false" for those types of issues. If ServiceError = "true", you'll need to escalate the issue to Microsoft. Ensure you provide the CorrelationID for the errors you escalate.
+>- When a user launches a full desktop session, their app usage in the session isn't tracked as checkpoints in the `WVDCheckpoints` table.
+>- The `ResourcesAlias` column in the `WVDConnections` table shows whether a user has connected to a full desktop or a published app. The column only shows the first app they open during the connection. Any published apps the user opens are tracked in `WVDCheckpoints`.
+>- The `WVDErrors` table shows you management errors, host registration issues, and other issues that happen while the user subscribes to a list of apps or desktops.
+>- The `WVDErrors` table also helps you identify issues that can be resolved by admin tasks. The value of `ServiceError` should always equal `false` for these types of issues. If `ServiceError` equals `true`, you'll need to escalate the issue to Microsoft. Ensure you provide the *CorrelationID* for errors you escalate.
+>- When debugging connectivity issues, in some cases client information might be missing even if the connection event completes. This applies to the `WVDConnections` and `WVDCheckpoints` tables.
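For example, the following sketch queries the `WVDErrors` table from the Azure CLI; it may require the Log Analytics CLI extension. The workspace ID is a placeholder, and the projected column names are assumptions that may differ slightly from your workspace schema.

```bash
# List errors from the last day that need to be escalated to Microsoft (ServiceError is true),
# including the correlation IDs to include in the escalation.
az monitor log-analytics query \
    --workspace "<workspace-id>" \
    --analytics-query 'WVDErrors | where TimeGenerated > ago(1d) | where ServiceError == "true" | project TimeGenerated, CorrelationId, Message'
```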
## Next steps
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
Below is the list of URLs your session host VMs need to access for Azure Virtual
| `www.microsoft.com` | 80 | Certificates | N/A | > [!IMPORTANT]
-> We have finished transitioning the URLs we use for Agent traffic. We no longer support the URLs below. To avoid your session host VMs from showing *Needs Assistance* related to this, please allow `\*.prod.warm.ingest.monitor.core.windows.net` if you have not already. Please remove these URLs if you have previously explicitly allowed them:
+> We have finished transitioning the URLs we use for Agent traffic. We no longer support the URLs below. To prevent your session host VMs from showing a *Needs Assistance* status related to this change, allow `*.prod.warm.ingest.monitor.core.windows.net` if you haven't already. Remove these URLs if you previously explicitly allowed them:
> > | Address | Outbound TCP port | Purpose | Service Tag | > |--|--|--|--|
The following table lists optional URLs that your session host virtual machines
| `ocsp.msocsp.com` | 80 | Certificates | N/A | > [!IMPORTANT]
-> We have finished transitioning the URLs we use for Agent traffic. We no longer support the URLs below. To avoid your session host VMs from showing *Needs Assistance* related to this, please allow `\*.prod.warm.ingest.monitor.core.usgovcloudapi.net`, if you have not already. Please remove these URLs if you have previously explicitly allowed them:
+> We have finished transitioning the URLs we use for Agent traffic. We no longer support the URLs below. To prevent your session host VMs from showing a *Needs Assistance* status related to this change, allow `*.prod.warm.ingest.monitor.core.usgovcloudapi.net` if you haven't already. Remove these URLs if you previously explicitly allowed them:
> > | Address | Outbound TCP port | Purpose | Service Tag | > |--|--|--|--|
Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=%2Fa
| Address | Outbound TCP port | Purpose | Client(s) | |--|--|--|--|
-| `\*.wvd.microsoft.com` | 443 | Service traffic | All |
-| `\*.servicebus.windows.net` | 443 | Troubleshooting data | All |
+| `*.wvd.microsoft.com` | 443 | Service traffic | All |
+| `*.servicebus.windows.net` | 443 | Troubleshooting data | All |
| `go.microsoft.com` | 443 | Microsoft FWLinks | All | | `aka.ms` | 443 | Microsoft URL shortener | All | | `docs.microsoft.com` | 443 | Documentation | All |
Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=%2Fa
| Address | Outbound TCP port | Purpose | Client(s) | |--|--|--|--|
-| `\*.wvd.microsoft.us` | 443 | Service traffic | All |
-| `\*.servicebus.usgovcloudapi.net` | 443 | Troubleshooting data | All |
+| `*.wvd.microsoft.us` | 443 | Service traffic | All |
+| `*.servicebus.usgovcloudapi.net` | 443 | Troubleshooting data | All |
| `go.microsoft.com` | 443 | Microsoft FWLinks | All | | `aka.ms` | 443 | Microsoft URL shortener | All | | `docs.microsoft.com` | 443 | Documentation | All |
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
The following table lists a few example operations that relate to gallery operat
| Microsoft.Compute/galleries/images/read | Gets the properties of Gallery Image | | Microsoft.Compute/galleries/images/write | Creates a new Gallery Image or updates an existing one | | Microsoft.Compute/galleries/images/versions/read | Gets the properties of Gallery Image Version |
-| | |
## Billing
virtual-machines Capacity Reservation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overview.md
From this example accumulation of Minutes Not Available, here is the calculation
- D series, v2 and newer; AMD and Intel - E series, all versions; AMD and Intel - F series, all versions
+ - Lsv3 (Intel) and Lasv3 (AMD)
- At VM deployment, Fault Domain (FD) count of up to 3 may be set as desired using Virtual Machine Scale Sets. A deployment with more than 3 FDs will fail to deploy against a Capacity Reservation. - Support for additional VM Series isn't currently available:
- - L series
+ - Ls and Lsv2 series
- M series, any version - NC-series, v3 and newer - NV-series, v2 and newer
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
description: How to set up NVIDIA GPU drivers for N-series VMs running Linux in
-+
virtual-machines Sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-rise-integration.md
Applications within a customer's own vnet connect to the Internet directly fro
## SAP BTP Connectivity
-SAP Business Technology Platform (BTP) provides a multitude of applications that are accessed by public IP/hostname via the Internet.
-Customer services running in their Azure subscriptions access them either directly through VM/Azure service internet connection, or through User Defined Routes forcing all Internet bound traffic to go through a centrally managed firewall or other network virtual appliances.
+SAP Business Technology Platform (BTP) provides a multitude of applications that are mostly accessed by public IP/hostname via the Internet.
+Customer services running in their Azure subscriptions access them either directly through the VM/Azure service internet connection, or through User Defined Routes forcing all Internet-bound traffic to go through a centrally managed firewall or other network virtual appliances. A few SAP BTP services, however, such as SAP Data Intelligence, are by design accessed through a [separate vnet peering](https://help.sap.com/docs/SAP_DATA_INTELLIGENCE/ca509b7635484070a655738be408da63/a7d98ac925e443ea9d4a716a91e0a604.html) instead of the public endpoint typically used for BTP applications.
-SAP has a [preview program](https://help.sap.com/products/PRIVATE_LINK/42acd88cb4134ba2a7d3e0e62c9fe6cf/3eb3bc7aa5db4b5da9dcdbf8ee478e52.html) in operation for SAP Private Link Service for customers using SAP BTP on Azure. The SAP Private Link Service exposes SAP BTP services through a private IP range to customerΓÇÖs Azure network and thus accessible privately through previously described vnet peering or VPN site-to-site connections instead of through the Internet.
+SAP offers [Private Link Service](https://blogs.sap.com/2022/06/22/sap-private-link-service-on-azure-is-now-generally-available-ga/) for customers using SAP BTP on Azure. The SAP Private Link Service connects SAP BTP services through a private IP range into the customer's Azure network, making them accessible privately through the private link service instead of through the Internet. Contact SAP about the availability of this service for SAP RISE/ECS workloads.
-See a series of blog posts on the architecture of the SAP BTP Private Link Service and private connectivity methods, dealing with DNS and certificates in following SAP blog series [Getting Started with BTP Private Link Service for Azure](https://blogs.sap.com/2021/12/29/getting-started-with-btp-private-link-service-for-azure/)
+See [SAP's documentation](https://help.sap.com/docs/PRIVATE_LINK) and the SAP blog series [Getting Started with BTP Private Link Service for Azure](https://blogs.sap.com/2021/12/29/getting-started-with-btp-private-link-service-for-azure/) for the architecture of the SAP BTP Private Link Service, private connectivity methods, and how to deal with DNS and certificates.
## Integration with Azure services
Check out the documentation:
- [SAP workloads on Azure: planning and deployment checklist](./sap-deployment-checklist.md) - [Virtual network peering](../../../virtual-network/virtual-network-peering-overview.md) - [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md)-- [SAP Data Integration Using Azure Data Factory](https://github.com/Azure/Azure-DataFactory/blob/main/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf)
+- [SAP Data Integration Using Azure Data Factory](https://github.com/Azure/Azure-DataFactory/blob/main/whitepaper/SAP%20Data%20Integration%20using%20Azure%20Data%20Factory.pdf)
virtual-network-manager Concept Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-deployments.md
Title: 'Configuration deployments in Azure Virtual Network Manager (Preview)' description: Learn about how configuration deployments work in Azure Virtual Network Manager.--++ Previously updated : 11/02/2021 Last updated : 06/09/2022
In this article, you'll learn about how configurations are applied to your netwo
## Deployment
-*Deployment* is the method Azure Virtual Network Manager uses to apply configurations to your virtual networks in network groups. Changes made to network groups, connectivity, and security admin configuration won't take effect unless a deployment has been committed. When committing a deployment, you select the region(s) to which the configuration will be applied. When a deployment request is sent to Azure Virtual Network Manager, it will calculate the [goal state](#goalstate) of network resources and request the necessary changes to your infrastructure. The changes can take about 15-20 minutes depending on how large the configuration is.
+*Deployment* is the method Azure Virtual Network Manager uses to apply configurations to your virtual networks in network groups. Configurations won't take effect until they're deployed. Changes to network groups, such as adding or removing a virtual network, take effect without the need for redeployment. When committing a deployment, you select the region(s) to which the configuration will be applied. When a deployment request is sent to Azure Virtual Network Manager, it calculates the [goal state](#goalstate) of network resources and requests the necessary changes to your infrastructure. The changes can take about 15-20 minutes, depending on how large the configuration is.
## <a name="deployment"></a>Deployment against network group membership types
virtual-network-manager Concept Network Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-network-groups.md
Title: "What is a network group in Azure Virtual Network Manager (Preview)?" description: Learn about how Network groups can help you manage your virtual networks.--++ Previously updated : 11/02/2021 Last updated : 06/09/2022
A *network group* is a set of virtual networks selected manually or by using con
## Static membership
-When you create a network group, you can add virtual networks to a group by manually selecting individual virtual networks from a provided list. The list of virtual networks is dependent on the scope (management group or subscription) defined at the time of the Azure Virtual Network Manager deployment. This method is useful when you have a few virtual networks you want to add to the network group. Updates to configurations containing static members will need to be deployed again to have the new changes applied.
+When you create a network group, you can add virtual networks to a group by manually selecting individual virtual networks from a provided list. The list of virtual networks is dependent on the scope (management group or subscription) defined at the time of the Azure Virtual Network Manager deployment. This method is useful when you have a few virtual networks you want to add to the network group.
## Dynamic membership
To create an Azure policy initiative definition and assignment for Azure Network
- Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal. - Learn how to create a [Hub and spoke topology](how-to-create-hub-and-spoke.md) with Azure Virtual Network Manager.-- Learn how to block network traffic with a [SecurityAdmin configuration](how-to-block-network-traffic-portal.md).
+- Learn how to block network traffic with a [SecurityAdmin configuration](how-to-block-network-traffic-portal.md).
virtual-network-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/overview.md
A connectivity configuration enables you to create a mesh or a hub-and-spoke net
* Centrally manage connectivity and security policies globally across regions and subscriptions.
-* Enable transitive communication between spokes in a hub-and-spoke configuration without the complexity of managing a mesh network.
+* Enable direct connectivity between spokes in a hub-and-spoke configuration without the complexity of managing a mesh network.
* Highly scalable and highly available service with redundancy and replication across the globe.
virtual-network Tutorial Connect Virtual Networks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-portal.md
Title: Connect virtual networks with VNet peering - tutorial - Azure portal
-description: In this tutorial, you learn how to connect virtual networks with virtual network peering, using the Azure portal.
+ Title: 'Tutorial: Connect virtual networks with VNet peering - Azure portal'
+description: In this tutorial, you learn how to connect virtual networks with virtual network peering using the Azure portal.
documentationcenter: virtual-network
-# Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network.
virtual-network Previously updated : 04/14/2022 Last updated : 06/24/2022 -+
+# Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network.
# Tutorial: Connect virtual networks with virtual network peering using the Azure portal
-You can connect virtual networks to each other with virtual network peering. These virtual networks can be in the same region or different regions (also known as Global VNet peering). Once virtual networks are peered, resources in both virtual networks can communicate with each other, with the same latency and bandwidth as if the resources were in the same virtual network. In this tutorial, you learn how to:
+You can connect virtual networks to each other with virtual network peering. These virtual networks can be in the same region or different regions (also known as global virtual network peering). Once virtual networks are peered, resources in both virtual networks can communicate with each other over a low-latency, high-bandwidth connection using the Microsoft backbone network.
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Create two virtual networks
+> * Create virtual networks
> * Connect two virtual networks with a virtual network peering > * Deploy a virtual machine (VM) into each virtual network > * Communicate between VMs
-If you prefer, you can complete this tutorial using the [Azure CLI](tutorial-connect-virtual-networks-cli.md) or [Azure PowerShell](tutorial-connect-virtual-networks-powershell.md).
+This tutorial uses the Azure portal. You can also complete it using [Azure CLI](tutorial-connect-virtual-networks-cli.md) or [PowerShell](tutorial-connect-virtual-networks-powershell.md).
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prerequisites
-Before you begin, you need an Azure account with an active subscription. If you don't have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* An Azure subscription
-## Create virtual networks
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
-1. Sign in to the [Azure portal](https://portal.azure.com).
+## Create virtual networks
1. On the Azure portal, select **+ Create a resource**.
Before you begin, you need an Azure account with an active subscription. If you
||| |Subscription| Select your subscription.| |Resource group| Select **Create new** and enter *myResourceGroup*.|
+ |Name| Enter *myVirtualNetwork1*.|
|Region| Select **East US**.|
- |Name|myVirtualNetwork1|
+ :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-basic-tab.png" alt-text="Screenshot of create virtual network basics tab.":::
-1. On the **IP Addresses** tab, enter *10.0.0.0/16* for the **Address Space** field. Click the **Add subnet** button below and enter *Subnet1* for **Subnet Name** and *10.0.0.0/24* for the **Subnet Address range**.
+1. On the **IP Addresses** tab, enter *10.0.0.0/16* for the **IPv4 address Space** field. Select the **+ Add subnet** button below and enter *Subnet1* for **Subnet Name** and *10.0.0.0/24* for the **Subnet Address range**.
:::image type="content" source="./media/tutorial-connect-virtual-networks-portal/ip-addresses-tab.png" alt-text="Screenshot of create a virtual network IP addresses tab.":::
Before you begin, you need an Azure account with an active subscription. If you
| | | | Name | myVirtualNetwork2 | | Address space | 10.1.0.0/16 |
- | Resource group | Select **Use existing** and then select **myResourceGroup**.|
+ | Resource group | myResourceGroup |
| Subnet name | Subnet2 | | Subnet address range | 10.1.0.0/24 |
Before you begin, you need an Azure account with an active subscription. If you
:::image type="content" source="./media/tutorial-connect-virtual-networks-portal/search-vnet.png" alt-text="Screenshot of searching for myVirtualNetwork1.":::
-1. Select **Peerings**, under **Settings**, and then select **+ Add**, as shown in the following picture:
+1. Under **Settings**, select **Peerings**, and then select **+ Add**, as shown in the following picture:
:::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-peering.png" alt-text="Screenshot of creating peerings for myVirtualNetwork1.":::
-1. Enter, or select, the following information, accept the defaults for the remaining settings, and then select **Add**.
+1. Enter or select the following information, accept the defaults for the remaining settings, and then select **Add**.
| Setting | Value | | | |
- | This virtual network - Peering link name | Name of the peering from myVirtualNetwork1 to the remote virtual network. *myVirtualNetwork1-myVirtualNetwork2* is used for this connection.|
- | Remote virtual network - Peering link name | Name of the peering from remote virtual network to myVirtualNetwork1. *myVirtualNetwork2-myVirtualNetwork1* is used for this connection. |
- | Subscription | Select your subscription.|
- | Virtual network | You can select a virtual network in the same region or in a different region. From the drop-down select *myVirtualNetwork2* |
+ | **This virtual network** | |
+ | Peering link name | Enter *myVirtualNetwork1-myVirtualNetwork2* for the name of the peering from **myVirtualNetwork1** to the remote virtual network. |
+ | **Remote virtual network** | |
+ | Peering link name | Enter *myVirtualNetwork2-myVirtualNetwork1* for the name of the peering from the remote virtual network to **myVirtualNetwork1**. |
+ | Subscription | Select your subscription of the remote virtual network. |
+ | Virtual network | Select **myVirtualNetwork2** for the name of the remote virtual network. The remote virtual network can be in the same region as **myVirtualNetwork1** or in a different region. |
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/peering-settings-bidirectional.png" alt-text="Screenshot of virtual network peering configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/peering-settings-bidirectional-expanded.png":::
+ :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/peering-settings-bidirectional-inline.png" alt-text="Screenshot of virtual network peering configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/peering-settings-bidirectional-expanded.png":::
- The **PEERING STATUS** is *Connected*, as shown in the following picture:
+ In the **Peerings** page, the **Peering status** is **Connected**, as shown in the following picture:
:::image type="content" source="./media/tutorial-connect-virtual-networks-portal/peering-status-connected.png" alt-text="Screenshot of virtual network peering connection status.":::
- If you don't see a *Connected* status, select the **Refresh** button.
+ If you don't see a **Connected** status, select the **Refresh** button.
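If you prefer scripting, a rough Azure CLI equivalent of this step is shown below, using the virtual network and resource group names from this tutorial. Peering must be created in both directions for the connection to work.

```bash
# Peer myVirtualNetwork1 to myVirtualNetwork2.
az network vnet peering create \
    --name myVirtualNetwork1-myVirtualNetwork2 \
    --resource-group myResourceGroup \
    --vnet-name myVirtualNetwork1 \
    --remote-vnet myVirtualNetwork2 \
    --allow-vnet-access

# Peer myVirtualNetwork2 back to myVirtualNetwork1.
az network vnet peering create \
    --name myVirtualNetwork2-myVirtualNetwork1 \
    --resource-group myResourceGroup \
    --vnet-name myVirtualNetwork2 \
    --remote-vnet myVirtualNetwork1 \
    --allow-vnet-access
```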
## Create virtual machines
Create a VM in each virtual network so that you can test the communication betwe
1. On the Azure portal, select **+ Create a resource**.
-1. Select **Compute**, and then **Create** under *Virtual machine*.
+1. Select **Compute**, and then **Create** under **Virtual machine**.
:::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm.png" alt-text="Screenshot of create a resource for virtual machines.":::
-1. Enter, or select, the following information on the **Basics** tab, accept the defaults for the remaining settings, and then select **Create**:
+1. Enter or select the following information on the **Basics** tab. Accept the defaults for the remaining settings, and then select **Create**:
| Setting | Value | | | |
- | Resource group| Select **Use existing** and then select **myResourceGroup**. |
- | Name | myVm1 |
- | Location | Select **East US**. |
- | Image | Select an OS image. For this VM *Windows Server 2019 Datacenter - Gen1* is selected. |
- | Size | Select a VM size. For this VM *Standard_D2s_v3* is selected. |
- | Username | Enter a username. The username *azure* was chosen for this example. |
+ | Resource group| Select **myResourceGroup**. |
+ | Name | Enter *myVm1*. |
+ | Location | Select **(US) East US**. |
+ | Image | Select an OS image. For this tutorial, *Windows Server 2019 Datacenter - Gen2* is selected. |
+ | Size | Select a VM size. For this tutorial, *Standard_D2s_v3* is selected. |
+ | Username | Enter a username. For this tutorial, the username *azure* is used. |
| Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-). |
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm-basic-tab.png" alt-text="Screenshot of virtual machine basic tab configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/create-vm-basic-tab-expanded.png":::
+ :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm-basic-tab-inline.png" alt-text="Screenshot of virtual machine basic tab configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/create-vm-basic-tab-expanded.png":::
1. On the **Networking** tab, select the following values: | Setting | Value | | | |
- | Virtual network | myVirtualNetwork1 - If it's not already selected, select **Virtual network** and then select **myVirtualNetwork1**. |
- | Subnet | Subnet1 - If it's not already selected, select **Subnet** and then select **Subnet1**. |
- | Public inbound ports | *Allow selected ports* |
- | Select inbound ports | *RDP (3389)* |
+ | Virtual network | Select **myVirtualNetwork1**. |
+ | Subnet | Select **Subnet1**. |
+ | NIC network security group | Select **Basic**. |
+ | Public inbound ports | Select **Allow selected ports**. |
+ | Select inbound ports | Select **RDP (3389)**. |
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm-networking-tab.png" alt-text="Screenshot of virtual machine networking tab configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/create-vm-networking-tab-expanded.png":::
+ :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm-networking-tab-inline.png" alt-text="Screenshot of virtual machine networking tab configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/create-vm-networking-tab-expanded.png":::
1. Select the **Review + Create** and then **Create** to start the VM deployment. ### Create the second VM
-Repeat steps 1-6 again to create a second virtual machine with the following changes:
+Repeat steps 1-5 again to create a second virtual machine with the following changes:
| Setting | Value | | | | | Name | myVm2 | | Virtual network | myVirtualNetwork2 |
-The VMs take a few minutes to create. Do not continue with the remaining steps until both VMs are created.
+The VMs take a few minutes to create. Don't continue with the remaining steps until both VMs are created.
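If you'd rather script this step, a rough Azure CLI equivalent is shown below, using the names from this tutorial. The image alias and VM size are examples and may need adjusting for your subscription; replace the password placeholder with your own value.

```bash
# Create the first VM in myVirtualNetwork1.
az vm create \
    --resource-group myResourceGroup \
    --name myVm1 \
    --image Win2019Datacenter \
    --size Standard_D2s_v3 \
    --vnet-name myVirtualNetwork1 \
    --subnet Subnet1 \
    --admin-username azure \
    --admin-password "<your-password>"

# Create the second VM in myVirtualNetwork2.
az vm create \
    --resource-group myResourceGroup \
    --name myVm2 \
    --image Win2019Datacenter \
    --size Standard_D2s_v3 \
    --vnet-name myVirtualNetwork2 \
    --subnet Subnet2 \
    --admin-username azure \
    --admin-password "<your-password>"
```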
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)] ## Communicate between VMs
+Test the communication between the two virtual machines over the virtual network peering by pinging from **myVm2** to **myVm1**.
+ 1. In the search box at the top of the portal, look for *myVm1*. When **myVm1** appears in the search results, select it. :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/search-vm.png" alt-text="Screenshot of searching for myVm1.":::
The VMs take a few minutes to create. Do not continue with the remaining steps u
:::image type="content" source="./media/tutorial-connect-virtual-networks-portal/rdp-connect.png" alt-text="Screenshot of connection screen for remote desktop.":::
-1. Enter the user name and password you specified when creating the VM (you may need to select **More choices**, then **Use a different account**, to specify the credentials you entered when you created the VM), then select **OK**.
+1. Enter the username and password you specified when creating **myVm1** (you may need to select **More choices**, then **Use a different account**, to specify the credentials you entered when you created the VM), then select **OK**.
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/rdp-credentials.png" alt-text="Screenshot of RDP credential screen.":::
+ :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/rdp-credentials.png" alt-text="Screenshot of R D P credential screen.":::
1. You may receive a certificate warning during the sign-in process. Select **Yes** to continue with the connection.
-1. In a later step, ping is used to communicate with *myVm1* VM from the *myVm2* VM. Ping uses the Internet Control Message Protocol (ICMP), which is denied through the Windows Firewall, by default. On the *myVm1* VM, enable ICMP through the Windows firewall, so that you can ping this VM from *myVm2* in a later step, using PowerShell:
+1. In a later step, ping is used to communicate with **myVm1** from **myVm2**. Ping uses the Internet Control Message Protocol (ICMP), which is denied through the Windows Firewall, by default. On **myVm1**, enable ICMP through the Windows firewall, so that you can ping this VM from **myVm2** in a later step, using PowerShell:
```powershell New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
The VMs take a few minutes to create. Do not continue with the remaining steps u
Though ping is used to communicate between VMs in this tutorial, allowing ICMP through the Windows Firewall for production deployments isn't recommended.
-1. To connect to the *myVm2* VM, enter the following command from a command prompt on the *myVm1* VM:
+1. To connect to **myVm2** from **myVm1**, enter the following command from a command prompt on **myVm1**:
``` mstsc /v:10.1.0.4 ```
+1. Enter the username and password you specified when creating **myVm2** and select **Yes** if you receive a certificate warning during the sign-in process.
+
+ :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/rdp-credentials-to-second-vm.png" alt-text="Screenshot of R D P credential screen for R D P session from first virtual machine to second virtual machine.":::
-1. Since you enabled ping on *myVm1*, you can now ping it by IP address:
+1. Since you enabled ping on **myVm1**, you can now ping it from **myVm2**:
- ```
+ ```powershell
ping 10.0.0.4 ```
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/myvm2-ping-myvm1.png" alt-text="Screenshot of myVM2 pinging myVM1.":::
+ :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/myvm2-ping-myvm1.png" alt-text="Screenshot of second virtual machine pinging first virtual machine.":::
1. Disconnect your RDP sessions to both *myVm1* and *myVm2*.
The VMs take a few minutes to create. Do not continue with the remaining steps u
When no longer needed, delete the resource group and all resources it contains:
-1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
+1. Enter *myResourceGroup* in the **Search** box at the top of the Azure portal. When you see **myResourceGroup** in the search results, select it.
1. Select **Delete resource group**. 1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/delete-resource-group.png" alt-text="Screenshot of delete resource group page.":::
- ## Next steps
+In this tutorial, you:
+
+* Created virtual network peering between two virtual networks.
+* Tested the communication between two virtual machines over the virtual network peering using the ping command.
+
+To learn more about virtual network peering:
+ > [!div class="nextstepaction"]
-> [Learn more about virtual network peering](virtual-network-peering-overview.md)
+> [Virtual network peering](virtual-network-peering-overview.md)
virtual-wan Connect Virtual Network Gateway Vwan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/connect-virtual-network-gateway-vwan.md
Title: 'Connect a virtual network gateway to an Azure Virtual WAN' description: Learn how to connect an Azure VPN gateway (virtual network gateway) to an Azure Virtual WAN VPN gateway.- Previously updated : 09/01/2021 Last updated : 06/22/2022
This article helps you set up connectivity from an Azure VPN Gateway (virtual network gateway) to an Azure Virtual WAN (VPN gateway). Creating a connection from a VPN Gateway (virtual network gateway) to a Virtual WAN (VPN gateway) is similar to setting up connectivity to a virtual WAN from branch VPN sites.
-In order to minimize possible confusion between two features, we will preface the gateway with the name of the feature that we are referring to. For example, VPN Gateway virtual network gateway, and Virtual WAN VPN gateway.
+In order to minimize possible confusion between the two features, we'll preface the gateway with the name of the feature that we're referring to. For example, VPN Gateway virtual network gateway and Virtual WAN VPN gateway.
## Before you begin
Before you begin, create the following resources:
Azure Virtual WAN * [Create a virtual WAN](virtual-wan-site-to-site-portal.md#openvwan).
-* [Create a hub](virtual-wan-site-to-site-portal.md#hub). The virtual hub contains the Virtual WAN VPN gateway.
+* [Create a hub](virtual-wan-site-to-site-portal.md#hub).
+* [Create an S2S VPN gateway](virtual-wan-site-to-site-portal.md#gateway) configured in the hub.
-Azure Virtual Network
+Virtual Network (for virtual network gateway)
-* Create a virtual network without any virtual network gateways. Verify that none of the subnets of your on-premises networks overlap with the virtual networks that you want to connect to. To create a virtual network in the Azure portal, see the [Quickstart](../virtual-network/quick-create-portal.md).
+* [Create a virtual network](../virtual-network/quick-create-portal.md) without any virtual network gateways. This virtual network will be configured with an active/active virtual network gateway in later steps. Verify that none of the subnets of your on-premises networks overlap with the virtual networks that you want to connect to.
-## <a name="vnetgw"></a>1. Create a VPN Gateway virtual network gateway
+## <a name="vnetgw"></a>1. Configure VPN Gateway virtual network gateway
-Create a **VPN Gateway** virtual network gateway in active-active mode for your virtual network. When you create the gateway, you can either use existing public IP addresses for the two instances of the gateway, or you can create new public IPs. You will use these public IPs when setting up the Virtual WAN sites. For more information about active-active VPN gateways and configuration steps, see [Configure active-active VPN gateways](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md#aagateway).
+Create a **VPN Gateway** virtual network gateway in active-active mode for your virtual network. When you create the gateway, you can either use existing public IP addresses for the two instances of the gateway, or you can create new public IPs. You'll use these public IPs when setting up the Virtual WAN sites. For more information about active-active VPN gateways and configuration steps, see [Configure active-active VPN gateways](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md#aagateway).
+
+The following sections show example settings for your gateway.
### <a name="active-active"></a>Active-active mode setting
-On the Virtual network gateway **Configuration** page, enable active-active mode.
+On the Virtual network gateway **Configuration** page, make sure **active-active** mode is enabled.
-![active-active](./media/connect-virtual-network-gateway-vwan/active.png "active-active")
### <a name="BGP"></a>BGP setting
-On the virtual network gateway **Configuration** page, you can (optionally) select **Configure BGP ASN**. If you configure BGP, change the ASN from the default value shown in the portal. For this configuration, the BGP ASN cannot be 65515. 65515 will be used by Azure Virtual WAN.
+On the virtual network gateway **Configuration** page, you can (optionally) select **Configure BGP ASN**. If you configure BGP, change the ASN from the default value shown in the portal. For this configuration, the BGP ASN can't be 65515. 65515 will be used by Azure Virtual WAN.
-![Screenshot shows a virtual network gateway Configuration page with Configure BGP ASN selected.](./media/connect-virtual-network-gateway-vwan/bgp.png "bgp")
### <a name="pip"></a>Public IP addresses
-When the gateway is created, navigate to the **Properties** page. The properties and configuration settings will be similar to the following example. Notice the two public IP addresses that are used for the gateway.
+Once the gateway is created, go to the **Properties** page. The properties and configuration settings will be similar to the following example. Notice the two public IP addresses that are used for the gateway.
-![properties](./media/connect-virtual-network-gateway-vwan/publicip.png "properties")
## <a name="vwansite"></a>2. Create Virtual WAN VPN sites
-To create Virtual WAN VPN sites, navigate your to your virtual WAN and, under **Connectivity**, select **VPN sites**. In this section, you will create two Virtual WAN VPN sites that correspond to the virtual network gateways you created in the previous section.
+To create Virtual WAN VPN sites, navigate to your virtual WAN and, under **Connectivity**, select **VPN sites**. In this section, you'll create two Virtual WAN VPN sites that correspond to the virtual network gateways you created in the previous section.
1. Select **+Create site**.
-2. On the **Create VPN sites** page, type the following values:
+1. On the **Create VPN sites** page, type the following values:
* **Region** - The same region as the Azure VPN Gateway virtual network gateway. * **Device vendor** - Enter the device vendor (any name). * **Private address space** - Enter a value, or leave blank when BGP is enabled. * **Border Gateway Protocol** - Set to **Enable** if the Azure VPN Gateway virtual network gateway has BGP enabled.
- * **Connect to Hubs** - Select the hub you created in the prerequisites from the dropdown. If you don't see a hub, verify that you created a site-to-site VPN gateway for your hub.
-3. Under **Links**, enter the following values:
+1. Under **Links**, enter the following values:
* **Provider Name** - Enter a Link name and a Provider name (any name). * **Speed** - Speed (any number). * **IP Address** - Enter IP address (same as the first public IP address shown under the (VPN Gateway) virtual network gateway properties). * **BGP Address** and **ASN** - BGP address and ASN. These must be the same as one of the BGP peer IP addresses, and ASN from the VPN Gateway virtual network gateway that you configured in [Step 1](#vnetgw).
-4. Review and select **Confirm** to create the site.
-5. Repeat the previous steps to create the second site to match with the second instance of the VPN Gateway virtual network gateway. You'll keep the same settings, except using second public IP address and second BGP peer IP address from VPN Gateway configuration.
-6. You now have two sites successfully provisioned and can proceed to the next section to download configuration files.
+1. Review and select **Confirm** to create the site.
+1. Repeat the previous steps to create the second site to match with the second instance of the VPN Gateway virtual network gateway. You'll keep the same settings, except using second public IP address and second BGP peer IP address from VPN Gateway configuration.
+1. You now have two sites successfully provisioned.
+
+## <a name="connect-sites"></a>3. Connect sites to the virtual hub
+
+Next, connect both sites to your virtual hub.
+
+1. On your Virtual WAN page, go to **Hubs**.
+
+1. On the **Hubs** page, click the hub that you created.
+
+1. On the page for the hub that you created, in the left pane, click **VPN (Site to site)**.
-## <a name="downloadconfig"></a>3. Download the VPN configuration files
+1. On the **VPN (Site to site)** page, you should see your sites. If you don't, you may need to click the **Hub association:x** bubble to clear the filters and view your site.
-In this section, you download the VPN configuration file for each of the sites that you created in the previous section.
+1. Select the checkbox next to the name of each site that you want to connect (don't click the site name directly), then click **Connect VPN sites**.
-1. At the top of the Virtual WAN **VPN sites** page, select the **Site**, then select **Download Site-to-site VPN configuration**. Azure creates a configuration file with the settings.
+1. On the **Connect sites** page, configure the settings.
- ![Screenshot that shows the "VPN sites" page with the "Download Site-to-Site VPN configuration" action selected.](./media/connect-virtual-network-gateway-vwan/download.png "download")
-2. Download and open the configuration file.
-3. Repeat these steps for the second site. Once you have both configuration files open, you can proceed to the next section.
+1. At the bottom of the page, select **Connect**. It takes a short while for the hub to update with the site settings.
-## <a name="createlocalgateways"></a>4. Create the local network gateways
+For more information, see [Connect the VPN sites to a virtual hub](virtual-wan-site-to-site-portal.md#connectsites).
+
+## <a name="downloadconfig"></a>4. Download the VPN configuration files
+
+In this section, you download the VPN configuration file for the sites that you created in the previous section.
+
+1. On your Virtual WAN page, go to **VPN sites**.
+1. On the **VPN sites** page, at the top of the page, select **Download Site-to-Site VPN configuration** and download the file. Azure creates a configuration file with the necessary values that are used to configure your local network gateways in the next section.
+
+ :::image type="content" source="./media/connect-virtual-network-gateway-vwan/download.png" alt-text="Screenshot of VPN sites page with the Download Site-to-Site VPN configuration action selected." lightbox="./media/connect-virtual-network-gateway-vwan/download.png":::
+
+## <a name="createlocalgateways"></a>5. Create the local network gateways
In this section, you create two Azure VPN Gateway local network gateways. The configuration files from the previous step contain the gateway configuration settings. Use these settings to create and configure the Azure VPN Gateway local network gateways.
In this section, you create two Azure VPN Gateway local network gateways. The co
* **IP address** - Use the Instance0 IP Address shown for *gatewayconfiguration* from the configuration file. * **BGP** - If the connection is over BGP, select **Configure BGP settings** and enter the ASN '65515'. Enter the BGP peer IP address. Use 'Instance0 BgpPeeringAddresses' for *gatewayconfiguration* from the configuration file.
- * **Address Space** If the connection isn't over BGP, make sure **Configure BGP settings** remains unchecked. Enter the address spaces that you're going to advertise from the virtual network gateway side. You can add multiple address space ranges. Make sure that the ranges you specify here do not overlap with ranges of other networks that you want to connect to.
- * **Subscription, Resource Group, and Location** are same as for the Virtual WAN hub.
-2. Review and create the local network gateway. Your local network gateway should look similar to this example.
+ * **Address Space** - If the connection isn't over BGP, make sure **Configure BGP settings** remains unchecked. Enter the address spaces that you're going to advertise from the virtual network gateway side. You can add multiple address space ranges. Make sure that the ranges you specify here don't overlap with ranges of other networks that you want to connect to.
+ * **Subscription, Resource Group, and Location** - These are the same as for the Virtual WAN hub.
+1. Review and create the local network gateway. Your local network gateway should look similar to this example.
- ![Screenshot that shows the "Configuration" page with an IP address highlighted and "Configure BGP settings" selected.](./media/connect-virtual-network-gateway-vwan/lng1.png "instance0")
-3. Repeat these steps to create another local network gateway, but this time, use the 'Instance1' values instead of 'Instance0' values from the configuration file.
+ :::image type="content" source="./media/connect-virtual-network-gateway-vwan/local-1.png" alt-text="Screenshot that shows the Configuration page with an IP address highlighted for local network gateway 1." lightbox="./media/connect-virtual-network-gateway-vwan/local-1.png":::
+1. Repeat these steps to create another local network gateway, but this time, use the 'Instance1' values instead of 'Instance0' values from the configuration file.
- ![download configuration file](./media/connect-virtual-network-gateway-vwan/lng2.png "instance1")
+ :::image type="content" source="./media/connect-virtual-network-gateway-vwan/local-2.png" alt-text="Screenshot that shows the Configuration page with an IP address highlighted for local network gateway 2." lightbox="./media/connect-virtual-network-gateway-vwan/local-2.png":::
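A hedged Azure CLI sketch of this step is shown below. The gateway names are hypothetical, and the IP address and BGP peering address placeholders correspond to the Instance0 and Instance1 values from your configuration file.

```bash
# Local network gateway representing Instance0 of the Virtual WAN VPN gateway.
az network local-gateway create \
    --resource-group myResourceGroup \
    --name vwan-gateway-instance0 \
    --gateway-ip-address "<instance0-public-ip>" \
    --asn 65515 \
    --bgp-peering-address "<instance0-bgp-peering-address>"

# Local network gateway representing Instance1 of the Virtual WAN VPN gateway.
az network local-gateway create \
    --resource-group myResourceGroup \
    --name vwan-gateway-instance1 \
    --gateway-ip-address "<instance1-public-ip>" \
    --asn 65515 \
    --bgp-peering-address "<instance1-bgp-peering-address>"
```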
-## <a name="createlocalgateways"></a>5. Create connections
+## <a name="createlocalgateways"></a>6. Create connections
In this section, you create a connection between the VPN Gateway local network gateways and virtual network gateway. For steps on how to create a VPN Gateway connection, see [Configure a connection](../vpn-gateway/tutorial-site-to-site-portal.md#CreateConnection). 1. In the portal, navigate to your virtual network gateway and click **Connections**. At the top of the Connections page, click **+Add** to open the **Add connection** page.
-2. On the **Add connection** page, configure the following values for your connection:
+1. On the **Add connection** page, configure the following values for your connection:
* **Name:** Name your connection. * **Connection type:** Select **Site-to-site(IPSec)**
- * **Virtual network gateway:** The value is fixed because you are connecting from this gateway.
+ * **Virtual network gateway:** The value is fixed because you're connecting from this gateway.
* **Local network gateway:** This connection will connect the virtual network gateway to the local network gateway. Choose one of the local network gateways that you created earlier. * **Shared Key:** Enter a shared key. * **IKE Protocol:** Choose the IKE protocol.
-3. Click **OK** to create your connection.
-4. You can view the connection in the **Connections** page of the virtual network gateway.
-
- ![Connection](./media/connect-virtual-network-gateway-vwan/connect.png "connection")
-5. Repeat the preceding steps to create a second connection. For the second connection, select the other local network gateway that you created.
-6. If the connections are over BGP, after you have created your connections, navigate to a connection and select **Configuration**. On the **Configuration** page, for **BGP**, select **Enabled**. Then, click **Save**. Repeat for the second connection.
+1. Click **OK** to create your connection.
+1. You can view the connection in the **Connections** page of the virtual network gateway.
+1. Repeat the preceding steps to create a second connection. For the second connection, select the other local network gateway that you created.
+1. If the connections are over BGP, after you've created your connections, navigate to a connection and select **Configuration**. On the **Configuration** page, for **BGP**, select **Enabled**. Then, click **Save**. Repeat for the second connection.
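A minimal CLI sketch of this step, using the hypothetical local network gateway names from the previous sketch and the Test1-VNG virtual network gateway, looks like the following. Repeat it with the second local network gateway for the second connection.

```bash
# Create the site-to-site connection from the virtual network gateway to the first local network gateway.
az network vpn-connection create \
    --resource-group myResourceGroup \
    --name connection-to-instance0 \
    --vnet-gateway1 Test1-VNG \
    --local-gateway2 vwan-gateway-instance0 \
    --shared-key "<shared-key>" \
    --enable-bgp
```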
-## <a name="test"></a>6. Test connections
+## <a name="test"></a>7. Test connections
You can test the connectivity by creating two virtual machines, one on the side of the VPN Gateway virtual network gateway, and one in a virtual network for the Virtual WAN, and then ping the two virtual machines.
-1. Create a virtual machine in the virtual network (Test1-VNet) for Azure VPN Gateway (Test1-VNG). Do not create the virtual machine in the GatewaySubnet.
-2. Create another virtual network to connect to the virtual WAN. Create a virtual machine in a subnet of this virtual network. This virtual network cannot contain any virtual network gateways. You can quickly create a virtual network using the PowerShell steps in the [site-to-site connection](virtual-wan-site-to-site-portal.md#vnet) article. Be sure to change the values before running the cmdlets.
-3. Connect the VNet to the Virtual WAN hub. On the page for your virtual WAN, select **Virtual network connections**, then **+Add connection**. On the **Add connection** page, fill in the following fields:
+1. Create a virtual machine in the virtual network (Test1-VNet) for Azure VPN Gateway (Test1-VNG). Don't create the virtual machine in the GatewaySubnet.
+1. Create another virtual network to connect to the virtual WAN. Create a virtual machine in a subnet of this virtual network. This virtual network can't contain any virtual network gateways. You can quickly create a virtual network using the PowerShell steps in the [site-to-site connection](virtual-wan-site-to-site-portal.md#vnet) article. Be sure to change the values before running the cmdlets.
+1. Connect the VNet to the Virtual WAN hub. On the page for your virtual WAN, select **Virtual network connections**, then **+Add connection**. On the **Add connection** page, fill in the following fields:
   * **Connection name** - Name your connection.
   * **Hubs** - Select the hub you want to associate with this connection.
   * **Subscription** - Verify the subscription.
- * **Virtual network** - Select the virtual network you want to connect to this hub. The virtual network cannot have an already existing virtual network gateway.
-4. Click **OK** to create the virtual network connection.
-5. Connectivity is now set between the VMs. You should be able to ping one VM from the other, unless there are any firewalls or other policies blocking the communication.
+ * **Virtual network** - Select the virtual network you want to connect to this hub. The virtual network can't have an already existing virtual network gateway.
+1. Click **OK** to create the virtual network connection.
+1. Connectivity is now set between the VMs. You should be able to ping one VM from the other, unless there are any firewalls or other policies blocking the communication.
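As a rough Azure PowerShell equivalent of step 3, the following sketch connects the test virtual network to the hub. The hub, virtual network, and connection names are hypothetical; it assumes the hub and virtual network already exist and that the virtual network contains no virtual network gateway.

```azurepowershell
# Minimal sketch: connect the test virtual network to the Virtual WAN hub.
# Placeholder names; replace them with your own resource group, hub, and virtual network.
$rgName = "TestRG1"
$vnet = Get-AzVirtualNetwork -Name "VNet2" -ResourceGroupName $rgName

# Equivalent of the portal's "Add connection" step on the virtual WAN.
New-AzVirtualHubVnetConnection -ResourceGroupName $rgName -VirtualHubName "Hub1" `
  -Name "VNet2-connection" -RemoteVirtualNetwork $vnet
```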
## Next steps
-For steps to configure a custom IPsec policy, see [Configure a custom IPsec policy for Virtual WAN](virtual-wan-custom-ipsec-portal.md).
-For more information about Virtual WAN, see [About Azure Virtual WAN](virtual-wan-about.md) and the [Azure Virtual WAN FAQ](virtual-wan-faq.md).
+* For more information about Virtual WAN site-to-site VPN, see [Tutorial: Virtual WAN Site-to-site VPN](virtual-wan-site-to-site-portal.md).
+* For more information about VPN Gateway active-active gateway settings, see [VPN Gateway active-active configurations](../vpn-gateway/active-active-portal.md).
web-application-firewall Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/quick-create-bicep.md
+
+ Title: 'Quickstart: Create an Azure WAF v2 on Application Gateway - Bicep'
+
+description: Learn how to use Bicep to create a Web Application Firewall v2 on Azure Application Gateway.
+Last updated : 06/22/2022
+# Quickstart: Create an Azure WAF v2 on Application Gateway using Bicep
+
+In this quickstart, you use Bicep to create an Azure Web Application Firewall v2 on Application Gateway.
+++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Review the Bicep file
+
+This Bicep file creates a simple Web Application Firewall v2 on Azure Application Gateway. The deployment includes a public frontend IP address, HTTP settings, a rule with a basic listener on port 80, and a backend pool. The file also creates a WAF policy with a custom rule that blocks traffic to the backend pool based on an IP address match type.
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-wafv2/).
+Multiple Azure resources are defined in the Bicep file:
+
+- [**Microsoft.Network/applicationgateways**](/azure/templates/microsoft.network/applicationgateways)
+- [**Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies**](/azure/templates/microsoft.network/ApplicationGatewayWebApplicationFirewallPolicies)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses): one for the application gateway and two for the virtual machines.
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines): two virtual machines
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces): two for the virtual machines
+- [**Microsoft.Compute/virtualMachine/extensions**](/azure/templates/microsoft.compute/virtualmachines/extensions): to configure IIS and the web pages
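+
+The WAF policy's custom rule blocks requests whose source IP address matches a given range. As a hedged illustration of how such a rule can be built with Azure PowerShell (the policy name, rule name, priority, and IP range below are hypothetical and not taken from the quickstart template, where the rule is defined in Bicep):
+
+```azurepowershell
+# Illustrative sketch only: a WAF policy with a custom rule that blocks traffic based on an IP address match.
+# The policy name, rule name, priority, and IP range are hypothetical placeholders.
+$variable  = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable `
+  -Operator IPMatch -MatchValue "203.0.113.0/24" -NegationCondition $false
+$rule = New-AzApplicationGatewayFirewallCustomRule -Name blockByIP -Priority 10 `
+  -RuleType MatchRule -MatchCondition $condition -Action Block
+
+New-AzApplicationGatewayFirewallPolicy -Name MyExampleWafPolicy -ResourceGroupName exampleRG `
+  -Location eastus -CustomRule $rule
+```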
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-user>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-user>"
+ ```
+
+
+
+> [!NOTE]
+> You'll be prompted to enter **adminPassword**, which is the password for the admin account on the backend servers. The password must be 8 to 123 characters long and must contain at least three of the following: an uppercase character, a lowercase character, a numeric digit, and a special character.
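+
+If you'd rather not be prompted, a small sketch (assuming the parameter is named **adminPassword**, as the prompt indicates) passes the password as a secure string in the PowerShell deployment:
+
+```azurepowershell
+# Hedged sketch: supply adminPassword up front instead of at the interactive prompt.
+# Assumes template parameters named adminUsername and adminPassword.
+$adminPassword = Read-Host -Prompt "Enter the admin password" -AsSecureString
+New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep `
+  -adminUsername "<admin-user>" -adminPassword $adminPassword
+```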
+
+When the deployment finishes, you should see a message indicating the deployment succeeded. The deployment can take 10 minutes or longer to complete.
+
+## Validate the deployment
+
+Although IIS isn't required to create the application gateway, it's installed on the backend servers so you can verify that Azure successfully created a WAF v2 on the application gateway.
+
+Use IIS to test the application gateway:
+
+1. Find the public IP address for the application gateway on its **Overview** page. (You can also list the addresses with PowerShell, as sketched after these steps.)
+
+   ![Record application gateway public IP address](../../application-gateway/media/application-gateway-create-gateway-bicep/app-gateway-ip-address-bicep.png)
+2. Copy the public IP address, and then paste it into the address bar of your browser to browse that IP address.
+3. Check the response. A **403 Forbidden** response verifies that the WAF was successfully created and is blocking connections to the backend pool.
+4. Change the custom rule to **Allow traffic** using Azure PowerShell.
+
+ ```azurepowershell
+
+ $rgName = "exampleRG"
+ $appGWName = "myAppGateway"
+ $fwPolicyName = "WafPol01"
+
+ # Pull the existing Azure resources
+
+ $appGW = Get-AzApplicationGateway -Name $appGWName -ResourceGroupName $rgName
+ $pol = Get-AzApplicationGatewayFirewallPolicy -Name $fwPolicyName -ResourceGroupName $rgName
+
+ # Update the resources
+
+ $pol[0].CustomRules[0].Action = "allow"
+ $appGW.FirewallPolicy = $pol
+
+ # Push your changes to Azure
+
+ Set-AzApplicationGatewayFirewallPolicy -Name $fwPolicyName -ResourceGroupName $rgName -CustomRule $pol.CustomRules
+ Set-AzApplicationGateway -ApplicationGateway $appGW
+ ```
+
+
+
+ Refresh your browser multiple times and you should see connections to both myVM1 and myVM2.
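+
+As referenced in step 1, a quick PowerShell alternative to the portal for finding the gateway's frontend address is to list the public IP addresses in the resource group; this minimal sketch uses the resource group name from this quickstart:
+
+```azurepowershell
+# List every public IP address in the resource group; the application gateway's frontend address is among them.
+Get-AzPublicIpAddress -ResourceGroupName exampleRG | Select-Object Name, IpAddress
+```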
+
+## Clean up resources
+
+When you no longer need the resources that you created with the application gateway, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group. This removes the application gateway and all the related resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Create an application gateway with a Web Application Firewall using the Azure portal](application-gateway-web-application-firewall-portal.md)