Updates from: 06/25/2022 01:08:54
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Application Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/application-types.md
In a web application, each execution of a [policy](user-flow-overview.md) takes
Validation of the `id_token` by using a public signing key that is received from Azure AD is sufficient to verify the identity of the user. This process also sets a session cookie that can be used to identify the user on subsequent page requests.
-To see this scenario in action, try one of the web application sign in code samples in our [Getting started section](overview.md).
+To see this scenario in action, try one of the web application sign-in code samples in our [Getting started section](overview.md).
In addition to facilitating simple sign in, a web server application might also need to access a back-end web service. In this case, the web application can perform a slightly different [OpenID Connect flow](openid-connect.md) and acquire tokens by using authorization codes and refresh tokens. This scenario is depicted in the following [Web APIs section](#web-apis).
In this flow, the application executes [policies](user-flow-overview.md) and rec
Applications that contain long-running processes or that operate without the presence of a user also need a way to access secured resources such as web APIs. These applications can authenticate and get tokens by using their identities (rather than a user's delegated identity) and by using the OAuth 2.0 client credentials flow. The client credentials flow isn't the same as the on-behalf-of flow, and the on-behalf-of flow shouldn't be used for server-to-server authentication.
-The [OAuth 2.0 client credentials flow](./client-credentials-grant-flow.md) is currently in public preview. You can also set up client credential flow using Azure AD and the Microsoft identity platform /token endpoint (`https://login.microsoftonline.com/your-tenant-name.onmicrosoft.com/oauth2/v2.0/token`) for a [Microsoft Graph application](microsoft-graph-get-started.md) or your own application. For more information, check out the [Azure AD token reference](../active-directory/develop/id-tokens.md) article.
+For Azure AD B2C, the [OAuth 2.0 client credentials flow](./client-credentials-grant-flow.md) is currently in public preview. However, you can set up client credential flow using Azure AD and the Microsoft identity platform `/token` endpoint (`https://login.microsoftonline.com/your-tenant-name.onmicrosoft.com/oauth2/v2.0/token`) for a [Microsoft Graph application](microsoft-graph-get-started.md) or your own application. For more information, check out the [Azure AD token reference](../active-directory/develop/id-tokens.md) article.
## Unsupported application types
active-directory-b2c Client Credentials Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/client-credentials-grant-flow.md
Previously updated : 06/15/2022 Last updated : 06/21/2022
The OAuth 2.0 client credentials grant flow permits an app (confidential client)
In the client credentials flow, permissions are granted directly to the application itself by an administrator. When the app presents a token to a resource, the resource enforces that the app itself has authorization to perform an action since there's no user involved in the authentication. This article covers the steps needed to authorize an application to call an API, and how to get the tokens needed to call that API.
+**This feature is in public preview.**
+ ## App registration overview
To enable your app to sign in with client credentials and call a web API, you register two applications in the Azure AD B2C directory.
can't contain spaces. The following example demonstrates two app roles, read and
## Step 2. Register an application
-To enable your app to sign in with Azure AD B2C using client credentials flow, register your applications (**App 1**). To create the web API app registration, follow these steps:
+To enable your app to sign in with Azure AD B2C using client credentials flow, you can use an existing application or register a new one (**App 1**).
+
+If you're using an existing app, make sure the app's `accessTokenAcceptedVersion` is set to `2`:
+
+1. In the Azure portal, search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select your existing app from the list.
+1. In the left menu, under **Manage**, select **Manifest** to open the manifest editor.
+1. Locate the `accessTokenAcceptedVersion` element, and set its value to `2`.
+1. At the top of the page, select **Save** to save the changes.
+
+To create a new web app registration, follow these steps:
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Select **App registrations**, and then select **New registration**.
$appId = "<client ID>"
$secret = "<client secret>"
$endpoint = "https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy>/oauth2/v2.0/token"
$scope = "<Your API id uri>/.default"
-$body = "granttype=client_credentials&scope=" + $scope + "&client_id=" + $appId + "&client_secret=" + $secret
+$body = "grant_type=client_credentials&scope=" + $scope + "&client_id=" + $appId + "&client_secret=" + $secret
$token = Invoke-RestMethod -Method Post -Uri $endpoint -Body $body
```
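As an illustration of what you might do next, the following sketch calls a protected API with the token obtained above; the API URL is a placeholder, and the `access_token` property name follows the standard OAuth 2.0 token response.

```powershell
# Illustration only: call your protected API with the access token acquired above.
# The API URL is a placeholder; replace it with your own web API endpoint.
$apiUrl  = "https://<your-api-host>/<endpoint>"
$headers = @{ Authorization = "Bearer $($token.access_token)" }

Invoke-RestMethod -Method Get -Uri $apiUrl -Headers $headers
```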
active-directory-b2c Identity Provider Swissid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-swissid.md
To enable sign-in for users with a SwissID account in Azure AD B2C, you need to
|Key |Note |
|---|---|
- | Environment| The SwissID OpenId well-known configuration endpoint. For example, <https://login.sandbox.pre.swissid.ch/idp/oauth2/.well-known/openid-configuration>. |
- | Client ID | The SwissID client ID. For example, 11111111-2222-3333-4444-555555555555. |
+ | Environment| The SwissID OpenId well-known configuration endpoint. For example, `https://login.sandbox.pre.swissid.ch/idp/oauth2/.well-known/openid-configuration`. |
+ | Client ID | The SwissID client ID. For example, `11111111-2222-3333-4444-555555555555`. |
| Password| The SwissID client secret.|
active-directory-b2c Implicit Flow Single Page Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/implicit-flow-single-page-application.md
Title: Single-page application sign-in using the OAuth 2.0 implicit flow in Azure Active Directory B2C
-description: Learn how to add single-page sign in using the OAuth 2.0 implicit flow with Azure Active Directory B2C.
+description: Learn how to add single-page sign-in using the OAuth 2.0 implicit flow with Azure Active Directory B2C.
Some frameworks, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authent
Azure AD B2C extends the standard OAuth 2.0 implicit flow to more than simple authentication and authorization. Azure AD B2C introduces the [policy parameter](user-flow-overview.md). With the policy parameter, you can use OAuth 2.0 to add policies to your app, such as sign-up, sign-in, and profile management user flows. In the example HTTP requests in this article, we use **{tenant}.onmicrosoft.com** for illustration. Replace `{tenant}` with [the name of your tenant](tenant-management.md#get-your-tenant-name) if you have one. Also, you need to have [created a user flow](tutorial-create-user-flows.md?pivots=b2c-user-flow).
-We use the following figure to illustrate implicit sign in flow. Each step is described in detail later in the article.
+We use the following figure to illustrate implicit sign-in flow. Each step is described in detail later in the article.
![Swimlane-style diagram showing the OpenID Connect implicit flow](./media/implicit-flow-single-page-application/convergence_scenarios_implicit.png)
The parameters in the HTTP GET request are explained in the table below; a sketch of a request built from these parameters follows the table.
| scope | Yes | A space-separated list of scopes. A single scope value indicates to Azure AD both of the permissions that are being requested. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web apps. It indicates that your app needs a refresh token for long-lived access to resources. |
| state | No | A value included in the request that also is returned in the token response. It can be a string of any content that you want to use. Usually, a randomly generated, unique value is used, to prevent cross-site request forgery attacks. The state is also used to encode information about the user's state in the app before the authentication request occurred, for example, the page the user was on, or the user flow that was being executed. |
| nonce | Yes | A value included in the request (generated by the app) that is included in the resulting ID token as a claim. The app can then verify this value to mitigate token replay attacks. Usually, the value is a randomized, unique string that can be used to identify the origin of the request. |
-| prompt | No | The type of user interaction that's required. Currently, the only valid value is `login`. This parameter forces the user to enter their credentials on that request. Single sign-on doesn't take effect. |
+| prompt | No | The type of user interaction that's required. Currently, the only valid value is `login`. This parameter forces the user to enter their credentials on that request. Single Sign-On doesn't take effect. |
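To show how these parameters fit together, here's a rough sketch that assembles an authorization request URL. The tenant, policy, client ID, and redirect URI values are placeholders, not values taken from this article.

```powershell
# Illustration only: assemble an implicit-flow authorization request URL for a user flow (policy).
$tenant   = "contoso"
$policy   = "b2c_1_signupsignin"
$clientId = "00000000-0000-0000-0000-000000000000"
$redirect = [uri]::EscapeDataString("https://localhost:5000/")
$state    = [guid]::NewGuid().ToString()   # returned to you unchanged in the response
$nonce    = [guid]::NewGuid().ToString()   # echoed back as a claim in the ID token

$query = @(
    "client_id=$clientId"
    "response_type=id_token"               # append '+token' here to also request an access token
    "redirect_uri=$redirect"
    "response_mode=fragment"
    "scope=openid"
    "state=$state"
    "nonce=$nonce"
) -join '&'

"https://$tenant.b2clogin.com/$tenant.onmicrosoft.com/$policy/oauth2/v2.0/authorize?$query"
```

Sending the user's browser to a URL like this one starts the interactive step described next.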
This is the interactive part of the flow. The user is asked to complete the policy's workflow. The user might have to enter their username and password, sign in with a social identity, sign up for a local account, or any other number of steps. User actions depend on how the user flow is defined.
ID tokens and access tokens both expire after a short period of time. Your app m
## Send a sign-out request
-When you want to sign the user out of the app, redirect the user to Azure AD B2C's sign-out endpoint. You can then clear the user's session in the app. If you don't redirect the user, they might be able to reauthenticate to your app without entering their credentials again because they have a valid single sign-on session with Azure AD B2C.
+When you want to sign the user out of the app, redirect the user to Azure AD B2C's sign-out endpoint. You can then clear the user's session in the app. If you don't redirect the user, they might be able to reauthenticate to your app without entering their credentials again because they have a valid Single Sign-On session with Azure AD B2C.
You can simply redirect the user to the `end_session_endpoint` that is listed in the same OpenID Connect metadata document described in [Validate the ID token](#validate-the-id-token). For example:
GET https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/
> [!NOTE]
-> Directing the user to the `end_session_endpoint` clears some of the user's single sign-on state with Azure AD B2C. However, it doesn't sign the user out of the user's social identity provider session. If the user selects the same identity provider during a subsequent sign in, the user is re-authenticated, without entering their credentials. If a user wants to sign out of your Azure AD B2C application, it doesn't necessarily mean they want to completely sign out of their Facebook account, for example. However, for local accounts, the user's session will be ended properly.
->
+> Directing the user to the `end_session_endpoint` clears some of the user's Single Sign-On state with Azure AD B2C. However, it doesn't sign the user out of the user's social identity provider session. If the user selects the same identity provider during a subsequent sign in, the user is re-authenticated, without entering their credentials. If a user wants to sign out of your Azure AD B2C application, it doesn't necessarily mean they want to completely sign out of their Facebook account, for example. However, for local accounts, the user's session will be ended properly.
+ ## Next steps
active-directory-b2c Microsoft Graph Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-get-started.md
Previously updated : 09/20/2021 Last updated : 06/24/2022
There are two modes of communication you can use when working with the Microsoft
You enable the **Automated** interaction scenario by creating an application registration shown in the following sections.
-Although the OAuth 2.0 client credentials grant flow is not currently directly supported by the Azure AD B2C authentication service, you can set up client credential flow using Azure AD and the Microsoft identity platform /token endpoint for an application in your Azure AD B2C tenant. An Azure AD B2C tenant shares some functionality with Azure AD enterprise tenants.
+The Azure AD B2C authentication service directly supports the OAuth 2.0 client credentials grant flow (**currently in public preview**), but you can't use it to manage your Azure AD B2C resources via the Microsoft Graph API. However, you can set up the [client credential flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) using Azure AD and the Microsoft identity platform `/token` endpoint for an application in your Azure AD B2C tenant.
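As a hedged sketch of that automated scenario, the following PowerShell acquires a token from the `/token` endpoint with client credentials and calls Microsoft Graph. The tenant name, client ID, and secret are placeholders, and the app registration and Graph application permissions it assumes are covered in the next section.

```powershell
# Illustration only: acquire a Microsoft Graph token with client credentials and list users.
# Placeholder values; the management app must already have the required Graph application permissions.
$tenant = "your-b2c-tenant.onmicrosoft.com"
$body = @{
    grant_type    = "client_credentials"
    client_id     = "<management app client ID>"
    client_secret = "<client secret>"
    scope         = "https://graph.microsoft.com/.default"
}

$token = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenant/oauth2/v2.0/token" -Body $body

Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/users" `
    -Headers @{ Authorization = "Bearer $($token.access_token)" }
```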
## Register management application
active-directory Application Proxy Connector Installation Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connector-installation-problem.md
When the installation of a connector fails, the root cause is usually one of the
> >
-**Review the pre-requisites required:**
+**Review the prerequisites required:**
-1. Verify the machine supports TLS1.2 – All Windows versions after 2012 R2 should support TLS 1.2. If your connector machine is from a version of 2012 R2 or prior, make sure that the following KBs are installed on the machine: <https://support.microsoft.com/help/2973337/sha512-is-disabled-in-windows-when-you-use-tls-1.2>
+1. Verify the machine supports TLS 1.2 – All Windows versions after 2012 R2 should support TLS 1.2. If your connector machine runs Windows Server 2012 R2 or earlier, make sure that the [required updates](https://support.microsoft.com/help/2973337/sha512-is-disabled-in-windows-when-you-use-tls-1.2) are installed. (A quick registry check sketch follows this list.)
2. Contact your network admin and ask to verify that the backend proxy and firewall do not block SHA512 for outgoing traffic.
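The following PowerShell sketch is one quick way to spot-check the first prerequisite on the connector machine. It assumes the default SCHANNEL registry locations; it isn't a replacement for installing the linked updates.

```powershell
# Illustration only: check whether TLS 1.2 is explicitly disabled in SCHANNEL.
# Missing keys usually mean the OS defaults apply (TLS 1.2 is on by default from 2012 R2 onward).
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2'
foreach ($role in 'Client', 'Server') {
    $path = Join-Path $base $role
    if (Test-Path $path) {
        Get-ItemProperty -Path $path | Select-Object @{n='Role';e={$role}}, Enabled, DisabledByDefault
    }
    else {
        "TLS 1.2 $role key not present - OS default TLS settings apply."
    }
}
```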
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Title: Microsoft Entra Authenticator app authentication method - Azure Active Directory
-description: Learn about using the Microsoft Entra Authenticator app in Azure Active Directory to help secure your sign-ins
+ Title: Microsoft Authenticator authentication method - Azure Active Directory
+description: Learn about using the Microsoft Authenticator in Azure Active Directory to help secure your sign-ins
Previously updated : 06/09/2022 Last updated : 06/23/2022
# Customer intent: As an identity administrator, I want to understand how to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# Authentication methods in Azure Active Directory - Microsoft Entra Authenticator app
+# Authentication methods in Azure Active Directory - Microsoft Authenticator app
-The Microsoft Entra Authenticator app provides an additional level of security to your Azure AD work or school account or your Microsoft account and is available for [Android](https://go.microsoft.com/fwlink/?linkid=866594) and [iOS](https://go.microsoft.com/fwlink/?linkid=866594). With the Microsoft Authenticator app, users can authenticate in a passwordless way during sign-in, or as an additional verification option during self-service password reset (SSPR) or multifactor authentication events.
+The Microsoft Authenticator app provides an additional level of security to your Azure AD work or school account or your Microsoft account and is available for [Android](https://go.microsoft.com/fwlink/?linkid=866594) and [iOS](https://go.microsoft.com/fwlink/?linkid=866594). With the Microsoft Authenticator app, users can authenticate in a passwordless way during sign-in, or as an additional verification option during self-service password reset (SSPR) or multifactor authentication events.
Users may receive a notification through the mobile app for them to approve or deny, or use the Authenticator app to generate an OATH verification code that can be entered in a sign-in interface. If you enable both a notification and verification code, users who register the Authenticator app can use either method to verify their identity.
-To use the Authenticator app at a sign-in prompt rather than a username and password combination, see [Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md).
+To use the Authenticator app at a sign-in prompt rather than a username and password combination, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md).
> [!NOTE]
> Users don't have the option to register their mobile app when they enable SSPR. Instead, users can register their mobile app at [https://aka.ms/mfasetup](https://aka.ms/mfasetup) or as part of the combined security info registration at [https://aka.ms/setupsecurityinfo](https://aka.ms/setupsecurityinfo).
Instead of seeing a prompt for a password after entering a username, a user that
This authentication method provides a high level of security, and removes the need for the user to provide a password at sign-in.
-To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md).
+To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md).
## Notification through mobile app
Users may have a combination of up to five OATH hardware tokens or authenticator
## Next steps
-- To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md).
+- To get started with passwordless sign-in, see [Enable passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md).
- Learn more about configuring authentication methods using the [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview).
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
Title: Azure Active Directory passwordless sign-in
-description: Learn about options for passwordless sign-in to Azure Active Directory using FIDO2 security keys or the Microsoft Entra Authenticator app
+description: Learn about options for passwordless sign-in to Azure Active Directory using FIDO2 security keys or Microsoft Authenticator
Previously updated : 06/09/2022 Last updated : 06/23/2022
Features like multifactor authentication (MFA) are a great way to secure your or
Each organization has different needs when it comes to authentication. Microsoft global Azure and Azure Government offer the following three passwordless authentication options that integrate with Azure Active Directory (Azure AD):
- Windows Hello for Business
-- Microsoft Entra Authenticator app
+- Microsoft Authenticator
- FIDO2 security keys

![Authentication: Security versus convenience](./media/concept-authentication-passwordless/passwordless-convenience-security.png)
The following steps show how the sign-in process works with Azure AD:
The Windows Hello for Business [planning guide](/windows/security/identity-protection/hello-for-business/hello-planning-guide) can be used to help you make decisions on the type of Windows Hello for Business deployment and the options you'll need to consider.
-## Microsoft Entra Authenticator App
+## Microsoft Authenticator
You can also allow your employee's phone to become a passwordless authentication method. You may already be using the Authenticator app as a convenient multi-factor authentication option in addition to a password, and you can use the Authenticator app as a passwordless option as well.
-![Sign in to Microsoft Edge with the Microsoft Entra Authenticator app](./media/concept-authentication-passwordless/concept-web-sign-in-microsoft-authenticator-app.png)
+![Sign in to Microsoft Edge with the Microsoft Authenticator](./media/concept-authentication-passwordless/concept-web-sign-in-microsoft-authenticator-app.png)
-The Authenticator App turns any iOS or Android phone into a strong, passwordless credential. Users can sign in to any platform or browser by getting a notification to their phone, matching a number displayed on the screen to the one on their phone, and then using their biometric (touch or face) or PIN to confirm. Refer to [Download and install the Microsoft Entra Authenticator app](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a) for installation details.
+The Authenticator App turns any iOS or Android phone into a strong, passwordless credential. Users can sign in to any platform or browser by getting a notification to their phone, matching a number displayed on the screen to the one on their phone, and then using their biometric (touch or face) or PIN to confirm. Refer to [Download and install the Microsoft Authenticator](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a) for installation details.
Passwordless authentication using the Authenticator app follows the same basic pattern as Windows Hello for Business. It's a little more complicated as the user needs to be identified so that Azure AD can find the Authenticator app version being used:
active-directory How To Migrate Mfa Server To Azure Mfa User Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md
Last updated 04/07/2022 --++
active-directory How To Migrate Mfa Server To Azure Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-with-federation.md
Last updated 04/21/2022--++
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
Title: Migrate from MFA Server to Azure AD Multi-Factor Authentication - Azure Active Directory
-description: Step-by-step guidance to migrate from Azure MFA Server on-premises to Azure Multi-Factor Authentication
+description: Step-by-step guidance to migrate from MFA Server on-premises to Azure AD Multi-Factor Authentication
Previously updated : 06/09/2022 Last updated : 06/23/2022 --++
-# Migrate from Azure MFA Server to Azure AD Multi-Factor Authentication
+# Migrate from MFA Server to Azure AD Multi-Factor Authentication
-Multifactor authentication (MFA) is important to securing your infrastructure and assets from bad actors. Azure Multi-Factor Authentication Server (MFA Server) isn't available for new deployments and will be deprecated. Customers who are using MFA Server should move to using cloud-based Azure Active Directory (Azure AD) Multi-Factor Authentication.
+Multifactor authentication (MFA) is important to securing your infrastructure and assets from bad actors. Azure AD Multi-Factor Authentication Server (MFA Server) isn't available for new deployments and will be deprecated. Customers who are using MFA Server should move to using cloud-based Azure Active Directory (Azure AD) Multi-Factor Authentication.
In this article, we assume that you have a hybrid environment where:
There are multiple possible end states to your migration, depending on your goal
| <br> | Goal: Decommission MFA Server ONLY | Goal: Decommission MFA Server and move to Azure AD Authentication | Goal: Decommission MFA Server and AD FS |
|---|---|---|---|
|MFA provider | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. |
-|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** seamless single sign-on (SSO).| Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** SSO. |
+|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** Seamless Single Sign-On (SSO).| Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** SSO. |
|Application authentication | Continue to use AD FS authentication for your applications. | Continue to use AD FS authentication for your applications. | Move apps to Azure AD before migrating to Azure AD Multi-Factor Authentication. |

If you can, move both your multifactor authentication and your user authentication to Azure. For step-by-step guidance, see [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md).
While you can migrate users' registered multifactor authentication phone numbe
Users will need to register and add a new account on the Authenticator app and remove the old account. To help users to differentiate the newly added account from the old account linked to the MFA Server, make sure the Account name for the Mobile App on the MFA Server is named in a way to distinguish the two accounts.
-For example, the Account name that appears under Mobile App on the MFA Server has been renamed to OnPrem MFA Server.
+For example, the Account name that appears under Mobile App on the MFA Server has been renamed to On-Premises MFA Server.
The account name on the Authenticator App will change with the next push notification to the user. Migrating phone numbers can also lead to stale numbers being migrated and make users more likely to stay on phone-based MFA instead of setting up more secure methods like Microsoft Authenticator in passwordless mode.
We recommend that you use Password Hash Synchronization (PHS).
### Passwordless authentication
-As part of enrolling users to use Microsoft Authenticator as a second factor, we recommend you enable passwordless phone sign-in as part of their registration. For more information, including other passwordless methods such as FIDO and Windows Hello for Business, visit [Plan a passwordless authentication deployment with Azure AD](howto-authentication-passwordless-deployment.md#plan-for-and-deploy-the-authenticator-app).
+As part of enrolling users to use Microsoft Authenticator as a second factor, we recommend you enable passwordless phone sign-in as part of their registration. For more information, including other passwordless methods such as FIDO and Windows Hello for Business, visit [Plan a passwordless authentication deployment with Azure AD](howto-authentication-passwordless-deployment.md#plan-for-and-deploy-microsoft-authenticator).
### Microsoft Identity Manager self-service password reset
Check with the service provider for supported product versions and their capabil
- The NPS extension doesn't use Azure AD Conditional Access policies. If you stay with RADIUS and use the NPS extension, all authentication requests going to NPS will require the user to perform MFA.
- Users must register for Azure AD Multi-Factor Authentication prior to using the NPS extension. Otherwise, the extension fails to authenticate the user, which can generate help desk calls.
- When the NPS extension invokes MFA, the MFA request is sent to the user's default MFA method.
- - Because the sign-in happens on non-Microsoft applications, it is unlikely that the user will see visual notification that multifactor authentication is required and that a request has been sent to their device.
- - During the multifactor authentication requirement, the user must have access to their default authentication method to complete the requirement. They cannot choose an alternative method. Their default authentication method will be used even if it is disabled in the tenant authentication methods and multifactor authentication policies.
+ - Because the sign-in happens on non-Microsoft applications, it's unlikely that the user will see visual notification that multifactor authentication is required and that a request has been sent to their device.
+ - During the multifactor authentication requirement, the user must have access to their default authentication method to complete the requirement. They can't choose an alternative method. Their default authentication method will be used even if it's disabled in the tenant authentication methods and multifactor authentication policies.
- Users can change their default multifactor authentication method in the Security Info page (aka.ms/mysecurityinfo).
- Available MFA methods for RADIUS clients are controlled by the client systems sending the RADIUS access requests.
- MFA methods that require user input after they enter a password can only be used with systems that support access-challenge responses with RADIUS. Input methods might include OTP, hardware OATH tokens or the Microsoft Authenticator application.
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
Previously updated : 06/09/2022 Last updated : 06/23/2022 --++
Passwords are a primary attack vector. Bad actors use social engineering, phishi
Microsoft offers the following [three passwordless authentication options](concept-authentication-passwordless.md) that integrate with Azure Active Directory (Azure AD):
-* [Microsoft Entra Authenticator app](./concept-authentication-passwordless.md#microsoft-entra-authenticator-app) - turns any iOS or Android phone into a strong, passwordless credential by allowing users to sign into any platform or browser.
+* [Microsoft Authenticator](./concept-authentication-passwordless.md#microsoft-authenticator) - turns any iOS or Android phone into a strong, passwordless credential by allowing users to sign into any platform or browser.
* [FIDO2-compliant security keys](./concept-authentication-passwordless.md#fido2-security-keys) - useful for users who sign in to shared machines like kiosks, in situations where use of phones is restricted, and for highly privileged identities.
The following table lists the passwordless authentication methods by device type
| Device types| Passwordless authentication method |
| - | - |
-| Dedicated non-windows devices| <li> **Microsoft Entra Authenticator app** <li> Security keys |
+| Dedicated non-windows devices| <li> **Microsoft Authenticator** <li> Security keys |
| Dedicated Windows 10 computers (version 1703 and later)| <li> **Windows Hello for Business** <li> Security keys |
-| Dedicated Windows 10 computers (before version 1703)| <li> **Windows Hello for Business** <li> Microsoft Entra Authenticator app |
-| Shared devices: tablets, and mobile devices| <li> **Microsoft Entra Authenticator app** <li> One-time password sign-in |
-| Kiosks (Legacy)| **Microsoft Entra Authenticator app** |
-| Kiosks and shared computers (Windows 10)| <li> **Security keys** <li> Microsoft Entra Authenticator app |
+| Dedicated Windows 10 computers (before version 1703)| <li> **Windows Hello for Business** <li> Microsoft Authenticator app |
+| Shared devices: tablets, and mobile devices| <li> **Microsoft Authenticator** <li> One-time password sign-in |
+| Kiosks (Legacy)| **Microsoft Authenticator** |
+| Kiosks and shared computers (Windows 10)| <li> **Security keys** <li> Microsoft Authenticator app |
## Prerequisites
As part of this deployment plan, we recommend that passwordless authentication b
The prerequisites are determined by your selected passwordless authentication methods.
-| Prerequisite| Microsoft Entra Authenticator app| FIDO2 Security Keys|
+| Prerequisite| Microsoft Authenticator| FIDO2 Security Keys|
| - | -|-|
| [Combined registration for Azure AD Multi-Factor Authentication (MFA) and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md) is enabled| √| √|
| [Users can perform Azure AD MFA](howto-mfa-getstarted.md)| √| √|
Your communications to end users should include the following information:
* [Guidance on combined registration for both Azure AD MFA and SSPR](howto-registration-mfa-sspr-combined.md)
-* [Downloading the Microsoft Entra Authenticator app](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a)
+* [Downloading Microsoft Authenticator](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a)
-* [Registering in the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md)
+* [Registering in Microsoft Authenticator](howto-authentication-passwordless-phone.md)
* [Signing in with your phone](https://support.microsoft.com/account-billing/sign-in-to-your-accounts-using-the-microsoft-authenticator-app-582bdc07-4566-4c97-a7aa-56058122714c)
This method can also be used for easy recovery when the user has lost or forgott
>[!NOTE] > If you can't use the security key or the Authenticator app for some scenarios, multifactor authentication with a username and password along with another registered method can be used as a fallback option.
-## Plan for and deploy the Authenticator app
+## Plan for and deploy Microsoft Authenticator
-The [Authenticator app](concept-authentication-passwordless.md) turns any iOS or Android phone into a strong, passwordless credential. It's a free download from Google Play or the Apple App Store. Have users [download the Microsoft Entra Authenticator app](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a) and follow the directions to enable phone sign-in.
+[Microsoft Authenticator](concept-authentication-passwordless.md) turns any iOS or Android phone into a strong, passwordless credential. It's a free download from Google Play or the Apple App Store. Have users [download Microsoft Authenticator](https://support.microsoft.com/account-billing/download-and-install-the-microsoft-authenticator-app-351498fc-850a-45da-b7b6-27e523b8702a) and follow the directions to enable phone sign-in.
### Technical considerations

**Active Directory Federation Services (AD FS) Integration** - When a user enables the Authenticator passwordless credential, authentication for that user defaults to sending a notification for approval. Users in a hybrid tenant are prevented from being directed to AD FS for sign-in unless they select "Use your password instead." This process also bypasses any on-premises Conditional Access policies, and pass-through authentication (PTA) flows. However, if a login_hint is specified, the user is forwarded to AD FS and bypasses the option to use the passwordless credential.
-**Azure MFA server** - End users enabled for multi-factor authentication through an organization's on-premises Azure MFA server can create and use a single passwordless phone sign-in credential. If the user attempts to upgrade multiple installations (5 or more) of the Authenticator app with the credential, this change may result in an error.
+**MFA server** - End users enabled for multi-factor authentication through an organization's on-premises MFA server can create and use a single passwordless phone sign-in credential. If the user attempts to upgrade multiple installations (5 or more) of the Authenticator app with the credential, this change may result in an error.
> [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication. Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. We recommend moving from Azure MFA Server to Azure Active Directory MFA.
+> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication. Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. We recommend moving from MFA Server to Azure AD MFA.
**Device registration** - To use the Authenticator app for passwordless authentication, the device must be registered in the Azure AD tenant and can't be a shared device. A device can only be registered in a single tenant. This limit means that only one work or school account is supported for phone sign-in using the Authenticator app.

### Deploy phone sign-in with the Authenticator app
-Follow the steps in the article, [Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md) to enable the Authenticator app as a passwordless authentication method in your organization.
+Follow the steps in the article, [Enable passwordless sign-in with Microsoft Authenticator](howto-authentication-passwordless-phone.md) to enable the Authenticator app as a passwordless authentication method in your organization.
### Testing Authenticator app
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Title: Microsoft identity platform access tokens description: Learn about access tokens emitted by the Azure AD v1.0 and Microsoft identity platform (v2.0) endpoints. -+
Last updated 12/28/2021-+
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
Title: Customize Azure AD tenant app claims (PowerShell) description: Learn how to customize claims emitted in tokens for an application in a specific Azure Active Directory tenant.-+
Last updated 06/16/2021-+
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
Title: Provide optional claims to Azure AD apps description: How to add custom or additional claims to the SAML 2.0 and JSON Web Tokens (JWT) tokens issued by Microsoft identity platform.-+
Last updated 04/04/2022-+
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-saml-claims-customization.md
Title: Customize app SAML token claims description: Learn how to customize the claims issued by Microsoft identity platform in the SAML token for enterprise applications. -+ Last updated 02/07/2022-+
active-directory Active Directory Schema Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-schema-extensions.md
Title: Use Azure AD schema extension attributes in claims description: Describes how to use directory schema extension attributes for sending user data to applications in token claims. -+
Last updated 07/29/2020-+ # Using directory schema extension attributes in claims
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Title: Claims mapping policy description: Learn about the claims mapping policy type, which is used to modify the claims emitted in tokens issued for specific applications. -+
Last updated 03/04/2022-+
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
The initialization code is different depending on the platform. For ASP.NET Core
# [ASP.NET Core](#tab/aspnetcore)
-In ASP.NET Core web apps (and web APIs), the application is protected because you have a `[Authorize]` attribute on the controllers or the controller actions. This attribute checks that the user is authenticated. The code that's initializing the application is in the *Startup.cs* file.
+In ASP.NET Core web apps (and web APIs), the application is protected because you have a `[Authorize]` attribute on the controllers or the controller actions. This attribute checks that the user is authenticated. Prior to the release of .NET 6, the code that's initializing the application is in the *Startup.cs* file. New ASP.NET Core projects with .NET 6 no longer contain a *Startup.cs* file. Taking its place is the *Program.cs* file. The rest of this tutorial pertains to .NET 5 or lower.
To add authentication with the Microsoft identity platform (formerly Azure AD v2.0), you'll need to add the following code. The comments in the code should be self-explanatory.
In the code above:
- The `AddMicrosoftIdentityUI` extension method is defined in **Microsoft.Identity.Web.UI**. It provides a default controller to handle sign-in and sign-out.
-You can find more details about how Microsoft.Identity.Web enables you to create web apps in <https://aka.ms/ms-id-web/webapp>
+For more information about how Microsoft.Identity.Web enables you to create web apps, see [Web Apps in microsoft-identity-web](https://aka.ms/ms-id-web/webapp).
# [ASP.NET](#tab/aspnet)
active-directory Security Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-tokens.md
Title: Security tokens description: Learn about the basics of security tokens in the Microsoft identity platform. -+
Last updated 09/27/2021-+ #Customer intent: As an application developer, I want to understand the basic concepts of security tokens in the Microsoft identity platform.
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Title: Microsoft identity platform and OAuth 2.0 authorization code flow description: Build web applications using the Microsoft identity platform implementation of the OAuth 2.0 authentication protocol. -+
Last updated 02/02/2022-+
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
Title: Login in to Linux virtual machine in Azure using Azure Active Directory and openSSH certificate-based authentication
+ Title: Login to Linux virtual machine in Azure using Azure Active Directory and openSSH certificate-based authentication
description: Login with Azure AD using openSSH certificate-based authentication to an Azure VM running Linux
The following Azure regions are currently supported for this feature:
- Azure Global
- Azure Government
- Azure China 21Vianet
-
+ It's not supported to use this extension on Azure Kubernetes Service (AKS) clusters. For more information, see [Support policies for AKS](../../aks/support-policies.md).

If you choose to install and use the CLI locally, you must be running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).

> [!NOTE]
-> This is functionality is also available for [Azure Arc-enabled servers](../../azure-arc/servers/ssh-arc-overview.md).
+> This functionality is also available for [Azure Arc-enabled servers](../../azure-arc/servers/ssh-arc-overview.md).
## Requirements for login with Azure AD using openSSH certificate-based authentication
Ensure your VM is configured with the following functionality:
Ensure your client meets the following requirements:

-- SSH client must support OpenSSH based certificates for authentication. You can use Az CLI (2.21.1 or higher) with OpenSSH (included in Windows 10 version 1803 or higher) or Azure Cloud Shell to meet this requirement.
-- SSH extension for Az CLI. You can install this using `az extension add --name ssh`. You don't need to install this extension when using Azure Cloud Shell as it comes pre-installed.
-- If you're using any other SSH client other than Az CLI or Azure Cloud Shell that supports OpenSSH certificates, you'll still need to use Az CLI with SSH extension to retrieve ephemeral SSH cert and optionally a config file and then use the config file with your SSH client.
+- SSH client must support OpenSSH based certificates for authentication. You can use Azure CLI (2.21.1 or higher) with OpenSSH (included in Windows 10 version 1803 or higher) or Azure Cloud Shell to meet this requirement.
+- SSH extension for Azure CLI. You can install this using `az extension add --name ssh`. You don't need to install this extension when using Azure Cloud Shell as it comes pre-installed.
+- If you're using an SSH client other than Azure CLI or Azure Cloud Shell that supports OpenSSH certificates, you'll still need to use Azure CLI with the SSH extension to retrieve an ephemeral SSH certificate and, optionally, a config file, and then use the config file with your SSH client (see the sketch after this list).
- TCP connectivity from the client to either the public or private IP of the VM (ProxyCommand or SSH forwarding to a machine with connectivity also works).

> [!IMPORTANT]
> SSH clients based on PuTTY do not support openSSH certificates and cannot be used to log in with Azure AD openSSH certificate-based authentication.
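The sketch below illustrates the config-file approach mentioned in the last requirement. It assumes the Azure CLI SSH extension is installed, and the resource group and VM names are the example values used elsewhere in this article.

```powershell
# Illustration only: export an OpenSSH config file (plus ephemeral certificate) with the
# Azure CLI SSH extension, then connect with a standard OpenSSH client.
az ssh config --resource-group AzureADLinuxVM --name myVM --file .\azure-vm-ssh-config

# Use the host alias that appears in the generated file (it may differ from the VM name).
ssh -F .\azure-vm-ssh-config <host-alias-from-generated-file>
```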
-## Enabling Azure AD login in for Linux VM in Azure
+## Enabling Azure AD login for Linux VM in Azure
-To use Azure AD login in for Linux VM in Azure, you need to first enable Azure AD login option for your Linux VM, configure Azure role assignments for users who are authorized to login in to the VM and then use SSH client that supports OpensSSH such as Az CLI or Az Cloud Shell to SSH to your Linux VM. There are multiple ways you can enable Azure AD login for your Linux VM, as an example you can use:
+To use Azure AD login for a Linux VM in Azure, you first need to enable the Azure AD login option for your Linux VM, configure Azure role assignments for users who are authorized to log in to the VM, and then use an SSH client that supports OpenSSH, such as Azure CLI or Azure Cloud Shell, to SSH to your Linux VM. There are multiple ways you can enable Azure AD login for your Linux VM. For example, you can use:
- Azure portal experience when creating a Linux VM - Azure Cloud Shell experience when creating a Windows VM or for an existing Linux VM
As an example, to create an Ubuntu Server 18.04 Long Term Support (LTS) VM in Az
1. Check the box to enable **Login with Azure Active Directory (Preview)**.
1. Ensure **System assigned managed identity** is checked.
1. Go through the rest of the experience of creating a virtual machine. During this preview, you'll have to create an administrator account with username and password or SSH public key.
-
+ ### Using the Azure Cloud Shell experience

Azure Cloud Shell is a free, interactive shell that you can use to run the steps in this article. Common Azure tools are preinstalled and configured in Cloud Shell for you to use with your account. Just select the Copy button to copy the code, paste it in Cloud Shell, and then press Enter to run it. There are a few ways to open Cloud Shell:
The example can be customized to support your testing requirements as needed.
```azurecli-interactive
az group create --name AzureADLinuxVM --location southcentralus

az vm create \
    --resource-group AzureADLinuxVM \
    --name myVM \
az vm create \
    --assign-identity \
    --admin-username azureuser \
    --generate-ssh-keys

az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory \
    --name AADSSHLoginForLinux \
There are multiple ways you can configure role assignments for VM, as an example
- Azure AD Portal experience
- Azure Cloud Shell experience
-> [!Note]
+> [!NOTE]
> The Virtual Machine Administrator Login and Virtual Machine User Login roles use dataActions and can be assigned at the management group, subscription, resource group, or resource scope. It is recommended that the roles be assigned at the management group, subscription or resource level and not at the individual VM level to avoid risk of running out of [Azure role assignments limit](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit) per subscription.

### Using Azure AD Portal experience
To configure role assignments for your Azure AD enabled Linux VMs:
1. Select **Add** > **Add role assignment** to open the Add role assignment page.
1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
+ | Setting | Value |
| --- | --- |
| Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** |
![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)

After a few moments, the security principal is assigned the role at the selected scope.
-
+ ### Using the Azure Cloud Shell experience

The following example uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. The username of your current Azure account is obtained with [az account show](/cli/azure/account#az-account-show), and the scope is set to the VM created in a previous step with [az vm show](/cli/azure/vm#az-vm-show). The scope could also be assigned at a resource group or subscription level; normal Azure RBAC inheritance permissions apply.
az role assignment create \
> [!NOTE]
> If your Azure AD domain and logon username domain do not match, you must specify the object ID of your user account with the `--assignee-object-id` parameter, not just the username for `--assignee`. You can obtain the object ID for your user account with [az ad user list](/cli/azure/ad/user#az-ad-user-list).

For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see the article [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
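Pulling the pieces above together, a complete invocation might look like the following sketch. It's written with PowerShell-style line continuation (the article's own snippets use bash), and the resource names match the earlier example.

```powershell
# Illustration only: assign the VM login role to the signed-in user at the scope of the example VM.
$username = az account show --query user.name --output tsv
$vmId     = az vm show --resource-group AzureADLinuxVM --name myVM --query id --output tsv

az role assignment create `
    --role "Virtual Machine Administrator Login" `
    --assignee $username `
    --scope $vmId
```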
-## Install SSH extension for Az CLI
+## Install SSH extension for Azure CLI
-If youΓÇÖre using Azure Cloud Shell, then no other setup is needed as both the minimum required version of Az CLI and SSH extension for Az CLI are already included in the Cloud Shell environment.
+If youΓÇÖre using Azure Cloud Shell, then no other setup is needed as both the minimum required version of Azure CLI and SSH extension for Azure CLI are already included in the Cloud Shell environment.
-Run the following command to add SSH extension for Az CLI
+Run the following command to add SSH extension for Azure CLI
```azurecli
az extension add --name ssh
az extension show --name ssh
## Using Conditional Access
-You can enforce Conditional Access policies such as require multi-factor authentication, require compliant or hybrid Azure AD joined device for the device running SSH client, and checking for risk before authorizing access to Linux VMs in Azure that are enabled with Azure AD login in. The application that appears in Conditional Access policy is called "Azure Linux VM Sign-In".
+You can enforce Conditional Access policies such as require multi-factor authentication, require compliant or hybrid Azure AD joined device for the device running SSH client, and checking for risk before authorizing access to Linux VMs in Azure that are enabled with Azure AD login. The application that appears in Conditional Access policy is called "Azure Linux VM Sign-In".
> [!NOTE]
-> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join on the client device running SSH client only works with Az CLI running on Windows and macOS. It is not supported when using Az CLI on Linux or Azure Cloud Shell.
+> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join on the client device running SSH client only works with Azure CLI running on Windows and macOS. It is not supported when using Azure CLI on Linux or Azure Cloud Shell.
### Missing application
Another way to verify it is via Graph PowerShell:
## Login using Azure AD user account to SSH into the Linux VM
-### Using Az CLI
+### Using Azure CLI
First run `az login`, and then run `az ssh vm`.
The following example automatically resolves the appropriate IP address for the
az ssh vm -n myVM -g AzureADLinuxVM ```
-If prompted, enter your Azure AD login credentials at the login page, perform an MFA, and/or satisfy device checks. You'll only be prompted if your az CLI session doesn't already meet any required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you'll be automatically connected to the VM.
+If prompted, enter your Azure AD login credentials at the login page, perform an MFA, and/or satisfy device checks. You'll only be prompted if your Azure CLI session doesn't already meet any required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you'll be automatically connected to the VM.
You're now signed in to the Azure Linux virtual machine with the role permissions as assigned, such as VM User or VM Administrator. If your user account is assigned the Virtual Machine Administrator Login role, you can use sudo to run commands that require root privileges.
-### Using Az Cloud Shell
+### Using Azure Cloud Shell
-You can use Az Cloud Shell to connect to VMs without needing to install anything locally to your client machine. Start Cloud Shell by clicking the shell icon in the upper right corner of the Azure portal.
-
-Az Cloud Shell will automatically connect to a session in the context of the signed in user. During the Azure AD Login for Linux Preview, **you must run az login again and go through an interactive sign in flow**.
+You can use Azure Cloud Shell to connect to VMs without needing to install anything locally to your client machine. Start Cloud Shell by clicking the shell icon in the upper right corner of the Azure portal.
+
+Azure Cloud Shell will automatically connect to a session in the context of the signed in user. During the Azure AD Login for Linux Preview, **you must run az login again and go through an interactive sign in flow**.
```azurecli
az login
az ssh vm -n myVM -g AzureADLinuxVM
``` > [!NOTE]
-> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join is not supported when using Az Cloud Shell.
+> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join is not supported when using Azure Cloud Shell.
### Login using Azure AD service principal to SSH into the Linux VM
Use the following example to authenticate to Azure CLI using the service princip
az login --service-principal -u <sp-app-id> -p <password-or-cert> --tenant <tenant-id>
```
-Once authentication with a service principal is complete, use the normal Az CLI SSH commands to connect to the VM.
+Once authentication with a service principal is complete, use the normal Azure CLI SSH commands to connect to the VM.
```azurecli az ssh vm -n myVM -g AzureADLinuxVM
az ssh vm --ip 10.11.123.456
For customers who are using a previous version of Azure AD login for Linux that was based on device code flow, complete the following steps using Azure CLI.

1. Uninstall the AADLoginForLinux extension on the VM.
-
+ ```azurecli
   az vm extension delete -g MyResourceGroup --vm-name MyVm -n AADLoginForLinux
   ```

   > [!NOTE]
   > The extension uninstall can fail if there are any Azure AD users currently logged in on the VM. Make sure all users are logged off first.

1. Enable system-assigned managed identity on your VM.

   ```azurecli
Use Azure Policy to ensure Azure AD login is enabled for your new and existing Linux virtual machines.
## Troubleshoot sign-in issues
-Some common errors when you try to SSH with Azure AD credentials include no Azure roles assigned, and repeated prompts to sign in. Use the following sections to correct these issues.
+Some common errors when you try to SSH with Azure AD credentials include no Azure roles assigned, and repeated prompts to sign in. Use the following sections to correct these issues.
### Couldn't retrieve token from local cache
-You must run az login again and go through an interactive sign in flow. Review the section [Using Az Cloud Shell](#using-az-cloud-shell).
+You must run `az login` again and go through an interactive sign-in flow. Review the section [Using Azure Cloud Shell](#using-azure-cloud-shell).
### Access denied: Azure role not assigned
active-directory Plan Device Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/plan-device-deployment.md
Last updated 02/15/2022 --++
active-directory Directory Delegated Administration Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delegated-administration-primer.md
Previously updated : 03/24/2022 Last updated : 06/23/2022
# What is delegated administration?
-Managing permissions for external partners is a key part of your security posture. We've added capabilities to the Azure Active Directory (Azure AD) admin portal experience so that an administrator can see the relationships that their Azure AD tenant has with Microsoft Cloud Service Providers (CSP) who can manage the tenant. This permissions model is called delegated administration. This article introduces the Azure AD administrator to the relationship between the old Delegated Admin Permissions (DAP) permission model and the new Granular Delegated Admin Permissions (GDAP) permission model.
+Managing permissions for external partners is a key part of your security posture. We've added capabilities to the administrator portal experience in Azure Active Directory (Azure AD), part of Microsoft Entra, so that an administrator can see the relationships that their Azure AD tenant has with Microsoft Cloud Service Providers (CSP) who can manage the tenant. This permissions model is called delegated administration. This article introduces the Azure AD administrator to the relationship between the old Delegated Admin Permissions (DAP) permission model and the new Granular Delegated Admin Permissions (GDAP) permission model.
## Delegated administration relationships
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
Previously updated : 11/23/2021 Last updated : 06/23/2022
# Delete a tenant in Azure Active Directory
-When an Azure AD organization (tenant) is deleted, all resources that are contained in the organization are also deleted. Prepare your organization by minimizing its associated resources before you delete. Only an Azure Active Directory (Azure AD) global administrator can delete an Azure AD organization from the portal.
+When an organization (tenant) is deleted in Azure Active Directory (Azure AD), part of Microsoft Entra, all resources that are contained in the organization are also deleted. Prepare your organization by minimizing its associated resources before you delete. Only a global administrator in Azure AD can delete an Azure AD organization from the portal.
## Prepare the organization
active-directory Directory Overview User Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-overview-user-model.md
Previously updated : 09/01/2021 Last updated : 06/23/2022
# What is enterprise user management?
-This article introduces the Azure AD administrator to the relationship between top [identity management](../fundamentals/active-directory-whatis.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) tasks for users in terms of their groups, licenses, deployed enterprise apps, and administrator roles. As your organization grows, you can use Azure AD groups and administrator roles to:
+This article introduces an administrator for Azure Active Directory (Azure AD), part of Microsoft Entra, to the relationship between top [identity management](../fundamentals/active-directory-whatis.md?context=azure%2factive-directory%2fusers-groups-roles%2fcontext%2fugr-context) tasks for users in terms of their groups, licenses, deployed enterprise apps, and administrator roles. As your organization grows, you can use Azure AD groups and administrator roles to:
* Assign licenses to groups instead of individually
* Delegate permissions to distribute the work of Azure AD management to less-privileged roles
active-directory Directory Self Service Signup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-self-service-signup.md
Previously updated : 09/01/2021 Last updated : 06/23/2022
# What is self-service sign-up for Azure Active Directory?
-This article explains how to use self-service sign-up to populate an organization in Azure Active Directory (Azure AD). If you want to take over a domain name from an unmanaged Azure AD organization, see [Take over an unmanaged tenant as administrator](domains-admin-takeover.md).
+This article explains how to use self-service sign-up to populate an organization in Azure Active Directory (Azure AD), part of Microsoft Entra. If you want to take over a domain name from an unmanaged Azure AD organization, see [Take over an unmanaged tenant as administrator](domains-admin-takeover.md).
## Why use self-service sign-up?
active-directory Directory Service Limits Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-service-limits-restrictions.md
Previously updated : 10/27/2021 Last updated : 06/23/2022
# Azure AD service limits and restrictions
-This article contains the usage constraints and other service limits for the Azure Active Directory (Azure AD) service. If you're looking for the full set of Microsoft Azure service limits, see [Azure Subscription and Service Limits, Quotas, and Constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
+This article contains the usage constraints and other service limits for Azure Active Directory (Azure AD), part of Microsoft Entra. If you're looking for the full set of Microsoft Azure service limits, see [Azure Subscription and Service Limits, Quotas, and Constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).
[!INCLUDE [AAD-service-limits](../../../includes/active-directory-service-limits-include.md)]
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
Previously updated : 09/01/2021 Last updated : 06/23/2022
# Take over an unmanaged directory as administrator in Azure Active Directory
-This article describes two ways to take over a DNS domain name in an unmanaged directory in Azure Active Directory (Azure AD). When a self-service user signs up for a cloud service that uses Azure AD, they are added to an unmanaged Azure AD directory based on their email domain. For more about self-service or "viral" sign-up for a service, see [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md)
+This article describes two ways to take over a DNS domain name in an unmanaged directory in Azure Active Directory (Azure AD), part of Microsoft Entra. When a self-service user signs up for a cloud service that uses Azure AD, they are added to an unmanaged Azure AD directory based on their email domain. For more about self-service or "viral" sign-up for a service, see [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md)
> [!VIDEO https://www.youtube.com/embed/GOSpjHtrRsg]
active-directory Domains Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-manage.md
Previously updated : 09/01/2021 Last updated : 06/23/2022
# Managing custom domain names in your Azure Active Directory
-A domain name is an important part of the identifier for many Azure Active Directory (Azure AD) resources: it's part of a user name or email address for a user, part of the address for a group, and is sometimes part of the app ID URI for an application. A resource in Azure AD can include a domain name that's owned by the Azure AD organization (sometimes called a tenant) that contains the resource. Only a Global Administrator can manage domains in Azure AD.
+A domain name is an important part of the identifier for many resources in Azure Active Directory (Azure AD), part of Microsoft Entra: it's part of a user name or email address for a user, part of the address for a group, and is sometimes part of the app ID URI for an application. A resource in Azure AD can include a domain name that's owned by the Azure AD organization (sometimes called a tenant) that contains the resource. Only a Global Administrator can manage domains in Azure AD.
## Set the primary domain name for your Azure AD organization
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Previously updated : 04/05/2022 Last updated : 06/23/2022
# Change subdomain authentication type in Azure Active Directory
-After a root domain is added to Azure Active Directory (Azure AD), all subsequent subdomains added to that root in your Azure AD organization automatically inherit the authentication setting from the root domain. However, if you want to manage domain authentication settings independently from the root domain settings, you can now with the Microsoft Graph API. For example, if you have a federated root domain such as contoso.com, this article can help you verify a subdomain such as child.contoso.com as managed instead of federated.
+After a root domain is added to Azure Active Directory (Azure AD), part of Microsoft Entra, all subsequent subdomains added to that root in your Azure AD organization automatically inherit the authentication setting from the root domain. However, if you want to manage domain authentication settings independently from the root domain settings, you can now with the Microsoft Graph API. For example, if you have a federated root domain such as contoso.com, this article can help you verify a subdomain such as child.contoso.com as managed instead of federated.
In the Azure AD portal, when the parent domain is federated and the admin tries to verify a managed subdomain on the **Custom domain names** page, you'll get a 'Failed to add domain' error with the reason "One or more properties contains invalid values." If you try to add this subdomain from the Microsoft 365 admin center, you will receive a similar error. For more information about the error, see [A child domain doesn't inherit parent domain changes in Office 365, Azure, or Intune](/office365/troubleshoot/administration/child-domain-fails-inherit-parent-domain-changes).
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
Previously updated : 04/19/2022 Last updated : 06/23/2022
# Assign sensitivity labels to Microsoft 365 groups in Azure Active Directory
-Azure Active Directory (Azure AD) supports applying sensitivity labels published by the [Microsoft Purview compliance portal](https://compliance.microsoft.com) to Microsoft 365 groups. Sensitivity labels apply to group across services like Outlook, Microsoft Teams, and SharePoint. For more information about Microsoft 365 apps support, see [Microsoft 365 support for sensitivity labels](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#support-for-the-sensitivity-labels).
+Azure Active Directory (Azure AD), part of Microsoft Entra, supports applying sensitivity labels published by the [Microsoft Purview compliance portal](https://compliance.microsoft.com) to Microsoft 365 groups. Sensitivity labels apply to groups across services like Outlook, Microsoft Teams, and SharePoint. For more information about Microsoft 365 apps support, see [Microsoft 365 support for sensitivity labels](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#support-for-the-sensitivity-labels).
> [!IMPORTANT]
> To configure this feature, there must be at least one active Azure Active Directory Premium P1 license in your Azure AD organization.
active-directory Groups Bulk Download Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download-members.md
Previously updated : 10/26/2021 Last updated : 06/24/2022

# Bulk download members of a group in Azure Active Directory
-Using Azure Active Directory (Azure AD) portal, you can bulk download the members of a group in your organization to a comma-separated values (CSV) file. All admins and non-admin users can download group membership lists.
+You can bulk download the members of a group in your organization to a comma-separated values (CSV) file in the portal for Azure Active Directory (Azure AD), part of Microsoft Entra. All admins and non-admin users can download group membership lists.
## To bulk download group membership
active-directory Groups Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download.md
Previously updated : 10/26/2021 Last updated : 03/24/2022
# Bulk download a list of groups in Azure Active Directory
-Using Azure Active Directory (Azure AD) portal, you can bulk download the list of all the groups in your organization to a comma-separated values (CSV) file. All admins and non-admin users can download group lists.
+You can download a list of all the groups in your organization to a comma-separated values (CSV) file in the portal for Azure Active Directory (Azure AD), part of Microsoft Entra. All admins and non-admin users can download group lists.
## To download a list of groups
active-directory Groups Bulk Import Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-import-members.md
Previously updated : 09/02/2021 Last updated : 06/24/2022
# Bulk add group members in Azure Active Directory
-Using Azure Active Directory (Azure AD) portal, you can add a large number of members to a group by using a comma-separated values (CSV) file to bulk import group members.
+You can add multiple members to a group by using a comma-separated values (CSV) file to bulk import group members in the portal for Azure Active Directory (Azure AD), part of Microsoft Entra.
## Understand the CSV template
active-directory Groups Bulk Remove Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-remove-members.md
# Bulk remove group members in Azure Active Directory
-Using Azure Active Directory (Azure AD) portal, you can remove a large number of members from a group by using a comma-separated values (CSV) file to bulk remove group members.
+You can remove a large number of members from a group by using a comma-separated values (CSV) file to remove group members in bulk using the portal for Azure Active Directory (Azure AD), part of Microsoft Entra.
## Understand the CSV template
active-directory Groups Change Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-change-type.md
Previously updated : 09/02/2021 Last updated : 06/23/2022
# Change static group membership to dynamic in Azure Active Directory
-You can change a group's membership from static to dynamic (or vice-versa) In Azure Active Directory (Azure AD). Azure AD keeps the same group name and ID in the system, so all existing references to the group are still valid. If you create a new group instead, you would need to update those references. Dynamic group membership eliminates management overhead adding and removing users. This article tells you how to convert existing groups from static to dynamic membership using either Azure AD Admin center or PowerShell cmdlets.
+You can change a group's membership from static to dynamic (or vice-versa) in Azure Active Directory (Azure AD), part of Microsoft Entra. Azure AD keeps the same group name and ID in the system, so all existing references to the group are still valid. If you create a new group instead, you would need to update those references. Dynamic group membership eliminates the management overhead of adding and removing users. This article tells you how to convert existing groups from static to dynamic membership using either the Azure AD admin center or PowerShell cmdlets.
> [!WARNING]
> When changing an existing static group to a dynamic group, all existing members are removed from the group, and then the membership rule is processed to add new members. If the group is used to control access to apps or resources, be aware that the original members might lose access until the membership rule is fully processed.
active-directory Groups Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-create-rule.md
Previously updated : 05/05/2022 Last updated : 06/23/2022
# Create or update a dynamic group in Azure Active Directory
-In Azure Active Directory (Azure AD), you can use rules to determine group membership based on user or device properties. This article tells how to set up a rule for a dynamic group in the Azure portal. Dynamic membership is supported for security groups and Microsoft 365 Groups. When a group membership rule is applied, user and device attributes are evaluated for matches with the membership rule. When an attribute changes for a user or device, all dynamic group rules in the organization are processed for membership changes. Users and devices are added or removed if they meet the conditions for a group. Security groups can be used for either devices or users, but Microsoft 365 Groups can be only user groups. Using Dynamic groups requires Azure AD premium P1 license or Intune for Education license. See [Dynamic membership rules for groups](./groups-dynamic-membership.md) for more details.
+You can use rules to determine group membership based on user or device properties in Azure Active Directory (Azure AD), part of Microsoft Entra. This article tells you how to set up a rule for a dynamic group in the Azure portal. Dynamic membership is supported for security groups and Microsoft 365 Groups. When a group membership rule is applied, user and device attributes are evaluated for matches with the membership rule. When an attribute changes for a user or device, all dynamic group rules in the organization are processed for membership changes. Users and devices are added or removed if they meet the conditions for a group. Security groups can be used for either devices or users, but Microsoft 365 Groups can be only user groups. Using dynamic groups requires an Azure AD Premium P1 license or Intune for Education license. For more information, see [Dynamic membership rules for groups](./groups-dynamic-membership.md).
## Rule builder in the Azure portal
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
Previously updated : 06/22/2022 Last updated : 06/23/2022
# Dynamic membership rules for groups in Azure Active Directory
-In Azure Active Directory (Azure AD), you can create attribute-based rules to enable dynamic membership for a group. Dynamic group membership adds and removes group members automatically using membership rules based on member attributes. This article details the properties and syntax to create dynamic membership rules for users or devices. You can set up a rule for dynamic membership on security groups or Microsoft 365 groups.
+You can create attribute-based rules to enable dynamic membership for a group in Azure Active Directory (Azure AD), part of Microsoft Entra. Dynamic group membership adds and removes group members automatically using membership rules based on member attributes. This article details the properties and syntax to create dynamic membership rules for users or devices. You can set up a rule for dynamic membership on security groups or Microsoft 365 groups.
-When any attributes of a user or device change, the system evaluates all dynamic group rules in a directory to see if the change would trigger any group adds or removes. If a user or device satisfies a rule on a group, they are added as a member of that group. If they no longer satisfy the rule, they are removed. You can't manually add or remove a member of a dynamic group.
+When the attributes of a user or a device change, the system evaluates all dynamic group rules in a directory to see if the change would trigger any group adds or removes. If a user or device satisfies a rule on a group, they're added as a member of that group. If they no longer satisfy the rule, they're removed. You can't manually add or remove a member of a dynamic group.
- You can create a dynamic group for devices or for users, but you can't create a rule that contains both users and devices. - You can't create a device group based on the user attributes of the device owner. Device membership rules can reference only device attributes.
Here are some examples of advanced rules or syntax for which we recommend that y
- Rule with more than five expressions
- The Direct reports rule
- Setting [operator precedence](#operator-precedence)
+- [Rules with complex expressions](#rules-with-complex-expressions); for example, `(user.proxyAddresses -any (_ -contains "contoso"))`
> [!NOTE]
> The rule builder might not be able to display some rules constructed in the text box. You might see a message when the rule builder is not able to display the rule. The rule builder doesn't change the supported syntax, validation, or processing of dynamic group rules in any way.
For more step-by-step instructions, see [Create or update a dynamic group](group
### Rule syntax for a single expression
-A single expression is the simplest form of a membership rule and only has the three parts mentioned above. A rule with a single expression looks similar to this: `Property Operator Value`, where the syntax for the property is the name of object.property.
+A single expression is the simplest form of a membership rule and only has the three parts mentioned above. A rule with a single expression looks similar to this example: `Property Operator Value`, where the syntax for the property is the name of object.property.
-The following is an example of a properly constructed membership rule with a single expression:
+The following example illustrates a properly constructed membership rule with a single expression:
```
user.department -eq "Sales"
```
-Parentheses are optional for a single expression. The total length of the body of your membership rule cannot exceed 3072 characters.
+Parentheses are optional for a single expression. The total length of the body of your membership rule can't exceed 3072 characters.
## Constructing the body of a membership rule
dirSyncEnabled |true false |user.dirSyncEnabled -eq true
| streetAddress |Any string value or *null* | user.streetAddress -eq "value" |
| surname |Any string value or *null* | user.surname -eq "value" |
| telephoneNumber |Any string value or *null* | user.telephoneNumber -eq "value" |
-| usageLocation |Two lettered country/region code | user.usageLocation -eq "US" |
+| usageLocation |Two letter country or region code | user.usageLocation -eq "US" |
| userPrincipalName |Any string value | user.userPrincipalName -eq "alias@domain" |
| userType |member guest *null* | user.userType -eq "Member" |
The following table lists all the supported operators and their syntax for a sin
### Using the -in and -notIn operators
-If you want to compare the value of a user attribute against a number of different values you can use the -in or -notIn operators. Use the bracket symbols "[" and "]" to begin and end the list of values.
+If you want to compare the value of a user attribute against multiple values, you can use the -in or -notIn operators. Use the bracket symbols "[" and "]" to begin and end the list of values.
In the following example, the expression evaluates to true if the value of user.department equals any of the values in the list:
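For illustration, a minimal sketch of such an expression with hypothetical department values:

```
user.department -in ["Sales","Marketing","Finance"]
```

Swapping -in for -notIn would instead match users whose department is not in the list.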
The values used in an expression can consist of several types, including:
- Numbers
- Arrays – number array, string array
-When specifying a value within an expression it is important to use the correct syntax to avoid errors. Some syntax tips are:
+When specifying a value within an expression, it's important to use the correct syntax to avoid errors. Some syntax tips are:
- Double quotes are optional unless the value is a string.-- String and regex operations are not case sensitive.
+- String and regex operations aren't case sensitive.
- When a string value contains double quotes, both quotes should be escaped using the \` character, for example, user.department -eq \`"Sales\`" is the proper syntax when "Sales" is the value. Single quotes should be escaped by using two single quotes instead of one each time. - You can also perform Null checks, using null as a value, for example, `user.department -eq null`.
All operators are listed below in order of precedence from highest to lowest. Op
-any -all
```
-The following is an example of operator precedence where two expressions are being evaluated for the user:
+The following example illustrates operator precedence where two expressions are being evaluated for the user:
```
user.department -eq "Marketing" -and user.country -eq "US"
```
-Parentheses are needed only when precedence does not meet your requirements. For example, if you want department to be evaluated first, the following shows how parentheses can be used to determine order:
+Parentheses are needed only when precedence doesn't meet your requirements. For example, if you want department to be evaluated first, the following shows how parentheses can be used to determine order:
```
user.country -eq "US" -and (user.department -eq "Marketing" -or user.department -eq "Sales")
```
user.assignedPlans -all (assignedPlan.servicePlanId -eq "")
### Using the underscore (\_) syntax
-The underscore (\_) syntax matches occurrences of a specific value in one of the multivalued string collection properties to add users or devices to a dynamic group. It is used with the -any or -all operators.
+The underscore (\_) syntax matches occurrences of a specific value in one of the multivalued string collection properties to add users or devices to a dynamic group. It's used with the -any or -all operators.
Here's an example of using the underscore (\_) in a rule to add members based on user.proxyAddress (it works the same for user.otherMails). This rule adds any user with proxy address that contains "contoso" to the group.
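Based on the complex-expression sample quoted earlier in this article, such a rule looks like this:

```
(user.proxyAddresses -any (_ -contains "contoso"))
```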
The direct reports rule is constructed using the following syntax:

```
Direct Reports for "{objectID_of_manager}"
```
-Here's an example of a valid rule where "62e19b97-8b3d-4d4a-a106-4ce66896a863" is the objectID of the manager:
+Here's an example of a valid rule, where "62e19b97-8b3d-4d4a-a106-4ce66896a863" is the objectID of the manager:

```
Direct Reports for "62e19b97-8b3d-4d4a-a106-4ce66896a863"
```
The following tips can help you use the rule properly.
You can create a group containing all users within an organization using a membership rule. When users are added or removed from the organization in the future, the group's membership is adjusted automatically.
-The "All users" rule is constructed using single expression using the -ne operator and the null value. This rule adds B2B guest users as well as member users to the group.
+The "All users" rule is constructed using a single expression with the -ne operator and the null value. This rule adds B2B guest users and member users to the group.
```
user.objectId -ne null
```
The following device attributes can be used.
managementType | MDM (for mobile devices) | device.managementType -eq "MDM"
memberOf | Any string value (valid group object ID) | device.memberof -any (group.objectId -in ['value'])
objectId | a valid Azure AD object ID | device.objectId -eq "76ad43c9-32c5-45e8-a272-7b58b58f596d"
- profileType | a valid [profile type](https://docs.microsoft.com/graph/api/resources/device?view=graph-rest-1.0#properties) in Azure AD | device.profileType -eq "RegisteredDevice"
+ profileType | a valid [profile type](/graph/api/resources/device?view=graph-rest-1.0#properties&preserve-view=true) in Azure AD | device.profileType -eq "RegisteredDevice"
systemLabels | any string matching the Intune device property for tagging Modern Workplace devices | device.systemLabels -contains "M365Managed"

> [!NOTE]
-> When using deviceOwnership to create Dynamic Groups for devices, you need to set the value equal to "Company". On Intune the device ownership is represented instead as Corporate. Refer to [OwnerTypes](/intune/reports-ref-devices#ownertypes) for more details.
+> When using deviceOwnership to create Dynamic Groups for devices, you need to set the value equal to "Company." On Intune the device ownership is represented instead as Corporate. For more information, see [OwnerTypes](/intune/reports-ref-devices#ownertypes).
> When using deviceTrustType to create Dynamic Groups for devices, you need to set the value equal to "AzureAD" to represent Azure AD joined devices, "ServerAD" to represent Hybrid Azure AD joined devices or "Workplace" to represent Azure AD registered devices.
-> When using extensionAttribute1-15 to create Dynamic Groups for devices you need to set the value for extensionAttribute1-15 on the device. Learn more on [how to write extensionAttributes on an Azure AD device object](https://docs.microsoft.com/graph/api/device-update?view=graph-rest-1.0&tabs=http#example-2--write-extensionattributes-on-a-device)
+> When using extensionAttribute1-15 to create Dynamic Groups for devices you need to set the value for extensionAttribute1-15 on the device. Learn more on [how to write extensionAttributes on an Azure AD device object](/graph/api/device-update?view=graph-rest-1.0&tabs=http#example-2--write-extensionattributes-on-a-device&preserve-view=true)
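For illustration only, a sketch of a device rule that combines two of the values called out in the notes above:

```
(device.deviceOwnership -eq "Company") -and (device.deviceTrustType -eq "AzureAD")
```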
## Next steps
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
Previously updated : 06/02/2022 Last updated : 06/23/2022
# Group membership in a dynamic group (preview) in Azure Active Directory
-This feature preview enables admins to create dynamic groups in Azure Active Directory (Azure AD) that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignment and role-based access control. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
+This feature preview in Azure Active Directory (Azure AD), part of Microsoft Entra, enables admins to create dynamic groups that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignment and role-based access control. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
:::image type="content" source="./media/groups-dynamic-rule-member-of/member-of-diagram.png" alt-text="Diagram showing how the memberOf attribute works.":::
active-directory Groups Dynamic Rule More Efficient https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-more-efficient.md
Previously updated : 03/29/2022 Last updated : 06/23/2022
# Create simpler, more efficient rules for dynamic groups in Azure Active Directory
-The team for Azure Active Directory (Azure AD) sees numerous incidents related to dynamic groups and the processing time for their membership rules. This article contains the methods by which our engineering team helps customers to simplify their membership rules. Simpler and more efficient rules result in better dynamic group processing times. When writing membership rules for dynamic groups, these are steps you can take to ensure the rules are as efficient as possible.
+The team for Azure Active Directory (Azure AD), part of Microsoft Entra, receives reports of incidents related to dynamic groups and the processing time for their membership rules. This article uses that reported information to present the most common methods by which our engineering team helps customers to simplify their membership rules. Simpler and more efficient rules result in better dynamic group processing times. When writing membership rules for dynamic groups, follow these steps to ensure that your rules are as efficient as possible.
## Minimize use of MATCH
-Minimize the usage of the 'match' operator in rules as much as possible. Instead, explore if it's possible to use the `contains`, `startswith`, or `-eq` operators. Considering using other properties that allow you to write rules to select the users you want to be in the group without using the `-match` operator. For example, if you want a rule for the group for all users whose city is Lagos, then instead of using rules like:
+Minimize the usage of the `match` operator in rules as much as possible. Instead, explore if it's possible to use the `contains`, `startswith`, or `-eq` operators. Consider using other properties that allow you to write rules to select the users you want to be in the group without using the `-match` operator (see the sketch after the following list). For example, if you want a rule for the group for all users whose city is Lagos, then instead of using rules like:
- `user.city -match "ago"`
- `user.city -match ".*?ago.*"`
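A sketch of a simpler alternative (not taken verbatim from the source article): because the goal is matching the city Lagos, an equality expression avoids `-match` entirely, and `user.city -contains "ago"` would be an intermediate option.

```
user.city -eq "Lagos"
```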
active-directory Groups Dynamic Rule Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-validation.md
Title: Validate rules for dynamic group membership (preview) - Azure AD | Microsoft Docs
-description: How to test members against a membership rule for a dynamic groups in Azure Active Directory.
+description: How to test members against a membership rule for a dynamic group in Azure Active Directory.
documentationcenter: ''
Previously updated : 09/02/2021 Last updated : 06/24/2022
# Validate a dynamic group membership rule (preview) in Azure Active Directory
-Azure Active Directory (Azure AD) now provides the means to validate dynamic group rules (in public preview). On the **Validate rules** tab, you can validate your dynamic rule against sample group members to confirm the rule is working as expected. When creating or updating dynamic group rules, administrators want to know whether a user or a device will be a member of the group. This helps evaluate whether user or device meets the rule criteria and aid in troubleshooting when membership is not expected.
+Azure Active Directory (Azure AD), part of Microsoft Entra, now provides the means to validate dynamic group rules (in public preview). On the **Validate rules** tab, you can validate your dynamic rule against sample group members to confirm the rule is working as expected. When you create or update dynamic group rules, you want to know whether a user or a device will be a member of the group. This knowledge helps you evaluate whether a user or device meets the rule criteria and helps you troubleshoot when membership isn't expected.
## Prerequisites
-To use the evaluate dynamic group rule membership feature, the administrator must have one of the following rules assigned directly: Global Administrator, Groups Administrator, or Intune Administrator.
+To use the dynamic group rule validation feature, the administrator must have one of the following roles assigned directly: Global Administrator, Groups Administrator, or Intune Administrator.
> [!TIP]
> Assigning one of the required roles via indirect group membership is not yet supported.
To use the evaluate dynamic group rule membership feature, the administrator mus
## Step-by-step walk-through
-To get started, go to **Azure Active Directory** > **Groups**. Select an existing dynamic group or create a new dynamic group and click on Dynamic membership rules. You can then see the **Validate Rules** tab.
+To get started, go to **Azure Active Directory** > **Groups**. Select an existing dynamic group or create a new dynamic group and select **Dynamic membership rules**. You can then see the **Validate Rules** tab.
![Find the Validate rules tab and start with an existing rule](./media/groups-dynamic-rule-validation/validate-tab.png)
On **Validate rules** tab, you can select users to validate their memberships. 2
![Add users to validate the existing rule against](./media/groups-dynamic-rule-validation/validate-tab-add-users.png)
-After choosing the users or devices from the picker, and **Select**, validation will automatically start and validation results will appear.
+After you select users or devices from the picker and choose **Select**, validation starts automatically and the validation results appear.
![View the results of the rule validation](./media/groups-dynamic-rule-validation/validate-tab-results.png)
-The results tell whether a user is a member of the group or not. If the rule is not valid or there is a network issue, the result will show as **Unknown**. In case of **Unknown**, the detailed error message will describe the issue and actions needed.
+The results tell whether a user is a member of the group or not. If the rule isn't valid or there's a network issue, the result will show as **Unknown**. If the value is **Unknown**, the detailed error message will describe the issue and actions needed.
![View the details of the results of the rule validation](./media/groups-dynamic-rule-validation/validate-tab-view-details.png)
-You can modify the rule and validation of memberships will be triggered. To see why user is not a member of the group, click on "View details" and verification details will show the result of each expression composing the rule. Click **OK** to exit.
+If you modify the rule, validation of memberships is triggered again. To see why a user isn't a member of the group, select **View details**, and the verification details will show the result of each expression composing the rule. Select **OK** to exit.
## Next steps
active-directory Groups Dynamic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-tutorial.md
Previously updated : 09/02/2021 Last updated : 06/24/2022
# Tutorial: Add or remove group members automatically
-In Azure Active Directory (Azure AD), you can automatically add or remove users to security groups or Microsoft 365 groups, so you don't always have to do it manually. Whenever any properties of a user or device change, Azure AD evaluates all dynamic group rules in your Azure AD organization to see if the change should add or remove members.
+In Azure Active Directory (Azure AD), part of Microsoft Entra, you can automatically add or remove users to security groups or Microsoft 365 groups, so you don't always have to do it manually. Whenever any properties of a user or device change, Azure AD evaluates all dynamic group rules in your Azure AD organization to see if the change should add or remove members.
In this tutorial, you learn how to:

> [!div class="checklist"]
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md
Previously updated : 10/22/2021 Last updated : 06/24/2022
# Configure the expiration policy for Microsoft 365 groups
-This article tells you how to manage the lifecycle of Microsoft 365 groups by setting an expiration policy for them. You can set expiration policy only for Microsoft 365 groups in Azure Active Directory (Azure AD).
+This article tells you how to manage the lifecycle of Microsoft 365 groups by setting an expiration policy for them. You can set expiration policy only for Microsoft 365 groups in Azure Active Directory (Azure AD), part of Microsoft Entra.
Once you set a group to expire:
active-directory Groups Members Owners Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-members-owners-search.md
Previously updated : 10/22/2021 Last updated : 06/24/2022
# Search groups and members in Azure Active Directory
-This article tells you how to search for members and owners of a group and how to use search filters the Azure Active Directory (Azure AD) portal. Search functions for groups include:
+This article tells you how to search for members and owners of a group and how to use search filters in the portal for Azure Active Directory (Azure AD), part of Microsoft Entra. Search functions for groups include:
- Groups search capabilities, such as substring search in group names
- Filtering and sorting options on member and owner lists
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md
Previously updated : 09/02/2021 Last updated : 06/24/2022
# Enforce a naming policy on Microsoft 365 groups in Azure Active Directory
-To enforce consistent naming conventions for Microsoft 365 groups created or edited by your users, set up a group naming policy for your organizations in Azure Active Directory (Azure AD). For example, you could use the naming policy to communicate the function of a group, membership, geographic region, or who created the group. You could also use the naming policy to help categorize groups in the address book. You can use the policy to block specific words from being used in group names and aliases.
+To enforce consistent naming conventions for Microsoft 365 groups created or edited by your users, set up a group naming policy for your organization in Azure Active Directory (Azure AD), part of Microsoft Entra. For example, you could use the naming policy to communicate the function of a group, membership, geographic region, or who created the group. You could also use the naming policy to help categorize groups in the address book. You can use the policy to block specific words from being used in group names and aliases.
> [!IMPORTANT]
> Using Azure AD naming policy for Microsoft 365 groups requires that you possess but not necessarily assign an Azure Active Directory Premium P1 license or Azure AD Basic EDU license for each unique user that is a member of one or more Microsoft 365 groups.
active-directory Groups Quickstart Expiration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-quickstart-expiration.md
Previously updated : 09/02/2021 Last updated : 06/24/2022
Expiration policy is simple:
- A deleted Microsoft 365 group can be restored within 30 days by a group owner or by an Azure AD administrator

> [!NOTE]
-> Groups now use Azure AD intelligence to automatically renewed based on whether they have been in recent use. This renewal decision is based on user activity in groups across Microsoft 365 services like Outlook, SharePoint, Teams, Yammer, and others.
+> Azure Active Directory (Azure AD), part of Microsoft Entra, uses intelligence to automatically renew groups based on whether they have been in recent use. This renewal decision is based on user activity in groups across Microsoft 365 services like Outlook, SharePoint, Teams, Yammer, and others.
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
active-directory Groups Quickstart Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-quickstart-naming-policy.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Quickstart: Naming policy for groups in Azure Active Directory
-In this quickstart, you will set up naming policy in your Azure Active Directory (Azure AD) organization for user-created Microsoft 365 groups, to help you sort and search your organization's groups. For example, you could use the naming policy to:
+In this quickstart, in Azure Active Directory (Azure AD), part of Microsoft Entra, you will set up naming policy in your Azure AD organization for user-created Microsoft 365 groups, to help you sort and search your groups. For example, you could use the naming policy to:
* Communicate the function of a group, membership, geographic region, or who created the group.
* Help categorize groups in the address book.
active-directory Groups Restore Deleted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-restore-deleted.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Restore a deleted Microsoft 365 group in Azure Active Directory
-When you delete a Microsoft 365 group in the Azure Active Directory (Azure AD), the deleted group is retained but not visible for 30 days from the deletion date. This behavior is so that the group and its contents can be restored if needed. This functionality is restricted exclusively to Microsoft 365 groups in Azure AD. It is not available for security groups and distribution groups. Please note that the 30-day group restoration period is not customizable.
+When you delete a Microsoft 365 group in Azure Active Directory (Azure AD), part of Microsoft Entra, the deleted group is retained but not visible for 30 days from the deletion date. This behavior is so that the group and its contents can be restored if needed. This functionality is restricted exclusively to Microsoft 365 groups in Azure AD. It is not available for security groups and distribution groups. Please note that the 30-day group restoration period is not customizable.
> [!NOTE]
> Don't use `Remove-MsolGroup` because it purges the group permanently. Always use `Remove-AzureADMSGroup` to delete a Microsoft 365 group.
active-directory Groups Saasapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-saasapps.md
Previously updated : 06/30/2021 Last updated : 06/24/2022
# Using a group to manage access to SaaS applications
-Using Azure Active Directory (Azure AD) with an Azure AD Premium license plan, you can use groups to assign access to a SaaS application that's integrated with Azure AD. For example, if you want to assign access for the marketing department to use five different SaaS applications, you can create an Office 365 or security group that contains the users in the marketing department, and then assign that group to these five SaaS applications that are needed by the marketing department. This way you can save time by managing the membership of the marketing department in one place. Users then are assigned to the application when they are added as members of the marketing group, and have their assignments removed from the application when they are removed from the marketing group. This capability can be used with hundreds of applications that you can add from within the Azure AD Application Gallery.
+Using Azure Active Directory (Azure AD), part of Microsoft Entra, with an Azure AD Premium license plan, you can use groups to assign access to a SaaS application that's integrated with Azure AD. For example, if you want to assign access for the marketing department to use five different SaaS applications, you can create an Office 365 or security group that contains the users in the marketing department, and then assign that group to these five SaaS applications that are needed by the marketing department. This way you can save time by managing the membership of the marketing department in one place. Users then are assigned to the application when they are added as members of the marketing group, and have their assignments removed from the application when they are removed from the marketing group. This capability can be used with hundreds of applications that you can add from within the Azure AD Application Gallery.
> [!IMPORTANT]
> You can use this feature only after you start an Azure AD Premium trial or purchase an Azure AD Premium license plan.
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Previously updated : 03/22/2022 Last updated : 06/24/2022
# Set up self-service group management in Azure Active Directory
-You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure Active Directory (Azure AD). The owner of the group can approve or deny membership requests, and can delegate control of group membership. Self-service group management features are not available for mail-enabled security groups or distribution lists.
+You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure Active Directory (Azure AD), part of Microsoft Entra. The owner of the group can approve or deny membership requests, and can delegate control of group membership. Self-service group management features are not available for mail-enabled security groups or distribution lists.
## Self-service group membership defaults
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
Previously updated : 07/19/2021 Last updated : 06/24/2022
# Azure Active Directory cmdlets for configuring group settings
-This article contains instructions for using Azure Active Directory (Azure AD) PowerShell cmdlets to create and update groups. This content applies only to Microsoft 365 groups (sometimes called unified groups).
+This article contains instructions for using PowerShell cmdlets to create and update groups in Azure Active Directory (Azure AD), part of Microsoft Entra. This content applies only to Microsoft 365 groups (sometimes called unified groups).
> [!IMPORTANT]
> Some settings require an Azure Active Directory Premium P1 license. For more information, see the [Template settings](#template-settings) table.
active-directory Groups Settings V2 Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
-This article contains examples of how to use PowerShell to manage your groups in Azure Active Directory (Azure AD). It also tells you how to get set up with the Azure AD PowerShell module. First, you must [download the Azure AD PowerShell module](https://www.powershellgallery.com/packages/AzureAD/).
+This article contains examples of how to use PowerShell to manage your groups in Azure Active Directory (Azure AD), part of Microsoft Entra. It also tells you how to get set up with the Azure AD PowerShell module. First, you must [download the Azure AD PowerShell module](https://www.powershellgallery.com/packages/AzureAD/).
## Install the Azure AD PowerShell module
active-directory Linkedin User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-user-consent.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# LinkedIn account connections data sharing and consent
-You can enable users in your Active Directory (Azure AD) organization to consent to connect their Microsoft work or school account with their LinkedIn account. After a user connects their accounts, information and highlights from LinkedIn are available in some Microsoft apps and services. Users can also expect their networking experience on LinkedIn to be improved and enriched with information from Microsoft.
+You can enable users in your organization in Azure Active Directory (Azure AD), part of Microsoft Entra, to consent to connect their Microsoft work or school account with their LinkedIn account. After a user connects their accounts, information and highlights from LinkedIn are available in some Microsoft apps and services. Users can also expect their networking experience on LinkedIn to be improved and enriched with information from Microsoft.
To see LinkedIn information in Microsoft apps and services, users must consent to connect their own Microsoft and LinkedIn accounts. Users are prompted to connect their accounts the first time they click to see someone's LinkedIn information on a profile card in Outlook, OneDrive or SharePoint Online. LinkedIn account connections are not fully enabled for your users until they consent to the experience and to connect their accounts.
active-directory Signin Account Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/signin-account-support.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Sign-in options for Microsoft accounts in Azure Active Directory
-The Microsoft 365 sign-in page for Azure Active Directory (Azure AD) supports work or school accounts and Microsoft accounts, but depending on the user's situation, it could be one or the other or both. For example, the Azure AD sign-in page supports:
+The Microsoft 365 sign-in page for Azure Active Directory (Azure AD), part of Microsoft Entra, supports work or school accounts and Microsoft accounts, but depending on the user's situation, it could be one or the other or both. For example, the Azure AD sign-in page supports:
* Apps that accept sign-ins from both types of account
* Organizations that accept guests
active-directory Signin Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/signin-realm-discovery.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Home realm discovery for Azure Active Directory sign-in pages
-We are changing our Azure Active Directory (Azure AD) sign-in behavior to make room for new authentication methods and improve usability. During sign-in, Azure AD determines where a user needs to authenticate. Azure AD makes intelligent decisions by reading organization and user settings for the username entered on the sign-in page. This is a step towards a password-free future that enables additional credentials like FIDO 2.0.
+We are changing sign-in behavior in Azure Active Directory (Azure AD), part of Microsoft Entra, to make room for new authentication methods and improve usability. During sign-in, Azure AD determines where a user needs to authenticate. Azure AD makes intelligent decisions by reading organization and user settings for the username entered on the sign-in page. This is a step towards a password-free future that enables additional credentials like FIDO 2.0.
## Home realm discovery behavior
active-directory Users Bulk Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-add.md
Previously updated : 05/19/2021 Last updated : 06/24/2022
# Bulk create users in Azure Active Directory
-Azure Active Directory (Azure AD) supports bulk user create and delete operations and supports downloading lists of users. Just fill out comma-separated values (CSV) template you can download from the Azure AD portal.
+Azure Active Directory (Azure AD), part of Microsoft Entra, supports bulk user create and delete operations and supports downloading lists of users. Just fill out the comma-separated values (CSV) template that you can download from the Azure AD portal.
## Required permissions
The rows in a downloaded CSV template are as follows:
1. [Sign in to your Azure AD organization](https://aad.portal.azure.com) with an account that is a User administrator in the organization.
1. In Azure AD, select **Users** > **Bulk create**.
-1. On the **Bulk create user** page, select **Download** to receive a valid comma-separated values (CSV) file of user properties, and then add add users you want to create.
+1. On the **Bulk create user** page, select **Download** to receive a valid comma-separated values (CSV) file of user properties, and then add users you want to create.
![Select a local CSV file in which you list the users you want to add](./media/users-bulk-add/upload-button.png)
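The CSV you upload must match the downloaded template exactly. As a rough illustration of the steps above, the following sketch writes such a file with Python's `csv` module; the version row and column headers shown here are assumptions to verify against the template you actually download, and the accounts are placeholders.

```python
import csv

# Assumed layout of the bulk create template: a version row, a header row,
# then one row per user. Confirm the exact headers against your downloaded file.
headers = [
    "Name [displayName] Required",
    "User name [userPrincipalName] Required",
    "Initial password [passwordProfile] Required",
    "Block sign in (Yes/No) [accountEnabled] Required",
]

users = [
    ["Chris Green", "chris@contoso.com", "TempP@ssw0rd!", "No"],
    ["Ben Andrews", "ben@contoso.com", "TempP@ssw0rd!", "No"],
]

with open("bulk-create-users.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["version:v1.0"])  # assumed version marker from the template
    writer.writerow(headers)
    writer.writerows(users)
```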
active-directory Users Bulk Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-delete.md
Previously updated : 07/09/2021 Last updated : 06/24/2022
+ # Bulk delete users in Azure Active Directory
-Using the Azure Active Directory (Azure AD) portal, you can remove a large number of members to a group by using a comma-separated values (CSV) file to bulk delete users.
+Using the admin center in Azure Active Directory (Azure AD), part of Microsoft Entra, you can remove a large number of users by using a comma-separated values (CSV) file to bulk delete them.
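The portal upload is the documented path; as a hedged alternative for scripted cleanup, the sketch below loops over a CSV of user principal names and deletes each account through Microsoft Graph. The token variable, CSV file name, and `userPrincipalName` column are assumptions, and the caller needs sufficient privileges (for example, the `User.ReadWrite.All` permission plus an appropriate administrator role).

```python
import csv
import os

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: a valid Microsoft Graph access token is supplied via an environment variable.
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

with open("users-to-delete.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        upn = row["userPrincipalName"]  # assumed column name in your CSV
        resp = requests.delete(f"{GRAPH}/users/{upn}", headers=headers)
        print(upn, resp.status_code)  # 204 means the account was deleted
```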
## CSV template structure
active-directory Users Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-download.md
Previously updated : 10/26/2021 Last updated : 06/24/2022
# Download a list of users in Azure Active Directory portal
-Azure Active Directory (Azure AD) supports bulk user list download operations.
+Azure Active Directory (Azure AD), part of Microsoft Entra, supports bulk user list download operations.
## Required permissions
active-directory Users Bulk Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md
Previously updated : 12/02/2020 Last updated : 06/24/2022
# Bulk restore deleted users in Azure Active Directory
-Azure Active Directory (Azure AD) supports bulk user restore operations and supports downloading lists of users, groups, and group members.
+Azure Active Directory (Azure AD), part of Microsoft Entra, supports bulk user restore operations and supports downloading lists of users, groups, and group members.
## Understand the CSV template
active-directory Users Close Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-close-account.md
# Close your work or school account in an unmanaged Azure AD organization
-If you are a user in an unmanaged Azure Active Directory (Azure AD) organization, and you no longer need to use apps from that organization or maintain any association with it, you can close your account at any time. An unmanaged organization does not have a Global administrator. Users in an unmanaged organization can close their accounts on their own, without having to contact an administrator.
+If you are a user in an unmanaged organization (tenant) in Azure Active Directory (Azure AD), part of Microsoft Entra, and you no longer need to use apps from that organization or maintain any association with it, you can close your account at any time. An unmanaged organization does not have a Global Administrator. Users in an unmanaged organization can close their accounts on their own, without having to contact an administrator.
Users in an unmanaged organization are often created during self-service sign-up. An example might be an information worker in an organization who signs up for a free service. For more information about self-service sign-up, see [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md).
active-directory Users Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-custom-security-attributes.md
description: Assign or remove custom security attributes for a user in Azure Act
Previously updated : 02/03/2022 Last updated : 06/24/2022
> Custom security attributes are currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-[Custom security attributes](../fundamentals/custom-security-attributes-overview.md) in Azure Active Directory (Azure AD) are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. For example, you can assign custom security attribute to filter your employees or to help determine who gets access to resources. This article describes how to assign, update, remove, or filter custom security attributes for Azure AD.
+[Custom security attributes](../fundamentals/custom-security-attributes-overview.md) in Azure Active Directory (Azure AD), part of Microsoft Entra, are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. For example, you can assign a custom security attribute to filter your employees or to help determine who gets access to resources. This article describes how to assign, update, remove, or filter custom security attributes for Azure AD.
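For illustration only, the sketch below assigns one custom security attribute to a user through Microsoft Graph. It assumes an attribute set named `Engineering` with a single-valued string attribute `Project` already exists, that the caller has the `CustomSecAttributeAssignment.ReadWrite.All` permission, and that the preview rollout exposes the call on the endpoint shown (it may require the `beta` endpoint instead); the user ID is a placeholder.

```python
import os

import requests

GRAPH = "https://graph.microsoft.com/v1.0"  # assumption: may need to be the beta endpoint during preview
headers = {
    "Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}",  # assumed token source
    "Content-Type": "application/json",
}

user_id = "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"  # placeholder user object ID

# Assign (or update) the Engineering/Project attribute on the user.
payload = {
    "customSecurityAttributes": {
        "Engineering": {  # attribute set name (assumed to exist)
            "@odata.type": "#Microsoft.DirectoryServices.CustomSecurityAttributeValue",
            "Project": "Baker",  # single-valued string attribute (assumed)
        }
    }
}

resp = requests.patch(f"{GRAPH}/users/{user_id}", headers=headers, json=payload)
print(resp.status_code)  # 204 indicates the assignment succeeded
```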
## Prerequisites
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
Previously updated : 05/04/2022 Last updated : 06/24/2022
# Restrict guest access permissions in Azure Active Directory
-Azure Active Directory (Azure AD) allows you to restrict what external guest users can see in their organization in Azure AD. Guest users are set to a limited permission level by default in Azure AD, while the default for member users is the full set of user permissions. There's another guest user permission level in your Azure AD organization's external collaboration settings for even more restricted access, so that the guest access levels are:
+Azure Active Directory (Azure AD), part of Microsoft Entra, allows you to restrict what external guest users can see in their organization in Azure AD. Guest users are set to a limited permission level by default in Azure AD, while the default for member users is the full set of user permissions. There's another guest user permission level in your Azure AD organization's external collaboration settings for even more restricted access, so that the guest access levels are:
Permission level | Access level | Value
--- | --- | ---
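To check which of these levels is currently in effect, one option is to read the tenant's authorization policy from Microsoft Graph, as in this hedged sketch; the returned `guestUserRoleId` corresponds to the guest access level, and the token variable is an assumption (the call needs a permission such as `Policy.Read.All`).

```python
import os

import requests

headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}  # assumed token source

resp = requests.get(
    "https://graph.microsoft.com/v1.0/policies/authorizationPolicy", headers=headers
)
resp.raise_for_status()

# The guestUserRoleId GUID maps to one of the guest access levels described above.
print("guestUserRoleId:", resp.json().get("guestUserRoleId"))
```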
active-directory Auth Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ssh.md
Previously updated : 03/01/2022 Last updated : 06/22/2022 - # SSH
-Secure Shell (SSH) is a network protocol that provides encryption for operating network services securely over an unsecured network. SSH also provides a command-line sign in, executes remote commands, and securely transfer files. It is commonly used in Unix-based systems such as Linux®. SSH replaces the Telnet protocol, which does not provide encryption in an unsecured network.
+Secure Shell (SSH) is a network protocol that provides encryption for operating network services securely over an unsecured network. SSH also provides a command-line sign-in, executes remote commands, and securely transfers files. It's commonly used in Unix-based systems such as Linux®. SSH replaces the Telnet protocol, which doesn't provide encryption in an unsecured network.
-Azure Active Directory (Azure AD) provides a Virtual Machine (VM) extension for Linux®-based systems running on Azure.
+Azure Active Directory (Azure AD) provides a Virtual Machine (VM) extension for Linux®-based systems running on Azure, and a client extension that integrates with [Azure CLI](/cli/azure/) and the OpenSSH client.
## Use when 
-* Working with Linux®-based VMs that require remote sign in
+* Working with Linux®-based VMs that require remote sign-in
* Executing remote commands in Linux®-based systems
Azure Active Directory (Azure AD) provides a Virtual Machine (VM) extension for
![diagram of Azure AD with SSH protocol](./media/authentication-patterns/ssh-auth.png)
-SSH with Azure AD
- ## Components of system 
-* **User**: Starts SSH client to set up a connection with the Linux® VMs and provides credentials for authentication.
+* **User**: Starts the Azure CLI and SSH client to set up a connection with the Linux® VMs and provides credentials for authentication.
+
+* **Azure CLI**: The component that the user interacts with to initiate their session with Azure AD, request short-lived OpenSSH user certificates from Azure AD, and initiate the SSH session.
-* **Web browser**: The component that the user interacts with. It communicates with the Identity Provider (Azure AD) to securely authenticate and authorize the user.
+* **Web browser**: The component that the user interacts with to authenticate their Azure CLI session. It communicates with the Identity Provider (Azure AD) to securely authenticate and authorize the user.
-* **SSH Client**: Drives the connection setup process.
+* **OpenSSH Client**: This client is used by Azure CLI, or (optionally) directly by the end user, to initiate a connection to the Linux VM.
-* **Azure AD**: Authenticates the identity of the user using device flow, and issues token to the Linux VMs.
+* **Azure AD**: Authenticates the identity of the user and issues short-lived OpenSSH user certificates to their Azure CLI client.
-* **Linux VM**: Accepts token and provides successful connection.
+* **Linux VM**: Accepts OpenSSH user certificate and provides successful connection.
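As a minimal sketch of how these components fit together from the user's side, the following Python script drives the Azure CLI; it assumes the Azure CLI and its `ssh` extension are installed, that the target VM already has the Azure AD login VM extension enabled, and the resource group and VM names are placeholders (check the exact flags with `az ssh vm --help`).

```python
import subprocess

RESOURCE_GROUP = "my-resource-group"  # placeholder
VM_NAME = "my-linux-vm"               # placeholder

# 1. Sign in: the Azure CLI opens a browser so Azure AD can authenticate the user.
subprocess.run(["az", "login"], check=True)

# 2. Make sure the SSH extension for the Azure CLI is available.
subprocess.run(["az", "extension", "add", "--name", "ssh", "--upgrade"], check=True)

# 3. Azure AD issues short-lived OpenSSH user certificates, and the OpenSSH client connects.
subprocess.run(
    ["az", "ssh", "vm", "--resource-group", RESOURCE_GROUP, "--name", VM_NAME],
    check=True,
)
```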
## Implement SSH with Azure AD

* [Log in to a Linux® VM with Azure Active Directory credentials - Azure Virtual Machines](../devices/howto-vm-sign-in-azure-ad-linux.md)
-
-* [OAuth 2.0 device code flow - Microsoft identity platform ](../develop/v2-oauth2-device-code.md)
-
-* [Integrate with Azure Active Directory (akamai.com)](https://learn.akamai.com/en-us/webhelp/enterprise-application-access/enterprise-application-access/GUID-6B16172C-86CC-48E8-B30D-8E678BF3325F.html)
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
Title: Security operations for privileged accounts in Azure Active Directory description: Learn about baselines, and how to monitor and alert on potential security issues with privileged accounts in Azure Active Directory. -+ Last updated 04/29/2022-+
active-directory Cloud Governed Management For On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-governed-management-for-on-premises.md
In hybrid environments, Microsoft's strategy is to enable deployments where the
## Next steps
-For more information on how to get started on this journey, see the Azure AD deployment plans, located at <https://aka.ms/deploymentplans>. They provide end-to-end guidance about how to deploy Azure Active Directory (Azure AD) capabilities. Each plan explains the business value, planning considerations, design, and operational procedures needed to successfully roll out common Azure AD capabilities. Microsoft continually updates the deployment plans with best practices learned from customer deployments and other feedback when we add new capabilities to managing from the cloud with Azure AD.
+For more information on how to get started on this journey, see the [Azure AD deployment plans](https://aka.ms/deploymentplans). These plans provide end-to-end guidance for deploying Azure Active Directory (Azure AD) capabilities. Each plan explains the business value, planning considerations, design, and operational procedures needed to successfully roll out common Azure AD capabilities. Microsoft continually updates the deployment plans with best practices learned from customer deployments and other feedback when we add new capabilities to managing from the cloud with Azure AD.
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Last updated 03/13/2020 --++
See these resources:
* [Azure AD UserPrincipalName population](./plan-connect-userprincipalname.md)
-* [Microsoft identity platform ID tokens](../develop/id-tokens.md)
+* [Microsoft identity platform ID tokens](../develop/id-tokens.md)
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/end-user-experiences.md
Which method(s) you choose to deploy in your organization is at your discretion.
## Azure AD My Apps
-My Apps at <https://myapps.microsoft.com> is a web-based portal that allows an end user with an organizational account in Azure Active Directory to view and launch applications to which they have been granted access by the Azure AD administrator. If you are an end user with [Azure Active Directory Premium](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), you can also utilize self-service group management capabilities through My Apps.
+[My Apps](https://myapps.microsoft.com) is a web-based portal that allows an end user with an organizational account in Azure Active Directory to view and launch applications to which they have been granted access by the Azure AD administrator. If you are an end user with [Azure Active Directory Premium](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing), you can also utilize self-service group management capabilities through My Apps.
By default, all applications are listed together on a single page. But you can use collections to group together related applications and present them on a separate tab, making them easier to find. For example, you can use collections to create logical groupings of applications for specific job roles, tasks, projects, and so on. For information, see [Create collections on the My Apps portal](access-panel-collections.md).
active-directory Troubleshoot Saml Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md
To know the patterns pre-configured for the application:
8. Select **SAML-based Sign-on** from the **Mode** dropdown.
9. Go to the **Identifier** or **Reply URL** textbox, under the **Domain and URLs** section.
10. There are three ways to know the supported patterns for the application:
- - In the textbox, you see the supported pattern(s) as a placeholder *Example:* <https://contoso.com>.
+ - In the textbox, you see the supported pattern(s) as a placeholder, for example: `https://contoso.com`.
    - If the pattern is not supported, you see a red exclamation mark when you try to enter the value in the textbox. If you hover your mouse over the red exclamation mark, you see the supported patterns.
    - In the tutorial for the application, you can also get information about the supported patterns under the **Configure Azure AD single sign-on** section. Go to the step for configuring the values under the **Domain and URLs** section.
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
Title: Plan reports & monitoring deployment - Azure AD description: Describes how to plan and execute implementation of reporting and monitoring. -+ Last updated 11/13/2018-+ # Customer intent: As an Azure AD administrator, I want to monitor logs and report on access to increase security
Depending on the decisions you have made earlier using the design guidance above
Consider implementing [Privileged Identity Management](../privileged-identity-management/pim-configure.md)
-Consider implementing [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
+Consider implementing [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
active-directory Aws Clientvpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-clientvpn-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
| > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Reply URL. The Sign on URL and Reply URL can have the same value (http://127.0.0.1:35001). Refer to [AWS Client VPN Documentation](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html#ad) for details. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. Contact [AWS ClientVPN support team](https://aws.amazon.com/contact-us/) for any configuration issues.
+ > These values are not real. Update these values with the actual Sign on URL and Reply URL. The Sign on URL and Reply URL can have the same value (`http://127.0.0.1:35001`). Refer to [AWS Client VPN Documentation](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html#ad) for details. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. Contact [AWS ClientVPN support team](https://aws.amazon.com/contact-us/) for any configuration issues.
1. In the Azure Active Directory service, navigate to **App registrations** and then select **All Applications**.
active-directory Keystone Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keystone-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Keystone'
+description: Learn how to configure single sign-on between Azure Active Directory and Keystone.
++++++++ Last updated : 06/16/2022++++
+# Tutorial: Azure AD SSO integration with Keystone
+
+In this tutorial, you'll learn how to integrate Keystone with Azure Active Directory (Azure AD). When you integrate Keystone with Azure AD, you can:
+
+* Control in Azure AD who has access to Keystone.
+* Enable your users to be automatically signed-in to Keystone with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Keystone single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Keystone supports **SP** initiated SSO.
+
+## Add Keystone from the gallery
+
+To configure the integration of Keystone into Azure AD, you need to add Keystone from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Keystone** in the search box.
+1. Select **Keystone** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Keystone
+
+Configure and test Azure AD SSO with Keystone using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at Keystone.
+
+To configure and test Azure AD SSO with Keystone, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Keystone SSO](#configure-keystone-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Keystone test user](#create-keystone-test-user)** - to have a counterpart of B.Simon in Keystone that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Keystone** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:irdeto:<InstanceName>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://irdeto.auth0.com/login/callback?connection=<InstanceName>`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://fms.live.fm.ks.irdeto.com/`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Keystone support team](mailto:soc@irdeto.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Keystone** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Keystone.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Keystone**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Keystone SSO
+
+To configure single sign-on on **Keystone** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Keystone support team](mailto:soc@irdeto.com). They set this setting to have the SAML SSO connection set properly on both sides.
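Before you send the certificate, you can sanity-check it locally. This is an illustrative sketch only, assuming the `cryptography` package is installed and that the downloaded file name is `Keystone.cer` (a placeholder); the Base64 download is usually PEM-encoded, so the sketch falls back to raw Base64/DER just in case.

```python
import base64

from cryptography import x509  # pip install cryptography

with open("Keystone.cer", "rb") as f:  # placeholder name for the downloaded certificate
    data = f.read()

if b"BEGIN CERTIFICATE" in data:
    cert = x509.load_pem_x509_certificate(data)
else:
    cert = x509.load_der_x509_certificate(base64.b64decode(data))

print("Subject:", cert.subject.rfc4514_string())
print("Expires:", cert.not_valid_after)
```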
+
+### Create Keystone test user
+
+In this section, you create a user called Britta Simon at Keystone. Work with [Keystone support team](mailto:soc@irdeto.com) to add the users in the Keystone platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Keystone Sign-On URL where you can initiate the login flow.
+
+* Go to Keystone Sign-On URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Keystone tile in the My Apps, this will redirect to Keystone Sign-On URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Keystone, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Preset Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/preset-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Preset'
+description: Learn how to configure single sign-on between Azure Active Directory and Preset.
++++++++ Last updated : 06/14/2022++++
+# Tutorial: Azure AD SSO integration with Preset
+
+In this tutorial, you'll learn how to integrate Preset with Azure Active Directory (Azure AD). When you integrate Preset with Azure AD, you can:
+
+* Control in Azure AD who has access to Preset.
+* Enable your users to be automatically signed-in to Preset with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Preset single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Preset supports **SP** and **IDP** initiated SSO.
+
+## Add Preset from the gallery
+
+To configure the integration of Preset into Azure AD, you need to add Preset from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Preset** in the search box.
+1. Select **Preset** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Preset
+
+Configure and test Azure AD SSO with Preset using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Preset.
+
+To configure and test Azure AD SSO with Preset, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Preset SSO](#configure-preset-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Preset test user](#create-preset-test-user)** - to have a counterpart of B.Simon in Preset that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Preset** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:preset-io-prod:<ConnectionID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://auth.app.preset.io/login/callback?connection=<ConnectionID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://manage.app.preset.io/login`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Preset support team](mailto:support@preset.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Preset application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Preset application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | firstName | user.givenname |
+ | lastName | user.surname |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Preset** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Preset.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Preset**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Preset SSO
+
+To configure single sign-on on **Preset** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Preset support team](mailto:support@preset.io). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Preset test user
+
+In this section, you create a user called Britta Simon in Preset. Work with [Preset support team](mailto:support@preset.io) to add the users in the Preset platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Preset Sign on URL where you can initiate the login flow.
+
+* Go to Preset Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Preset for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Preset tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Preset for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Preset, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Sap Successfactors Inbound Provisioning Cloud Only Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md
This section provides steps for user account provisioning from SuccessFactors to
**To configure SuccessFactors to Azure AD provisioning:**
-1. Go to <https://portal.azure.com>
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the left navigation bar, select **Azure Active Directory**
active-directory Sap Successfactors Inbound Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md
This section provides steps for user account provisioning from SuccessFactors to
**To configure SuccessFactors to Active Directory provisioning:**
-1. Go to <https://portal.azure.com>
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the left navigation bar, select **Azure Active Directory**
active-directory Sap Successfactors Writeback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-writeback-tutorial.md
This section provides steps for
**To configure SuccessFactors Writeback:**
-1. Go to <https://portal.azure.com>
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the left navigation bar, select **Azure Active Directory**
active-directory Tendium Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tendium-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Tendium'
+description: Learn how to configure single sign-on between Azure Active Directory and Tendium.
++++++++ Last updated : 06/14/2022++++
+# Tutorial: Azure AD SSO integration with Tendium
+
+In this tutorial, you'll learn how to integrate Tendium with Azure Active Directory (Azure AD). When you integrate Tendium with Azure AD, you can:
+
+* Control in Azure AD who has access to Tendium.
+* Enable your users to be automatically signed-in to Tendium with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Tendium single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Tendium supports **SP** initiated SSO.
+* Tendium supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Tendium from the gallery
+
+To configure the integration of Tendium into Azure AD, you need to add Tendium from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Tendium** in the search box.
+1. Select **Tendium** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Tendium
+
+Configure and test Azure AD SSO with Tendium using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Tendium.
+
+To configure and test Azure AD SSO with Tendium, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Tendium SSO](#configure-tendium-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Tendium test user](#create-tendium-test-user)** - to have a counterpart of B.Simon in Tendium that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Tendium** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type the value:
+ `urn:amazon:cognito:sp:eu-west-1_bIV0Yblnt`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://auth-prod.app.tendium.com/saml2/idpresponse`
+
+ c. In the **Sign-on URL** text box, type the URL:
+ `https://app.tendium.com/auth/sign-in`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
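If you want to see what that metadata URL exposes before sending it to Tendium, a small sketch like the following can fetch and parse it; the URL shown is a placeholder for the value copied from the portal, and the element names follow the standard SAML metadata namespaces.

```python
import xml.etree.ElementTree as ET

import requests

# Placeholder: paste the App Federation Metadata Url copied from the Azure portal.
METADATA_URL = (
    "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/"
    "federationmetadata.xml?appid=<app-id>"
)

DS_NS = "http://www.w3.org/2000/09/xmldsig#"  # XML digital signature namespace

root = ET.fromstring(requests.get(METADATA_URL).content)

# The EntityDescriptor root carries the issuer (entity ID) Azure AD will use.
print("Entity ID:", root.attrib.get("entityID"))

# List the signing certificates embedded in the metadata.
for cert in root.iter(f"{{{DS_NS}}}X509Certificate"):
    print("Signing certificate (truncated):", cert.text.strip()[:40], "...")
```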
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Tendium.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Tendium**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Tendium SSO
+
+To configure single sign-on on **Tendium** side, you need to send the **App Federation Metadata Url** to [Tendium support team](mailto:tech-partners@tendium.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Tendium test user
+
+In this section, a user called B.Simon is created in Tendium. Tendium supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Tendium, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Tendium Sign-on URL where you can initiate the login flow.
+
+* Go to Tendium Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Tendium tile in the My Apps, this will redirect to Tendium Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Tendium, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Training Platform Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/training-platform-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Training Platform'
+description: Learn how to configure single sign-on between Azure Active Directory and Training Platform.
++++++++ Last updated : 06/14/2022++++
+# Tutorial: Azure AD SSO integration with Training Platform
+
+In this tutorial, you'll learn how to integrate Training Platform with Azure Active Directory (Azure AD). When you integrate Training Platform with Azure AD, you can:
+
+* Control in Azure AD who has access to Training Platform.
+* Enable your users to be automatically signed-in to Training Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Training Platform single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Training Platform supports **SP** and **IDP** initiated SSO.
+* Training Platform supports **Just In Time** user provisioning.
+
+## Add Training Platform from the gallery
+
+To configure the integration of Training Platform into Azure AD, you need to add Training Platform from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Training Platform** in the search box.
+1. Select **Training Platform** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Training Platform
+
+Configure and test Azure AD SSO with Training Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Training Platform.
+
+To configure and test Azure AD SSO with Training Platform, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Training Platform SSO](#configure-training-platform-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Training Platform test user](#create-training-platform-test-user)** - to have a counterpart of B.Simon in Training Platform that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Training Platform** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:living-security:<ID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://identity.livingsecurity.com/login/callback?connection=<ID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.livingsecurity.com`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Training Platform support team](mailto:support@livingsecurity.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Training Platform.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Training Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Training Platform SSO
+
+1. Log in to your Training Platform company site as an administrator.
+
+1. Go to **Configuration** section and select **SAML SSO Configuration** tab.
+
+1. Make sure your application is set to **Metadata URL** Mode.
+
+1. In the **Identity Provider Metadata Url** textbox, paste the **App Federation Metadata Url** which you have copied from the Azure portal.
+
+ ![Screenshot that shows the Configuration Settings.](./media/training-platform-tutorial/settings.png "Configuration")
+
+1. Click **Save**.
+
+### Create Training Platform test user
+
+In this section, a user called B.Simon is created in Training Platform. Training Platform supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Training Platform, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Training Platform Sign-on URL where you can initiate the login flow.
+
+* Go to Training Platform Sign on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Training Platform for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Training Platform tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Training Platform for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Training Platform, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Veza Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/veza-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Veza'
+description: Learn how to configure single sign-on between Azure Active Directory and Veza.
++++++++ Last updated : 06/23/2022++++
+# Tutorial: Azure AD SSO integration with Veza
+
+In this tutorial, you'll learn how to integrate Veza with Azure Active Directory (Azure AD). When you integrate Veza with Azure AD, you can:
+
+* Control in Azure AD who has access to Veza.
+* Enable your users to be automatically signed-in to Veza with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Veza single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD. For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Veza supports **SP** and **IDP** initiated SSO.
+* Veza supports **Just In Time** user provisioning.
+
+## Add Veza from the gallery
+
+To configure the integration of Veza into Azure AD, you need to add Veza from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Veza** in the search box.
+1. Select **Veza** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Veza
+
+Configure and test Azure AD SSO with Veza using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Veza.
+
+To configure and test Azure AD SSO with Veza, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Veza SSO](#configure-veza-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Veza test user](#create-veza-test-user)** - to have a counterpart of B.Simon in Veza that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Veza** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a value using one of the following patterns:
+
+ | **Identifier** |
+ |-|
+ | `urn:auth0:<Cookie-auth0-instance-name>:saml-<customer-name>-cookie-connection` |
+ | `urn:auth0:<Veza-auth0-instance-name>:saml-<customer-name>-cookie-connection` |
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://<instancename>.veza.com` |
+ | `https://<instancename>.cookie.ai` |
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ |--|
+ | `https://<instancename>.cookie.ai/login/callback?connection=saml-<customer-name>-cookie-connection` |
+    | `https://<instancename>.veza.com/login/callback?connection=saml-<customer-name>-veza-connection` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Veza Client support team](mailto:support@veza.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Veza** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
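If you prefer to script user creation, the same test user can also be created with the Azure CLI. This is a rough sketch; the display name, UPN, and password are placeholders to replace with your own values:

```azurecli
az ad user create \
  --display-name "B.Simon" \
  --user-principal-name "B.Simon@contoso.com" \
  --password "<replace-with-a-strong-password>"
```
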
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Veza.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Veza**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Veza SSO
+
+1. Log in to your Veza company site as an administrator.
+
+1. Go to **Administration** > **Sign-in Settings**, toggle the **Enable MFA** button, and choose to configure SSO.
+
+ ![Screenshot that shows the Configuration Settings.](./media/veza-tutorial/settings.png "Configuration")
+
+1. In the **Configure SSO** page, perform the following steps:
+
+ ![Screenshot that shows the Configuration of SSO Authentication.](./media/veza-tutorial/details.png "Profile")
+
+ a. In the **Sign In Url** textbox, paste the **Login URL** value, which you've copied from the Azure portal.
+
+ b. Open the downloaded **Certificate (Base64)** from the Azure portal and upload the file into the **X509 Signing Certificate** by clicking **Choose File** option.
+
+ c. In the **Sign Out Url** textbox, paste the **Logout URL** value, which you've copied from the Azure portal.
+
+ d. Toggle **Enable Request Signing** button and select RSA-SHA-256 and SHA-256 as the **Sign Request Algorithm**.
+
+ e. Click **Save** on the Veza SSO configuration and toggle the option to **Enable SSO**.
+
+### Create Veza test user
+
+In this section, a user called B.Simon is created in Veza. Veza supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Veza, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Veza Sign-On URL where you can initiate the login flow.
+
+* Go to the Veza Sign-On URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Veza instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Veza tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Veza instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Veza, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Workday Inbound Cloud Only Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-cloud-only-tutorial.md
The following sections describe steps for configuring user provisioning from Wor
**To configure Workday to Azure Active Directory provisioning for cloud-only users:**
-1. Go to <https://portal.azure.com>.
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the Azure portal, search for and select **Azure Active Directory**.
active-directory Workday Inbound Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-tutorial.md
This section provides steps for user account provisioning from Workday to each A
**To configure Workday to Active Directory provisioning:**
-1. Go to <https://portal.azure.com>.
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the Azure portal, search for and select **Azure Active Directory**.
active-directory Workday Writeback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-writeback-tutorial.md
Follow these instructions to configure writeback of user email addresses and use
**To configure Workday Writeback connector:**
-1. Go to <https://portal.azure.com>.
+1. Go to the [Azure portal](https://portal.azure.com).
2. In the Azure portal, search for and select **Azure Active Directory**.
active-directory Configure Azure Active Directory For Fedramp High Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-azure-active-directory-for-fedramp-high-impact.md
--++ Last updated 4/26/2021
The following is a list of FedRAMP resources:
* [Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center)
-* [Microsoft Compliance Manager](/microsoft-365/compliance/compliance-manager)
+* [Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager)
## Next steps
active-directory Fedramp Access Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-access-controls.md
--++ Last updated 4/26/2021
active-directory Fedramp Identification And Authentication Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-identification-and-authentication-controls.md
--++ Last updated 4/07/2022
active-directory Fedramp Other Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/fedramp-other-controls.md
--++ Last updated 4/26/2021
active-directory Memo 22 09 Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-authorization.md
--++ Last updated 3/10/2022
active-directory Memo 22 09 Enterprise Wide Identity Management System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-enterprise-wide-identity-management-system.md
--++ Last updated 3/10/2022
active-directory Memo 22 09 Meet Identity Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-meet-identity-requirements.md
--++ Last updated 3/10/2022
active-directory Memo 22 09 Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-multi-factor-authentication.md
--++ Last updated 3/10/2022
The following articles are part of this documentation set:
For more information about Zero Trust, see:
-[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
+[Securing identity with Zero Trust](/security/zero-trust/deploy/identity)
active-directory Memo 22 09 Other Areas Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-other-areas-zero-trust.md
--++ Last updated 3/10/2022
active-directory Nist About Authenticator Assurance Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-about-authenticator-assurance-levels.md
--++ Last updated 4/26/2021
In addition, Microsoft is fully committed to [protecting and managing customer d
[Achieve NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md) [Achieve NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Nist Authentication Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authentication-basics.md
--++ Last updated 4/26/2021
One example is the Microsoft Authenticator app used in passwordless mode. With t
[Achieving NIST AAL2 by using Azure AD](nist-authenticator-assurance-level-2.md)
-[Achieving NIST AAL3 by using Azure AD](nist-authenticator-assurance-level-3.md)
+[Achieving NIST AAL3 by using Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Nist Authenticator Assurance Level 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-1.md
--++ Last updated 4/26/2021
All communications between the claimant and Azure AD are performed over an authe
[Achieve NIST AAL2 with Azure AD](nist-authenticator-assurance-level-2.md)
-[Achieve NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
+[Achieve NIST AAL3 with Azure AD](nist-authenticator-assurance-level-3.md)
active-directory Nist Authenticator Assurance Level 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-2.md
--++ Last updated 4/26/2021
active-directory Nist Authenticator Assurance Level 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-3.md
--++ Last updated 4/26/2021
active-directory Nist Authenticator Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-types.md
--++ Last updated 4/26/2021
active-directory Nist Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-overview.md
--++ Last updated 4/26/2021
active-directory Standards Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/standards-overview.md
--++ Last updated 4/26/2021
To learn more about supported compliance frameworks, see [Azure compliance offer
[Configure Azure Active Directory to achieve NIST authenticator assurance levels](nist-overview.md)
-[Configure Azure Active directory to meet FedRAMP High Impact level](configure-azure-active-directory-for-fedramp-high-impact.md)
+[Configure Azure Active directory to meet FedRAMP High Impact level](configure-azure-active-directory-for-fedramp-high-impact.md)
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
NODEPOOL_ID=$(az aks nodepool show --name nodepool1 --cluster-name myAKSCluster
Now, to take a snapshot from the previous node pool, you'll use the `az aks nodepool snapshot` CLI command. ```azurecli-interactive
-az aks snapshot create --name MySnapshot --resource-group MyResourceGroup --nodepool-id $NODEPOOL_ID --location eastus
+az aks nodepool snapshot create --name MySnapshot --resource-group MyResourceGroup --nodepool-id $NODEPOOL_ID --location eastus
``` ## Create a node pool from a snapshot
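A node pool can then be created from that snapshot by passing its resource ID. A rough sketch using the names from above (the new node pool name `np2` is a placeholder):

```azurecli-interactive
SNAPSHOT_ID=$(az aks nodepool snapshot show --name MySnapshot --resource-group MyResourceGroup --query id -o tsv)

az aks nodepool add --resource-group MyResourceGroup --cluster-name myAKSCluster \
    --name np2 --snapshot-id "$SNAPSHOT_ID"
```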
api-management Api Management Get Started Publish Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-publish-versions.md
# Tutorial: Publish multiple versions of your API
-There are times when it's impractical to have all callers to your API use exactly the same version. When callers want to upgrade to a later version, they want an approach that's easy to understand. As shown in this tutorial, it is possible to provided multiple *versions* in Azure API Management.
+There are times when it's impractical to have all callers to your API use exactly the same version. When callers want to upgrade to a later version, they want an approach that's easy to understand. As shown in this tutorial, it is possible to provide multiple *versions* in Azure API Management.
For background, see [Versions & revisions](https://azure.microsoft.com/blog/versions-revisions/).
After creating the version, it now appears underneath **Demo Conference API** in
![Versions listed under an API in the Azure portal](media/api-management-getstarted-publish-versions/version-list.png)
-You can now edit and configure **v1** as an API that is separate from **Original**. Changes to one version do not affect another.
- > [!Note] > If you add a version to a non-versioned API, an **Original** is also automatically created. This version responds on the default URL. Creating an Original version ensures that any existing callers are not broken by the process of adding a version. If you create a new API with versions enabled at the start, an Original isn't created.
+## Edit a version
+
+After adding the version, you can now edit and configure it as an API that is separate from an Original. Changes to one version do not affect another. For example, add or remove API operations, or edit the OpenAPI specification. For more information, see [Edit an API](edit-api.md).
+ ## Add the version to a product In order for callers to see the new version, it must be added to a *product*. If you didn't already add the version to a product, you can add it to a product at any time.
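If you script your API Management configuration instead of using the portal, an API (including a versioned one) can be added to a product with the Azure CLI. A rough sketch; the service, product, and API IDs are placeholders for your own values:

```azurecli
az apim product api add \
  --resource-group MyResourceGroup \
  --service-name my-apim-instance \
  --product-id unlimited \
  --api-id demo-conference-api-v1
```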
api-management Authorizations How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-how-to.md
Four steps are needed to set up an authorization with the authorization code gra
:::image type="content" source="media/authorizations-how-to/register-application.png" alt-text="Screenshot of registering a new OAuth application in GitHub."::: 1. Enter an **Application name** and **Homepage URL** for the application. 1. Optionally, add an **Application description**.
- 1. In **Authorization callback URL** (the redirect URL), enter `https://authorization-manager-test.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the API Management service name that is used.
+ 1. In **Authorization callback URL** (the redirect URL), enter `https://authorization-manager.consent.azure-apim.net/redirect/apim/<YOUR-APIM-SERVICENAME>`, substituting the API Management service name that is used.
1. Select **Register application**. 1. In the **General** page, copy the **Client ID**, which you'll use in a later step. 1. Select **Generate a new client secret**. Copy the secret, which won't be displayed again, and which you'll use in a later step.
api-management Edit Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/edit-api.md
Title: Edit an API with the Azure portal | Microsoft Docs
-description: Learn how to use API Management (APIM) to edit an API. Add, delete, or rename operations in the APIM instance, or edit the API's swagger.
+description: Learn how to use API Management to edit an API. Add, delete, or rename operations in the APIM instance, or edit the API's swagger.
documentationcenter: ''
# Edit an API
-The steps in this tutorial show you how to use API Management (APIM) to edit an API.
+The steps in this tutorial show you how to use API Management to edit an API.
+ You can add, rename, or delete operations in the Azure portal. + You can edit your API's swagger.
The steps in this tutorial show you how to use API Management (APIM) to edit an
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)]
-## Edit an API in APIM
+## Edit an operation
-![Screenshot that highlights the process for editing an API in APIM.](./media/edit-api/edit-api001.png)
+![Screenshot that highlights the process for editing an API in API Management.](./media/edit-api/edit-api001.png)
1. Click the **APIs** tab. 2. Select one of the APIs that you previously imported.
The steps in this tutorial show you how to use API Management (APIM) to edit an
## Update the swagger
-You can update your backbend API from the Azure portal by following these steps:
+You can update your backend API from the Azure portal by following these steps:
1. Select **All operations** 2. Click pencil in the **Frontend** window.
You can update your backbend API from the Azure portal by following these steps:
> [!div class="nextstepaction"] > [APIM policy samples](./policy-reference.md)
-> [Transform and protect a published API](transform-api.md)
+> [Transform and protect a published API](transform-api.md)
api-management Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/soft-delete.md
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{reso
## Purge a soft-deleted instance
-Use the API Management [Purge](/rest/api/apimanagement/current-ga/deleted-services/purge) operation, substituting `{subscriptionId}`, `{location}`, and `{serviceName}` with your Azure subscription, resource location, and API Management name:
+Use the API Management [Purge](/rest/api/apimanagement/current-ga/deleted-services/purge) operation, substituting `{subscriptionId}`, `{location}`, and `{serviceName}` with your Azure subscription, resource location, and API Management name.
+
+> [!NOTE]
+> To purge a soft-deleted instance, you must have the following RBAC permissions at the subscription scope: Microsoft.ApiManagement/locations/deletedservices/delete, Microsoft.ApiManagement/deletedservices/read.
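If you drive management operations from the Azure CLI, the same purge request shown below can be sent with `az rest`. A rough sketch; substitute the placeholders as described above:

```azurecli
az rest --method delete \
  --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/locations/{location}/deletedservices/{serviceName}?api-version=2021-08-01"
```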
```rest DELETE https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/locations/{location}/deletedservices/{serviceName}?api-version=2021-08-01
api-management Visual Studio Code Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/visual-studio-code-tutorial.md
The following example imports an OpenAPI Specification in JSON format into API M
1. In the Explorer pane, expand the API Management instance you created. 1. Right-click **APIs**, and select **Import from OpenAPI Link**. 1. When prompted, enter the following values:
- 1. An **OpenAPI link** for content in JSON format. For this example: *<https://conferenceapi.azurewebsites.net?format=json>*.
+ 1. An **OpenAPI link** for content in JSON format. For this example: `https://conferenceapi.azurewebsites.net?format=json`.
    This URL is the service that implements the example API. API Management forwards requests to this address. 1. An **API name**, such as *demo-conference-api*, that is unique in the API Management instance. This name can contain only letters, numbers, and hyphens. The first and last characters must be alphanumeric. This name is used in the path to call the API.
app-service Quickstart Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-custom-container.md
Create an ASP.NET web app by following these steps:
:::image type="content" source="./media/quickstart-custom-container/select-mvc-template-for-container.png?text=Create ASP.NET Web Application" alt-text="Create ASP.NET Web Application":::
-1. If the _Dockerfile_ file isn't opened automatically, open it from the **Solution Explorer**.
+1. If the `Dockerfile` isn't opened automatically, open it from the **Solution Explorer**.
1. You need a [supported parent image](configure-custom-container.md#supported-parent-images). Change the parent image by replacing the `FROM` line with the following code and save the file:
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
In Solution Explorer, right-click the **CustomFontSample** project and select **
Select **Docker Compose** > **OK**.
-Your project is now set to run in a Windows container. A _Dockerfile_ is added to the **CustomFontSample** project, and a **docker-compose** project is added to the solution.
+Your project is now set to run in a Windows container. A `Dockerfile` is added to the **CustomFontSample** project, and a **docker-compose** project is added to the solution.
From the Solution Explorer, open **Dockerfile**.
A terminal window is opened and displays the image deployment progress. Wait for
## Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com>.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a web app
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
The message displayed in the **Details** column provides more detailed insights
> [!NOTE] > The default probe request is sent in the format of
-\<protocol\>://127.0.0.1:\<port\>/. For example, http://127.0.0.1:80 for an http probe on port 80. Only HTTP status codes of 200 through 399 are considered healthy. The protocol and destination port are inherited from the HTTP settings. If you want Application Gateway to probe on a different protocol, host name, or path and to recognize a different status code as Healthy, configure a custom probe and associate it with the HTTP settings.
+`<protocol>://127.0.0.1:<port>`. For example, `http://127.0.0.1:80` for an HTTP probe on port 80. Only HTTP status codes of 200 through 399 are considered healthy. The protocol and destination port are inherited from the HTTP settings. If you want Application Gateway to probe on a different protocol, host name, or path and to recognize a different status code as Healthy, configure a custom probe and associate it with the HTTP settings.
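For reference, a custom probe can also be created and attached to an HTTP setting from the Azure CLI. A rough sketch; the gateway, probe, and HTTP setting names (and the host and path) are placeholders:

```azurecli
# Create a custom health probe (protocol, host, and path are examples).
az network application-gateway probe create \
  --resource-group MyResourceGroup \
  --gateway-name MyAppGateway \
  --name MyHealthProbe \
  --protocol Https \
  --host contoso.com \
  --path /health \
  --interval 30 --timeout 30 --threshold 3

# Associate the probe with an existing HTTP setting.
az network application-gateway http-settings update \
  --resource-group MyResourceGroup \
  --gateway-name MyAppGateway \
  --name MyBackendHttpSetting \
  --probe MyHealthProbe
```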
## Error messages
application-gateway Application Gateway Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-probe-overview.md
In addition to using default health probe monitoring, you can also customize the
An application gateway automatically configures a default health probe when you don't set up any custom probe configuration. The monitoring behavior works by making an HTTP GET request to the IP addresses or FQDN configured in the back-end pool. For default probes if the backend http settings are configured for HTTPS, the probe uses HTTPS to test health of the backend servers.
-For example: You configure your application gateway to use back-end servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe will look like http://127.0.0.1/.
+For example: You configure your application gateway to use back-end servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe will look like `http://127.0.0.1/`.
If the default probe check fails for server A, the application gateway stops forwarding requests to this server. The default probe still continues to check for server A every 30 seconds. When server A responds successfully to one request from a default health probe, application gateway starts forwarding the requests to the server again.
application-gateway Configure Application Gateway With Private Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-application-gateway-with-private-frontend-ip.md
This article guides you through the steps to configure a Standard v1 Application
## Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com>
+Sign in to the [Azure portal](https://portal.azure.com).
## Create an application gateway
applied-ai-services Applied Ai Services Customer Spotlight Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/applied-ai-services-customer-spotlight-use-cases.md
Customers are using Azure Applied AI Services to add AI horsepower to their busi
- AI-driven search offers strong data security and delivers smart results that add value. - Azure Form Recognizer increases ROI by using automation to streamline data extraction.
+## Use cases
+ | Partner | Description | Customer story | ||-|-| | <center>![Logo of Progressive Insurance, which consists of the word progressive in a slanted font in blue, capital letters.](./media/logo-progressive-02.png) | **Progressive uses Azure Bot Service and Azure Cognitive Search to help customers make smarter insurance decisions.** <br>"One of the great things about Bot Service is that, out of the box, we could use it to quickly put together the basic framework for our bot." *-Matt White, Marketing Manager, Personal Lines Acquisition Experience, Progressive Insurance* | [Insurance shoppers gain new service channel with artificial intelligence chatbot](https://customers.microsoft.com/story/789698-progressive-insurance-cognitive-services-insurance) |
applied-ai-services Form Recognizer Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-image-tags.md
Previously updated : 09/02/2021 Last updated : 06/23/2022 keywords: Docker, container, images
The following tags are available for Form Recognizer:
Release notes for `v2.1` (gated preview):
-| Container | Tags |
-||:|
-| **Layout**| &bullet; `latest` </br> &bullet; `2.1-preview` </br> &bullet; `2.1.0.016140001-08108749-amd64-preview`|
-| **Business Card** | &bullet; `latest` </br> &bullet; `2.1-preview` </br> &bullet; `2.1.016190001-amd64-preview` </br> &bullet; `2.1.016320001-amd64-preview` |
-| **ID Document** | &bullet; `latest` </br> &bullet; `2.1-preview`</br>&bullet; `2.1.016190001-amd64-preview`</br>&bullet; `2.1.016320001-amd64-preview` |
-| **Receipt**| &bullet; `latest` </br> &bullet; `2.1-preview`</br>&bullet; `2.1.016190001-amd64-preview`</br>&bullet; `2.1.016320001-amd64-preview` |
-| **Invoice**| &bullet; `latest` </br> &bullet; `2.1-preview`</br>&bullet; `2.1.016190001-amd64-preview`</br>&bullet; `2.1.016320001-amd64-preview` |
-| **Custom API** | &bullet; `latest` </br> &bullet;`2.1-distroless-20210622013115034-0cc5fcf6`</br>&bullet; `2.1-preview`|
-| **Custom Supervised**| &bullet; `latest` </br> &bullet; `2.1-distroless-20210622013149174-0cc5fcf6`</br>&bullet; `2.1-preview`|
+| Container | Tags | Retrieve image |
+||:||
+| **Layout**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout`|
+| **Business Card** | &bullet; `latest` </br> &bullet; `2.1-preview` |`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard` |
+| **ID Document** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document`|
+| **Receipt**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt` |
+| **Invoice**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
+| **Custom API** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api`|
+| **Custom Supervised**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised` |
### [Previous versions](#tab/previous)
Release notes for `v2.1` (gated preview):
> [!div class="nextstepaction"] > [Install and run Form Recognizer containers](form-recognizer-container-install-run.md)
->
+>
applied-ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/deploy-label-tool.md
# Deploy the Sample Labeling tool
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
+ > [!NOTE] > The [cloud hosted](https://fott-2-1.azurewebsites.net/) labeling tool is available at [https://fott-2-1.azurewebsites.net/](https://fott-2-1.azurewebsites.net/). Follow the steps in this document only if you want to deploy the sample labeling tool for yourself.
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
Title: "How-to: Analyze documents, Label forms, train a model, and analyze forms with Form Recognizer"
-description: In this how-to, you'll use the Form Recognizer sample tool to analyze documents, invoices, receipts etc. Label and create a custom model to extract text, tables, selection marks, structure and key-value pairs from documents.
+description: How to use the Form Recognizer sample tool to analyze documents, invoices, receipts etc. Label and create a custom model to extract text, tables, selection marks, structure and key-value pairs from documents.
Previously updated : 11/02/2021 Last updated : 06/23/2022 keywords: document processing
keywords: document processing
<!-- markdownlint-disable MD034 --> # Train a custom model using the Sample Labeling tool
-In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data.
+In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data.
> [!VIDEO https://docs.microsoft.com/Shows/Docs-Azure/Azure-Form-Recognizer/player] ## Prerequisites
-To complete this quickstart, you must have:
+ You'll need the following resources to complete this project:
* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services) * Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer" title="Create a Form Recognizer resource" target="_blank">create a Form Recognizer resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
- * You will need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
+ * You'll need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart.
* You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production. * A set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*) for this quickstart. Upload the training files to the root of a blob storage container in a standard-performance-tier Azure Storage account.
Try out the [**Form Recognizer Sample Labeling tool**](https://fott-2-1.azureweb
> [!div class="nextstepaction"] > [Try Prebuilt Models](https://fott-2-1.azurewebsites.net/)
-You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer service.
+You'll need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer service.
## Set up the Sample Labeling tool
When you create or open a project, the main tag editor window opens. The tag edi
* The main editor pane that allows you to apply tags. * The tags editor pane that allows users to modify, lock, reorder, and delete tags.
-### Identify text and tables
+### Identify text and tables
Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
-The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left hand of the document to see the extracted table. In this quickstart, because the table content is automatically extracted, we will not be labeling the table content, but rather rely on the automated extraction.
+The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left hand of the document to see the extracted table. In this quickstart, because the table content is automatically extracted, we won't be labeling the table content, but rather rely on the automated extraction.
:::image type="content" source="media/label-tool/table-extraction.png" alt-text="Table visualization in Sample Labeling tool.":::
-In v2.1, if your training document does not have a value filled in, you can draw a box where the value should be. Use **Draw region** on the upper left corner of the window to make the region taggable.
+In v2.1, if your training document doesn't have a value filled in, you can draw a box where the value should be. Use **Draw region** on the upper left corner of the window to make the region taggable.
### Apply labels to text
The following value types and variations are currently supported:
* `number` * default, `currency`
- * Formatted as a Floating point value.
- * Example:1234.98 on the document will be formatted into 1234.98 on the output
+ * Formatted as a Floating point value.
+ * Example: 1234.98 on the document will be formatted into 1234.98 on the output
* `date` * default, `dmy`, `mdy`, `ymd` * `time` * `integer`
- * Formatted as a Integer value.
- * Example:1234.98 on the document will be formatted into 123498 on the output
+ * Formatted as an integer value.
+ * Example: 1234.98 on the document will be formatted into 123498 on the output.
* `selectionMark` > [!NOTE]
The following value types and variations are currently supported:
### Label tables (v2.1 only)
-At times, your data might lend itself better to being labeled as a table rather than key-value pairs. In this case, you can create a table tag by clicking on "Add a new table tag," specify whether the table will have a fixed number of rows or variable number of rows depending on the document, and define the schema.
+At times, your data might lend itself better to being labeled as a table rather than key-value pairs. In this case, you can create a table tag by selecting **Add a new table tag**. Specify whether the table will have a fixed number of rows or variable number of rows depending on the document and define the schema.
:::image type="content" source="media/label-tool/table-tag.png" alt-text="Configuring a table tag.":::
-Once you have defined your table tag, tag the cell values.
+Once you've defined your table tag, tag the cell values.
:::image type="content" source="media/table-labeling.png" alt-text="Labeling a table.":::
Once you have defined your table tag, tag the cell values.
Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you'll see the following information: * **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api&tabs=preview%2cv2-1) or [client library guide](./quickstarts/try-sdk-rest-api.md).
-* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling additional forms and retraining to create a new model. We recommend starting by labeling five forms and adding more forms as needed.
+* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by adding and labeling more forms, then retraining to create a new model. We recommend starting by labeling five forms and adding more forms as needed.
* The list of tags, and the estimated accuracy per tag.
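The Sample Labeling tool drives training through the v2.1 REST API, so you can also start a training run yourself. A rough curl sketch, where the endpoint, key, and SAS URL are placeholders for your own values:

```bash
curl -i -X POST "${FORM_RECOGNIZER_ENDPOINT}/formrecognizer/v2.1/custom/models" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: ${FORM_RECOGNIZER_KEY}" \
  -d '{"source": "<SAS-URL-of-your-labeled-training-container>", "useLabelFile": true}'
```

The response's `Location` header points to the new model, which you can poll until training completes.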
Select the Analyze (light bulb) icon on the left to test your model. Select sour
## Improve results
-Depending on the reported accuracy, you may want to do further training to improve the model. After you've done a prediction, examine the confidence values for each of the applied tags. If the average accuracy training value was high, but the confidence scores are low (or the results are inaccurate), you should add the prediction file to the training set, label it, and train again.
+Depending on the reported accuracy, you may want to do further training to improve the model. After you've done a prediction, examine the confidence values for each of the applied tags. If the average accuracy training value is high, but the confidence scores are low (or the results are inaccurate), add the prediction file to the training set, label it, and train again.
The reported average accuracy, confidence scores, and actual accuracy can be inconsistent when the analyzed documents differ from documents used in training. Keep in mind that some documents look similar when viewed by people but can look distinct to the AI model. For example, you might train with a form type that has two variations, where the training set consists of 20% variation A and 80% variation B. During prediction, the confidence scores for documents of variation A are likely to be lower.
Go to your project settings page (slider icon) and take note of the security tok
### Restore project credentials
-When you want to resume your project, you first need to create a connection to the same blob storage container. To do so, repeat the steps above. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Select **Save** to retain your settings..
+When you want to resume your project, you first need to create a connection to the same blob storage container. To do so, repeat the steps above. Then, go to the application settings page (gear icon) and see if your project's security token is there. If it isn't, add a new security token and copy over your token name and key from the previous step. Select **Save** to retain your settings.
### Resume a project
-Finally, go to the main page (house icon) and select **Open Cloud Project**. Then select the blob storage connection, and select your project's **.fott** file. The application will load all of the project's settings because it has the security token.
+Finally, go to the main page (house icon) and select **Open Cloud Project**. Then select the blob storage connection, and select your project's `.fott` file. The application will load all of the project's settings because it has the security token.
## Next steps
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Previously updated : 11/02/2021 Last updated : 06/24/2022 keywords: document processing
keywords: document processing
<!-- markdownlint-disable MD029 --> # Get started with the Form Recognizer Sample Labeling tool
-Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine-learning models to extract key-value pairs, text, and tables from your documents. You can use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](try-v3-rest-api.md) or [**C#**](try-v3-csharp-sdk.md), [**Java**](try-v3-java-sdk.md), [**JavaScript**](try-v3-javascript-sdk.md), or [Python](try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
The Form Recognizer Sample Labeling tool is an open source tool that enables you to test the latest features of Azure Form Recognizer and Optical Character Recognition (OCR)
Form Recognizer offers several prebuilt models to choose from. Each model has it
* [**Sample receipt image**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg). * [**Sample business card image**](https://raw.githubusercontent.com/Azure/azure-sdk-for-python/master/sdk/formrecognizer/azure-ai-formrecognizer/samples/sample_forms/business_cards/business-card-english.jpg).
-1. In the **Source: URL** field, paste the selected URL and select the **Fetch** button.
+1. In the **Source** field, select **URL** from the dropdown menu, paste the selected URL, and select the **Fetch** button.
+
+ :::image type="content" source="../media/label-tool/fott-select-url.png" alt-text="Screenshot of source location dropdown menu.":::
1. In the **Form recognizer service endpoint** field, paste the endpoint that you obtained with your Form Recognizer subscription.
The Azure Form Recognizer Layout API extracts text, tables, selection marks, and
1. In the **key** field, paste the key you obtained from your Form Recognizer resource.
-1. In the **Source: URL** field, paste the following URL `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg` and select the **Fetch** button.
+1. In the **Source** field, select **URL** from the dropdown menu, paste the following URL `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg`, and select the **Fetch** button.
1. Select **Run Layout**. The Form Recognizer Sample Labeling tool will call the Analyze Layout API and analyze the document.
Train a custom model to analyze and extract data from forms and documents specif
### Prerequisites for training a custom form model
-* An Azure Storage blob container that contains a set of training data. Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. For this project, you can use our [sample data set](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample_data_without_labels.zip).
+* An Azure Storage blob container that contains a set of training data. Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into subfolders based on common format. For this project, you can use our [sample data set](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/sample_data_without_labels.zip). If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md).
* Configure CORS
- [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS blade of your storage account.
+ [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Form Recognizer Studio. To configure CORS in the Azure portal, you'll need access to the CORS tab of your storage account.
- 1. Select the CORS blade for the storage account.
+ 1. Select the CORS tab for the storage account.
:::image type="content" source="../media/quickstarts/cors-setting-menu.png" alt-text="Screenshot of the CORS setting menu in the Azure portal.":::
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Analyze and extract text, tables, structure, key-value pairs, and named entities
> [!div class="checklist"] > > * For this example, you'll need a **form document file from a URI**. You can use our [sample form document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf) for this quickstart.
-> * To analyze a given file at a URI, you'll use the `StartAnalyzeDocumentFromUri` method. The returned value is an `AnalyzeResult` object containing data about the submitted document.
+> * To analyze a given file at a URI, you'll use the `StartAnalyzeDocumentFromUri` method and pass `prebuilt-document` as the model ID. The returned value is an `AnalyzeResult` object containing data about the submitted document.
> * We've added the file URI value to the `Uri fileUri` variable at the top of the script. > * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see the [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
Analyze and extract common fields from specific document types using a prebuilt
> [!div class="checklist"] > > * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
-> * We've added the file URI value to the `Uri fileUri` variable at the top of the Program.cs file.
+> * We've added the file URI value to the `Uri invoiceUri` variable at the top of the Program.cs file.
> * To analyze a given file at a URI, use the `StartAnalyzeDocumentFromUri` method and pass `prebuilt-invoice` as the model ID. The returned value is an `AnalyzeResult` object containing data from the submitted document. > * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
applied-ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/supervised-table-tags.md
# Use table tags to train your custom template model
+>[!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
+> * The v3.0 Studio supports any model trained with v2.1 labeled data.
+> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.
+ In this article, you'll learn how to train your custom template model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs. Such scenarios include extracting information from forms with complex hierarchical structures or encountering items that not automatically detected and extracted by the service. In these cases, you can use table tags to train your custom template model. ## When should I use table tags?
In this article, you'll learn how to train your custom template model with table
Here are some examples of when using table tags would be appropriate: - There's data that you wish to extract presented as tables in your forms, and the structure of the tables are meaningful. For instance, each row of the table represents one item and each column of the row represents a specific feature of that item. In this case, you could use a table tag where a column represents features and a row represents information about each feature.-- There's data you wish to extract that is not presented in specific form fields but semantically, the data could fit in a two-dimensional grid. For instance, your form has a list of people, and includes, a first name, a last name, and an email address. You would like to extract this information. In this case, you could use a table tag with first name, last name, and email address as columns and each row is populated with information about a person from your list.
+- There's data you wish to extract that isn't presented in specific form fields but semantically, the data could fit in a two-dimensional grid. For instance, your form has a list of people that includes a first name, a last name, and an email address. You would like to extract this information. In this case, you could use a table tag with first name, last name, and email address as columns and each row is populated with information about a person from your list.
> [!NOTE] > Form Recognizer automatically finds and extracts all tables in your documents whether the tables are tagged or not. Therefore, you don't have to label every table from your form with a table tag and your table tags don't have to replicate the structure of every table found in your form. Tables extracted automatically by Form Recognizer will be included in the pageResults section of the JSON output.
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/private-link-security.md
Before setting up your Automation account resource, consider your network isolat
Follow the steps below to create a private endpoint for your Automation account. 1. Go to [Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints) in the Azure portal to create a private endpoint to connect your network.
-Once your changes to public Network Access and Private Link are applied, it can take up to 35 minutes for them to take effect.
1. On **Private Link Center**, select **Create private endpoint**.
Once your changes to public Network Access and Private Link are applied, it can
:::image type="content" source="./media/private-link-security/create-private-endpoint-dns-inline.png" alt-text="Screenshot of how to create a private endpoint in DNS tab." lightbox="./media/private-link-security/create-private-endpoint-dns-expanded.png":::
-1. On **Tags**, you can categorize resources. Select **Name** and **Value** and select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
+1. On **Tags**, you can categorize resources. Select **Name** and **Value** and select **Review + create**.
+You're taken to the **Review + create** page where Azure validates your configuration. Once your changes to public Network Access and Private Link are applied, it can take up to 35 minutes for them to take effect.
On the **Private Link Center**, select **Private endpoints** to view your private link resource.
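If you prefer to create the private endpoint from the Azure CLI instead of the Private Link Center, a rough sketch follows. The resource names are placeholders, and the `--group-id` value is an assumed Automation sub-resource (for example, DSCAndHybridWorker); verify the sub-resources offered for your account before using it:

```azurecli
# Look up the Automation account's resource ID.
AUTOMATION_ACCOUNT_ID=$(az resource show \
  --resource-group MyResourceGroup \
  --name MyAutomationAccount \
  --resource-type "Microsoft.Automation/automationAccounts" \
  --query id -o tsv)

# Create the private endpoint in an existing virtual network and subnet.
az network private-endpoint create \
  --resource-group MyResourceGroup \
  --name MyAutomationPrivateEndpoint \
  --vnet-name MyVNet \
  --subnet MySubnet \
  --private-connection-resource-id "$AUTOMATION_ACCOUNT_ID" \
  --group-id DSCAndHybridWorker \
  --connection-name MyAutomationConnection
```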
automation Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/create-account-portal.md
The following table describes the fields on the **Basics** tab.
|||| |Subscription|Required |From the drop-down list, select the Azure subscription for the account.| |Resource group|Required |From the drop-down list, select your existing resource group, or select **Create new**.|
-|Automation account name|Required |Enter a name unique for it's location and resource group. Names for Automation accounts that have been deleted might not be immediately available. You can't change the account name once it has been entered in the user interface. |
+|Automation account name|Required |Enter a name unique for its location and resource group. Names for Automation accounts that have been deleted might not be immediately available. You can't change the account name once it has been entered in the user interface. |
|Region|Required |From the drop-down list, select a region for the account. For an updated list of locations that you can deploy an Automation account to, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=automation&regions=all).| The following image shows a standard configuration for a new Automation account. ### Advanced
You can choose to enable managed identities later, and the Automation account is
The following image shows a standard configuration for a new Automation account.
-### Tags tab
+### Networking
+
+On the **Networking** tab, you can connect to your Automation account either publicly (via public IP addresses) or privately, using a private endpoint. The following image shows the connectivity configuration that you can define for a new Automation account.
+
+- **Public Access** – This default option provides a public endpoint for the Automation account that can receive traffic over the internet and doesn't require any additional configuration. However, we don't recommend it for private applications or secure environments. Instead, you can use the second option, **Private access**, described below, to restrict access to Automation endpoints to authorized virtual networks only. Public access can coexist with a private endpoint enabled on the Automation account. If you select public access while creating the Automation account, you can add a private endpoint later from the **Networking** blade of the Automation account.
+
+- **Private Access** – This option provides a private endpoint for the Automation account that uses a private IP address from your virtual network. This network interface connects you privately and securely to the Automation account. You bring the service into your virtual network by enabling a private endpoint. This is the recommended configuration from a security point of view. However, it requires you to configure a Hybrid Runbook Worker that's connected to an Azure virtual network, and it currently doesn't support cloud jobs.
++
+### Tags
On the **Tags** tab, you can specify Resource Manager tags to help organize your Azure resources. For more information, see [Tag resources, resource groups, and subscriptions for logical organization](../../azure-resource-manager/management/tag-resources.md).
azure-fluid-relay Version Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/version-compatibility.md
Title: Version compatibility with Fluid Framework releases
-description: |
- How to determine what versions of the Fluid Framework releases are compatible with Azure Fluid Relay service.
+description: How to determine which versions of the Fluid Framework releases are compatible with the Azure Fluid Relay service.
npx install-peerdeps @fluidframework/azure-client
``` > [!TIP]
-> During Public Preview, the versions of **@fluidframework/azure-client** and **fluid-framework** will match. That is, if
-> the current release of **@fluidframework/azure-client** is 0.48, then it will be compatible with **fluid-framework** 0.48. The inverse is also true.
+> If you're building with any pre-release version of **@fluidframework/azure-client** and **fluid-framework**, we strongly recommend that you update to the latest 1.0 version. Earlier versions will not be
+> supported with the General Availability of Azure Fluid Relay. With this upgrade, you'll make use of our new multi-region routing capability where
+> Azure Fluid Relay will host your session closer to your end users to improve customer experience. In the latest package, you will need to update your
+> serviceConfig object to the new Azure Fluid Relay service endpoint instead of the storage and orderer endpoints:
+> If your Azure Fluid Relay resource is in West US 2, please use **https://us.fluidrelay.azure.com**. If it is West Europe,
+> use **https://eu.fluidrelay.azure.com**. If it is in Southeast Asia, use **https://global.fluidrelay.azure.com**.
+> These values can also be found in the "Access Key" section of the Fluid Relay resource in the Azure portal. The orderer and storage endpoints will be deprecated soon.
+
## Compatibility table

| npm package | Minimum version | API |
| - | :-- | :-- |
-| @fluidframework/azure-client | [0.48.4][] | [API](https://fluidframework.com/docs/apis/azure-client/) |
-| fluid-framework | [0.48.4][] | [API](https://fluidframework.com/docs/apis/fluid-framework/) |
-| @fluidframework/azure-service-utils | [0.48.4][] | [API](https://fluidframework.com/docs/apis/azure-service-utils/) |
-| @fluidframework/test-client-utils | [0.48.4][] | [API](https://fluidframework.com/docs/apis/test-client-utils/) |
+| @fluidframework/azure-client | [1.0.1][] | [API](https://fluidframework.com/docs/apis/azure-client/) |
+| fluid-framework | [1.0.1][] | [API](https://fluidframework.com/docs/apis/fluid-framework/) |
+| @fluidframework/azure-service-utils | [1.0.1][] | [API](https://fluidframework.com/docs/apis/azure-service-utils/) |
+| @fluidframework/test-client-utils | [1.0.1][] | [API](https://fluidframework.com/docs/apis/test-client-utils/) |
-[0.48.4]: https://fluidframework.com/docs/updates/v0.48/
+[1.0.1]: https://fluidframework.com/docs/updates/v1.0.0/
## Next steps
azure-fluid-relay Connect Fluid Azure Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/connect-fluid-azure-service.md
To connect to an Azure Fluid Relay instance you first need to create an `AzureCl
const config = { tenantId: "myTenantId", tokenProvider: new InsecureTokenProvider("myTenantKey", { id: "userId" }),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
+ type: "remote",
}; const clientProps = {
const config = {
"myAzureFunctionUrl" + "/api/GetAzureToken", { userId: "userId", userName: "Test User" } ),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
+ type: "remote",
}; const clientProps = {
const config = {
"myAzureFunctionUrl" + "/api/GetAzureToken", { userId: "UserId", userName: "Test User", additionalDetails: userDetails } ),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
+ type: "remote",
}; ```
azure-fluid-relay Local Mode With Azure Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/local-mode-with-azure-client.md
This article walks through the steps to configure **AzureClient** in local mode
connection: { tenantId: LOCAL_MODE_TENANT_ID, tokenProvider: new InsecureTokenProvider("", { id: "123", name: "Test User" }),
- orderer: "http://localhost:7070",
- storage: "http://localhost:7070",
+ endpoint: "http://localhost:7070",
+ type: "remote",
}, };
azure-fluid-relay Test Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/test-automation.md
function createAzureClient(): AzureClient {
const user = { id: "userId", name: "Test User" }; const connectionConfig = useAzure ? {
+ type: "remote",
tenantId: "myTenantId", tokenProvider: new InsecureTokenProvider(tenantKey, user),
- orderer: "https://myOrdererUrl",
- storage: "https://myStorageUrl",
+ endpoint: "https://myServiceEndpointUrl",
} : {
- tenantId: LOCAL_MODE_TENANT_ID,
+ type: "local",
tokenProvider: new InsecureTokenProvider("", user),
- orderer: "http://localhost:7070",
- storage: "http://localhost:7070",
+ endpoint: "http://localhost:7070",
};- const clientProps = { connection: config, };
azure-fluid-relay Quickstart Dice Roll https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/quickstarts/quickstart-dice-roll.md
To run against the Azure Fluid Relay service, you'll need to update your app's c
### Configure and create an Azure client
-To configure the Azure client, replace the values in the `serviceConfig` object in `app.js` with your Azure Fluid Relay
-service configuration values. These values can be found in the "Access Key" section of the Fluid Relay resource in the Azure portal.
+To configure the Azure client, replace the local connection `serviceConfig` object in `app.js` with your Azure Fluid Relay
+service configuration values. These values can be found in the "Access Key" section of the Fluid Relay resource in the Azure portal. Your `serviceConfig` object should look like the following example, with the values replaced:
```javascript const serviceConfig = { connection: {
- tenantId: LOCAL_MODE_TENANT_ID, // REPLACE WITH YOUR TENANT ID
+ tenantId: "MY_TENANT_ID", // REPLACE WITH YOUR TENANT ID
tokenProvider: new InsecureTokenProvider("" /* REPLACE WITH YOUR PRIMARY KEY */, { id: "userId" }),
- orderer: "http://localhost:7070", // REPLACE WITH YOUR ORDERER ENDPOINT
- storage: "http://localhost:7070", // REPLACE WITH YOUR STORAGE ENDPOINT
+ endpoint: "https://myServiceEndpointUrl", // REPLACE WITH YOUR SERVICE ENDPOINT
+ type: "remote",
} }; ```
azure-fluid-relay Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/reference/service-limits.md
-# Azure Fluid Relay Limits
+# Azure Fluid Relay limits
-This article outlines known limitation of Azure Fluid Relay.
+This article outlines known limitations of Azure Fluid Relay.
## Distributed Data Structures
The Azure Fluid Relay doesn't support [experimental distributed data structures
The maximum number of simultaneous users in one session on Azure Fluid Relay is 100. This limit applies to simultaneous users: the 101st user won't be allowed to join the session. When an existing user leaves the session, a new user can join, because the number of simultaneous users at that point drops below the limit.
-## Fluid Summaries
+## Fluid summaries
Incremental summaries uploaded to Azure Fluid Relay can't exceed 28 MB in size. More info [here](https://fluidframework.com/docs/concepts/summarizer).
azure-fluid-relay Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/resources/faq.md
The following are frequently asked questions about Azure Fluid Relay
+## When will Azure Fluid Relay be Generally Available?
+
+Azure Fluid Relay will be Generally Available on 8/1/2022. At that point, the service will no longer be free. Charges will apply based on your usage of Azure Fluid Relay. The service will meter four activities:
+
+- Operations in: As end users join, leave, and contribute to a collaborative session, the Fluid Framework client libraries send messages (also referred to as operations or ops) to the service. Each message incoming from one client is counted as one message. Heartbeat messages and other session messages are also counted. Messages larger than 2 KB are counted as multiple messages of 2 KB each (for example, an 11-KB message is counted as 6 messages).
+- Operations out: Once the service processes incoming messages, it broadcasts them to all participants in the collaborative session. Each message sent to each client is counted as one message (for example, in a 3-user session, one user sending an op generates 3 ops out).
+- Client connectivity minutes: The duration of each user's connection to the session is charged on a per-user basis (for example, 3 users collaborating on a session for an hour are charged as 180 connectivity minutes).
+- Storage: Each collaborative Fluid session stores session artifacts in the service. Storage of this data is charged per GB per month (prorated as appropriate).
+
+Refer to the following table for the prices (in USD) that we'll start to charge at General Availability for each of these meters, in the regions where Azure Fluid Relay is currently offered. More regions and information about other currencies will be available on our pricing page soon. A worked cost estimate follows the table.
+
+| Meter | Unit | West US 2 | West Europe | Southeast Asia |
+|--|--|--|--|--|
+| Operations In | 1 million ops | 1.50 | 1.95 | 1.95 |
+| Operations Out | 1 million ops | 0.50 | 0.65 | 0.65 |
+| Client Connectivity Minutes | 1 million minutes | 1.50 | 1.95 | 1.95 |
+| Storage | 1 GB/month | 0.20 | 0.26 | 0.26 |
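As a rough illustration of how these meters combine, the following sketch estimates a monthly bill by using the West US 2 prices in the preceding table. The usage figures are invented assumptions for illustration only, not guidance on typical consumption.

```python
# Hypothetical monthly usage; replace these with your own estimates.
ops_in = 30_000_000            # messages sent by clients to the service
ops_out = 90_000_000           # messages broadcast by the service to clients
connectivity_minutes = 5_000_000
storage_gb = 10

# West US 2 prices (USD) from the table above, converted to per-unit rates.
PRICE_OPS_IN = 1.50 / 1_000_000       # per operation in
PRICE_OPS_OUT = 0.50 / 1_000_000      # per operation out
PRICE_MINUTE = 1.50 / 1_000_000       # per client connectivity minute
PRICE_STORAGE_GB = 0.20               # per GB per month

total = (ops_in * PRICE_OPS_IN
         + ops_out * PRICE_OPS_OUT
         + connectivity_minutes * PRICE_MINUTE
         + storage_gb * PRICE_STORAGE_GB)

print(f"Estimated monthly cost: ${total:.2f}")  # $99.50 for these assumptions
```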
+
## Which Azure regions currently provide Fluid Relay?

For a complete list of available regions, see [Azure Fluid Relay regions and availability](https://azure.microsoft.com/global-infrastructure/services/?products=fluid-relay).
azure-fluid-relay Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/resources/support.md
-# Help and Support options for Azure Fluid Relay
+# Help and support options for Azure Fluid Relay
If you have an issue or question involving Azure Fluid Relay, the following options are available.
-## Check out Frequently Asked Questions
+## Check out frequently asked questions
You can see if your question is already answered on our Frequently Asked Questions [page](faq.md).
-## Create an Azure Support Request
+## Create an Azure support request
With Azure, there are many [support options and plans](https://azure.microsoft.com/support/plans/) available, which you can explore and review. You can create a support ticket in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
azure-functions Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-java.md
To complete this tutorial, you need:
- [Apache Maven](https://maven.apache.org), version 3.0 or above. - Latest version of the [Azure Functions Core Tools](../functions-run-local.md).
+ - For Azure Functions 3.x, Core Tools **v3.0.4585** or newer is required.
+ - For Azure Functions 4.x, Core Tools **v4.0.4590** or newer is required.
- An Azure Storage account, which requires that you have an Azure subscription.
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
zone_pivot_groups: programming-languages-set-functions-lang-workers
This reference shows how to connect to Azure Event Grid using Azure Functions triggers and bindings. | Action | Type | |||
Add this version of the extension to your project by installing the [NuGet packa
# [Extension v2.x](#tab/extensionv2/in-process)
-Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent).
+Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent). Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
Add the extension to your project by installing the [NuGet package], version 2.x. # [Functions 1.x](#tab/functionsv1/in-process)
-Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
The Event Grid output binding is only available for Functions 2.x and higher.
Add the extension to your project by installing the [NuGet package](https://www.
# [Extension v2.x](#tab/extensionv2/isolated-process)
-Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.EventGrid), version 2.x.
+Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.EventGrid), version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
# [Functions 1.x](#tab/functionsv1/isolated-process)
You can install this version of the extension in your function app by registerin
# [Extension v2.x](#tab/extensionv2/csharp-script)
-Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent).
+Supports the default Event Grid binding parameter type of [Microsoft.Azure.EventGrid.Models.EventGridEvent](/dotnet/api/microsoft.azure.eventgrid.models.eventgridevent). Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
You can install this version of the extension in your function app by registering the [extension bundle], version 2.x. # [Functions 1.x](#tab/functionsv1/csharp-script)
-Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
The Event Grid output binding is only available for Functions 2.x and higher.
To learn more, see [Update your extensions].
# [Bundle v2.x](#tab/extensionv2)
-You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
# [Functions 1.x](#tab/functionsv1)
-The Event Grid output binding is only available for Functions 2.x and higher.
+The Event Grid output binding is only available for Functions 2.x and higher. Event Grid extension versions earlier than 3.x don't support [CloudEvents schema](../event-grid/cloudevents-schema.md#azure-functions). To consume this schema, instead use an HTTP trigger.
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Title: Python developer reference for Azure Functions
-description: Understand how to develop functions with Python
+description: Understand how to develop functions with Python.
Last updated 05/25/2022 ms.devlang: python
# Azure Functions Python developer guide
-This article is an introduction to developing Azure Functions using Python. The content below assumes that you've already read the [Azure Functions developers guide](functions-reference.md).
+This article is an introduction to developing for Azure Functions by using Python. It assumes that you've already read the [Azure Functions developer guide](functions-reference.md).
-As a Python developer, you may also be interested in one of the following articles:
+As a Python developer, you might also be interested in one of the following articles:
| Getting started | Concepts | Scenarios/Samples |
|--|--|--|
| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure Automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> |

> [!NOTE]
-> While you can [develop your Python based Azure Functions locally on Windows](create-first-function-vs-code-python.md#run-the-function-locally), Python is only supported on a Linux based hosting plan when running in Azure. See the list of supported [operating system/runtime](functions-scale.md#operating-systemruntime) combinations.
+> Although you can [develop your Python-based functions locally on Windows](create-first-function-vs-code-python.md#run-the-function-locally), Python functions are supported in Azure only when they're running on Linux. See the [list of supported operating system/runtime combinations](functions-scale.md#operating-systemruntime).
## Programming model
-Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method called `main()` in the `__init__.py` file. You can also [specify an alternate entry point](#alternate-entry-point).
+Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method called `main()` in the *\__init\__.py* file. You can also [specify an alternate entry point](#alternate-entry-point).
-Data from triggers and bindings is bound to the function via method attributes using the `name` property defined in the *function.json* file. For example, the _function.json_ below describes a simple function triggered by an HTTP request named `req`:
+Data from triggers and bindings is bound to the function via method attributes that use the `name` property defined in the *function.json* file. For example, the following _function.json_ file describes a simple function triggered by an HTTP request named `req`:
:::code language="json" source="~/functions-quickstart-templates/Functions.Templates/Templates/HttpTrigger-Python/function.json":::
-Based on this definition, the `__init__.py` file that contains the function code might look like the following example:
+Based on this definition, the *\__init\__.py* file that contains the function code might look like the following example:
```python def main(req):
def main(req):
return f'Hello, {user}!' ```
-You can also explicitly declare the attribute types and return type in the function using Python type annotations. This action helps you to use the IntelliSense and autocomplete features provided by many Python code editors.
+You can also explicitly declare the attribute types and return type in the function by using Python type annotations. This action helps you to use the IntelliSense and autocomplete features that many Python code editors provide.
```python import azure.functions
def main(req: azure.functions.HttpRequest) -> str:
return f'Hello, {user}!' ```
-Use the Python annotations included in the [azure.functions.*](/python/api/azure-functions/azure.functions) package to bind input and outputs to your methods.
+Use the Python annotations included in the [azure.functions.*](/python/api/azure-functions/azure.functions) package to bind inputs and outputs to your methods.
## Alternate entry point
-You can change the default behavior of a function by optionally specifying the `scriptFile` and `entryPoint` properties in the *function.json* file. For example, the _function.json_ below tells the runtime to use the `customentry()` method in the _main.py_ file, as the entry point for your Azure Function.
+You can change the default behavior of a function by optionally specifying the `scriptFile` and `entryPoint` properties in the *function.json* file. For example, the following _function.json_ file tells the runtime to use the `customentry()` method in the _main.py_ file as the entry point for your function:
```json {
You can change the default behavior of a function by optionally specifying the `
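For reference, here's a minimal sketch of what the matching *main.py* might contain. The `customentry()` name comes from the preceding *function.json*; the folder name and the HTTP trigger binding named `req` are assumed for illustration.

```python
# <project_root>/my_function/main.py
import azure.functions as func


def customentry(req: func.HttpRequest) -> func.HttpResponse:
    # The runtime calls this method instead of main() because function.json
    # sets "scriptFile": "main.py" and "entryPoint": "customentry".
    return func.HttpResponse("Handled by customentry()")
```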
## Folder structure
-The recommended folder structure for a Python Functions project looks like the following example:
+The recommended folder structure for an Azure Functions project in Python looks like the following example:
``` <project_root>/
The recommended folder structure for a Python Functions project looks like the f
| - requirements.txt | - Dockerfile ```
-The main project folder (<project_root>) can contain the following files:
+The main project folder (*<project_root>*) can contain the following files:
-* *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file).
-* *requirements.txt*: Contains the list of Python packages the system installs when publishing to Azure.
-* *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).
-* *.vscode/*: (Optional) Contains store VSCode configuration. To learn more, see [VSCode setting](https://code.visualstudio.com/docs/getstarted/settings).
-* *.venv/*: (Optional) Contains a Python virtual environment used by local development.
-* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](functions-create-function-linux-custom-image.md).
+* *local.settings.json*: Used to store app settings and connection strings when functions are running locally. This file isn't published to Azure. To learn more, see [Local settings file](functions-develop-local.md#local-settings-file).
+* *requirements.txt*: Contains the list of Python packages that the system installs when you're publishing to Azure.
+* *host.json*: Contains configuration options that affect all functions in a function app instance. This file is published to Azure. Not all options are supported when functions are running locally. To learn more, see the [host.json reference](functions-host-json.md).
+* *.vscode/*: (Optional) Contains stored Visual Studio Code configurations. To learn more, see [User and Workspace Settings](https://code.visualstudio.com/docs/getstarted/settings).
+* *.venv/*: (Optional) Contains a Python virtual environment that's used for local development.
+* *Dockerfile*: (Optional) Used when you're publishing your project in a [custom container](functions-create-function-linux-custom-image.md).
* *tests/*: (Optional) Contains the test cases of your function app.
-* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings being published.
+* *.funcignore*: (Optional) Declares files that shouldn't be published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore the local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings from being published.
-Each function has its own code file and binding configuration file (function.json).
+Each function has its own code file and binding configuration file (*function.json*).
-When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself, which means `host.json` should be in the package root. We recommend that you maintain your tests in a folder along with other functions, in this example `tests/`. For more information, see [Unit Testing](#unit-testing).
+When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself. That means *host.json* should be in the package root. We recommend that you maintain your tests in a folder along with other functions. In this example, the folder is *tests/*. For more information, see [Unit testing](#unit-testing).
## Import behavior
-You can import modules in your function code using both absolute and relative references. Based on the folder structure shown above, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*:
+You can import modules in your function code by using both absolute and relative references. Based on the folder structure shown earlier, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*:
```python from shared_code import my_first_helper_function #(absolute)
from . import example #(relative)
``` > [!NOTE]
-> The *shared_code/* folder needs to contain an \_\_init\_\_.py file to mark it as a Python package when using absolute import syntax.
+> The *shared_code/* folder needs to contain an *\_\_init\_\_.py* file to mark it as a Python package when you're using absolute import syntax.
-The following \_\_app\_\_ import and beyond top-level relative import are deprecated, since it isn't supported by static type checker and not supported by Python test frameworks:
+The following *\_\_app\_\_* import and the beyond-top-level relative import are deprecated. The static type checker and the Python test frameworks don't support them:
```python from __app__.shared_code import my_first_helper_function #(deprecated __app__ import)
from __app__.shared_code import my_first_helper_function #(deprecated __app__ im
from ..shared_code import my_first_helper_function #(deprecated beyond top-level relative import) ```
-## Triggers and Inputs
+## Triggers and inputs
-Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're different in the `function.json` file, usage is identical in Python code. Connection strings or secrets for trigger and input sources map to values in the `local.settings.json` file when running locally, and the application settings when running in Azure.
+Inputs are divided into two categories in Azure Functions: trigger input and other binding input. Although they're different in the *function.json* file, usage is identical in Python code. When functions are running locally, connection strings or secrets required by trigger and input sources are maintained in the `Values` collection of the *local.settings.json* file. When functions are running in Azure, those same connection strings or secrets are stored securely as [application settings](functions-how-to-use-azure-function-app-settings.md#settings).
-For example, the following code demonstrates the difference between the two:
+The following example code demonstrates the difference between the two:
```json // function.json
def main(req: func.HttpRequest,
logging.info(f'Python HTTP triggered function processed: {obj.read()}') ```
-When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the storage account specified is the connection string found in the AzureWebJobsStorage app setting, which is the same storage account used by the function app.
-
+When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from Azure Blob Storage based on the ID in the route URL and made available as `obj` in the function body. Here, the storage account specified is the connection string found in the `AzureWebJobsStorage` app setting, which is the same storage account that the function app uses.
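Here's a fuller sketch of what the corresponding *\_\_init\_\_.py* might look like for this pattern. It assumes the blob input binding is named `obj` in *function.json*, as described above, and that the function returns a simple HTTP response.

```python
import logging

import azure.functions as func


def main(req: func.HttpRequest, obj: func.InputStream) -> func.HttpResponse:
    # req is the HTTP trigger input; obj is the blob input that function.json
    # binds by using the ID from the route URL.
    logging.info(f'Python HTTP-triggered function processed: {obj.read()}')
    return func.HttpResponse("Blob content logged.")
```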
## Outputs
-Output can be expressed both in return value and output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters.
+Output can be expressed in the return value and in output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters.
-To use the return value of a function as the value of an output binding, the `name` property of the binding should be set to `$return` in `function.json`.
+To use the return value of a function as the value of an output binding, set the `name` property of the binding to `$return` in *function.json*.
-To produce multiple outputs, use the `set()` method provided by the [`azure.functions.Out`](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function can push a message to a queue and also return an HTTP response.
+To produce multiple outputs, use the `set()` method provided by the [azure.functions.Out](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function can push a message to a queue and return an HTTP response:
```json {
def main(req: func.HttpRequest,
Access to the Azure Functions runtime logger is available via a root [`logging`](https://docs.python.org/3/library/logging.html#module-logging) handler in your function app. This logger is tied to Application Insights and allows you to flag warnings and errors that occur during the function execution.
-The following example logs an info message when the function is invoked via an HTTP trigger.
+The following example logs an info message when the function is invoked via an HTTP trigger:
```python import logging
More logging methods are available that let you write to the console at differen
| Method | Description | | - | |
-| **`critical(_message_)`** | Writes a message with level CRITICAL on the root logger. |
-| **`error(_message_)`** | Writes a message with level ERROR on the root logger. |
-| **`warning(_message_)`** | Writes a message with level WARNING on the root logger. |
-| **`info(_message_)`** | Writes a message with level INFO on the root logger. |
-| **`debug(_message_)`** | Writes a message with level DEBUG on the root logger. |
+| `critical(_message_)` | Writes a message with level CRITICAL on the root logger. |
+| `error(_message_)` | Writes a message with level ERROR on the root logger. |
+| `warning(_message_)` | Writes a message with level WARNING on the root logger. |
+| `info(_message_)` | Writes a message with level INFO on the root logger. |
+| `debug(_message_)` | Writes a message with level DEBUG on the root logger. |
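For example, a short sketch that writes at several of these levels from an HTTP-triggered function (the function shape here is only illustrative):

```python
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.debug('Request received.')            # DEBUG: verbose diagnostic detail
    logging.info('Processing the request.')       # INFO: normal progress messages
    if not req.params.get('name'):
        logging.warning('No name was provided.')  # WARNING: unexpected but recoverable
    return func.HttpResponse("Done")
```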
To learn more about logging, see [Monitor Azure Functions](functions-monitoring.md). ### Log custom telemetry
-By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). This extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
+By default, the Azure Functions runtime collects logs and other telemetry data that your functions generate. This telemetry ends up as traces in Application Insights. By default, [triggers and bindings](functions-triggers-bindings.md#supported-bindings) also collect request and dependency telemetry for certain Azure services.
+
+To collect custom request and custom dependency telemetry outside bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). The Azure Functions extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib).
>[!NOTE] >To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1`. You also need to switch to using the Application Insights connection string by adding the [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string) setting to your [application settings](functions-how-to-use-azure-function-app-settings.md#settings), if it's not already there.
def main(req, context):
}) ```
-## HTTP Trigger and bindings
+## HTTP trigger and bindings
+
+The HTTP trigger is defined in the *function.json* file. The `name` parameter of the binding must match the named parameter in the function.
-The HTTP trigger is defined in the function.json file. The `name` of the binding must match the named parameter in the function.
-In the previous examples, a binding name `req` is used. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned.
+The previous examples use the binding name `req`. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned.
-From the [HttpRequest] object, you can get request headers, query parameters, route parameters, and the message body.
+From the `HttpRequest` object, you can get request headers, query parameters, route parameters, and the message body.
-The following example is from the [HTTP trigger template for Python](https://github.com/Azure/azure-functions-templates/tree/dev/Functions.Templates/Templates/HttpTrigger-Python).
+The following example is from the [HTTP trigger template for Python](https://github.com/Azure/azure-functions-templates/tree/dev/Functions.Templates/Templates/HttpTrigger-Python):
```python def main(req: func.HttpRequest) -> func.HttpResponse:
def main(req: func.HttpRequest) -> func.HttpResponse:
) ```
-In this function, the value of the `name` query parameter is obtained from the `params` parameter of the [HttpRequest] object. The JSON-encoded message body is read using the `get_json` method.
+In this function, the value of the `name` query parameter is obtained from the `params` parameter of the `HttpRequest` object. The JSON-encoded message body is read using the `get_json` method.
-Likewise, you can set the `status_code` and `headers` for the response message in the returned [HttpResponse] object.
+Likewise, you can set the `status_code` and `headers` information for the response message in the returned `HttpResponse` object.
## Web frameworks You can use WSGI and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
-First, the function.json file must be updated to include a `route` in the HTTP trigger, as shown in the following example:
+First, the *function.json* file must be updated to include `route` in the HTTP trigger, as shown in the following example:
```json {
First, the function.json file must be updated to include a `route` in the HTTP t
} ```
-The host.json file must also be updated to include an HTTP `routePrefix`, as shown in the following example.
+The *host.json* file must also be updated to include an HTTP `routePrefix` value, as shown in the following example:
```json {
The host.json file must also be updated to include an HTTP `routePrefix`, as sho
} ```
-Update the Python code file `init.py`, depending on the interface used by your framework. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask:
+Update the Python code file *\_\_init\_\_.py*, based on the interface that your framework uses. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask:
# [ASGI](#tab/asgi)
def main(req: func.HttpRequest, context) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.') return func.WsgiMiddleware(app).handle(req, context) ```
-For a full example, see [Using Flask Framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
+For a full example, see [Using the Flask framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/).
+## Scaling and performance
-## Scaling and Performance
-
-For scaling and performance best practices for Python function apps, see the [Python scale and performance article](python-scale-performance-reference.md).
+For scaling and performance best practices for Python function apps, see [Improve throughput performance of Python apps in Azure Functions](python-scale-performance-reference.md).
## Context
def main(req: azure.functions.HttpRequest,
return f'{context.invocation_id}' ```
-The [**Context**](/python/api/azure-functions/azure.functions.context) class has the following string attributes:
+The [Context](/python/api/azure-functions/azure.functions.context) class has the following string attributes:
-`function_directory`
-The directory in which the function is running.
+- `function_directory`: Directory in which the function is running.
-`function_name`
-Name of the function.
+- `function_name`: Name of the function.
-`invocation_id`
-ID of the current function invocation.
+- `invocation_id`: ID of the current function invocation.
-`trace_context`
-Context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/).
+- `trace_context`: Context for distributed tracing. For more information, see [Trace Context](https://www.w3.org/TR/trace-context/) on the W3C website.
-`retry_context`
-Context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies).
+- `retry_context`: Context for retries to the function. For more information, see [Retry policies](./functions-bindings-errors.md#retry-policies).
## Global variables
-It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. In order to cache the results of an expensive computation, declare it as a global variable.
+It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. To cache the results of an expensive computation, declare it as a global variable:
```python CACHED_DATA = None
def main(req):
## Environment variables
-In Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code.
+In Azure Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code:
| Method | Description | | | |
-| **`os.environ["myAppSetting"]`** | Tries to get the application setting by key name, raising an error when unsuccessful. |
-| **`os.getenv("myAppSetting")`** | Tries to get the application setting by key name, returning null when unsuccessful. |
+| `os.environ["myAppSetting"]` | Tries to get the application setting by key name. It raises an error when unsuccessful. |
+| `os.getenv("myAppSetting")` | Tries to get the application setting by key name. It returns `None` when unsuccessful. |
Both of these ways require you to declare `import os`.
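For example, here's a small sketch that reads a hypothetical app setting named `myAppSetting` in both ways:

```python
import os

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Raises a KeyError if the setting isn't defined.
    required_value = os.environ["myAppSetting"]

    # Returns None (or the supplied default) if the setting isn't defined.
    optional_value = os.getenv("myAppSetting", "default-value")

    return func.HttpResponse(f"{required_value} / {optional_value}")
```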
For local development, application settings are [maintained in the local.setting
## Python version
-Azure Functions supports the following Python versions:
+Azure Functions supports the following Python versions. These are official Python distributions.
-| Functions version | Python<sup>*</sup> versions |
+| Functions version | Python versions |
| -- | -- | | 4.x | 3.9<br/> 3.8<br/>3.7 | | 3.x | 3.9<br/> 3.8<br/>3.7<br/>3.6 | | 2.x | 3.7<br/>3.6 |
-<sup>*</sup>Official Python distributions
-
-To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The Functions runtime version is set by the `--functions-version` option. The Python version is set when the function app is created and can't be changed.
+To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The `--functions-version` option sets the Azure Functions runtime version. The Python version is set when the function app is created and can't be changed.
The runtime uses the available Python version when you run it locally.

### Changing Python version
-To set a Python function app to a specific language version, you need to specify the language and the version of the language in `LinuxFxVersion` field in site config. For example, to change Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
+To set a Python function app to a specific language version, you need to specify the language and the version of the language in `linuxFxVersion` field in site configuration. For example, to change Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`.
-To learn more about Azure Functions runtime support policy, refer to this [article](./language-support-policy.md)
+To learn more about the Azure Functions runtime support policy, see [Language runtime support policy](./language-support-policy.md).
-To see the full list of supported Python versions functions apps, refer to this [article](./supported-languages.md)
+To see the full list of supported Python versions for function apps, see [Supported languages in Azure Functions](./supported-languages.md).
# [Azure CLI](#tab/azurecli-linux)
-You can view and set the `linuxFxVersion` from the Azure CLI.
-
-Using the Azure CLI, view the current `linuxFxVersion` with the [az functionapp config show](/cli/azure/functionapp/config) command.
+You can view and set `linuxFxVersion` from the Azure CLI by using the [az functionapp config show](/cli/azure/functionapp/config) command. Replace `<function_app>` with the name of your function app. Replace `<my_resource_group>` with the name of the resource group for your function app.
```azurecli-interactive az functionapp config show --name <function_app> \ --resource-group <my_resource_group> ```
-In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app.
-
-You see the `linuxFxVersion` in the following output, which has been truncated for clarity:
+You see `linuxFxVersion` in the following output, which has been truncated for clarity:
```output {
You see the `linuxFxVersion` in the following output, which has been truncated f
} ```
-You can update the `linuxFxVersion` setting in the function app with the [az functionapp config set](/cli/azure/functionapp/config) command.
+You can update the `linuxFxVersion` setting in the function app by using the [az functionapp config set](/cli/azure/functionapp/config) command. In the following code:
+
+- Replace `<FUNCTION_APP>` with the name of your function app.
+- Replace `<RESOURCE_GROUP>` with the name of the resource group for your function app.
+- Replace `<LINUX_FX_VERSION>` with the Python version that you want to use, prefixed by `python|`. For example: `python|3.9`.
```azurecli-interactive az functionapp config set --name <FUNCTION_APP> \
az functionapp config set --name <FUNCTION_APP> \
--linux-fx-version <LINUX_FX_VERSION> ```
-Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<LINUX_FX_VERSION>` with the Python version you want to use, prefixed by `python|` for example, `python|3.9`.
-
-You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az-login) to sign in.
+You can run the command from [Azure Cloud Shell](../cloud-shell/overview.md) by selecting **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to run the command after you use [az login](/cli/azure/reference-index#az-login) to sign in.
-The function app restarts after the change is made to the site config.
+The function app restarts after you change the site configuration.
## Package management
-When developing locally using the Azure Functions Core Tools or Visual Studio Code, add the names and versions of the required packages to the `requirements.txt` file and install them using `pip`.
+When you're developing locally by using the Azure Functions Core Tools or Visual Studio Code, add the names and versions of the required packages to the *requirements.txt* file and install them by using `pip`.
-For example, the following requirements file and pip command can be used to install the `requests` package from PyPI.
+For example, you can use the following requirements file and `pip` command to install the `requests` package from PyPI:
```txt requests==2.19.1
pip install -r requirements.txt
## Publishing to Azure
-When you're ready to publish, make sure that all your publicly available dependencies are listed in the requirements.txt file. You can locate this file at the root of your project directory.
+When you're ready to publish, make sure that all your publicly available dependencies are listed in the *requirements.txt* file. This file is at the root of your project directory.
-Project files and folders that are excluded from publishing, including the virtual environment folder, you can find them in the root directory of your project.
+You can also find project files and folders that are excluded from publishing, including the virtual environment folder, in the root directory of your project.
-There are three build actions supported for publishing your Python project to Azure: remote build, local build, and builds using custom dependencies.
+Three build actions are supported for publishing your Python project to Azure: remote build, local build, and builds that use custom dependencies.
-You can also use Azure Pipelines to build your dependencies and publish using continuous delivery (CD). To learn more, see [Continuous delivery by using Azure DevOps](functions-how-to-azure-devops.md).
+You can also use Azure Pipelines to build your dependencies and publish by using continuous delivery (CD). To learn more, see [Continuous delivery by using Azure DevOps](functions-how-to-azure-devops.md).
### Remote build
-When you use remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use remote build when developing Python apps on Windows. If your project has custom dependencies, you can [use remote build with extra index URL](#remote-build-with-extra-index-url).
+When you use a remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use a remote build when you're developing Python apps on Windows. If your project has custom dependencies, you can [use a remote build with an extra index URL](#remote-build-with-extra-index-url).
-Dependencies are obtained remotely based on the contents of the requirements.txt file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, the Azure Functions Core Tools requests a remote build when you use the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish your Python project to Azure.
+Dependencies are obtained remotely based on the contents of the *requirements.txt* file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, Azure Functions Core Tools requests a remote build when you use the following [func azure functionapp publish](functions-run-local.md#publish) command to publish your Python project to Azure. Replace `<APP_NAME>` with the name of your function app in Azure.
```bash func azure functionapp publish <APP_NAME> ```
-Remember to replace `<APP_NAME>` with the name of your function app in Azure.
-
-The [Azure Functions Extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) also requests a remote build by default.
+The [Azure Functions extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) also requests a remote build by default.
### Local build
-Dependencies are obtained locally based on the contents of the requirements.txt file. You can prevent doing a remote build by using the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish with a local build.
+Dependencies are obtained locally based on the contents of the *requirements.txt* file. You can prevent a remote build by using the following [func azure functionapp publish](functions-run-local.md#publish) command to publish with a local build. Replace `<APP_NAME>` with the name of your function app in Azure.
```command func azure functionapp publish <APP_NAME> --build local ```
-Remember to replace `<APP_NAME>` with the name of your function app in Azure.
-
-When you use the `--build local` option, project dependencies are read from the requirements.txt file and those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in a larger deployment package being uploaded to Azure. If for some reason, you can't get requirements.txt file by Core Tools, you must use the custom dependencies option for publishing.
+When you use the `--build local` option, project dependencies are read from the *requirements.txt* file. Those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in the upload of a larger deployment package to Azure. If you can't get the *requirements.txt* file by using Core Tools, you must use the custom dependencies option for publishing.
-We don't recommend using local builds when developing locally on Windows.
+We don't recommend using local builds when you're developing locally on Windows.
### Custom dependencies
-When your project has dependencies not found in the [Python Package Index](https://pypi.org/), there are two ways to build the project. The build method depends on how you build the project.
+When your project has dependencies not found in the [Python Package Index](https://pypi.org/), there are two ways to build the project.
#### Remote build with extra index URL
-When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named `PIP_EXTRA_INDEX_URL`. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run `pip install` using the `--extra-index-url` option. To learn more, see the [Python pip install documentation](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
+When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named `PIP_EXTRA_INDEX_URL`. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run `pip install` with the `--extra-index-url` option. To learn more, see the [Python pip install documentation](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
You can also use basic authentication credentials with your extra package index URLs. To learn more, see [Basic authentication credentials](https://pip.pypa.io/en/stable/user_guide/#basic-authentication-credentials) in Python documentation.
-#### Install local packages
+#### Installing local packages
-If your project uses packages not publicly available to our tools, you can make them available to your app by putting them in the \_\_app\_\_/.python_packages directory. Before publishing, run the following command to install the dependencies locally:
+If your project uses packages that aren't publicly available, you can make them available to your app by putting them in the *\_\_app\_\_/.python_packages* directory. Before publishing, run the following command to install the dependencies locally:
```command pip install --target="<PROJECT_DIR>/.python_packages/lib/site-packages" -r requirements.txt ```
-When using custom dependencies, you should use the `--no-build` publishing option, since you've already installed the dependencies into the project folder.
+When you're using custom dependencies, use the following `--no-build` publishing option because you've already installed the dependencies into the project folder. Replace `<APP_NAME>` with the name of your function app in Azure.
```command func azure functionapp publish <APP_NAME> --no-build ```
-Remember to replace `<APP_NAME>` with the name of your function app in Azure.
-
-## Unit Testing
+## Unit testing
-Functions written in Python can be tested like other Python code using standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the `azure.functions` package. Since the [`azure.functions`](https://pypi.org/project/azure-functions/) package isn't immediately available, be sure to install it via your `requirements.txt` file as described in the [package management](#package-management) section above.
+You can test functions written in Python the same way that you test other Python code: through standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the [azure.functions](https://pypi.org/project/azure-functions/) package. Because the `azure.functions` package isn't immediately available, be sure to install it via your *requirements.txt* file as described in the earlier [Package management](#package-management) section.
-Take *my_second_function* as an example, following is a mock test of an HTTP triggered function:
+Take *my_second_function* as an example. The following is a mock test of an HTTP-triggered function.
-First we need to create *<project_root>/my_second_function/function.json* file and define this function as an http trigger.
+First, to create the *<project_root>/my_second_function/function.json* file and define this function as an HTTP trigger, use the following code:
```json {
First we need to create *<project_root>/my_second_function/function.json* file a
} ```
-Now, we can implement the *my_second_function* and the *shared_code.my_second_helper_function*.
+Now, you can implement *my_second_function* and *shared_code.my_second_helper_function*:
```python # <project_root>/my_second_function/__init__.py
import logging
# Use absolute import to resolve shared_code modules from shared_code import my_second_helper_function
-# Define an http trigger which accepts ?value=<int> query parameter
+# Define an HTTP trigger that accepts the ?value=<int> query parameter
# Double the value and return the result in HttpResponse def main(req: func.HttpRequest) -> func.HttpResponse: logging.info('Executing my_second_function.')
def double(value: int) -> int:
return value * 2 ```
-We can start writing test cases for our http trigger.
+You can start writing test cases for your HTTP trigger:
```python # <project_root>/tests/test_my_second_function.py
class TestFunction(unittest.TestCase):
) ```
-Inside your `.venv` Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result.
+Inside your *.venv* Python virtual environment, install your favorite Python test framework (for example, by running `pip install pytest`). Then run `pytest tests` to check the test result.
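For illustration, here's a minimal sketch of what such a mock test might look like. It assumes the trigger's entry point is importable as `my_second_function.main` and that the doubled value appears somewhere in the HTTP response body; adjust the import path and assertions to match your project.

```python
# <project_root>/tests/test_my_second_function.py
import unittest

import azure.functions as func
from my_second_function import main


class TestMySecondFunction(unittest.TestCase):
    def test_doubles_the_value(self):
        # Build a mock HTTP request carrying the ?value=<int> query parameter.
        req = func.HttpRequest(
            method='GET',
            body=None,
            url='/api/my_second_function',
            params={'value': '21'})

        # Call the entry point directly, as the Functions host would.
        resp = main(req)

        # The call should succeed; adjust the body assertion to match the
        # exact response your implementation returns.
        self.assertEqual(resp.status_code, 200)
        self.assertIn(b'42', resp.get_body())
```

Because the test constructs `func.HttpRequest` directly, it doesn't need the Functions host at all and runs with plain `pytest` or `unittest`.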
## Temporary files
-The `tempfile.gettempdir()` method returns a temporary folder, which on Linux is `/tmp`. Your application can use this directory to store temporary files generated and used by your functions during execution.
+The `tempfile.gettempdir()` method returns a temporary folder, which on Linux is */tmp*. Your application can use this directory to store temporary files that your functions generate and use during execution.
> [!IMPORTANT]
-> Files written to the temporary directory aren't guaranteed to persist across invocations. During scale out, temporary files aren't shared between instances.
+> Files written to the temporary directory aren't guaranteed to persist across invocations. During scale-out, temporary files aren't shared between instances.
-The following example creates a named temporary file in the temporary directory (`/tmp`):
+The following example creates a named temporary file in the temporary directory (*/tmp*):
```python import logging
from os import listdir
filesDirListInTemp = listdir(tempFilePath) ```
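As a rough sketch of the same pattern, the following hypothetical HTTP-triggered function writes a scratch file to the temporary directory and then lists that directory; the file payload and response text are illustrative only.

```python
import logging
import tempfile
from os import listdir

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # gettempdir() returns /tmp on Linux. Files written here aren't
    # guaranteed to persist across invocations or scaled-out instances.
    temp_dir = tempfile.gettempdir()

    # Create a named temporary file in the temporary directory and write to it.
    with tempfile.NamedTemporaryFile(dir=temp_dir, delete=False) as fp:
        fp.write(b'scratch data used only during this invocation')
        temp_file_name = fp.name

    logging.info('Wrote %s; %s now contains %d entries',
                 temp_file_name, temp_dir, len(listdir(temp_dir)))
    return func.HttpResponse(f'Created temporary file {temp_file_name}')
```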
-We recommend that you maintain your tests in a folder separate from the project folder. This action keeps you from deploying test code with your app.
+We recommend that you maintain your tests in a folder that's separate from the project folder. This action keeps you from deploying test code with your app.
## Preinstalled libraries
-There are a few libraries that come with the Python Functions runtime.
+A few libraries come with the runtime for Azure Functions on Python.
### Python Standard Library
-The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On the Unix-based systems, they're provided by package collections.
+The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On Unix systems, package collections provide them.
-To view the full details of the list of these libraries, see the links below:
+To view the full details of these libraries, use these links:
* [Python 3.6 Standard Library](https://docs.python.org/3.6/library/) * [Python 3.7 Standard Library](https://docs.python.org/3.7/library/) * [Python 3.8 Standard Library](https://docs.python.org/3.8/library/) * [Python 3.9 Standard Library](https://docs.python.org/3.9/library/)
-### Azure Functions Python worker dependencies
+### Worker dependencies
-The Functions Python worker requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they may not be available to your code when running outside of Azure Functions. You can find a detailed list of dependencies in the **install\_requires** section in the [setup.py](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file.
+The Python worker for Azure Functions requires a specific set of libraries. You can also use these libraries in your functions, but they aren't part of the Python Standard Library. If your functions rely on any of these libraries, they might not be available to your code when you're running outside Azure Functions. You can find a detailed list of dependencies in the `install_requires` section in the [setup.py](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file.
> [!NOTE]
-> If your function app's requirements.txt contains an `azure-functions-worker` entry, remove it. The functions worker is automatically managed by Azure Functions platform, and we regularly update it with new features and bug fixes. Manually installing an old version of worker in requirements.txt may cause unexpected issues.
+> If your function app's *requirements.txt* file contains an `azure-functions-worker` entry, remove it. The Azure Functions platform automatically manages this worker, and we regularly update it with new features and bug fixes. Manually installing an old version of the worker in *requirements.txt* might cause unexpected problems.
> [!NOTE]
-> If your package contains certain libraries that may collide with worker's dependencies (e.g. protobuf, tensorflow, grpcio), please configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring worker's dependencies. This feature is in preview.
+> If your package contains certain libraries that might collide with the worker's dependencies (for example, protobuf, TensorFlow, or grpcio), configure [PYTHON_ISOLATE_WORKER_DEPENDENCIES](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring to the worker's dependencies. This feature is in preview.
-### Azure Functions Python library
+### Python library for Azure Functions
-Every Python worker update includes a new version of [Azure Functions Python library (azure.functions)](https://github.com/Azure/azure-functions-python-library). This approach makes it easier to continuously update your Python function apps, because each update is backwards-compatible. A list of releases of this library can be found in [azure-functions PyPi](https://pypi.org/project/azure-functions/#history).
+Every Python worker update includes a new version of the [Python library for Azure Functions (azure.functions)](https://github.com/Azure/azure-functions-python-library). This approach makes it easier to continuously update your Python function apps, because each update is backward compatible. You can find a list of releases of this library on the [azure-functions page on PyPI](https://pypi.org/project/azure-functions/#history).
-The runtime library version is fixed by Azure, and it can't be overridden by requirements.txt. The `azure-functions` entry in requirements.txt is only for linting and customer awareness.
+The runtime library version is fixed by Azure, and *requirements.txt* can't override it. The `azure-functions` entry in *requirements.txt* is only for linting and customer awareness.
-Use the following code to track the actual version of the Python Functions library in your runtime:
+Use the following code to track the version of the Python library for Azure Functions in your runtime:
```python getattr(azure.functions, '__version__', '< 1.2.1')
getattr(azure.functions, '__version__', '< 1.2.1')
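For example, a function might log the library version it's actually running against. The following sketch assumes an HTTP trigger; the `'< 1.2.1'` fallback covers library releases that don't expose `__version__`.

```python
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Newer releases of the azure.functions library expose __version__;
    # older releases fall back to the '< 1.2.1' placeholder.
    version = getattr(func, '__version__', '< 1.2.1')
    logging.info('azure.functions library version: %s', version)
    return func.HttpResponse(version)
```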
### Runtime system libraries
-For a list of preinstalled system libraries in Python worker Docker images, see the links below:
+The following table lists preinstalled system libraries in Docker images for the Python worker:
| Functions runtime | Debian version | Python versions | ||||
Extensions are imported in your function code much like a standard Python librar
| Scope | Description | | | |
-| **Application-level** | When imported into any function trigger, the extension applies to every function execution in the app. |
-| **Function-level** | Execution is limited to only the specific function trigger into which it's imported. |
+| **Application level** | When the extension is imported into any function trigger, it applies to every function execution in the app. |
+| **Function level** | Execution is limited to only the specific function trigger into which it's imported. |
-Review the information for a given extension to learn more about the scope in which the extension runs.
+Review the information for an extension to learn more about the scope in which the extension runs.
-Extensions implement a Python worker extension interface. This action lets the Python worker process call into the extension code during the function execution lifecycle. To learn more, see [Creating extensions](#creating-extensions).
+Extensions implement a Python worker extension interface. This interface lets the Python worker process call into the extension code during the function execution lifecycle.
### Using extensions You can use a Python worker extension library in your Python functions by following these basic steps:
-1. Add the extension package in the requirements.txt file for your project.
+1. Add the extension package in the *requirements.txt* file for your project.
1. Install the library into your app. 1. Add the application setting `PYTHON_ENABLE_WORKER_EXTENSIONS`:
- + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file).
- + Azure: add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings).
+ + To add the setting locally, add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file).
+ + To add the setting in Azure, add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings).
1. Import the extension module into your function trigger.
-1. Configure the extension instance, if needed. Configuration requirements should be called-out in the extension's documentation.
+1. Configure the extension instance, if needed. Configuration requirements should be called out in the extension's documentation.
> [!IMPORTANT]
-> Third-party Python worker extension libraries are not supported or warrantied by Microsoft. You must make sure that any extensions you use in your function app is trustworthy, and you bear the full risk of using a malicious or poorly written extension.
+> Microsoft doesn't support or warranty third-party Python worker extension libraries. Make sure that any extensions you use in your function app are trustworthy. You bear the full risk of using a malicious or poorly written extension.
-Third-parties should provide specific documentation on how to install and consume their specific extension in your function app. For a basic example of how to consume an extension, see [Consuming your extension](develop-python-worker-extensions.md#consume-your-extension-locally).
+Third parties should provide documentation on how to install and consume their extension in your function app. For a basic example of how to consume an extension, see [Consuming your extension](develop-python-worker-extensions.md#consume-your-extension-locally).
Here are examples of using extensions in a function app, by scope:
-# [Application-level](#tab/application-level)
+# [Application level](#tab/application-level)
```python # <project_root>/requirements.txt
AppExtension.configure(key=value)
def main(req, context): # Use context.app_ext_attributes here ```
-# [Function-level](#tab/function-level)
+# [Function level](#tab/function-level)
```python # <project_root>/requirements.txt function-level-extension==1.0.0
def main(req, context):
### Creating extensions
-Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer design, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
+Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories.
To learn how to create, package, publish, and consume a Python worker extension package, see [Develop Python worker extensions for Azure Functions](develop-python-worker-extensions.md).
An extension inherited from [`AppExtensionBase`](https://github.com/Azure/azure-
| Method | Description | | | |
-| **`init`** | Called after the extension is imported. |
-| **`configure`** | Called from function code when needed to configure the extension. |
-| **`post_function_load_app_level`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to local file in this directory fails. |
-| **`pre_invocation_app_level`** | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
-| **`post_invocation_app_level`** | Called right after the function execution completes. The function context, function invocation arguments, and the invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
+| `init` | Called after the extension is imported. |
+| `configure` | Called from function code when the extension needs to be configured. |
+| `post_function_load_app_level` | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only. Any attempt to write to a local file in this directory fails. |
+| `pre_invocation_app_level` | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
+| `post_invocation_app_level` | Called right after the function execution finishes. The function context, function invocation arguments, and invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
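To make these hooks concrete, here's a minimal sketch of a hypothetical application-level extension that logs how long each invocation takes. It assumes the `AppExtensionBase` and `Context` types exported by recent versions of the `azure.functions` library; it illustrates the shape of the hooks, not a production-ready extension.

```python
import typing
from logging import Logger
from time import time

from azure.functions import AppExtensionBase, Context


class TimerExtension(AppExtensionBase):
    """Illustrative application-level extension that times every invocation."""

    @classmethod
    def init(cls):
        # Runs once, after the extension module is imported.
        cls.start_times: typing.Dict[str, float] = {}

    @classmethod
    def pre_invocation_app_level(cls, logger: Logger, context: Context,
                                 *args, **kwargs) -> None:
        # Record when this invocation started.
        cls.start_times[context.invocation_id] = time()

    @classmethod
    def post_invocation_app_level(cls, logger: Logger, context: Context,
                                  *args, **kwargs) -> None:
        # Log the elapsed time once the invocation finishes.
        start = cls.start_times.pop(context.invocation_id, None)
        if start is not None:
            logger.info('%s took %.3f seconds',
                        context.function_name, time() - start)
```

Because the hooks are class methods, state such as `start_times` is shared across every function in the app, which is what the application-level scope implies.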
#### Function-level extensions
An extension that inherits from [FuncExtensionBase](https://github.com/Azure/azu
| Method | Description | | | |
-| **`__init__`** | This method is the constructor of the extension. It's called when an extension instance is initialized in a specific function. When implementing this abstract method, you may want to accept a `filename` parameter and pass it to the parent's method `super().__init__(filename)` for proper extension registration. |
-| **`post_function_load`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to local file in this directory fails. |
-| **`pre_invocation`** | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
-| **`post_invocation`** | Called right after the function execution completes. The function context, function invocation arguments, and the invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
+| `__init__` | Called when an extension instance is initialized in a specific function. This method is the constructor of the extension. When you're implementing this abstract method, you might want to accept a `filename` parameter and pass it to the parent's `super().__init__(filename)` method for proper extension registration. |
+| `post_function_load` | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only. Any attempt to write to a local file in this directory fails. |
+| `pre_invocation` | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |
+| `post_invocation` | Called right after the function execution finishes. The function context, function invocation arguments, and invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. |
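As a companion sketch, a hypothetical function-level extension differs mainly in how it's registered: the constructor passes the trigger's file path to `super().__init__` so the extension applies only to that function. Again, this assumes the `FuncExtensionBase` type exported by recent versions of the `azure.functions` library and is illustrative only.

```python
from logging import Logger

from azure.functions import Context, FuncExtensionBase


class InvocationLogger(FuncExtensionBase):
    """Illustrative function-level extension that logs invocation boundaries."""

    def __init__(self, file_path: str):
        # Passing the trigger's file path registers the extension
        # for that specific function only.
        super().__init__(file_path)

    def pre_invocation(self, logger: Logger, context: Context,
                       *args, **kwargs) -> None:
        logger.info('Starting %s (invocation %s)',
                    context.function_name, context.invocation_id)

    def post_invocation(self, logger: Logger, context: Context,
                        *args, **kwargs) -> None:
        logger.info('Finished %s (invocation %s)',
                    context.function_name, context.invocation_id)
```

In the trigger that should use it, you'd typically construct an instance (for example, `InvocationLogger(__file__)`) inside that function's module so the registration picks up that function's directory.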
## Cross-origin resource sharing
By default, a host instance for Python can process only one function invocation
## <a name="shared-memory"></a>Shared memory (preview)
-To improve throughput, Functions let your out-of-process Python language worker share memory with the Functions host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
+To improve throughput, Azure Functions lets your out-of-process Python language worker share memory with the host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB.
-For example, you might enable shared memory to reduce bottlenecks when using Blob storage bindings to transfer payloads larger than 1 MB.
+For example, you might enable shared memory to reduce bottlenecks when using Azure Blob Storage bindings to transfer payloads larger than 1 MB.
-This functionality is available only for function apps running in Premium and Dedicated (App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory).
+This functionality is available only for function apps running in Premium and Dedicated (Azure App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory).
-## Known issues and FAQ
+## Known issues and FAQs
-Following is a list of troubleshooting guides for common issues:
+Here's a list of troubleshooting guides for common issues:
* [ModuleNotFoundError and ImportError](recover-python-functions.md#troubleshoot-modulenotfounderror) * [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc)
-All known issues and feature requests are tracked using [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
+All known issues and feature requests are tracked through the [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem.
## Next steps
For more information, see the following resources:
* [Azure Functions package API documentation](/python/api/azure-functions/azure.functions) * [Best practices for Azure Functions](functions-best-practices.md) * [Azure Functions triggers and bindings](functions-triggers-bindings.md)
-* [Blob storage bindings](functions-bindings-storage-blob.md)
-* [HTTP and Webhook bindings](functions-bindings-http-webhook.md)
-* [Queue storage bindings](functions-bindings-storage-queue.md)
+* [Blob Storage bindings](functions-bindings-storage-blob.md)
+* [HTTP and webhook bindings](functions-bindings-http-webhook.md)
+* [Azure Queue Storage bindings](functions-bindings-storage-queue.md)
* [Timer trigger](functions-bindings-timer.md) [Having issues? Let us know.](https://aka.ms/python-functions-ref-survey)
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
You can see an example of the Walls layer in the [sample Drawing package](https:
You can include a DWG layer that contains doors. Each door must overlap the edge of a unit from the Unit layer.
-Door openings in an Azure Maps dataset are represented as a single-line segment that overlaps multiple unit boundaries. The following images show how to convert geometry in the Door layer to opening features in a dataset.
+Door openings in an Azure Maps dataset are represented as a single-line segment that overlaps multiple unit boundaries. The following images show how Azure Maps converts Door layer geometry into opening features in a dataset.
![Four graphics that show the steps to generate openings](./media/drawing-requirements/opening-steps.png)
azure-monitor Agent Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md
# Log Analytics agent data sources in Azure Monitor The data that Azure Monitor collects from virtual machines with the legacy [Log Analytics](./log-analytics-agent.md) agent is defined by the data sources that you configure on the [Log Analytics workspace](../logs/data-platform-logs.md). Each data source creates records of a particular type with each type having its own set of properties.
-> [!IMPORTANT]
-> This article only covers data sources for the legacy [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. This agent **will be deprecated by August, 2024**. Please plan to [migrate to Azure Monitor agent](./azure-monitor-agent-migration.md) before that. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](agents-overview.md) for a list of the available agents and the data they can collect.
- ![Log data collection](media/agent-data-sources/overview.png) + > [!IMPORTANT] > The data sources described in this article apply only to virtual machines running the Log Analytics agent.
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
This article provides details on installing the Log Analytics agent on Linux computers using the following methods: * [Install the agent for Linux using a wrapper-script](#install-the-agent-using-wrapper-script) hosted on GitHub. This is the recommended method to install and upgrade the agent when the computer has connectivity with the Internet, directly or through a proxy server.
-* [Manually download and install](#install-the-agent-manually) the agent. This is required when the Linux computer does not have access to the Internet and will be communicating with Azure Monitor or Azure Automation through the [Log Analytics gateway](./gateway.md).
+* [Manually download and install](#install-the-agent-manually) the agent. This is required when the Linux computer doesn't have access to the Internet and will be communicating with Azure Monitor or Azure Automation through the [Log Analytics gateway](./gateway.md).
The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
->[!IMPORTANT]
-> The Log Analytics agents are on a **deprecation path** and will no longer be supported after **August 31, 2024**. If you use the Log Analytics agents to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
-- ## Supported operating systems
See [Overview of Azure Monitor agents](agents-overview.md#supported-operating-sy
>[!NOTE] >Running the Log Analytics Linux Agent in containers is not supported. If you need to monitor containers, please leverage the [Container Monitoring solution](../containers/containers.md) for Docker hosts or [Container insights](../containers/container-insights-overview.md) for Kubernetes.
-Starting with versions released after August 2018, we are making the following changes to our support model:
+Starting with versions released after August 2018, we're making the following changes to our support model:
* Only the server versions are supported, not client.
-* Focus support on any of the [Azure Linux Endorsed distros](../../virtual-machines/linux/endorsed-distros.md). Note that there may be some delay between a new distro/version being Azure Linux Endorsed and it being supported for the Log Analytics Linux agent.
+* Focus support on any of the [Azure Linux Endorsed distros](../../virtual-machines/linux/endorsed-distros.md). There may be some delay between a new distro/version being Azure Linux Endorsed and it being supported for the Log Analytics Linux agent.
* All minor releases are supported for each major version listed.
-* Versions that have passed their manufacturer's end-of-support date are not supported.
-* Only support VM images; containers, even those derived from official distro publishers' images, are not supported.
-* New versions of AMI are not supported.
+* Versions that have passed their manufacturer's end-of-support date aren't supported.
+* Only support VM images; containers, even those derived from official distro publishers' images, aren't supported.
+* New versions of AMI aren't supported.
* Only versions that run OpenSSL 1.x by default are supported. >[!NOTE]
Starting with versions released after August 2018, we are making the following c
Starting from Agent version 1.13.27, the Linux Agent will support both Python 2 and 3. We always recommend using the latest agent.
-If you are using an older version of the agent, you must have the Virtual Machine use Python 2 by default. If your virtual machine is using a distro that doesn't include Python 2 by default then you must install it. The following sample commands will install Python 2 on different distros.
+If you're using an older version of the agent, you must have the Virtual Machine use Python 2 by default. If your virtual machine is using a distro that doesn't include Python 2 by default, then you must install it. The following sample commands will install Python 2 on different distros.
- Red Hat, CentOS, Oracle: `yum install -y python2` - Ubuntu, Debian: `apt-get install -y python2` - SUSE: `zypper install -y python2`
-Again, only if you are using an older version of the agent, the python2 executable must be aliased to *python*. Following is one method that you can use to set this alias:
+Again, only if you're using an older version of the agent, the python2 executable must be aliased to *python*. Following is one method that you can use to set this alias:
1. Run the following command to remove any existing aliases.
The following are currently supported:
- FIPS - SELinux (Marketplace images for CentOS and RHEL with their default settings)
-The following are not supported:
+The following aren't supported:
- CIS - SELinux (custom hardening like MLS)
-CIS and SELinux hardening support is planned for [Azure Monitoring Agent](./azure-monitor-agent-overview.md). Further hardening and customization methods are not supported nor planned for OMS Agent. For instance, OS images like GitHub Enterprise Server which include customizations such as limitations to user account privileges are not supported.
+CIS and SELinux hardening support is planned for [Azure Monitoring Agent](./azure-monitor-agent-overview.md). Further hardening and customization methods aren't supported nor planned for OMS Agent. For instance, OS images like GitHub Enterprise Server which include customizations such as limitations to user account privileges aren't supported.
## Agent prerequisites
See [Log Analytics agent overview](./log-analytics-agent.md#network-requirements
## Workspace ID and key
-Regardless of the installation method used, you will require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then select **Agents management** in the **Settings** section.
+Regardless of the installation method used, you'll require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then select **Agents management** in the **Settings** section.
[![Workspace details](media/log-analytics-agent/workspace-details.png)](media/log-analytics-agent/workspace-details.png#lightbox)
docker-cimprov | 1.0.0 | Docker provider for OMI. Only installed if Docker is de
### Agent installation details
-After installing the Log Analytics agent for Linux packages, the following additional system-wide configuration changes are applied. These artifacts are removed when the omsagent package is uninstalled.
+After installing the Log Analytics agent for Linux packages, the following system-wide configuration changes are also applied. These artifacts are removed when the omsagent package is uninstalled.
* A non-privileged user named: `omsagent` is created. The daemon runs under this credential.
-* A sudoers *include* file is created in `/etc/sudoers.d/omsagent`. This authorizes `omsagent` to restart the syslog and omsagent daemons. If sudo *include* directives are not supported in the installed version of sudo, these entries will be written to `/etc/sudoers`.
+* A sudoers *include* file is created in `/etc/sudoers.d/omsagent`. This authorizes `omsagent` to restart the syslog and omsagent daemons. If sudo *include* directives aren't supported in the installed version of sudo, these entries will be written to `/etc/sudoers`.
* The syslog configuration is modified to forward a subset of events to the agent. For more information, see [Configure Syslog data collection](data-sources-syslog.md). On a monitored Linux computer, the agent is listed as `omsagent`. `omsconfig` is the Log Analytics agent for Linux configuration agent that looks for new portal side configuration every 5 minutes. The new and updated configuration is applied to the agent configuration files located at `/etc/opt/microsoft/omsagent/conf/omsagent.conf`.
Upgrading from a previous version, starting with version 1.0.0-47, is supported
## Cache information Data from the Log Analytics agent for Linux is cached on the local machine at *%STATE_DIR_WS%/out_oms_common*.buffer* before it's sent to Azure Monitor. Custom log data is buffered in *%STATE_DIR_WS%/out_oms_blob*.buffer*. The path may be different for some [solutions and data types](https://github.com/microsoft/OMS-Agent-for-Linux/search?utf8=%E2%9C%93&q=+buffer_path&type=).
-The agent attempts to upload every 20 seconds. If it fails, it will wait an exponentially increasing length of time until it succeeds: 30 seconds before the second attempt, 60 seconds before the third, 120 seconds ... and so on up to a maximum of 16 minutes between retries until it successfully connects again. The agent will retry up to 6 times for a given chunk of data before discarding and moving to the next one. This continues until the agent can successfully upload again. This means that data may be buffered up to approximately 30 minutes before being discarded.
+The agent attempts to upload every 20 seconds. If it fails, it waits an exponentially increasing length of time until it succeeds: 30 seconds before the second attempt, 60 seconds before the third, 120 seconds, and so on, up to a maximum of 16 minutes between retries until it successfully connects again. The agent retries up to 6 times for a given chunk of data before discarding and moving to the next one. This continues until the agent can successfully upload again. This means that data may be buffered up to approximately 30 minutes before being discarded.
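As a quick sanity check on that figure, the following sketch (assuming the doubling schedule described above, capped at 16 minutes, with six attempts per chunk) adds up the waits:

```python
# Sketch of the retry schedule described above: the wait doubles from 30 seconds,
# is capped at 16 minutes, and a chunk is retried up to 6 times before discard.
waits = []
wait = 30
for _ in range(6):
    waits.append(wait)
    wait = min(wait * 2, 16 * 60)

print(waits)            # [30, 60, 120, 240, 480, 960]
print(sum(waits) / 60)  # 31.5, roughly the ~30 minutes before a chunk is discarded
```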
The default cache size is 10 MB but can be modified in the [omsagent.conf file](https://github.com/microsoft/OMS-Agent-for-Linux/blob/e2239a0714ae5ab5feddcc48aa7a4c4f971417d4/installer/conf/omsagent.conf).
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
After initial deployment of the Log Analytics Windows or Linux agent in Azure Monitor, you may need to reconfigure the agent, upgrade it, or remove it from the computer if it has reached the retirement stage in its lifecycle. You can easily manage these routine maintenance tasks manually or through automation, which reduces both operational error and expenses. + ## Upgrading agent The Log Analytics agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on the deployment scenario and environment the VM is running in. The following methods can be used to upgrade the agent.
azure-monitor Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md
This article provides details on installing the Log Analytics agent on Windows c
The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
->[!IMPORTANT]
->The Log Analytics agents are on a **deprecation path** and will no longer be supported after **August 31, 2024**. If you use the Log Analytics agents to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
> [!NOTE] > Installing the Log Analytics agent will typically not require you to restart the machine.
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
You can use data collection endpoints to enable the Azure Monitor agent to commu
![Data collection endpoint network isolation](media/azure-monitor-agent-dce/data-collection-endpoint-network-isolation.png) ## Next steps-- [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal)
+- [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association)
- [Add endpoint to AMPLS resource](../logs/private-link-configure.md#connect-azure-monitor-resources)
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
The following prerequisites must be met prior to installing the Azure Monitor ag
- **Authentication**: [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) must be enabled on Azure virtual machines. Both system-assigned and user-assigned managed identities are supported. - **User-assigned**: This is recommended for large scale deployments, configurable via [built-in Azure policies](#using-azure-policy). It can be created once and shared across multiple VMs, and is thus more scalable than system-assigned. - **System-assigned**: This is suited for initial testing or small deployments. When used at scale (for example, for all VMs in a subscription) it results in substantial number of identities created (and deleted) in Azure AD (Azure Active Directory). To avoid this churn of identities, it is recommended to use user-assigned managed identities instead. **For Arc-enabled servers, system-assigned managed identity is enabled automatically** (as soon as you install the Arc agent) as it's the only supported type for Arc-enabled servers.
- - This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal).
+ - This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association).
- **Networking**: The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. Additionally, the virtual machine must have access to the following HTTPS endpoints: - global.handler.control.monitor.azure.com - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.azure.com)
The following prerequisites must be met prior to installing the Azure Monitor ag
## Using the Azure portal ### Install
-To install the Azure Monitor agent using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal) in the Azure portal. This not only creates the rule, but it also associates it to the selected resources and installs the Azure Monitor agent on them if not already installed.
+To install the Azure Monitor agent using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) in the Azure portal. This not only creates the rule, but it also associates it to the selected resources and installs the Azure Monitor agent on them if not already installed.
### Uninstall To uninstall the Azure Monitor agent using the Azure portal, navigate to your virtual machine, scale set or Arc-enabled server, select the **Extensions** tab and click on **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that pops up, click **Uninstall**.
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
The following tables show gap analyses for the **log types** that are currently
## Test migration by using the Azure portal To ensure safe deployment during migration, you should begin testing with a few resources in your nonproduction environment that are running the existing Log Analytics agent. After you can validate the data collected on these test resources, roll out to production by following the same steps.
-See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal) to start collecting some of the existing data types. Once you validate data is flowing as expected with the Azure Monitor agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
+See [create new data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to start collecting some of the existing data types. Once you validate data is flowing as expected with the Azure Monitor agent, check the `Category` column in the [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat) table for the value *Azure Monitor Agent* for AMA collected data. Ensure it matches data flowing through the existing Log Analytics agent.
## At-scale migration using Azure Policy
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Here is a comparison between client installer and VM extension for Azure Monitor
- `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.azure.com) - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opsinsights.azure.com) (If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
-6. Existing data collection rule(s) you wish to associate with the devices. If it doesn't exist already, [follow the guidance here to create data collection rule(s)](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi). **Do not associate the rule to any resources yet**.
+6. Existing data collection rules that you wish to associate with the devices. If they don't already exist, [follow the guidance here to create data collection rules](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association). **Do not associate the rules to any resources yet**.
## Install the agent 1. Download the Windows MSI installer for the agent using [this link](https://go.microsoft.com/fwlink/?linkid=2192409). You can also download it from **Monitor** > **Data Collection Rules** > **Create** experience on Azure portal (shown below):
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
#### 3. Associate DCR to Monitored Object
-Now we associate the Data Collection Rules (DCR) to the Monitored Object by creating a Data Collection Rule Associations. If you haven't already, [follow instructions here](./data-collection-rule-azure-monitor-agent.md#create-rule-and-associationusingrestapi) to create data collection rule(s) first.
+Now we associate the Data Collection Rules (DCR) to the Monitored Object by creating Data Collection Rule Associations. If you haven't already, [follow instructions here](./data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) to create data collection rules first.
**Permissions required**: Anyone who has 'Monitored Object Contributor' at an appropriate scope can perform this operation, as assigned in step 1. **Request URI**
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Title: Configure data collection for the Azure Monitor agent
-description: Describes how to create a data collection rule to collect events and performance data from virtual machines using the Azure Monitor agent.
+ Title: Monitor data from virtual machines with Azure Monitor agent
+description: Describes how to collect events and performance data from virtual machines using the Azure Monitor agent.
Previously updated : 03/16/2022 Last updated : 06/23/2022+++
-# Configure data collection for the Azure Monitor agent
-This article describes how to create a [data collection rule](../essentials/data-collection-rule-overview.md) to collect events and performance counters from virtual machines using the Azure Monitor agent. The data collection rule defines data coming into Azure Monitor and specify where it should be sent.
+# Collect data from virtual machines with the Azure Monitor agent
-> [!NOTE]
-> This article describes how to configure data for virtual machines with the Azure Monitor agent only.
+This article describes how to collect events and performance counters from virtual machines using the Azure Monitor agent.
-## Data collection rule associations
+To collect data from virtual machines using the Azure Monitor agent, you'll:
-To apply a DCR to a virtual machine, you create an association for the virtual machine. A virtual machine may have an association to multiple DCRs, and a DCR may have multiple virtual machines associated to it. This allows you to define a set of DCRs, each matching a particular requirement, and apply them to only the virtual machines where they apply.
+1. Create [data collection rules (DCR)](../essentials/data-collection-rule-overview.md) that define which data Azure Monitor agent sends to which destinations.
+1. Associate the data collection rule to specific virtual machines.
-For example, consider an environment with a set of virtual machines running a line of business application and others running SQL Server. You might have one default data collection rule that applies to all virtual machines and separate data collection rules that collect data specifically for the line of business application and for SQL Server. The associations for the virtual machines to the data collection rules would look similar to the following diagram.
+## How data collection rule associations work
-![Diagram shows virtual machines hosting line of business application and SQL Server associated with data collection rules named central-i t-default and lob-app for line of business application and central-i t-default and s q l for SQL Server.](media/data-collection-rule-azure-monitor-agent/associations.png)
+You can associate virtual machines to multiple data collection rules. This allows you to define each data collection rule to address a particular requirement, and associate the data collection rules to virtual machines based on the specific data you want to collect from each machine.
+For example, consider an environment with a set of virtual machines running a line of business application and other virtual machines running SQL Server. You might have:
-## Create rule and association in Azure portal
+- One default data collection rule that applies to all virtual machines.
+- Separate data collection rules that collect data specifically for the line of business application and for SQL Server.
+
+The following diagram illustrates the associations for the virtual machines to the data collection rules.
-You can use the Azure portal to create a data collection rule and associate virtual machines in your subscription to that rule. The Azure Monitor agent will be automatically installed and a managed identity created for any virtual machines that don't already have it installed.
+![A diagram showing one virtual machine hosting a line of business application and one virtual machine hosting SQL Server. Both virtual machine are associated with data collection rule named central-i t-default. The virtual machine hosting the line of business application is also associated with a data collection rule called lob-app. The virtual machine hosting SQL Server is associated with a data collection rule called s q l.](media/data-collection-rule-azure-monitor-agent/associations.png)
-> [!IMPORTANT]
-> Creating a data collection rule using the portal also enables System-Assigned managed identity on the target resources, in addition to existing User-Assigned Identities (if any). For existing applications unless they specify the User-Assigned identity in the request, the machine will default to using System-Assigned Identity instead. [Learn More](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request)
-
+## Create data collection rule and association
-> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
+To send data to Log Analytics, create the data collection rule in the **same region** where your Log Analytics workspace resides. You can still associate the rule to machines in other supported regions.
-In the **Monitor** menu in the Azure portal, select **Data Collection Rules** from the **Settings** section. Click **Create** to create a new Data Collection Rule and assignment.
+### [Portal](#tab/portal)
-[![Data Collection Rules](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
+1. From the **Monitor** menu, select **Data Collection Rules**.
+1. Select **Create** to create a new Data Collection Rule and associations.
-Click **Create** to create a new rule and set of associations. Provide a **Rule name** and specify a **Subscription**, **Resource Group** and **Region**. This specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
-Additionally, choose the appropriate **Platform Type** which specifies the type of resources this rule can apply to. Custom will allow for both Windows and Linux types. This allows for pre-curated creation experiences with options scoped to the selected platform type.
+ [![Screenshot showing the Create button on the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
+
+1. Provide a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**.
-[![Data Collection Rule Basics](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
+ **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
-In the **Resources** tab, add the resources (virtual machines, virtual machine scale sets, Arc for servers) that should have the Data Collection Rule applied. The Azure Monitor Agent will be installed on resources that don't already have it installed, and will enable Azure Managed Identity as well.
+ **Platform Type** specifies the type of resources this rule can apply to. Custom allows for both Windows and Linux types.
-### Private link configuration using data collection endpoints
-If you need network isolation using private links for collecting data using agents from your resources, simply select existing endpoints (or create a new endpoint) from the same region for the respective resource(s) as shown below. See [how to create data collection endpoint](../essentials/data-collection-endpoint-overview.md).
+ [![Screenshot showing the Basics tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
-[![Data Collection Rule virtual machines](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
+1. On the **Resources** tab, add the resources (virtual machines, virtual machine scale sets, Arc for servers) to which to associate the data collection rule. The portal will install Azure Monitor Agent on resources that don't already have it installed, and will also enable Azure Managed Identity.
-On the **Collect and deliver** tab, click **Add data source** to add a data source and destination set. Select a **Data source type**, and the corresponding details to select will be displayed. For performance counters, you can select from a predefined set of objects and their sampling rate. For events, you can select from a set of logs or facilities and the severity level.
+ > [!IMPORTANT]
+ > The portal enables System-Assigned managed identity on the target resources, in addition to existing User-Assigned Identities (if any). For existing applications, unless you specify the User-Assigned identity in the request, the machine will default to using System-Assigned Identity instead.
-[![Data source basic](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
+ If you need network isolation using private links, select existing endpoints from the same region for the respective resources, or [create a new endpoint](../essentials/data-collection-endpoint-overview.md).
+ [![Data Collection Rule virtual machines](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
-To specify other logs and performance counters from the [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to filter events using XPath queries, select **Custom**. You can then specify an [XPath ](https://www.w3schools.com/xml/xpath_syntax.asp) for any specific values to collect. See [Sample DCR](data-collection-rule-sample-agent.md) for an example.
+1. On the **Collect and deliver** tab, select **Add data source** to add a data source and set a destination.
+1. Select a **Data source type**.
+1. Select which data you want to collect. For performance counters, you can select from a predefined set of objects and their sampling rate. For events, you can select from a set of logs and severity levels.
-[![Data source custom](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
+ [![Data source basic](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
-On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of same of different types, for instance multiple Log Analytics workspaces (i.e. "multi-homing"). Windows event and Syslog data sources can only send to Azure Monitor Logs. Performance counters can send to both Azure Monitor Metrics and Azure Monitor Logs.
+1. Select **Custom** to collect logs and performance counters that are not [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to [filter events using XPath queries](#filter-events-using-xpath-queries). You can then specify an [XPath](https://www.w3schools.com/xml/xpath_syntax.asp) to collect any specific values. See [Sample DCR](data-collection-rule-sample-agent.md) for an example.
-[![Destination](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
+ [![Data source custom](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
-Click **Add Data Source** and then **Review + create** to review the details of the data collection rule and association with the set of VMs. Click **Create** to create it.
+1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types, for instance multiple Log Analytics workspaces (known as "multi-homing").
-> [!NOTE]
-> After the data collection rule and associations have been created, it might take up to 5 minutes for data to be sent to the destinations.
+ You can send Windows event and Syslog data sources to Azure Monitor Logs only. You can send performance counters to both Azure Monitor Metrics and Azure Monitor Logs.
-## Limit data collection with custom XPath queries
-Since you're charged for any data collected in a Log Analytics workspace, you should collect only the data that you require. Using basic configuration in the Azure portal, you only have limited ability to filter events to collect. For Application and System logs, this is all logs with a particular severity. For Security logs, this is all audit success or all audit failure logs.
+ [![Destination](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
-To specify additional filters, you must use Custom configuration and specify an XPath that filters out the events you don't. XPath entries are written in the form `LogName!XPathQuery`. For example, you may want to return only events from the Application event log with an event ID of 1035. The XPathQuery for these events would be `*[System[EventID=1035]]`. Since you want to retrieve the events from the Application event log, the XPath would be `Application!*[System[EventID=1035]]`
-
-### Extracting XPath queries from Windows Event Viewer
-One of the ways to create XPath queries is to use Windows Event Viewer to extract XPath queries as shown below.
-*In step 5 when pasting over the 'Select Path' parameter value, you must append the log type category followed by '!' and then paste the copied value.
-
-[![Extract XPath](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
-
-See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.
-
-> [!TIP]
-> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery locally on your machine first. The following script shows an example.
->
-> ```powershell
-> $XPath = '*[System[EventID=1035]]'
-> Get-WinEvent -LogName 'Application' -FilterXPath $XPath
-> ```
->
-> - **In the cmdlet above, the value for '-LogName' parameter is the initial part of the XPath query until the '!', while only the rest of the XPath query goes into the $XPath parameter.**
-> - If events are returned, the query is valid.
-> - If you receive the message *No events were found that match the specified selection criteria.*, the query may be valid, but there are no matching events on the local machine.
-> - If you receive the message *The specified query is invalid* , the query syntax is invalid.
-
-The following table shows examples for filtering events using a custom XPath.
-
-| Description | XPath |
-|:|:|
-| Collect only System events with Event ID = 4648 | `System!*[System[EventID=4648]]`
-| Collect Security Log events with Event ID = 4648 and a process name of consent.exe | `Security!*[System[(EventID=4648)]] and *[EventData[Data[@Name='ProcessName']='C:\Windows\System32\consent.exe']]` |
-| Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` |
-| Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` |
--
-## Create rule and association using REST API
-
-Follow the steps below to create a data collection rule and associations using the REST API.
+1. Select **Add Data Source** and then **Review + create** to review the details of the data collection rule and association with the set of virtual machines.
+1. Select **Create** to create the data collection rule.
> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
+> It might take up to 5 minutes for data to be sent to the destinations after you create the data collection rule and associations.
+
+### [API](#tab/api)
1. Manually create the DCR file using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
Follow the steps below to create a data collection rule and association
3. Create an association for each virtual machine to the data collection rule using the [REST API](/rest/api/monitor/datacollectionruleassociations/create#examples). -
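A minimal sketch of steps 2 and 3 above, using `Invoke-AzRestMethod` from the Az.Accounts PowerShell module as one possible REST client. The subscription ID, resource group, rule name, association name, file path, and `api-version` are placeholders - substitute your own values and confirm the current `api-version` in the REST API references linked above.

```powershell
# Assumes you've signed in with Connect-AzAccount and saved the DCR JSON locally
$sub  = "00000000-0000-0000-0000-000000000000"   # placeholder subscription ID
$rg   = "my-resource-group"                      # placeholder resource group
$dcr  = "my-dcr"                                 # placeholder rule name
$api  = "2021-09-01-preview"                     # verify the current api-version in the REST reference

# Step 2: create the data collection rule from the JSON file
$ruleJson = Get-Content -Path ".\dcr-sample.json" -Raw
Invoke-AzRestMethod -Method PUT -Payload $ruleJson `
  -Path "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.Insights/dataCollectionRules/${dcr}?api-version=$api"

# Step 3: associate a virtual machine with the rule (the association is created on the VM's scope)
$vmId  = "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.Compute/virtualMachines/my-vm"
$dcrId = "/subscriptions/$sub/resourceGroups/$rg/providers/Microsoft.Insights/dataCollectionRules/$dcr"
$assocBody = @{ properties = @{ dataCollectionRuleId = $dcrId } } | ConvertTo-Json -Depth 5
Invoke-AzRestMethod -Method PUT -Payload $assocBody `
  -Path "$vmId/providers/Microsoft.Insights/dataCollectionRuleAssociations/my-association?api-version=$api"
```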
-## Create rule and association using Resource Manager template
-
-> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
-
-You can create a rule and an association for an Azure virtual machine or Azure Arc-enabled server using Resource Manager templates. See [Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md) for sample templates).
--
-## Manage rules and association using PowerShell
-
-> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
+### [PowerShell](#tab/powershell)
**Data collection rules**
You can create a rule and an association for an Azure virtual machine or Azure A
| Create an association | [New-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/new-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) | | Delete an association | [Remove-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/remove-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
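A rough end-to-end sketch using the Az.Monitor cmdlets referenced above. The resource group, region, rule file, VM resource ID, and association name are placeholder values; verify parameter names against the cmdlet reference pages for your installed Az.Monitor version.

```powershell
# Create a data collection rule from a local JSON definition (see the Sample DCR article for the format)
New-AzDataCollectionRule -Location "eastus" -ResourceGroupName "my-resource-group" `
  -RuleName "my-dcr" -RuleFile ".\dcr-sample.json"

# Associate an existing virtual machine with the rule
$dcr = Get-AzDataCollectionRule -ResourceGroupName "my-resource-group" -RuleName "my-dcr"
New-AzDataCollectionRuleAssociation -AssociationName "my-association" -RuleId $dcr.Id `
  -TargetResourceId "/subscriptions/<sub-id>/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
```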
+### [Azure CLI](#tab/cli)
+Data collection rule management is enabled as part of the Azure CLI **monitor-control-service** extension. [View all commands](/cli/azure/monitor/data-collection/rule)
-## Manage rules and association using Azure CLI
+### [Resource Manager template](#tab/arm)
-> [!NOTE]
-> If you wish to send data to Log Analytics, you must create the data collection rule in the **same region** where your Log Analytics workspace resides. The rule can be associated to machines in other supported region(s).
+See [Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md) for sample templates.
-This is enabled as part of Azure CLI **monitor-control-service** Extension. [View all commands](/cli/azure/monitor/data-collection/rule)
+
+## Filter events using XPath queries
+Since you're charged for any data you collect in a Log Analytics workspace, collect only the data you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+To specify additional filters, use Custom configuration and specify an XPath that filters out the events you don't need. XPath entries are written in the form `LogName!XPathQuery`. For example, you may want to return only events from the Application event log with an event ID of 1035. The XPathQuery for these events would be `*[System[EventID=1035]]`. Since you want to retrieve the events from the Application event log, the XPath is `Application!*[System[EventID=1035]]`.
+
+### Extracting XPath queries from Windows Event Viewer
+In Windows, you can use Event Viewer to extract XPath queries as shown below.
+
+When you paste the XPath query into the field on the **Add data source** screen (step 5 in the picture below), you must append the log type category followed by '!'.
+
+[![Extract XPath](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png)](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
+
+See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.
+
+> [!TIP]
+> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery locally on your machine first. The following script shows an example.
+>
+> ```powershell
+> $XPath = '*[System[EventID=1035]]'
+> Get-WinEvent -LogName 'Application' -FilterXPath $XPath
+> ```
+>
+> - **In the cmdlet above, the value of the *-LogName* parameter is the initial part of the XPath query until the '!'. The rest of the XPath query goes into the *$XPath* parameter.**
+> - If the script returns events, the query is valid.
+> - If you receive the message *No events were found that match the specified selection criteria.*, the query may be valid, but there are no matching events on the local machine.
+> - If you receive the message *The specified query is invalid*, the query syntax is invalid.
+
+Examples of filtering events using a custom XPath:
+
+| Description | XPath |
+|:|:|
+| Collect only System events with Event ID = 4648 | `System!*[System[EventID=4648]]`
+| Collect Security Log events with Event ID = 4648 and a process name of consent.exe | `Security!*[System[(EventID=4648)]] and *[EventData[Data[@Name='ProcessName']='C:\Windows\System32\consent.exe']]` |
+| Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` |
+| Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` |
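To sanity-check one of the table entries locally, split it at the '!' and reuse the `Get-WinEvent` approach from the tip above - for example, the System log entry that excludes Event ID 6:

```powershell
# Everything before '!' is the log name; everything after it is the XPath query
$logName = 'System'
$xPath   = '*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]'
Get-WinEvent -LogName $logName -FilterXPath $xPath -MaxEvents 5
```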
## Next steps
azure-monitor Data Collection Rule Sample Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-sample-agent.md
The sample [data collection rule](../essentials/data-collection-rule-overview.md
- Sends all data to a Log Analytics workspace named centralWorkspace. > [!NOTE]
-> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries)
+> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries)
## Sample DCR
azure-monitor Data Sources Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-collectd.md
# Collect data from CollectD on Linux agents in Azure Monitor
-[CollectD](https://collectd.org/) is an open source Linux daemon that periodically collects performance metrics from applications and system level information. Example applications include the Java Virtual Machine (JVM), MySQL Server, and Nginx. This article provides information on collecting performance data from CollectD in Azure Monitor.
+[CollectD](https://collectd.org/) is an open source Linux daemon that periodically collects performance metrics from applications and system level information. Example applications include the Java Virtual Machine (JVM), MySQL Server, and Nginx. This article provides information on collecting performance data from CollectD in Azure Monitor using the Log Analytics agent.
+ A full list of available plugins can be found at [Table of Plugins](https://collectd.org/wiki/index.php/Table_of_Plugins).
azure-monitor Data Sources Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-custom-logs.md
The Custom Logs data source for the Log Analytics agent in Azure Monitor allows you to collect events from text files on both Windows and Linux computers. Many applications log information to text files instead of standard logging services such as Windows Event log or Syslog. Once collected, you can either parse the data into individual fields in your queries or extract the data during collection to individual fields.
-> [!IMPORTANT]
-> This article covers collecting text logs with the [Log Analytics agent](./log-analytics-agent.md). See [Collect text logs with Azure Monitor agent (preview)](../agents/data-collection-text-log.md) for details on collecting text logs with [Azure Monitor agent](azure-monitor-agent-overview.md).
![Custom log collection](media/data-sources-custom-logs/overview.png)
azure-monitor Data Sources Event Tracing Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-event-tracing-windows.md
Last updated 02/07/2022
*Event Tracing for Windows (ETW)* provides a mechanism for instrumentation of user-mode applications and kernel-mode drivers. The Log Analytics agent is used to [collect Windows events](./data-sources-windows-events.md) written to the Administrative and Operational [ETW channels](/windows/win32/wes/eventmanifestschema-channeltype-complextype). However, it is occasionally necessary to capture and analyze other events, such as those written to the Analytic channel. + ## Event flow To successfully collect [manifest-based ETW events](/windows/win32/etw/about-event-tracing#types-of-providers) for analysis in Azure Monitor Logs, you must use the [Azure diagnostics extension](./diagnostics-extension-overview.md) for Windows (WAD). In this scenario, the diagnostics extension acts as the ETW consumer, writing events to Azure Storage (tables) as an intermediate store. Here it will be stored in a table named **WADETWEventTable**. Log Analytics then collects the table data from Azure storage, presenting it as a table named **ETWEvent**.
azure-monitor Data Sources Iis Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-iis-logs.md
# Collect IIS logs with Log Analytics agent in Azure Monitor Internet Information Services (IIS) stores user activity in log files that can be collected by the Log Analytics agent and stored in [Azure Monitor Logs](../data-platform.md).
+![IIS logs](media/data-sources-iis-logs/overview.png)
+ > [!IMPORTANT]
-> This article covers collecting IIS logs with the [Log Analytics agent](./log-analytics-agent.md). See [Collect text logs with Azure Monitor agent (preview)](../agents/data-collection-text-log.md) for details on collecting IIS logs with [Azure Monitor agent](azure-monitor-agent-overview.md).
+> This article covers collecting IIS logs with the [Log Analytics agent](./log-analytics-agent.md), which **will be deprecated by August 2024**. Please be sure to [migrate to Azure Monitor agent](./azure-monitor-agent-manage.md) before August 2024 to continue ingesting data. See [Collect text logs with Azure Monitor agent (preview)](../agents/data-collection-text-log.md) for details on collecting IIS logs with [Azure Monitor agent](azure-monitor-agent-overview.md).
-![IIS logs](media/data-sources-iis-logs/overview.png)
## Configuring IIS logs Azure Monitor collects entries from log files created by IIS, so you must [configure IIS for logging](/previous-versions/orphan-topics/ws.11/hh831775(v=ws.11)).
azure-monitor Data Sources Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-json.md
Custom JSON data sources can be collected into [Azure Monitor](../data-platform.
> [!NOTE]
-> Log Analytics agent for Linux v1.1.0-217+ is required for Custom JSON Data
+> Log Analytics agent for Linux v1.1.0-217+ is required for Custom JSON Data.
## Configuration
azure-monitor Data Sources Linux Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-linux-applications.md
This article provides details for configuring the [Log Analytics agent for Linux
- [MySQL](#mysql) - [Apache HTTP Server](#apache-http-server) ++ ## MySQL If MySQL Server or MariaDB Server is detected on the computer when the Log Analytics agent is installed, a performance monitoring provider for MySQL Server will be automatically installed. This provider connects to the local MySQL/MariaDB server to expose performance statistics. MySQL user credentials must be configured so that the provider can access the MySQL Server.
azure-monitor Data Sources Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-performance-counters.md
Last updated 02/26/2021
# Collect Windows and Linux performance data sources with Log Analytics agent Performance counters in Windows and Linux provide insight into the performance of hardware components, operating systems, and applications. Azure Monitor can collect performance counters from Log Analytics agents at frequent intervals for Near Real Time (NRT) analysis in addition to aggregating performance data for longer term analysis and reporting.
-> [!IMPORTANT]
-> This article covers collecting performance data with the [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a list of the available agents and the data they can collect.
![Performance counters](media/data-sources-performance-counters/overview.png)
azure-monitor Data Sources Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-syslog.md
# Collect Syslog data sources with Log Analytics agent Syslog is an event logging protocol that is common to Linux. Applications will send messages that may be stored on the local machine or delivered to a Syslog collector. When the Log Analytics agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent. The agent then sends the message to Azure Monitor where a corresponding record is created.
-> [!IMPORTANT]
-> This article covers collecting Syslog events with the [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a list of the available agents and the data they can collect.
+ > [!NOTE] > Azure Monitor supports collection of messages sent by rsyslog or syslog-ng, where rsyslog is the default daemon. The default syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux version (sysklog) is not supported for syslog event collection. To collect syslog data from this version of these distributions, the [rsyslog daemon](http://rsyslog.com) should be installed and configured to replace sysklog.
->
->
+ ![Syslog collection](media/data-sources-syslog/overview.png)
azure-monitor Data Sources Windows Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-windows-events.md
# Collect Windows event log data sources with Log Analytics agent
-Windows Event logs are one of the most common [data sources](../agents/agent-data-sources.md) for Log Analytics agents on Windows virtual machines since many applications write to the Windows event log. You can collect events from standard logs such as System and Application in addition to specifying any custom logs created by applications you need to monitor.
+Windows Event logs are one of the most common [data sources](../agents/agent-data-sources.md) for Log Analytics agents on Windows virtual machines since many applications write to the Windows event log. You can collect events from standard logs, such as System and Application, and any custom logs created by applications you need to monitor.
-> [!IMPORTANT]
-> This article covers collecting Windows events with the [Log Analytics agent](./log-analytics-agent.md) which is one of the agents used by Azure Monitor. Other agents collect different data and are configured differently. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a list of the available agents and the data they can collect.
+![Diagram that shows the Log Analytics agent sending Windows events to the Event table in Azure Monitor.](media/data-sources-windows-events/overview.png)
-![Windows Events](media/data-sources-windows-events/overview.png)
## Configuring Windows Event logs Configure Windows Event logs from the [Agents configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics workspace.
-Azure Monitor only collects events from the Windows event logs that are specified in the settings. You can add an event log by typing in the name of the log and clicking **+**. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You cannot provide any additional criteria to filter events.
+Azure Monitor only collects events from the Windows event logs that are specified in the settings. You can add an event log by typing in the name of the log and clicking **+**. For each log, only the events with the selected severities are collected. Check the severities for the particular log that you want to collect. You can't provide any additional criteria to filter events.
-As you type the name of an event log, Azure Monitor provides suggestions of common event log names. If the log you want to add does not appear in the list, you can still add it by typing in the full name of the log. You can find the full name of the log by using event viewer. In event viewer, open the *Properties* page for the log and copy the string from the *Full Name* field.
+As you type the name of an event log, Azure Monitor provides suggestions of common event log names. If the log you want to add doesn't appear in the list, you can still add it by typing in the full name of the log. You can find the full name of the log by using event viewer. In event viewer, open the *Properties* page for the log and copy the string from the *Full Name* field.
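If you prefer the command line, you can also look up full log names with PowerShell. This is an optional alternative to Event Viewer; the search term below is only an example.

```powershell
# List all event logs on the machine and filter by a partial name
Get-WinEvent -ListLog * -ErrorAction SilentlyContinue |
    Where-Object { $_.LogName -like '*Operational*' } |
    Select-Object -ExpandProperty LogName
```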
-[![Configure Windows events](media/data-sources-windows-events/configure.png)](media/data-sources-windows-events/configure.png#lightbox)
+[![Screenshot showing the Windows event logs tab on the Agents configuration screen.](media/data-sources-windows-events/configure.png)](media/data-sources-windows-events/configure.png#lightbox)
> [!IMPORTANT] > You can't configure collection of security events from the workspace using Log Analytics agent. You must use [Microsoft Defender for Cloud](../../security-center/security-center-enable-data-collection.md) or [Microsoft Sentinel](../../sentinel/connect-windows-security-events.md) to collect security events. [Azure Monitor agent](azure-monitor-agent-overview.md) can also be used to collect security events.
As you type the name of an event log, Azure Monitor provides suggestions of comm
> Critical events from the Windows event log will have a severity of "Error" in Azure Monitor Logs. ## Data collection
-Azure Monitor collects each event that matches a selected severity from a monitored event log as the event is created. The agent records its place in each event log that it collects from. If the agent goes offline for a period of time, then it collects events from where it last left off, even if those events were created while the agent was offline. There is a potential for these events to not be collected if the event log wraps with uncollected events being overwritten while the agent is offline.
+Azure Monitor collects each event that matches a selected severity from a monitored event log as the event is created. The agent records its place in each event log that it collects from. If the agent goes offline for a while, it collects events from where it last left off, even if those events were created while the agent was offline. There's a potential for these events to not be collected if the event log wraps with uncollected events being overwritten while the agent is offline.
>[!NOTE] >Azure Monitor does not collect audit events created by SQL Server from source *MSSQLSERVER* with event ID 18453 that contains keywords - *Classic* or *Audit Success* and keyword *0xa0000000000000*.
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
# Log Analytics agent overview
-The Log Analytics agents are on a **deprecation path** and will no longer be supported after **August 31, 2024**. If you use the Log Analytics agents to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
The Azure Log Analytics agent collects telemetry from Windows and Linux virtual machines in any cloud, on-premises machines, and those monitored by [System Center Operations Manager](/system-center/scom/) and sends collected data to your Log Analytics workspace in Azure Monitor. The Log Analytics agent also supports insights and other services in Azure Monitor such as [VM insights](../vm/vminsights-enable-overview.md), [Microsoft Defender for Cloud](../../security-center/index.yml), and [Azure Automation](../../automation/automation-intro.md). This article provides a detailed overview of the agent, system and network requirements, and deployment methods.
+>[!IMPORTANT]
+>The Log Analytics agent is on a **deprecation path** and won't be supported after **August 31, 2024**. If you use the Log Analytics agent to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
+ > [!NOTE] > You may also see the Log Analytics agent referred to as the Microsoft Monitoring Agent (MMA).
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
# Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications
-This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Java offering. After you finish the instructions in this article, you'll be able to use Azure Monitor Application Insights to monitor your application.
+This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Java offering. It can be used for any environment, including on-premises. After you finish the instructions in this article, you'll be able to use Azure Monitor Application Insights to monitor your application.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Get started
-Java auto-instrumentation can be enabled without any code changes.
+Java auto-instrumentation is enabled through configuration changes; no code changes are required.
### Prerequisites
To provide feedback:
## Next steps
+- Review [Java auto-instrumentation configuration options](java-standalone-config.md).
- To review the source code, see the [Azure Monitor Java auto-instrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation). - To enable usage experiences, see [Enable web or browser user monitoring](javascript.md).
azure-monitor Java On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-on-premises.md
- Title: Monitor Java applications running on premises - Azure Monitor Application Insights
-description: Application performance monitoring for Java applications running on premises without instrumenting the app. Distributed tracing and application map.
-- Previously updated : 04/16/2020---
-# Java codeless application monitoring on-premises - Azure Monitor Application Insights
-
-Java codeless application monitoring is all about simplicity - there are no code changes, the Java agent can be enabled through just a couple of configuration changes.
-
-## Overview
-
-Once enabled, the Java agent will automatically collect a multitude of requests, dependencies, logs, and metrics from the most widely used libraries and frameworks.
-
-Please follow [the detailed instructions](./java-in-process-agent.md) for all of the environments, including on-premises.
-
-## Next steps
-
-* [Application Insights Java 3.x](./java-in-process-agent.md)
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
# Application Insights for web pages
-Find out about the performance and usage of your web page or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures, as well as users and session counts. All these can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. And by inserting trace calls in your JavaScript code, you can track how the different features of your web page application are used.
+> [!NOTE]
+> We continue to assess the viability of OpenTelemetry for browser scenarios. The Application Insights JavaScript SDK, which is fully compatible with OpenTelemetry distributed tracing, is recommended for the foreseeable future.
-Application Insights can be used with any web pages - you just add a short piece of JavaScript, Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
+Find out about the performance and usage of your web page or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures, as well as users and session counts. All of this telemetry can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. And by inserting trace calls in your JavaScript code, you can track how the different features of your web page application are used.
+Application Insights can be used with any web pages - you just add a short piece of JavaScript, Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
## Adding the JavaScript SDK
Application Insights can be used with any web pages - you just add a short piece
### npm based setup
-Install via NPM.
+Install via Node Package Manager (npm).
```sh npm i --save @microsoft/applicationinsights-web
appInsights.trackPageView(); // Manually call trackPageView to establish the cur
If your app doesn't use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each your pages. Preferably, it should be the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies and optionally any JavaScript errors. If you're using Blazor Server App, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section.
-To assist with tracking which version of the snippet your application is using, starting from version 2.5.5 the page view event will include a new tag "ai.internal.snippet" that will contain the identified snippet version.
+Starting from version 2.5.5, the page view event will include a new tag "ai.internal.snippet" that contains the identified snippet version. This feature assists with tracking which version of the snippet your application is using.
The current Snippet (listed below) is version "5", the version is encoded in the snippet as sv:"#" and the [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
cfg: { // Application Insights Configuration
#### Reporting Script load failures
-This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser), this exception provides visibility into failures of this type so that you're aware your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK didn't load or initialize which can lead to:
+This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser). The exception provides visibility into failures of this type so that you're aware your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK didn't load or initialize which can lead to:
- Under-reporting of how users are using (or trying to use) your site; - Missing telemetry on how your end users are using your site; - Missing JavaScript errors that could potentially be blocking your end users from successfully using your site.
For details on this exception see the [SDK load failure](javascript-sdk-load-fai
Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the application insights configuration and therefore if this failure occurs it will always be reported by the snippet, even when the window.onerror support is disabled.
-Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier. This reduces the minified size of the snippet by assuming that most environments aren't exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you'll need to either include a fetch poly fill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```, it's recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
+Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier. This behavior reduces the minified size of the snippet by assuming that most environments aren't exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you'll need to either include a fetch polyfill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```. It's recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
> [!NOTE] > If you are using a previous version of the snippet, it is highly recommended that you update to the latest version so that you will receive these previously unreported issues. #### Snippet configuration options
-All configuration options have now been move towards the end of the script to help avoid accidentally introducing JavaScript errors that wouldn't just cause the SDK to fail to load, but also it would disable the reporting of the failure.
+All configuration options have been moved toward the end of the script. This placement avoids accidentally introducing JavaScript errors that would not only cause the SDK to fail to load, but would also disable reporting of the failure.
Each configuration option is shown above on a new line, if you don't wish to override the default value of an item listed as [optional] you can remove that line to minimize the resulting size of your returned page.
The available configuration options are
### Connection String Setup + ```js import { ApplicationInsights } from '@microsoft/applicationinsights-web'
appInsights.trackPageView();
### Sending telemetry to the Azure portal
-By default the Application Insights JavaScript SDK autocollects many telemetry items that are helpful in determining the health of your application and the underlying user experience. These include:
+By default, the Application Insights JavaScript SDK auto-collects many telemetry items that are helpful in determining the health of your application and the underlying user experience.
+
+This telemetry includes:
- **Uncaught exceptions** in your app, including information on - Stack trace
Most configuration fields are named such that they can be defaulted to false. Al
| maxBatchInterval | How long to batch telemetry for before sending (milliseconds) | numeric<br/>15000 | | disable&#8203;ExceptionTracking | If true, exceptions aren't autocollected. | boolean<br/> false | | disableTelemetry | If true, telemetry isn't collected or sent. | boolean<br/>false |
-| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean<br/>false |
+| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This setting can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean<br/>false |
| loggingLevelConsole | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 | | loggingLevelTelemetry | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 | | diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue | numeric<br/> 10000 |
-| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this if you wish to preserve your data cap for large-scale applications. | numeric<br/>100 |
+| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this option if you wish to preserve your data cap for large-scale applications. | numeric<br/>100 |
| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | boolean<br/>false | | disableAjaxTracking | If true, Ajax calls aren't autocollected. | boolean<br/> false | | disableFetchTracking | If true, Fetch requests aren't autocollected.|boolean<br/>true |
Most configuration fields are named such that they can be defaulted to false. Al
| enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load | boolean<br />true | | cookieCfg | Defaults to cookie usage enabled see [ICookieCfgConfig](#icookiemgrconfig) settings for full defaults. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined | | ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage blades and experiences useless. isCookieUseDisable is deprecated in favor of disableCookiesUsage, when both are provided disableCookiesUsage takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined it will take precedence over these values, Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). | alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
-| cookieDomain | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it will take precedence over this value. | alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null |
-| cookiePath | Custom cookie path. This is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined it will take precedence over this value. | alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null |
+| cookieDomain | Custom cookie domain. This option is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it will take precedence over this value. | alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null |
+| cookiePath | Custom cookie path. This option is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined it will take precedence over this value. | alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null |
| isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) | boolean<br/>false | | isStorageUseDisabled | If true, the SDK won't store or read any data from local and session storage. | boolean<br/> false | | isBeaconApiDisabled | If false, the SDK will send all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/>true |
Most configuration fields are named such that they can be defaulted to false. Al
| distributedTracingMode | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) will be generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services. See example [here](./correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).| `DistributedTracingModes`or<br/>numeric<br/>(Since v2.6.0) `DistributedTracingModes.AI_AND_W3C`<br />(v2.5.11 or earlier) `DistributedTracingModes.AI` | | enable&#8203;AjaxErrorStatusText | If true, include response error data text in dependency event on failed AJAX requests. | boolean<br/> false | | enable&#8203;AjaxPerfTracking |Flag to enable looking up and including more browser window.performance timings in the reported `ajax` (XHR and fetch) reported metrics. | boolean<br/> false |
-| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings (if available), this is required as not all browsers populate the window.performance before reporting the end of the XHR request and for fetch requests this is added after its complete.| numeric<br/> 3 |
+| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings (if available). This option is sometimes required as not all browsers populate the window.performance before reporting the end of the XHR request and for fetch requests this is added after its complete.| numeric<br/> 3 |
| ajaxPerfLookupDelay | The amount of time to wait before reattempting to find the window.performance timings for an `ajax` request, time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms | | enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections won't be reported. | boolean<br/> false |
-| enablePerfMgr | When enabled (true) this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More details are available by the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
+| enablePerfMgr | When enabled (true) this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This option can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More details are available by the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean<br />false | | idLength | The default length used to generate new random session and user ID values. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set this value to 5. | numeric<br />22 |
Cookie Configuration for instance-based cookie management added in version 2.6.0
| Name | Description | Type and Default | ||-|| | enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration won't store or read any data from cookies | boolean<br/> true |
-| domain | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains. If not provided uses the value from root `cookieDomain` value. | string<br/>null |
+| domain | Custom cookie domain, which is helpful if you want to share Application Insights cookies across subdomains. If not provided uses the value from root `cookieDomain` value. | string<br/>null |
| path | Specifies the path to use for the cookie, if not provided it will use any value from the root `cookiePath` value. | string <br/> / | | getCookie | Function to fetch the named cookie value, if not provided it will use the internal cookie parsing / caching. | `(name: string) => string` <br/> null | | setCookie | Function to set the named cookie with the specified value, only called when adding or updating a cookie. | `(name: string, value: string) => void` <br/> null |
cfg: { // Application Insights Configuration
</script> ```
-# [NPM](#tab/npm)
+# [npm](#tab/npm)
```javascript // excerpt of the config section of the JavaScript SDK snippet with correlation
By default, this SDK will **not** handle state-based route changing that occurs
### Single Page Applications
-For Single Page Applications, please reference plugin documentation for plugin specific guidance.
+For Single Page Applications, reference the plugin documentation for plugin-specific guidance.
| Plugins | ||
For Single Page Applications, please reference plugin documentation for plugin s
### Advanced Correlation
-When a page is first loading and the SDK has not fully initialized, we are unable to generate the Operation ID for the first request. As a result, distributed tracing is incomplete until the SDK fully initializes.
-To remedy this problem, you can include dynamic JavaScript on the returned HTML page and the SDK will use a callback function during initialization to retroactively pull the Operation ID from the serverside and populate the clientside with it.
+When a page is first loading and the SDK hasn't fully initialized, we're unable to generate the Operation ID for the first request. As a result, distributed tracing is incomplete until the SDK fully initializes.
+To remedy this problem, you can include dynamic JavaScript on the returned HTML page. The SDK will use a callback function during initialization to retroactively pull the Operation ID from the server side and populate the client side with it.
# [Snippet](#tab/snippet)
Here's a sample of how to create a dynamic JS using Razor:
}}); </script> ```
-# [NPM](#tab/npm)
+# [npm](#tab/npm)
```js import { ApplicationInsights } from '@microsoft/applicationinsights-web'
appInsights.context.telemetryContext.parentID = serverId;
appInsights.loadAppInsights(); ```
-When using a npm based configuration, a location must be determined to store the Operation ID (generally global) to enable access for the SDK initialization bundle to `appInsights.context.telemetryContext.parentID` so it can populate it before the first page view event is sent.
+When using an npm based configuration, you must determine a location to store the Operation ID so that the SDK initialization bundle can access `appInsights.context.telemetryContext.parentID` and populate it before the first page view event is sent.
Test in internal environment to verify monitoring telemetry is working as expect
## SDK performance/overhead
-At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of loadtime to your website. By using the snippet, minimal components of the library are quickly loaded. In the meantime, the full script is downloaded in the background.
+At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of load time to your website. Minimal components of the library are quickly loaded when using this snippet. In the meantime, the full script is downloaded in the background.
While the script is downloading from the CDN, all tracking of your page is queued. Once the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you won't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system, invisible to your users.
While the script is downloading from the CDN, all tracking of your page is queue
![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.githubusercontent.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![IE](https://raw.githubusercontent.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![Opera](https://raw.githubusercontent.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Safari](https://raw.githubusercontent.com/alrra/browser-logos/master/src/safari/safari_48x48.png) | | | | |
-Chrome Latest ✔ | Firefox Latest ✔ | IE 9+ & Edge ✔<br>IE 8- Compatible | Opera Latest ✔ | Safari Latest ✔ |
+Chrome Latest ✔ | Firefox Latest ✔ | IE 9+ & Microsoft Edge ✔<br>IE 8- Compatible | Opera Latest ✔ | Safari Latest ✔ |
## ES3/IE8 Compatibility
-As an SDK there are numerous users that can't control the browsers that their customers use. As such we need to ensure that this SDK continues to "work" and doesn't break the JS execution when loaded by an older browser. While it would be ideal to not support IE8 and older generation (ES3) browsers, there are numerous large customers/users that continue to require pages to "work" and as noted they may or can't control which browser that their end users choose to use.
+As such we need to ensure that this SDK continues to "work" and doesn't break the JS execution when loaded by an older browser. It would be ideal to not support older browsers, but numerous large customers can't control which browser their end users choose to use.
-This does NOT mean that we'll only support the lowest common set of features, just that we need to maintain ES3 code compatibility and when adding new features they'll need to be added in a manner that wouldn't break ES3 JavaScript parsing and added as an optional feature.
+This statement does NOT mean that we'll only support the lowest common set of features. We need to maintain ES3 code compatibility, and any new features must be added in a manner that doesn't break ES3 JavaScript parsing and must be added as optional features.
[See GitHub for full details on IE8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility)
For the latest updates and bug fixes, [consult the release notes](./release-note
## Troubleshooting
-### I am getting an error message of Failed to get Request-Context correlation header as it may be not included in the response or not accessible
+### I'm getting an error message of Failed to get Request-Context correlation header as it may be not included in the response or not accessible
-The `correlationHeaderExcludedDomains` configuration property is an exclude list that disables correlation headers for specific domains, this is useful for when including those headers would cause the request to fail or not be sent due to third-party server configuration. This property supports wildcards.
+The `correlationHeaderExcludedDomains` configuration property is an exclude list that disables correlation headers for specific domains. This option is useful when including those headers would cause the request to fail or not be sent due to third-party server configuration. This property supports wildcards.
An example would be `*.queue.core.windows.net`, as seen in the code sample above. Adding the application domain to this property should be avoided as it stops the SDK from including the required distributed tracing `Request-Id`, `Request-Context` and `traceparent` headers as part of the request.
The server-side needs to be able to accept connections with those headers presen
Access-Control-Allow-Headers: `Request-Id`, `traceparent`, `Request-Context`, `<your header>`
-### I am receiving duplicate telemetry data from the Application Insights JavaScript SDK
+### I'm receiving duplicate telemetry data from the Application Insights JavaScript SDK
-If the SDK reports correlation recursively enable the configuration setting of `excludeRequestFromAutoTrackingPatterns` to exclude the duplicate data, this can occur when using connection strings. The syntax for the configuration setting is `excludeRequestFromAutoTrackingPatterns: [<endpointUrl>]`.
+If the SDK reports correlation recursively, enable the configuration setting of `excludeRequestFromAutoTrackingPatterns` to exclude the duplicate data. This scenario can occur when using connection strings. The syntax for the configuration setting is `excludeRequestFromAutoTrackingPatterns: [<endpointUrl>]`.
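For illustration only, the following is a minimal sketch of how these two settings might look in the JavaScript SDK configuration. The connection string and URL values are placeholders, not values from this article.

```typescript
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

// Minimal sketch: connection string and URLs below are placeholders.
const appInsights = new ApplicationInsights({
  config: {
    connectionString: "<your-connection-string>",
    // Don't attach correlation headers to requests sent to these domains.
    correlationHeaderExcludedDomains: ["*.queue.core.windows.net"],
    // Exclude an endpoint whose requests would otherwise be tracked recursively.
    excludeRequestFromAutoTrackingPatterns: ["<endpointUrl>"]
  }
});
appInsights.loadAppInsights();
```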
## <a name="next"></a> Next steps
+* [Source map for JavaScript](source-map-support.md)
* [Track usage](usage-overview.md) * [Custom events and metrics](api-custom-events-metrics.md) * [Build-measure-learn](usage-overview.md)
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Virtual machines can vary significantly in the amount of data they collect, depe
| Source | Strategy | Log Analytics agent | Azure Monitor agent | |:|:|:|:|
-| Event logs | Collect only required event logs and levels. For example, *Information* level events are rarely used and should typically not be collected. For Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md) | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific event IDs. |
-| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific events. |
-| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#limit-data-collection-with-custom-xpath-queries) to filter specific counters. |
+| Event logs | Collect only required event logs and levels. For example, *Information* level events are rarely used and should typically not be collected. For Azure Monitor agent, filter particular event IDs that are frequent but not valuable. | Change the [event log configuration for the workspace](agents/data-sources-windows-events.md) | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific event IDs. |
+| Syslog | Reduce the number of facilities collected and only collect required event levels. For example, *Info* and *Debug* level events are rarely used and should typically not be collected. | Change the [syslog configuration for the workspace](agents/data-sources-syslog.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific events. |
+| Performance counters | Collect only the performance counters required and reduce the frequency of collection. For Azure Monitor agent, consider sending performance data only to Metrics and not Logs. | Change the [performance counter configuration for the workspace](agents/data-sources-performance-counters.md). | Change the [data collection rule](agents/data-collection-rule-azure-monitor-agent.md). Use [custom XPath queries](agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to filter specific counters. |
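As an illustration of the XPath filtering mentioned in the table above, the following fragment is a hedged sketch of a `windowsEventLogs` data source in a data collection rule; the data source name and queries are example values only.

```json
{
  "dataSources": {
    "windowsEventLogs": [
      {
        "name": "applicationEvents",
        "streams": [ "Microsoft-Event" ],
        "xPathQueries": [
          "Application!*[System[(Level=1 or Level=2)]]",
          "System!*[System[(EventID=7036)]]"
        ]
      }
    ]
  }
}
```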
### Use transformations to filter events
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md
The sample data collection endpoint below is for virtual machines with Azure Mon
``` ## Next steps-- [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal)
+- [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association)
- [Add endpoint to AMPLS resource](../logs/private-link-configure.md#connect-azure-monitor-resources)
azure-monitor Azure Cli Log Analytics Workspace Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-cli-log-analytics-workspace-sample.md
Use the Azure CLI commands described here to manage your log analytics workspace in Azure Monitor.
-> [!NOTE]
-> On August 31, 2024, Microsoft will retire the Log Analytics agent. You can use the Azure Monitor agent after that time. For more information, see [Overview of Azure Monitor agents](../agents/agents-overview.md).
[!INCLUDE [Prepare your Azure CLI environment](../../../includes/azure-cli-prepare-your-environment.md)]
azure-monitor Vminsights Health Configure Dcr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-configure-dcr.md
The following table lists the default configuration for each monitor. This defau
## Overrides An *override* changes one or more properties of a monitor. For example, an override could disable a monitor that's enabled by default, define warning criteria for the monitor, or modify the monitor's critical threshold.
-Overrides are defined in a [Data Collection Rule (DCR)](../essentials/data-collection-rule-overview.md). You can create multiple DCRs with different sets of overrides and apply them to multiple virtual machines. You apply a DCR to a virtual machine by creating an association as described in [Configure data collection for the Azure Monitor agent (preview)](../agents/data-collection-rule-azure-monitor-agent.md#data-collection-rule-associations).
+Overrides are defined in a [Data Collection Rule (DCR)](../essentials/data-collection-rule-overview.md). You can create multiple DCRs with different sets of overrides and apply them to multiple virtual machines. You apply a DCR to a virtual machine by creating an association as described in [Configure data collection for the Azure Monitor agent (preview)](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association).
## Multiple overrides
azure-monitor Vminsights Health Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-health-migrate.md
Before you can remove the data collection rule for VM insights guest health, you
From the **Monitor** menu in the Azure portal, select **Data Collection Rules**. Click on the DCR for VM insights guest health, and then select **Resources**. Select the VMs to remove and click **Delete**.
-You can also remove the Data Collection Rule Association using [Azure PowerShell](../agents/data-collection-rule-azure-monitor-agent.md#manage-rules-and-association-using-powershell) or [Azure CLI](/cli/azure/monitor/data-collection/rule/association#az-monitor-data-collection-rule-association-delete).
+You can also remove the Data Collection Rule Association using [Azure PowerShell](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) or [Azure CLI](/cli/azure/monitor/data-collection/rule/association#az-monitor-data-collection-rule-association-delete).
### 3. Delete Data Collection Rule created for VM insights guest health
-To remove the data collection rule, click **Delete** from the DCR page in the Azure portal. You can also delete the Data Collection Rule using [Azure PowerShell](../agents/data-collection-rule-azure-monitor-agent.md#manage-rules-and-association-using-powershell) or [Azure CLI](/cli/azure/monitor/data-collection/rule#az-monitor-data-collection-rule-delete).
+To remove the data collection rule, click **Delete** from the DCR page in the Azure portal. You can also delete the Data Collection Rule using [Azure PowerShell](../agents/data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) or [Azure CLI](/cli/azure/monitor/data-collection/rule#az-monitor-data-collection-rule-delete).
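For reference, the following is a hedged Azure CLI sketch of both removal steps; the association name, VM resource ID, rule name, and resource group are placeholders.

```azurecli
# Remove the association between the VM and the guest health DCR (placeholder names).
az monitor data-collection rule association delete \
  --name "VM-Insights-Health-Association" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/virtualMachines/<vm-name>"

# Then delete the data collection rule itself.
az monitor data-collection rule delete \
  --name "VM-Insights-Health-DCR" \
  --resource-group "<rg-name>"
```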
## Next steps
azure-netapp-files Performance Impact Kerberos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-impact-kerberos.md
na Previously updated : 02/18/2021 Last updated : 06/25/2021 # Performance impact of Kerberos on Azure NetApp Files NFSv4.1 volumes
-Azure NetApp Files supports [NFS client encryption in Kerberos](configure-kerberos-encryption.md) modes (krb5, krb5i, and krb5p) with AES-256 encryption. This article describes the performance impact of Kerberos on NFSv4.1 volumes.
+Azure NetApp Files supports [NFS client encryption in Kerberos](configure-kerberos-encryption.md) modes (krb5, krb5i, and krb5p) with AES-256 encryption. This article describes the performance impact of Kerberos on NFSv4.1 volumes. **Performance comparisons referenced in this article are made against the `sec=sys` security parameter, testing on a single volume with a single client.**
## Available security options
This section describes the single client-side performance impact of the various
## Expected performance impact
-There are two areas of focus: light load and upper limit. The following lists describe the performance impact security setting by security setting and scenario by scenario. All comparisons are made against the `sec=sys` security parameter. The test was done on a single volume, using a single client.
+There are two areas of focus: light load and upper limit. The following lists describe the performance impact security setting by security setting and scenario by scenario.
-Performance impact of krb5:
+**Testing Scope**
+* All comparisons are made against the `sec=sys` security parameter.
+* The test was done on a single volume, using a single client.
+
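For context, the security setting under test is selected through the NFS mount option. The following is a hedged example of a Linux mount command; the server IP, export path, and mount point are placeholders.

```bash
# Placeholder server IP, export path, and mount point; sec= selects sys, krb5, krb5i, or krb5p.
sudo mount -t nfs -o rw,hard,vers=4.1,tcp,sec=krb5p 10.0.0.4:/vol1 /mnt/vol1
```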
+**Performance impact of krb5:**
* Low concurrency (r/w): * Sequential latency increased 0.3 ms.
Performance impact of krb5:
* Maximum random I/O decreased by 30% for pure read workloads with the overall impact dropping to zero as the workload shifts to pure write. * Maximum metadata workload decreased 30%.
-Performance impact of krb5i:
+**Performance impact of krb5i:**
* Low concurrency (r/w): * Sequential latency increased 0.5 ms.
Performance impact of krb5i:
* Maximum random I/O decreased by 50% for pure read workloads with the overall impact decreasing to 25% as the workload shifts to pure write. * Maximum metadata workload decreased 30%.
-Performance impact of krb5p:
+**Performance impact of krb5p:**
* Low concurrency (r/w): * Sequential latency increased 0.8 ms.
azure-resource-manager Concepts Built In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/concepts-built-in-policy.md
Title: Deploy associations for managed application using policy
-description: Learn about deploying associations for a managed application using Azure Policy service.
+ Title: Deploy associations for managed application using Azure Policy
+description: Learn about deploying associations for a managed application using Azure Policy.
Previously updated : 09/06/2019 Last updated : 06/23/2022
Azure policies can be used to deploy associations to associate resources to a ma
## Built-in policy to deploy associations
-Deploy associations for a managed application is a built-in policy that can be used to deploy association to associate a resource to a managed application. The policy accepts three parameters:
+Deploy associations for a managed application is a built-in policy that associates a resource type to a managed application. The policy deployment doesn't support nested resource types. The policy accepts three parameters:
- Managed application ID - This ID is the resource ID of the managed application to which the resources need to be associated. - Resource types to associate - These resource types are the list of resource types to be associated to the managed application. You can associate multiple resource types to a managed application using the same policy.-- Association name prefix - This string is the prefix to be added to the name of the association resource being created. The default value is "DeployedByPolicy".
+- Association name prefix - This string is the prefix to be added to the name of the association resource being created. The default value is `DeployedByPolicy`.
-The policy uses DeployIfNotExists evaluation. It runs after a Resource Provider has handled a create or update resource request of the selected resource type(s) and the evaluation has returned a success status code. After that, the association resource gets deployed using a template deployment.
+The policy uses `DeployIfNotExists` evaluation. It runs after a Resource Provider has handled a create or update resource request of the selected resource type and the evaluation has returned a success status code. After that, the association resource gets deployed using a template deployment.
For more information on associations, see [Azure Custom Providers resource onboarding](../custom-providers/concepts-resource-onboarding.md)
-## How to use the deploy associations built-in policy
+For more information, see [Deploy associations for a managed application](../../governance/policy/samples/built-in-policies.md#managed-application).
+
+## How to use the deploy associations built-in policy
### Prerequisites+ If the managed application needs permissions to the subscription to perform an action, the policy deployment of association resource wouldn't work without granting the permissions. ### Policy assignment
-To use the built-in policy, create a policy assignment and assign the Deploy associations for a managed application policy. Once the policy has been assigned successfully,
+
+To use the built-in policy, create a policy assignment and assign the Deploy associations for a managed application policy. Once the policy has been assigned successfully,
the policy will identify non-compliant resources and deploy association for those resources.
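The following is a hedged Azure CLI sketch of such an assignment. The policy definition ID and parameter names are placeholders and may not match the built-in definition exactly; check the definition before using them. A DeployIfNotExists assignment also needs a managed identity with permission to deploy the association resources.

```azurecli
# Placeholder definition ID and parameter names; verify against the built-in policy definition.
az policy assignment create \
  --name "deploy-managed-app-associations" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg-name>" \
  --policy "<built-in-policy-definition-id>" \
  --params '{
    "managedApplicationId": { "value": "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Solutions/applications/<app-name>" },
    "resourceTypesToAssociate": { "value": [ "Microsoft.Compute/virtualMachines" ] },
    "associationNamePrefix": { "value": "DeployedByPolicy" }
  }' \
  --location westus2 \
  --mi-system-assigned
```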
-![Assign built-in policy](media/concepts-built-in-policy/assign-builtin-policy-managedapp.png)
## Getting help
-If you have questions about Azure Custom Resource Providers development, try asking them on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-custom-providers). A similar question might have already been answered, so check first before posting. Add the tag ```azure-custom-providers``` to get a fast response!
+If you have questions about Azure Custom Resource Providers development, try asking them on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-custom-providers). A similar question might have already been answered, so check first before posting. Use the tag `azure-custom-providers`.
## Next steps
-In this article, you learnt about using built-in policy to deploy associations. See these articles to learn more:
+In this article, you learned about using built-in policy to deploy associations. See these articles to learn more:
- [Concepts: Azure Custom Providers resource onboarding](../custom-providers/concepts-resource-onboarding.md) - [Tutorial: Resource onboarding with custom providers](../custom-providers/tutorial-resource-onboarding.md)
azure-resource-manager Create Private Link Access Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-commands.md
+
+ Title: Manage resources through private link
+description: Restrict management access for resources to a private link
+ Last updated : 06/16/2022++
+# Use APIs to create a private link for managing Azure resources
+
+This article explains how you can use [Azure Private Link](../../private-link/index.yml) to restrict access for managing resources in your subscriptions.
++
+## Create resource management private link
+
+To create a resource management private link, send the following request:
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az resourcemanagement private-link create --location WestUS --resource-group PrivateLinkTestRG --name NewRMPL --public-network-access enabled
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ New-AzResourceManagementPrivateLink -ResourceGroupName PrivateLinkTestRG -Name NewRMPL
+ ```
+
+# [REST](#tab/REST)
+ REST call
+ ```http
+ PUT
+ https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
+ ```
+
+ In the request body, include the location you want for the resource:
+
+ ```json
+ {
+ "location":"{region}"
+ }
+ ```
+
+ The operation returns:
+
+ ```json
+ {
+ "id": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{name}",
+ "location": "{region}",
+ "name": "{rmplName}",
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "resourceGroup": "{rgName}",
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks"
+ }
+ ```
+
++
+Note the ID that is returned for the new resource management private link. You'll use it for creating the private link association.
+
+## Create private link association
+The resource name of a private link association must be a GUID, and disabling the `publicNetworkAccess` field isn't yet supported.
+
+To create the private link association, use:
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az private-link association create --management-group-id fc096d27-0434-4460-a3ea-110df0422a2d --name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 --privatelink "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/PrivateLinkTestRG/providers/Microsoft.Authorization/resourceManagementPrivateLinks/newRMPL"
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ New-AzPrivateLinkAssociation -ManagementGroupId fc096d27-0434-4460-a3ea-110df0422a2d -Name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 -PrivateLink "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/PrivateLinkTestRG/providers/Microsoft.Authorization/resourceManagementPrivateLinks/newRMPL" -PublicNetworkAccess enabled | fl
+ ```
+
+# [REST](#tab/REST)
+ REST call
+
+ ```http
+ PUT
+ https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupId}/providers/Microsoft.Authorization/privateLinkAssociations/{GUID}?api-version=2020-05-01
+ ```
+
+ In the request body, include:
+
+ ```json
+ {
+ "properties": {
+ "privateLink": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}",
+ "publicNetworkAccess": "enabled"
+ }
+ }
+ ```
+
+ The operation returns:
+
+ ```json
+ {
+ "id": {plaResourceId},
+ "name": {plaName},
+ "properties": {
+ "privateLink": {rmplResourceId},
+ "publicNetworkAccess": "Enabled",
+ "tenantId": "{tenantId}",
+ "scope": "/providers/Microsoft.Management/managementGroups/{managementGroupId}"
+ },
+ "type": "Microsoft.Authorization/privateLinkAssociations"
+ }
+ ```
++
+## Add private endpoint
+
+This article assumes you already have a virtual network. In the subnet that will be used for the private endpoint, you must turn off private endpoint network policies. If you haven't turned off private endpoint network policies, see [Disable network policies for private endpoints](../../private-link/disable-private-endpoint-network-policy.md).
+
+To create a private endpoint, see Private Endpoint documentation for creating via [Portal](../../private-link/create-private-endpoint-portal.md), [PowerShell](../../private-link/create-private-endpoint-powershell.md), [CLI](../../private-link/create-private-endpoint-cli.md), [Bicep](../../private-link/create-private-endpoint-bicep.md), or [template](../../private-link/create-private-endpoint-template.md).
+
+In the request body, set the `privateServiceLinkId` to the ID from your resource management private link. The `groupIds` must contain `ResourceManagement`. The location of the private endpoint must be the same as the location of the subnet.
+
+```json
+{
+ "location": "westus2",
+ "properties": {
+ "privateLinkServiceConnections": [
+ {
+ "name": "{connection-name}",
+ "properties": {
+ "privateLinkServiceId": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{name}",
+ "groupIds": [
+ "ResourceManagement"
+ ]
+ }
+ }
+ ],
+ "subnet": {
+ "id": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Network/virtualNetworks/{vnet-name}/subnets/{subnet-name}"
+ }
+ }
+}
+```
+
+The next step varies depending on whether you're using automatic or manual approval. For more information about approval, see [Access to a private link resource using approval workflow](../../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow).
+
+The response includes approval state.
+
+```json
+"privateLinkServiceConnectionState": {
+ "actionsRequired": "None",
+ "description": "",
+ "status": "Approved"
+},
+```
+
+If your request is automatically approved, you can continue to the next section. If your request requires manual approval, wait for the network admin to approve your private endpoint connection.
+
+## Next steps
+
+To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
azure-resource-manager Create Private Link Access Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-rest.md
- Title: Manage resources through private link
-description: Restrict management access for resource to private link
- Previously updated : 04/26/2022--
-# Use REST API to create private link for managing Azure resources (preview)
-
-This article explains how you can use [Azure Private Link](../../private-link/index.yml) to restrict access for managing resources in your subscriptions.
--
-## Create resource management private link
-
-To create resource management private link, send the following request:
-
-```http
-PUT
-https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
-```
-
-In the request body, include the location you want for the resource:
-
-```json
-{
- "location":"{region}"
-}
-```
-
-The operation returns:
-
-```json
-{
- "id": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{name}",
- "location": "{region}",
- "name": "{rmplName}",
- "properties": {
- "privateEndpointConnections": []
- },
- "resourceGroup": "{rgName}",
- "type": "Microsoft.Authorization/resourceManagementPrivateLinks"
-}
-```
-
-Note the ID that is returned for the new resource management private link. You'll use it for creating the private link association.
-
-## Create private link association
-
-To create the private link association, use:
-
-```http
-PUT
-https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupId}/providers/Microsoft.Authorization/privateLinkAssociations/{GUID}?api-version=2020-05-01
-```
-
-In the request body, include:
-
-```json
-{
- "properties": {
- "privateLink": "/subscriptions/{subscription-id}/resourceGroups/{rg-name}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}",
- "publicNetworkAccess": "enabled"
- }
-}
-```
-
-The operation returns:
-
-```json
-{
- "id": {plaResourceId},
- "name": {plaName},
- "properties": {
- "privateLink": {rmplResourceId},
- "publicNetworkAccess": "Enabled",
- "tenantId": "{tenantId}",
- "scope": "/providers/Microsoft.Management/managementGroups/{managementGroupId}"
- },
- "type": "Microsoft.Authorization/privateLinkAssociations"
-}
-```
-
-## Add private endpoint
-
-This article assumes you already have a virtual network. In the subnet that will be used for the private endpoint, you must turn off private endpoint network policies. If you haven't turned off private endpoint network policies, see [Disable network policies for private endpoints](../../private-link/disable-private-endpoint-network-policy.md).
-
-To create a private endpoint, use the following operation:
-
-```http
-PUT
-https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/privateEndpoints/{privateEndpointName}?api-version=2020-11-01
-```
-
-In the request body, set the `privateServiceLinkId` to the ID from your resource management private link. The `groupIds` must contain `ResourceManagement`. The location of the private endpoint must be the same as the location of the subnet.
-
-```json
-{
- "location": "westus2",
- "properties": {
- "privateLinkServiceConnections": [
- {
- "name": "{connection-name}",
- "properties": {
- "privateLinkServiceId": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{name}",
- "groupIds": [
- "ResourceManagement"
- ]
- }
- }
- ],
- "subnet": {
- "id": "/subscriptions/{subID}/resourceGroups/{rgName}/providers/Microsoft.Network/virtualNetworks/{vnet-name}/subnets/{subnet-name}"
- }
- }
-}
-```
-
-The next step varies depending whether you're using automatic or manual approval. For more information about approval, see [Access to a private link resource using approval workflow](../../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow).
-
-The response includes approval state.
-
-```json
-"privateLinkServiceConnectionState": {
- "actionsRequired": "None",
- "description": "",
- "status": "Approved"
-},
-```
-
-If your request is automatically approved, you can continue to the next section. If your request requires manual approval, wait for the network admin to approve your private endpoint connection.
-
-## Next steps
-
-To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
azure-resource-manager Manage Private Link Access Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-private-link-access-commands.md
+
+ Title: Manage resource management private links
+description: Use APIs to manage existing resource management private links
+ Last updated : 06/16/2022++
+# Manage resource management private links
+
+This article explains how to work with existing resource management private links. It shows API operations for getting and deleting existing resources.
+
+If you need to create a resource management private link, see [Use portal to create private link for managing Azure resources](create-private-link-access-portal.md) or [Use APIs to create private link for managing Azure resources](create-private-link-access-commands.md).
+
+## Resource management private links
+
+To **get a specific** resource management private link, send the following request:
+
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az resourcemanagement private-link show --resource-group PrivateLinkTestRG --name NewRMPL
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ Get-AzResourceManagementPrivateLink -ResourceGroupName PrivateLinkTestRG -Name NewRMPL
+ ```
+
+# [REST](#tab/REST)
+ REST call
+ ```http
+ GET https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
+ ```
+
+The operation returns:
+
+```json
+{
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "id": {rmplResourceId},
+ "name": {rmplName},
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
+ "location": {region}
+}
+```
+++
+To **get all** resource management private links in a subscription, use:
+# [Azure CLI](#tab/azure-cli)
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az resourcemanagement private-link list
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ Get-AzResourceManagementPrivateLink
+ ```
+
+# [REST](#tab/REST)
+ REST call
+ ```http
+ GET
+ https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.Authorization/resourceManagementPrivateLinks?api-version=2020-05-01
+ ```
+
+ The operation returns:
+
+ ```json
+ [
+ {
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "id": {rmplResourceId},
+ "name": {rmplName},
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
+ "location": {region}
+ },
+ {
+ "properties": {
+ "privateEndpointConnections": []
+ },
+ "id": {rmplResourceId},
+ "name": {rmplName},
+ "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
+ "location": {region}
+ }
+ ]
+ ```
+++
+To **delete a specific** resource management private link, use:
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az resourcemanagement private-link delete --resource-group PrivateLinkTestRG --name NewRMPL
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ Remove-AzResourceManagementPrivateLink -ResourceGroupName PrivateLinkTestRG -Name NewRMPL
+ ```
+
+# [REST](#tab/REST)
+ REST call
+ ```http
+ DELETE
+ https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
+ ```
+
+ The operation returns: `Status 200 OK`.
+++
+## Private link association
+
+To **get a specific** private link association for a management group, use:
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az private-link association show --management-group-id fc096d27-0434-4460-a3ea-110df0422a2d --name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ Get-AzPrivateLinkAssociation -ManagementGroupId fc096d27-0434-4460-a3ea-110df0422a2d -Name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4 | fl
+ ```
+
+# [REST](#tab/REST)
+ REST call
+ ```http
+ GET
+ https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations?api-version=2020-05-01
+ ```
+
+ The operation returns:
+
+ ```json
+ {
+ "value": [
+ {
+ "properties": {
+ "privateLink": {rmplResourceID},
+ "tenantId": {tenantId},
+ "scope": "/providers/Microsoft.Management/managementGroups/{managementGroupId}"
+ },
+ "id": {plaResourceId},
+ "type": "Microsoft.Authorization/privateLinkAssociations",
+ "name": {plaName}
+ }
+ ]
+ }
+ ```
+++
+To **delete** a private link association, use:
+# [Azure CLI](#tab/azure-cli)
+ ### Example
+ ```azurecli
+ # Login first with az login if not using Cloud Shell
+ az private-link association delete --management-group-id 24f15700-370c-45bc-86a7-aee1b0c4eb8a --name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ### Example
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+ Remove-AzPrivateLinkAssociation -ManagementGroupId 24f15700-370c-45bc-86a7-aee1b0c4eb8a -Name 1d7942d1-288b-48de-8d0f-2d2aa8e03ad4
+ ```
+
+# [REST](#tab/REST)
+ REST call
+
+ ```http
+ DELETE
+ https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations/{plaID}?api-version=2020-05-01
+ ```
+
+The operation returns: `Status 200 OK`.
++++
+## Next steps
+
+* To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
+* To manage your private endpoints, see [Manage Private Endpoints](../../private-link/manage-private-endpoint.md).
+* To create a resource management private link, see [Use portal to create private link for managing Azure resources](create-private-link-access-portal.md) or [Use REST API to create private link for managing Azure resources](create-private-link-access-rest.md).
azure-resource-manager Manage Private Link Access Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-private-link-access-rest.md
- Title: Manage resource management private links
-description: Use REST API to manage existing resource management private links
- Previously updated : 07/29/2021--
-# Manage resource management private links with REST API (preview)
-
-This article explains how you to work with existing resource management private links. It shows REST API operations for getting and deleting existing resources.
-
-If you need to create a resource management private link, see [Use portal to create private link for managing Azure resources](create-private-link-access-portal.md) or [Use REST API to create private link for managing Azure resources](create-private-link-access-rest.md).
-
-## Resource management private links
-
-To **get a specific** resource management private link, send the following request:
-
-```http
-GET
-https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
-```
-
-The operation returns:
-
-```json
-{
- "properties": {
- "privateEndpointConnections": []
- },
- "id": {rmplResourceId},
- "name": {rmplName},
- "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
- "location": {region}
-}
-```
-
-To **get all** resource management private links in a subscription, use:
-
-```http
-GET
-https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.Authorization/resourceManagementPrivateLinks?api-version=2020-05-01
-```
-
-The operation returns:
-
-```json
-[
- {
- "properties": {
- "privateEndpointConnections": []
- },
- "id": {rmplResourceId},
- "name": {rmplName},
- "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
- "location": {region}
- },
- {
- "properties": {
- "privateEndpointConnections": []
- },
- "id": {rmplResourceId},
- "name": {rmplName},
- "type": "Microsoft.Authorization/resourceManagementPrivateLinks",
- "location": {region}
- }
-]
-```
-
-To **delete a specific** resource management private link, use:
-
-```http
-DELETE
-https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/resourceManagementPrivateLinks/{rmplName}?api-version=2020-05-01
-```
-
-The operation returns: `Status 200 OK`.
-
-## Private link association
-
-To **get a specific** private link association for a management group, use:
-
-```http
-GET
-https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations?api-version=2020-05-01
-```
-
-The operation returns:
-
-```json
-{
- "value": [
- {
- "properties": {
- "privateLink": {rmplResourceID},
- "tenantId": {tenantId},
- "scope": "/providers/Microsoft.Management/managementGroups/{managementGroupId}"
- },
- "id": {plaResourceId},
- "type": "Microsoft.Authorization/privateLinkAssociations",
- "name": {plaName}
- }
- ]
-}
-```
-
-To **delete** a private link association, use:
-
-```http
-DELETE
-https://management.azure.com/providers/Microsoft.Management/managementGroups/{managementGroupID}/providers/Microsoft.Authorization/privateLinkAssociations/{plaID}?api-version=2020-05-01
-```
-
-The operation returns: `Status 200 OK`.
-
-## Private endpoints
-
-To **get all** private endpoints in a subscription, use:
-
-```http
-GET
-https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Network/privateEndpoints?api-version=2020-04-01
-```
-
-The operation returns:
-
-```json
-{
- "value": [
- {
- "name": {privateEndpointName},
- "id": {privateEndpointResourceId},
- "etag": {etag},
- "type": "Microsoft.Network/privateEndpoints",
- "location": {region},
- "properties": {
- "provisioningState": "Updating",
- "resourceGuid": {GUID},
- "privateLinkServiceConnections": [
- {
- "name": {connectionName},
- "id": {connectionResourceId},
- "etag": {etag},
- "properties": {
- "provisioningState": "Succeeded",
- "privateLinkServiceId": {rmplResourceId},
- "groupIds": [
- "ResourceManagement"
- ],
- "privateLinkServiceConnectionState": {
- "status": "Approved",
- "description": "",
- "actionsRequired": "None"
- }
- },
- "type": "Microsoft.Network/privateEndpoints/privateLinkServiceConnections"
- }
- ],
- "manualPrivateLinkServiceConnections": [],
- "subnet": {
- "id": {subnetResourceId}
- },
- "networkInterfaces": [
- {
- "id": {networkInterfaceResourceId}
- }
- ],
- "customDnsConfigs": [
- {
- "fqdn": "management.azure.com",
- "ipAddresses": [
- "10.0.0.4"
- ]
- }
- ]
- }
- }
- ]
-}
-```
-
-## Next steps
-
-* To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
-* To create a resource management private links, see [Use portal to create private link for managing Azure resources](create-private-link-access-portal.md) or [Use REST API to create private link for managing Azure resources](create-private-link-access-rest.md).
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Azure Video Indexer is now part of [Network Service Tags](network-security.md).
### Celebrity recognition toggle
-You can now enable or disable the celebrity recognition model on the account level (on classic account only). To turn on or off the model, go to the account settings > and toggle on/off the model. Once you disable the model, Video Indexer insights will not include the output of celebrity model and will not run the celebrity model pipeline.
+You can now enable or disable the celebrity recognition model at the account level (classic accounts only). To turn the model on or off, go to **Model customization** and toggle the model. Once you disable the model, Video Indexer insights won't include the output of the celebrity model and won't run the celebrity model pipeline.
+ ### Azure Video Indexer repository name
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 6/21/2022 Last updated : 6/24/2022
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 22-06 | [5014692] | Latest Cumulative Update(LCU) | 6.44 | Jun 14, 2022 | | Rel 22-06 | [5014678] | Latest Cumulative Update(LCU) | 7.12 | Jun 14, 2022 | | Rel 22-06 | [5014702] | Latest Cumulative Update(LCU) | 5.68 | Jun 14, 2022 |
+| Rel 22-06 | [5011486] | IE Cumulative Updates | 2.124, 3.111, 4.104 | Mar 8, 2022 |
| Rel 22-06 | [5013641] | .NET Framework 3.5 and 4.7.2 Cumulative Update | 6.45 | May 10, 2022 | | Rel 22-06 | [5013630] | .NET Framework 4.8 Security and Quality Rollup | 7.13 | May 10, 2022 | | Rel 22-06 | [5014026] | Servicing Stack update | 5.69 | May 10, 2022 | | Rel 22-06 | [4494175] | Microcode | 5.69 | Sep 1, 2020 | | Rel 22-06 | [4494174] | Microcode | 6.45 | Sep 1, 2020 |
+| Rel 22-06 | [5013637] | .NET Framework 3.5 Security and Quality Rollup | 2.124 | Jun 14, 2022 |
+| Rel 22-06 | [5013644] | .NET Framework 4.6.2 Security and Quality Rollup | 2.124 | May 10, 2022 |
+| Rel 22-06 | [5013638] | .NET Framework 3.5 Security and Quality Rollup | 4.104 | Jun 14, 2022 |
+| Rel 22-06 | [5013643] | .NET Framework 4.6.2 Security and Quality Rollup | 4.104 | May 10, 2022 |
+| Rel 22-06 | [5013635] | .NET Framework 3.5 Security and Quality Rollup | 3.111 | Jun 14, 2022 |
| Rel 22-06 | [5013642] | .NET Framework 4.6.2 Security and Quality Rollup | 3.111 | May 10, 2022 |
+| Rel 22-06 | [5014748] | Monthly Rollup | 2.124 | Jun 14, 2022 |
+| Rel 22-06 | [5014747] | Monthly Rollup | 3.111 | Jun 14, 2022 |
+| Rel 22-06 | [5014738] | Monthly Rollup | 4.104 | Jun 14, 2022 |
+| Rel 22-06 | [5014027] | Servicing Stack update | 3.111 | May 10, 2022 |
+| Rel 22-06 | [5014025] | Servicing Stack update | 4.104 | May 10, 2022 |
+| Rel 22-06 | [4578013] | Standalone Security Update | 4.104 | Aug 19, 2020 |
+| Rel 22-06 | [5011649] | Servicing Stack update | 2.124 | Mar 8, 2022 |
[5014692]: https://support.microsoft.com/kb/5014692 [5014678]: https://support.microsoft.com/kb/5014678
The following tables show the Microsoft Security Response Center (MSRC) updates
[5014026]: https://support.microsoft.com/kb/5014026 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174
+[5011486]: https://support.microsoft.com/kb/5011486
+[5013637]: https://support.microsoft.com/kb/5013637
+[5013644]: https://support.microsoft.com/kb/5013644
+[5013638]: https://support.microsoft.com/kb/5013638
+[5013643]: https://support.microsoft.com/kb/5013643
+[5013635]: https://support.microsoft.com/kb/5013635
+[5013642]: https://support.microsoft.com/kb/5013642
+[5014748]: https://support.microsoft.com/kb/5014748
+[5014747]: https://support.microsoft.com/kb/5014747
+[5014738]: https://support.microsoft.com/kb/5014738
+[5014027]: https://support.microsoft.com/kb/5014027
+[5014025]: https://support.microsoft.com/kb/5014025
+[4578013]: https://support.microsoft.com/kb/4578013
+[5011649]: https://support.microsoft.com/kb/5011649
## May 2022 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 6/21/2022 Last updated : 6/22/2022 # Azure Guest OS releases and SDK compatibility matrix
The September Guest OS has released.
## Family 4 releases **Windows Server 2012 R2**
-.NET Framework installed: 3.5, 4.5.1, 4.5.2
+.NET Framework installed: 3.5, 4.6.2
| Configuration string | Release date | Disable date | | | | |
The September Guest OS has released.
## Family 3 releases **Windows Server 2012**
-.NET Framework installed: 3.5, 4.5
+.NET Framework installed: 3.5, 4.6.2
| Configuration string | Release date | Disable date | | | | |
The September Guest OS has released.
## Family 2 releases **Windows Server 2008 R2 SP1**
-.NET Framework installed: 3.5 (includes 2.0 and 3.0), 4.5
+.NET Framework installed: 3.5 (includes 2.0 and 3.0), 4.6.2
| Configuration string | Release date | Disable date | | | | |
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/regions.md
The Speech service is available in these regions for speech-to-text, pronunciati
If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can use the [Speech-to-text REST API v3.0](rest-speech-to-text.md) to [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region. > [!TIP]
-> For pronunciation assessment, `en-US` and `en-GB` are available in all regions listed above, `zh-CN` is available in East Asia and Southeast Asia regions, `es-ES` and `fr-FR` are available in West Europe region, and `en-AU` is available in Australia East region.
+> For pronunciation assessment, `en-US` and `en-GB` are available in all regions listed above, `zh-CN` is available in East Asia and Southeast Asia regions, `de-DE`, `es-ES`, and `fr-FR` are available in West Europe region, and `en-AU` is available in Australia East region.
### Intent recognition
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
Title: Overview about built-in connectors in Azure Logic Apps
-description: Learn about built-in connectors that run natively to create automated integration workflows in Azure Logic Apps.
+ Title: Built-in connector overview
+description: Learn about built-in connectors that run natively in Azure Logic Apps.
ms.suite: integration
This article provides a general overview about built-in connectors in Consumptio
## Built-in connectors in Consumption versus Standard
-The following table lists the current and expanding galleries of built-in connectors available for Consumption versus Standard logic app workflows. An asterisk (**\***) marks [service provider-based built-in connectors](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation).
+The following table lists the current and expanding galleries of built-in connectors available for Consumption versus Standard logic app workflows. For Standard workflows, an asterisk (**\***) marks [built-in connectors based on the *service provider* model](#service-provider-interface-implementation), which is described in more detail later.
| Consumption | Standard | |-|-| | Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | Azure Blob* <br>Azure Cosmos DB* <br>Azure Functions <br>Azure Queue* <br>Azure Table Storage* <br>Control <br>Data Operations <br>Date Time <br>DB2* <br>Event Hubs* <br>Flat File <br>FTP* <br>HTTP <br>IBM Host File* <br>Inline Code <br>Liquid operations <br>MQ* <br>Request <br>Schedule <br>Service Bus* <br>SFTP* <br>SQL Server* <br>Variables <br>Workflow operations <br>XML operations | |||
+<a name="service-provider-interface-implementation"></a>
+
+## Service provider-based built-in connectors
+
+In Standard logic app workflows, a built-in connector that has the following attributes is informally known as a *service provider*:
+
+* Is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md).
+
+* Provides access from a Standard logic app workflow to a service, such as Azure Blob Storage, Azure Service Bus, Azure Event Hubs, SFTP, and SQL Server.
+
+ Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity.
+
+* Runs in the same process as the redesigned Azure Logic Apps runtime.
+
+These service provider-based built-in connectors are available alongside their [managed connector versions](managed.md).
+
+In contrast, a built-in connector that's *not a service provider* has the following attributes:
+
+* Isn't based on the Azure Functions extensibility model.
+
+* Is directly implemented as a job within the Azure Logic Apps runtime, such as Schedule, HTTP, Request, and XML operations.
+ <a name="custom-built-in"></a> ## Custom built-in connectors
-For Standard logic apps, if a built-connector isn't available for your scenario, you can create your own built-in connector. You can use the same [*service provider interface implementation*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation) that's used by service provider-based built-in connectors, such as SQL Server, Service Bus, Blob Storage, Event Hubs, and Blob Storage. This interface implementation is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and provides the capability for you to create custom built-in connectors that anyone can use in Standard logic apps.
+For Standard logic apps, you can create your own built-in connector with the same [built-in connector extensibility model](../logic-apps/custom-connector-overview.md#built-in-connector-extensibility-model) that's used by service provider-based built-in connectors, such as Azure Blob, Azure Event Hubs, Azure Service Bus, SQL Server, and more. This interface implementation is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and provides the capability for you to create custom built-in connectors that anyone can use in Standard logic apps.
+
+For Consumption logic apps, you can't create your own built-in connectors, but you can create your own managed connectors.
For more information, review the following documentation:
-* [Custom connectors for Standard logic apps](../logic-apps/custom-connector-overview.md#custom-connector-standard)
+* [Custom connectors in Azure Logic Apps](../logic-apps/custom-connector-overview.md#custom-connector-standard)
* [Create custom built-in connectors for Standard logic apps](../logic-apps/create-custom-built-in-connector-standard.md) <a name="general-built-in"></a>
You can use the following built-in connectors to perform general tasks, for exam
:::row-end::: :::row::: :::column:::
- [![FTP icon][ftp-icon]][ftp-doc]
+ ![FTP icon][ftp-icon]
\ \
- [**FTP**][ftp-doc]<br>(*Standard logic app only*)
+ **FTP**<br>(*Standard logic app only*)
\ \ Connect to FTP or FTPS servers you can access from the internet so that you can work with your files and folders. :::column-end::: :::column:::
- [![SFTP-SSH icon][sftp-ssh-icon]][sftp-ssh-doc]
+ ![SFTP-SSH icon][sftp-ssh-icon]
\ \
- [**SFTP-SSH**][sftp-ssh-doc]<br>(*Standard logic app only*)
+ **SFTP-SSH**<br>(*Standard logic app only*)
\ \ Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders.
You can use the following built-in connectors to perform general tasks, for exam
<a name="service-built-in"></a>
-## Service-based built-in connectors
+## Built-in connectors for specific services and systems
-Connectors for some services provide both built-in connectors and managed connectors, which might differ across these versions.
+You can use the following built-in connectors to access specific services and systems. In Standard logic app workflows, some of these built-in connectors are also informally known as *service providers*, which can differ from their managed connector counterparts in some ways.
:::row::: :::column:::
Connectors for some services provide both built-in connectors and managed connec
When Swagger is included, the triggers and actions defined by these apps appear like any other first-class triggers and actions in Azure Logic Apps. :::column-end::: :::column:::
- [![Azure Blob icon icon][azure-blob-storage-icon]][azure-app-services-doc]
+ ![Azure Blob icon][azure-blob-storage-icon]
\ \
- [**Azure Blob**][azure-blob-storage-doc]<br>(*Standard logic app only*)
+ **Azure Blob**<br>(*Standard logic app only*)
\ \ Connect to your Azure Blob Storage account so you can create and manage blob content. :::column-end::: :::column:::
- [![Azure Cosmos DB icon][azure-cosmos-db-icon]][azure-cosmos-db-doc]
+ ![Azure Cosmos DB icon][azure-cosmos-db-icon]
\ \
- [**Azure Cosmos DB**][azure-cosmos-db-doc]<br>(*Standard logic app only*)
+ **Azure Cosmos DB**<br>(*Standard logic app only*)
\ \ Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents. :::column-end::: :::column:::
- [![Azure Functions icon][azure-functions-icon]][azure-functions-doc]
+ ![Azure Event Hubs icon][azure-event-hubs-icon]
\ \
- [**Azure Functions**][azure-functions-doc]
+ **Azure Event Hubs**<br>(*Standard logic app only*)
\ \
- Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow.
+ Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
:::column-end::: :::row-end::: :::row::: :::column:::
- [![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc]
+ [![Azure Functions icon][azure-functions-icon]][azure-functions-doc]
\ \
- [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption logic app*) <br><br>-or-<br><br>[**Workflow operations**][nested-logic-app-doc]<br>(*Standard logic app*)
+ [**Azure Functions**][azure-functions-doc]
\ \
- Call other workflows that start with the Request trigger named **When a HTTP request is received**.
+ Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow.
:::column-end::: :::column:::
- [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc]
+ [![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc]
\ \
- [**Azure Service Bus**][azure-service-bus-doc]<br>(*Standard logic app only*)
+ [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption logic app*) <br><br>-or-<br><br>**Workflow operations**<br>(*Standard logic app*)
\ \
- Manage asynchronous messages, queues, sessions, topics, and topic subscriptions.
+ Call other workflows that start with the Request trigger named **When a HTTP request is received**.
:::column-end::: :::column:::
- [![Azure Table Storage icon][azure-table-storage-icon]][azure-table-storage-doc]
+ ![Azure Service Bus icon][azure-service-bus-icon]
\ \
- [**Azure Table Storage**][azure-table-storage-doc]<br>(*Standard logic app only*)
+ **Azure Service Bus**<br>(*Standard logic app only*)
\ \
- Connect to your Azure Storage account so that you can create, update, query, and manage tables.
+ Manage asynchronous messages, queues, sessions, topics, and topic subscriptions.
:::column-end::: :::column:::
- [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
+ ![Azure Table Storage icon][azure-table-storage-icon]
\ \
- [**Event Hubs**][azure-event-hubs-doc]<br>(*Standard logic app only*)
+ **Azure Table Storage**<br>(*Standard logic app only*)
\ \
- Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
+ Connect to your Azure Storage account so that you can create, update, query, and manage tables.
:::column-end::: :::column:::
- [![IBM DB2 icon][ibm-db2-icon]][ibm-db2-doc]
+ ![IBM DB2 icon][ibm-db2-icon]
\ \
- [**DB2**][ibm-db2-doc]<br>(*Standard logic app only*)
+ **DB2**<br>(*Standard logic app only*)
\ \ Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more.
Connectors for some services provide both built-in connectors and managed connec
Connect to IBM Host File and generate or parse contents. :::column-end::: :::column:::
- [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]
+ ![IBM MQ icon][ibm-mq-icon]
\ \
- [**MQ**][ibm-mq-doc]<br>(*Standard logic app only*)
+ **IBM MQ**<br>(*Standard logic app only*)
\ \ Connect to IBM MQ on-premises or in Azure to send and receive messages.
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
When allocating resources, the total amount of CPUs and memory requested for all
## Multiple containers
-You can define multiple containers in a single container app. The containers in a container app share hard disk and network resources and experience the same [application lifecycle](application-lifecycle-management.md).
+You can define multiple containers in a single container app to implement the [sidecar pattern](/azure/architecture/patterns/sidecar). The containers in a container app share hard disk and network resources and experience the same [application lifecycle](./application-lifecycle-management.md).
-To run multiple containers in a container app, add more than one container in the `containers` array of the container app template.
+Examples of sidecar containers include:
-Reasons to run containers together in a container app include:
+- An agent that reads logs from the primary app container on a [shared volume](storage-mounts.md?pivots=aca-cli#temporary-storage) and forwards them to a logging service.
+- A background process that refreshes a cache used by the primary app container in a shared volume.
-- Use a container as a sidecar to your primary app.-- Share disk space and the same virtual network.-- Share scale rules among containers.-- Group multiple containers that need to always run together.-- Enable direct communication among containers.
+> [!NOTE]
+> Running multiple containers in a single container app is an advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled. In most situations where you want to run multiple containers, such as when implementing a microservice architecture, deploy each service as a separate container app.
+
+To run multiple containers in a container app, add more than one container in the `containers` array of the container app template.
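For illustration, here's a hedged Azure CLI sketch of deploying a container app whose template defines more than one container. The app, resource group, and environment names, and the YAML file, are assumptions for this sketch; the YAML file is expected to list each container under the template's `containers` array.

```azurecli
# Hypothetical sketch: "app-with-sidecar.yaml" is assumed to define the container
# app template with two containers (a primary app plus a log-forwarding sidecar).
az containerapp create \
  --name my-container-app \
  --resource-group my-resource-group \
  --environment my-environment \
  --yaml app-with-sidecar.yaml
```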
## Container registries
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following example shows you how to create a Container Apps environment in an
<!-- Create --> [!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]
+> [!NOTE]
+> Network address prefixes require a CIDR range of `/23`.
+ 7. Select the **Networking** tab to create a VNET. 8. Select **Yes** next to *Use your own virtual network*. 9. Next to the *Virtual network* box, select the **Create new** link and enter the following value.
$VNET_NAME="my-custom-vnet"
Now create an Azure virtual network to associate with the Container Apps environment. The virtual network must have a subnet available for the environment deployment. > [!NOTE]
-> You can use an existing virtual network, but a dedicated subnet is required for use with Container Apps.
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` is required for use with Container Apps.
# [Bash](#tab/bash)
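For orientation, a minimal sketch of the Bash flow might look like the following. The resource group, environment name, location, and address ranges are placeholder assumptions; the key requirement is the dedicated `/23` subnet.

```azurecli
# Hypothetical sketch: create a VNet with a dedicated /23 subnet, then create the
# Container Apps environment in that subnet. Names and ranges are examples only.
az network vnet create \
  --resource-group my-resource-group \
  --name my-custom-vnet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name infrastructure-subnet \
  --subnet-prefixes 10.0.0.0/23

SUBNET_ID=$(az network vnet subnet show \
  --resource-group my-resource-group \
  --vnet-name my-custom-vnet \
  --name infrastructure-subnet \
  --query id --output tsv)

az containerapp env create \
  --resource-group my-resource-group \
  --name my-environment \
  --location eastus \
  --infrastructure-subnet-resource-id $SUBNET_ID
```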
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Synapse Link isn't recommended if you're looking for traditional data warehouse
## Limitations
-* Azure Synapse Link for Azure Cosmos DB is supported for SQL API and Azure Cosmos DB API for MongoDB. It isn't supported for Gremlin API, Cassandra API, and Table API.
+* Azure Synapse Link for Azure Cosmos DB is not supported for Gremlin API, Cassandra API, or Table API. It is supported for SQL API and API for MongoDB.
* Accessing the Azure Cosmos DB analytics store with Azure Synapse Dedicated SQL Pool currently isn't supported.
-* Enabling Synapse Link on existing Cosmos DB containers is only supported for SQL API accounts. Synapse Link can be enabled on new containers for both SQL API and MongoDB API accounts.
-
-* Backup and restore of your data in analytical store isn't supported at this time. You can recreate your analytical store data in some scenarios as below:
- * Azure Synapse Link and periodic backup mode can coexist in the same database account. In this mode, your transactional store data will be automatically backed up. However, analytical store data isn't included in backups and restores. If you use `transactional TTL` equal or bigger than your `analytical TTL` on your container, you can
- fully recreate your analytical store data by enabling analytical store on the restored container. Please note, at present, you can only recreate analytical store on your
- restored containers for SQL API.
- * Synapse Link and continuous backup mode (point=in-time restore) coexistence in the same database account isn't supported. If you enable continuous backup mode, you can't
- turn on Synapse Link, and vice versa.
-
-* Role-Based Access (RBAC) isn't supported when querying using Synapse SQL serverless pools.
+* Enabling Synapse Link on existing Cosmos DB containers is only supported for SQL API accounts. Synapse Link can be enabled on new containers for both SQL API and MongoDB API accounts.
+
+* Backup and restore:
+ * You can recreate your analytical store data in the following scenarios (see the sketch after this list). In both cases, your transactional store data is automatically backed up. If `transactional TTL` is equal to or greater than `analytical TTL` on your container, you can fully recreate your analytical store data by enabling analytical store on the restored container:
+ - Azure Synapse Link can be enabled on accounts configured with periodic backups.
+ - If continuous backup (point-in-time restore) is enabled on your account, you can now restore your analytical data. To enable Synapse Link for such accounts, please reach out to cosmosdbsynapselink@microsoft.com. This is applicable only for SQL API.
+ * Restoring analytical data isn't supported in the following scenarios, for SQL API and API for MongoDB:
+ - If you already enabled Synapse Link on your database account, you cannot enable point-in-time restore on such accounts.
+ - If `analytical TTL` is greater than `transactional TTL`, data that only exists in analytical store cannot be restored. You can continue to access full data from analytical store in the parent region.
+
+* Granular role-based access control (RBAC) isn't supported when querying from Synapse. Users who have access to your Synapse workspace and to the Azure Cosmos DB account can access all containers within that account. More granular access to the containers isn't currently supported.
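As referenced in the backup and restore limitation above, the following is a minimal Azure CLI sketch of enabling analytical store (setting the analytical TTL) on an existing SQL API container, for example after a restore. The resource group, account, database, and container names are placeholders, and `-1` keeps analytical data without expiration.

```azurecli
# Hypothetical sketch: enable analytical store on a SQL API container so Synapse
# Link can read its data; the names below are placeholders, not values from this article.
az cosmosdb sql container update \
  --resource-group my-resource-group \
  --account-name my-cosmos-account \
  --database-name my-database \
  --name my-container \
  --analytical-storage-ttl -1
```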
## Security
data-factory Format Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-parquet.md
Previously updated : 03/25/2022 Last updated : 06/22/2022
For a list of supported features for all available connectors, visit the [Connec
## Using Self-hosted Integration Runtime > [!IMPORTANT]
-> For copy empowered by Self-hosted Integration Runtime e.g. between on-premises and cloud data stores, if you are not copying Parquet files **as-is**, you need to install the **64-bit JRE 8 (Java Runtime Environment) or OpenJDK** and **Microsoft Visual C++ 2010 Redistributable Package** on your IR machine. Check the following paragraph with more details.
+> For copy empowered by the Self-hosted Integration Runtime, for example between on-premises and cloud data stores, if you are not copying Parquet files **as-is**, you need to install the **64-bit JRE 8 (Java Runtime Environment) or OpenJDK** on your IR machine. See the following paragraph for more details.
For copy running on Self-hosted IR with Parquet file serialization/deserialization, the service locates the Java runtime by firstly checking the registry *`(SOFTWARE\JavaSoft\Java Runtime Environment\{Current Version}\JavaHome)`* for JRE, if not found, secondly checking system variable *`JAVA_HOME`* for OpenJDK. - **To use JRE**: The 64-bit IR requires 64-bit JRE. You can find it from [here](https://go.microsoft.com/fwlink/?LinkId=808605).-- **To use OpenJDK**: It's supported since IR version 3.13. Package the jvm.dll with all other required assemblies of OpenJDK into Self-hosted IR machine, and set system environment variable JAVA_HOME accordingly.-- **To install Visual C++ 2010 Redistributable Package**: Visual C++ 2010 Redistributable Package is not installed with self-hosted IR installations. You can find it from [here](https://www.microsoft.com/download/details.aspx?id=26999).
+- **To use OpenJDK**: It's supported since IR version 3.13. Package the jvm.dll with all other required assemblies of OpenJDK onto the Self-hosted IR machine, set the system environment variable JAVA_HOME accordingly, and then restart the Self-hosted IR for the change to take effect immediately.
> [!TIP] > If you copy data to/from Parquet format using Self-hosted Integration Runtime and hit error saying "An error occurred when invoking java, message: **java.lang.OutOfMemoryError:Java heap space**", you can add an environment variable `_JAVA_OPTIONS` in the machine that hosts the Self-hosted IR to adjust the min/max heap size for JVM to empower such copy, then rerun the pipeline.
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
Previously updated : 06/16/2022 Last updated : 06/24/2022 # Azure Data Factory managed virtual network
Only a managed private endpoint in an approved state can send traffic to a speci
## Interactive authoring
-Interactive authoring capabilities are used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when you create or edit an integration runtime in a Data Factory managed virtual network. The back-end service preallocates the compute for interactive authoring functionalities. Otherwise, the compute is allocated every time any interactive operation is performed, which takes more time.
-
-The Time-To-Live (TTL) for interactive authoring is 60 minutes. This means it will be automatically disabled 60 minutes after the last interactive authoring operation.
+Interactive authoring capabilities are used for functionalities like test connection, browse folder list and table list, get schema, and preview data. You can enable interactive authoring when you create or edit an Azure integration runtime that's in an Azure Data Factory managed virtual network. The back-end service preallocates compute for interactive authoring functionalities. Otherwise, compute is allocated every time an interactive operation is performed, which takes more time. The time to live (TTL) for interactive authoring is 60 minutes by default, which means it's automatically disabled 60 minutes after the last interactive authoring operation. You can change the TTL value according to your needs.
:::image type="content" source="./media/managed-vnet/interactive-authoring.png" alt-text="Screenshot that shows interactive authoring.":::
-## Activity execution time using a managed virtual network
+## Time to live
+
+### Copy activity
-By design, an integration runtime in a managed virtual network takes longer queue time than a global integration runtime. One compute node isn't reserved per data factory, so warm-up is required before each activity starts. Warm-up occurs primarily on the virtual network join rather than the integration runtime.
+By default, every copy activity spins up new compute based on the configuration in the copy activity. With a managed virtual network enabled, cold compute start-up takes a few minutes, and data movement can't start until it's complete. If your pipelines contain multiple sequential copy activities, or you have many copy activities in a foreach loop that can't all run in parallel, you can enable a time to live (TTL) value in the Azure integration runtime configuration. Specifying a TTL value and the DIU numbers required for the copy activity keeps the corresponding compute alive for a certain period after the activity's execution completes. If a new copy activity starts during the TTL period, it reuses the existing compute and start-up time is greatly reduced. After the second copy activity completes, the compute again stays alive for the TTL period.
+
+> [!NOTE]
+> Reconfiguring the DIU number will not affect the current copy activity execution.
-For non-Copy activities, including pipeline activity and external activity, there's a 60-minute TTL when you trigger them the first time. Within TTL, the queue time is shorter because the node is already warmed up.
+### Pipeline and external activity
+
+Unlike the copy activity, pipeline and external activities have a default time to live (TTL) of 60 minutes. You can change the default TTL in the Azure integration runtime configuration according to your needs, but disabling the TTL isn't supported.
+
+> [!NOTE]
+> Time to live (TTL) is only applicable to managed virtual network.
-The Copy activity doesn't have TTL support yet.
> [!NOTE] > The data integration unit (DIU) measure of 2 DIU isn't supported for the Copy activity in a managed virtual network.
data-factory Data Factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-movement-security-considerations.md
The following cloud data stores require approving of IP address of the gateway m
**Answer:** We do not support this feature yet. We are actively working on it. **Question:** What are the port requirements for the gateway to work?
-**Answer:** Gateway makes HTTP-based connections to open internet. The **outbound ports 443 and 80** must be opened for gateway to make this connection. Open **Inbound Port 8050** only at the machine level (not at corporate firewall level) for Credential Manager application. If Azure SQL Database or Azure Synapse Analytics is used as source/ destination, then you need to open **1433** port as well. For more information, see [Firewall configurations and filtering IP addresses](#firewall-configurations-and-filtering-ip-address-of-gateway) section.
+**Answer:** Gateway makes HTTP-based connections to open internet. The **outbound ports 443 and 80** must be opened for gateway to make this connection. Open **inbound port 8050** only at the machine level (not at corporate firewall level) for Credential Manager application. If Azure SQL Database or Azure Synapse Analytics is used as source or destination, then you need to open **port 1433** as well. For more information, see [Firewall configurations and filtering IP addresses](#firewall-configurations-and-filtering-ip-address-of-gateway) section.
**Question:** What are certificate requirements for Gateway? **Answer:** Current gateway requires a certificate that is used by the credential manager application for securely setting data store credentials. This certificate is a self-signed certificate created and configured by the gateway setup. You can use your own TLS/SSL certificate instead. For more information, see [click-once credential manager application](#click-once-credentials-manager-app) section. ## Next steps
-For information about performance of copy activity, see [Copy activity performance and tuning guide](data-factory-copy-activity-performance.md).
+For information about performance of copy activity, see [Copy activity performance and tuning guide](data-factory-copy-activity-performance.md).
databox-online Azure Stack Edge Deploy Nvidia Deepstream Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-nvidia-deepstream-module.md
+
+ Title: Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU | Microsoft Docs
+description: Learn how to deploy the Nvidia Deepstream module on an Ubuntu virtual machine that is running on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 06/23/2022+++
+# Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU
++
+This article walks you through deploying Nvidia's DeepStream module on an Ubuntu VM running on your Azure Stack Edge device. The DeepStream module is supported only on GPU devices.
+
+## Prerequisites
+
+Before you begin, make sure you have:
+
+- Deployed an IoT Edge runtime on a GPU VM running on an Azure Stack Edge device. For detailed steps, see [Deploy IoT Edge on an Ubuntu VM on Azure Stack Edge](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md).
+
+## Get module from IoT Edge Module Marketplace
+
+1. In the [Azure portal](https://portal.azure.com), go to **Device management** > **IoT Edge**.
+1. Select the IoT Hub device that you configured while deploying the IoT Edge runtime.
+
+ ![Screenshot of the Azure portal, I o T Edge, I o T Hub device.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-select-iot-edge-device.png)
+
+1. Select **Set modules**.
+
+ ![Screenshot of the Azure portal, I o T Hub, set modules page.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-iot-hub-set-module.png)
+
+1. Select **Add** > **Marketplace Module**.
+
+ ![Screenshot of the Azure portal, Marketplace Module, Add Marketplace Module selection.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-add-iot-edge-module.png)
+
+1. Search for **NVIDIA DeepStream SDK 5.1 for x86/AMD64** and then select it.
+
+ ![Screenshot of the Azure portal, I o T Edge Module Marketplace, modules options.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-iot-edge-module-marketplace.png)
+
+1. Select **Review + Create**, and then select **Create module**.
+
+## Verify module runtime status
+
+1. Verify that the module is running.
+
+ ![Screenshot of the Azure portal, modules runtime status.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-verify-module-status.png)
+
+1. Verify that the module provides the following output in the troubleshooting page of the IoT Edge device on IoT Hub:
+
+ ![Screenshot of the Azure portal, NVIDIADeepStreamSDK log file output.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-troubleshoot-iot-edge-module.png)
+
+After a certain period of time, the module runtime will complete and quit, causing the module status to return an error. This error condition is expected behavior.
+
+![Screenshot of the Azure portal, NVIDIADeepStreamSDK module runtime status with error condition.](media/azure-stack-edge-deploy-nvidia-deepstream-module/azure-portal-create-vm-add-iot-edge-module-error.png)
databox-online Azure Stack Edge Gpu Deploy Iot Edge Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md
+
+ Title: Deploy IoT Edge runtime on Ubuntu VM on Azure Stack Edge Pro with GPU | Microsoft Docs
+description: Learn how to deploy IoT Edge runtime and run IoT Edge module on an Ubuntu virtual machine that is running on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 06/23/2022+++
+# Deploy IoT Edge on an Ubuntu VM on Azure Stack Edge
++
+This article describes how to deploy an IoT Edge runtime on an Ubuntu VM running on your Azure Stack Edge device. For new development work, use the self-serve deployment method described in this article as it uses the latest software version.
+
+## High-level flow
+
+The high-level flow is as follows:
+
+1. Create or identify the IoT Hub or [Azure IoT Hub Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md) instance.
+1. Use Azure CLI to acquire the Ubuntu 20.04 LTS VM image.
+1. Upload the Ubuntu image onto the Azure Stack Edge VM image library.
+1. Deploy the Ubuntu image as a VM using the following steps:
+ 1. Provide the name of the VM, the username, and the password. Creating another disk is optional.
+ 1. Set up the network configuration.
+ 1. Provide a prepared *cloud-init* script on the *Advanced* tab.
+
+## Prerequisites
+
+Before you begin, make sure you have:
+
+- An Azure Stack Edge device that you've activated. For detailed steps, see [Activate Azure Stack Edge Pro GPU](azure-stack-edge-gpu-deploy-activate.md).
+- Access to the latest Ubuntu 20.04 VM image, either the image from Azure Marketplace or a custom image that you're bringing:
+
+ ```$urn = Canonical:0001-com-ubuntu-server-focal:20_04-lts:20.04.202007160```
+
+ Use steps in [Search for Azure Marketplace images](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#search-for-azure-marketplace-images) to acquire the VM image.
+
+## Prepare the cloud-init script
+
+To deploy the IoT Edge runtime onto the Ubuntu VM, use a *cloud-init* script during the VM deployment.
+
+Use steps in one of the following sections:
+
+- [Prepare the cloud-init script with symmetric key provisioning](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md#use-symmetric-key-provisioning).
+- [Prepare the cloud-init script with IoT Hub DPS](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md#use-dps).
+
+### Use symmetric key provisioning
+
+To connect your device to IoT Hub without DPS, use the steps in this section to prepare a *cloud-init* script for the VM creation *Advanced* page to deploy the IoT Edge runtime and Nvidia's container runtime.
+
+1. Use an existing IoT Hub or create a new Hub. Use these steps to [create an IoT Hub](../iot-hub/iot-hub-create-through-portal.md).
+
+1. Use these steps to [register your Azure Stack Edge device in IoT Hub](../iot-edge/how-to-provision-single-device-linux-symmetric.md#register-your-device).
+
+1. Retrieve the Primary Connection String from IoT Hub for your device, and then paste it into the location below for *DeviceConnectionString*.
+
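If you prefer the Azure CLI (with the Azure IoT extension) over the portal for retrieving the connection string, a hedged sketch follows; the hub and device names are placeholders.

```azurecli
# Hypothetical sketch: retrieve the device's primary connection string to paste
# into the cloud-init script as DeviceConnectionString.
az iot hub device-identity connection-string show \
  --hub-name my-iot-hub \
  --device-id my-edge-device \
  --output tsv
```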
+**Cloud-init script for symmetric key provisioning**
+
+```azurecli
+
+#cloud-config
+
+runcmd:
+ - dcs="<DeviceConnectionString>"
+ - |
+ set -x
+ (
+
+ # Wait for docker daemon to start
+
+ while [ $(ps -ef | grep -v grep | grep docker | wc -l) -le 0 ]; do
+ sleep 3
+ done
+
+ if [ $(lspci | grep NVIDIA | wc -l) -gt 0 ]; then
+
+ #install Nvidia drivers
+
+ apt install -y ubuntu-drivers-common
+ ubuntu-drivers devices
+ ubuntu-drivers autoinstall
+
+ # Install NVIDIA Container Runtime
+
+ curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add -
+ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+ curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list
+ apt update
+ apt install -y nvidia-container-runtime
+ fi
+
+ # Restart Docker
+
+ systemctl daemon-reload
+ systemctl restart docker
+
+ # Install IoT Edge
+
+ apt install -y aziot-edge
+
+ if [ ! -z $dcs ]; then
+ iotedge config mp --connection-string $dcs
+ iotedge config apply
+ fi
+ if [ $(lspci | grep NVIDIA | wc -l) -gt 0 ]; then
+ reboot
+ fi ) &
+
+apt:
+ preserve_sources_list: true
+ sources:
+ msft.list:
+ source: "deb https://packages.microsoft.com/ubuntu/20.04/prod focal main"
+ key: |
+ --BEGIN PGP PUBLIC KEY BLOCK--
+ Version: GnuPG v1.4.7 (GNU/Linux)
+
+ mQENBFYxWIwBCADAKoZhZlJxGNGWzqV+1OG1xiQeoowKhssGAKvd+buXCGISZJwT
+ LXZqIcIiLP7pqdcZWtE9bSc7yBY2MalDp9Liu0KekywQ6VVX1T72NPf5Ev6x6DLV
+ 7aVWsCzUAF+eb7DC9fPuFLEdxmOEYoPjzrQ7cCnSV4JQxAqhU4T6OjbvRazGl3ag
+ OeizPXmRljMtUUttHQZnRhtlzkmwIrUivbfFPD+fEoHJ1+uIdfOzZX8/oKHKLe2j
+ H632kvsNzJFlROVvGLYAk2WRcLu+RjjggixhwiB+Mu/A8Tf4V6b+YppS44q8EvVr
+ M+QvY7LNSOffSO6Slsy9oisGTdfE39nC7pVRABEBAAG0N01pY3Jvc29mdCAoUmVs
+ ZWFzZSBzaWduaW5nKSA8Z3Bnc2VjdXJpdHlAbWljcm9zb2Z0LmNvbT6JATUEEwEC
+ AB8FAlYxWIwCGwMGCwkIBwMCBBUCCAMDFgIBAh4BAheAAAoJEOs+lK2+EinPGpsH
+ /32vKy29Hg51H9dfFJMx0/a/F+5vKeCeVqimvyTM04C+XENNuSbYZ3eRPHGHFLqe
+ MNGxsfb7C7ZxEeW7J/vSzRgHxm7ZvESisUYRFq2sgkJ+HFERNrqfci45bdhmrUsy
+ 7SWw9ybxdFOkuQoyKD3tBmiGfONQMlBaOMWdAsic965rvJsd5zYaZZFI1UwTkFXV
+ KJt3bp3Ngn1vEYXwijGTa+FXz6GLHueJwF0I7ug34DgUkAFvAs8Hacr2DRYxL5RJ
+ XdNgj4Jd2/g6T9InmWT0hASljur+dJnzNiNCkbn9KbX7J/qK1IbR8y560yRmFsU+
+ NdCFTW7wY0Fb1fWJ+/KTsC4=
+ =J6gs
+ --END PGP PUBLIC KEY BLOCK--
+packages:
+ - moby-cli
+ - moby-engine
+write_files:
+ - path: /etc/systemd/system/docker.service.d/override.conf
+ permissions: "0644"
+ content: |
+ [Service]
+ ExecStart=
+ ExecStart=/usr/bin/dockerd --host=fd:// --add-runtime=nvidia=/usr/bin/nvidia-container-runtime --log-driver local
+
+```
+
+### Use DPS
+
+Use steps in this section to connect your device to DPS and IoT Central. You'll prepare a *script.sh* file to deploy the IoT Edge runtime as you create the VM.
+
+1. Use the existing IoT Hub and DPS, or create a new IoT Hub.
+
+ - Use these steps to [create an IoT Hub](../iot-hub/iot-hub-create-through-portal.md).
+ - Use these steps to [create the DPS, and then link the IoT Hub to the DPS scope](../iot-dps/quick-setup-auto-provision.md).
+
+1. Go to the DPS resource and create an individual enrollment. 
+
+ 1. Go to **Device Provisioning Service** > **Manage enrollments** > **Add individual enrollment**.
+ 1. Make sure that the selection for **Symmetric Key for attestation type and IoT Edge device** is **True**. The default selection is **False**.
+ 1. Retrieve the following information from the DPS resource page:
+ - **Registration ID**. We recommend that you use the same ID as the **Device ID** for your IoT Hub.
+ - **ID Scope** which is available in the [Overview menu](../iot-dps/quick-create-simulated-device-symm-key.md#prepare-and-run-the-device-provisioning-code).
+ - **Primary SAS Key** from the Individual Enrollment menu.
+1. Copy and paste values from IoT Hub (IDScope) and DPS (RegistrationID, Symmetric Key) into the script arguments.
+
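If you prefer the Azure CLI (with the Azure IoT extension) for creating the individual enrollment and retrieving these values, a hedged sketch follows; the resource group, DPS instance, and enrollment names are placeholders.

```azurecli
# Hypothetical sketch: create a symmetric-key, IoT Edge-enabled individual enrollment,
# then retrieve the ID Scope and the enrollment's primary key for the cloud-init script.
az iot dps enrollment create \
  --resource-group my-resource-group \
  --dps-name my-dps \
  --enrollment-id my-edge-device \
  --attestation-type symmetrickey \
  --edge-enabled true

# ID Scope of the DPS instance
az iot dps show --name my-dps --query properties.idScope --output tsv

# Primary symmetric key of the enrollment
az iot dps enrollment show \
  --resource-group my-resource-group \
  --dps-name my-dps \
  --enrollment-id my-edge-device \
  --show-keys true \
  --query attestation.symmetricKey.primaryKey --output tsv
```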
+**Cloud-init script for IoT Hub DPS**
+
+```azurecli
+
+#cloud-config
+
+runcmd:
+ - dps_idscope="<DPS IDScope>"
+ - registration_device_id="<RegistrationID>"
+ - key="<Symmetric Key>"
+ - |
+ set -x
+ (
+
+ wget https://github.com/Azure/iot-edge-config/releases/latest/download/azure-iot-edge-installer.sh -O azure-iot-edge-installer.sh \
+ && chmod +x azure-iot-edge-installer.sh \
+ && sudo -H ./azure-iot-edge-installer.sh -s $dps_idscope -r $registration_device_id -k $key \
+ && rm -rf azure-iot-edge-installer.sh
+
+ # Wait for docker daemon to start
+
+ while [ $(ps -ef | grep -v grep | grep docker | wc -l) -le 0 ]; do
+ sleep 3
+ done
+
+ systemctl stop aziot-edge
+
+ if [ $(lspci | grep NVIDIA | wc -l) -gt 0 ]; then
+
+ #install Nvidia drivers
+
+ apt install -y ubuntu-drivers-common
+ ubuntu-drivers devices
+ ubuntu-drivers autoinstall
+
+ # Install NVIDIA Container Runtime
+
+ curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add -
+ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+ curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list
+ apt update
+ apt install -y nvidia-container-runtime
+ fi
+
+ # Restart Docker
+
+ systemctl daemon-reload
+ systemctl restart docker
+
+ systemctl start aziot-edge
+ if [ $(lspci | grep NVIDIA | wc -l) -gt 0 ]; then
+ reboot
+ fi
+ ) &
+write_files:
+ - path: /etc/systemd/system/docker.service.d/override.conf
+ permissions: "0644"
+ content: |
+ [Service]
+ ExecStart=
+ ExecStart=/usr/bin/dockerd --host=fd:// --add-runtime=nvidia=/usr/bin/nvidia-container-runtime --log-driver local
+
+```
+
+## Deploy IoT Edge runtime
+
+Deploying the IoT Edge runtime is part of VM creation, using the *cloud-init* script mentioned above.
+
+Here are the high-level steps to deploy the VM and IoT Edge runtime:
+
+1. In the [Azure portal](https://portal.azure.com), go to Azure Marketplace.
+ 1. Connect to the Azure Cloud Shell or a client with Azure CLI installed. For detailed steps, see [Quickstart for Bash in Azure Cloud Shell](../cloud-shell/quickstart.md).
+ 1. Use steps in [Search for Azure Marketplace images](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#search-for-azure-marketplace-images) to search the Azure Marketplace for the following Ubuntu 20.04 LTS image:
+
+ ```azurecli
+ $urn = Canonical:0001-com-ubuntu-server-focal:20_04-lts:20.04.202007160
+ ```
+
+ 1. Create a new managed disk from the Marketplace image.
+
+ 1. Export a VHD from the managed disk to an Azure Storage account.
+
+ For detailed steps, follow the instructions in [Use Azure Marketplace image to create VM image for your Azure Stack Edge](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md).
+
+1. Follow these steps to create an Ubuntu VM using the VM image.
+ 1. Specify the *cloud-init* script on the **Advanced** tab. To create a VM, see [Deploy GPU VM via Azure portal](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md?tabs=portal) or [Deploy VM via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
+
+ ![Screenshot of the Advanced tab of V M configuration in the Azure portal.](media/azure-stack-edge-gpu-deploy-iot-edge-linux-vm/azure-portal-create-vm-advanced-page-2.png)
+
+ 1. Specify the appropriate device connection strings in the *cloud-init* to connect to the IoT Hub or DPS device. For detailed steps, see [Provision with symmetric keys](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md#use-symmetric-key-provisioning) or [Provision with IoT Hub DPS](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md#use-dps).
+
+ ![Screenshot of the Custom data field of V M configuration in the Azure portal.](media/azure-stack-edge-gpu-deploy-iot-edge-linux-vm/azure-portal-create-vm-init-script.png)
+
+ If you didn't specify the *cloud-init* during VM creation, you'll have to manually deploy the IoT Edge runtime after the VM is created:
+
+ 1. Connect to the VM via SSH.
+ 1. Install the container engine on the VM. For detailed steps, see [Create and provision an IoT Edge device on Linux using symmetric keys](../iot-edge/how-to-provision-single-device-linux-symmetric.md#install-a-container-engine) or [Quickstart - Set up IoT Hub DPS with the Azure portal](../iot-dps/quick-setup-auto-provision.md).
++
+## Verify the IoT Edge runtime
+
+Use these steps to verify that your IoT Edge runtime is running.
+
+1. Go to IoT Hub resource in the Azure portal.
+1. Select the IoT Edge device.
+1. Verify that the IoT Edge runtime is running.
+
+ ![Screenshot of the I o T Edge runtime status in the Azure portal.](media/azure-stack-edge-gpu-deploy-iot-edge-linux-vm/azure-portal-iot-edge-runtime-status.png)
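As an optional check from inside the VM, you can also verify the runtime over SSH. This is a sketch that assumes the IoT Edge packages installed by the *cloud-init* script:

```bash
# Hypothetical sketch: verify the IoT Edge runtime from an SSH session on the VM.
sudo iotedge system status   # state of the IoT Edge system services
sudo iotedge check           # configuration and connectivity checks
sudo iotedge list            # deployed modules; edgeAgent should be running
```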
+
+## Update the IoT Edge runtime
+
+To update the VM, follow the instructions in [Update IoT Edge](../iot-edge/how-to-update-iot-edge.md?view=iotedge-2020-11&tabs=linux&preserve-view=true). To find the latest version of Azure IoT Edge, see [Azure IoT Edge releases](../iot-edge/how-to-update-iot-edge.md?view=iotedge-2020-11&tabs=linux&preserve-view=true).
+
+## Next steps
+
+To deploy and run an IoT Edge module on your Ubuntu VM, see the steps in [Deploy IoT Edge modules](../iot-edge/how-to-deploy-modules-portal.md?view=iotedge-2020-11&preserve-view=true).
+
+To deploy Nvidia's DeepStream module, see [Deploy the Nvidia DeepStream module on Ubuntu VM on Azure Stack Edge Pro with GPU](azure-stack-edge-deploy-nvidia-deepstream-module.md).
governance Guest Configuration Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/guest-configuration-create-definition.md
Parameters of the `New-GuestConfigurationPolicy` cmdlet:
- **DisplayName**: Policy display name. - **Description**: Policy description. - **Parameter**: Policy parameters provided in hashtable format.-- **Version**: Policy version.
+- **PolicyVersion**: Policy version.
- **Path**: Destination path where policy definitions are created. - **Platform**: Target platform (Windows/Linux) for guest configuration policy and content package.
New-GuestConfigurationPolicy `
-Description 'Details about my policy.' ` -Path './policies' ` -Platform 'Windows' `
- -Version 1.0.0 `
+ -PolicyVersion 1.0.0 `
-Verbose ```
New-GuestConfigurationPolicy `
-Description 'Details about my policy.' ` -Path './policies' ` -Platform 'Windows' `
- -Version 1.0.0 `
+ -PolicyVersion 1.0.0 `
-Mode 'ApplyAndAutoCorrect' ` -Verbose ```
New-GuestConfigurationPolicy `
-Description 'Audit if a Windows Service is not enabled on Windows machine.' ` -Path '.\policies' ` -Parameter $PolicyParameterInfo `
- -Version 1.0.0
+ -PolicyVersion 1.0.0
``` ### Publish the Azure Policy definition
hdinsight Hdinsight Hadoop Port Settings For Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-port-settings-for-services.md
Linux-based HDInsight clusters only expose three ports publicly on the internet:
HDInsight is implemented by several Azure Virtual Machines (cluster nodes) running on an Azure Virtual Network. From within the virtual network, you can access ports not exposed over the internet. If you connect via SSH to the head node, you can directly access services running on the cluster nodes.
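For example, here's a hedged sketch of reaching one of the non-public ports listed later in this article through SSH port forwarding. The cluster name is a placeholder, and 16010 is the HBase Master info web UI port from the HBase table below; after the tunnel is up, browse to `http://localhost:16010`.

```bash
# Hypothetical sketch: forward the HBase Master info web UI port on the head node
# to the local machine over the cluster's public SSH endpoint.
ssh -L 16010:localhost:16010 sshuser@CLUSTERNAME-ssh.azurehdinsight.net
```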
-> [!IMPORTANT]
+> [!IMPORTANT]
> If you do not specify an Azure Virtual Network as a configuration option for HDInsight, one is created automatically. However, you can't join other machines (such as other Azure Virtual Machines or your client development machine) to this virtual network. To join additional machines to the virtual network, you must create the virtual network first, and then specify it when creating your HDInsight cluster. For more information, see [Plan a virtual network for HDInsight](hdinsight-plan-virtual-network-deployment.md).
All services publicly exposed on the internet must be authenticated:
## Non-public ports
-> [!NOTE]
+> [!NOTE]
> Some services are only available on specific cluster types. For example, HBase is only available on HBase cluster types.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Some services only run on one headnode at a time. If you attempt to connect to the service on the primary headnode and receive an error, retry using the secondary headnode. ### Ambari
Examples:
| | | | | | | HMaster |Head nodes |16000 |&nbsp; |&nbsp; | | HMaster info Web UI |Head nodes |16010 |HTTP |The port for the HBase Master web UI |
-| Region server |All worker nodes |16020 |&nbsp; |&nbsp; |
-| &nbsp; |&nbsp; |2181 |&nbsp; |The port that clients use to connect to ZooKeeper |
+| Region server |All worker nodes |16020 |&nbsp; |&nbsp; |
+| Region server info Web UI |All worker nodes |16030 |HTTP |The port for the HBase Region server web UI |
+| &nbsp; |&nbsp; |2181 |&nbsp; |The port that clients use to connect to ZooKeeper |
### Kafka ports
hdinsight Apache Hive Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-migrate-workloads.md
Title: Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0
description: Learn how to migrate Apache Hive workloads on HDInsight 3.6 to HDInsight 4.0. - Last updated 11/4/2020
Last updated 11/4/2020
# Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0
-HDInsight 4.0 has several advantages over HDInsight 3.6. Here is an [overview of what's new in HDInsight 4.0](../hdinsight-version-release.md).
+HDInsight 4.0 has several advantages over HDInsight 3.6. Here's an [overview of what's new in HDInsight 4.0](../hdinsight-version-release.md).
This article covers steps to migrate Hive workloads from HDInsight 3.6 to 4.0, including
Migration of Hive tables to a new Storage Account needs to be done as a separate
### 1. Prepare the data
-* HDInsight 3.6 by default does not support ACID tables. If ACID tables are present, however, run 'MAJOR' compaction on them. See the [Hive Language Manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/Partition/Compact) for details on compaction.
+* HDInsight 3.6 by default doesn't support ACID tables. If ACID tables are present, however, run 'MAJOR' compaction on them. See the [Hive Language Manual](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable/Partition/Compact) for details on compaction.
* If using [Azure Data Lake Storage Gen1](../overview-data-lake-storage-gen1.md), Hive table locations are likely dependent on the cluster's HDFS configurations. Run the following script action to make these locations portable to other clusters. See [Script action to a running cluster](../hdinsight-hadoop-customize-cluster-linux.md#script-action-to-a-running-cluster).
- |Property | Value |
- |||
- |Bash script URI|`https://hdiconfigactions.blob.core.windows.net/linuxhivemigrationv01/hive-adl-expand-location-v01.sh`|
- |Node type(s)|Head|
- |Parameters||
+ |Property | Value |
+ |||
+ |Bash script URI|`https://hdiconfigactions.blob.core.windows.net/linuxhivemigrationv01/hive-adl-expand-location-v01.sh`|
+ |Node type(s)|Head|
+ |Parameters||
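If you prefer to apply that script action from the command line instead of the portal, a hedged Azure CLI sketch follows; the resource group, cluster name, and script action name are placeholders.

```azurecli
# Hypothetical sketch: run the location-expansion script on the head nodes of the
# HDInsight 3.6 cluster and persist it for nodes added later.
az hdinsight script-action execute \
  --resource-group my-resource-group \
  --cluster-name my-hdinsight-36-cluster \
  --name hive-adl-expand-location \
  --script-uri "https://hdiconfigactions.blob.core.windows.net/linuxhivemigrationv01/hive-adl-expand-location-v01.sh" \
  --roles headnode \
  --persist-on-success
```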
### 2. Copy the SQL database
Migration of Hive tables to a new Storage Account needs to be done as a separate
This step uses the [`Hive Schema Tool`](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool) from HDInsight 4.0 to upgrade the metastore schema.
-> [!Warning]
+> [!WARNING]
> This step is not reversible. Run this only on a copy of the metastore. 1. Create a temporary HDInsight 4.0 cluster to access the 4.0 Hive `schematool`. You can use the [default Hive metastore](../hdinsight-use-external-metadata-stores.md#default-metastore) for this step.
-1. From the HDInsight 4.0 cluster, execute `schematool` to upgrade the target HDInsight 3.6 metastore:
+1. From the HDInsight 4.0 cluster, execute `schematool` to upgrade the target HDInsight 3.6 metastore. Edit the following shell script to add your SQL server name, database name, username, and password. Open an [SSH Session](../hdinsight-hadoop-linux-use-ssh-unix.md) on the headnode and run it.
- ```sh
- SERVER='servername.database.windows.net' # replace with your SQL Server
- DATABASE='database' # replace with your 3.6 metastore SQL Database
- USERNAME='username' # replace with your 3.6 metastore username
- PASSWORD='password' # replace with your 3.6 metastore password
- STACK_VERSION=$(hdp-select status hive-server2 | awk '{ print $3; }')
- /usr/hdp/$STACK_VERSION/hive/bin/schematool -upgradeSchema -url "jdbc:sqlserver://$SERVER;databaseName=$DATABASE;trustServerCertificate=false;encrypt=true;hostNameInCertificate=*.database.windows.net;" -userName "$USERNAME" -passWord "$PASSWORD" -dbType "mssql" --verbose
- ```
+ ```sh
+ SERVER='servername.database.windows.net' # replace with your SQL Server
+ DATABASE='database' # replace with your 3.6 metastore SQL Database
+ USERNAME='username' # replace with your 3.6 metastore username
+ PASSWORD='password' # replace with your 3.6 metastore password
+ STACK_VERSION=$(hdp-select status hive-server2 | awk '{ print $3; }')
+ /usr/hdp/$STACK_VERSION/hive/bin/schematool -upgradeSchema -url "jdbc:sqlserver://$SERVER;databaseName=$DATABASE;trustServerCertificate=false;encrypt=true;hostNameInCertificate=*.database.windows.net;" -userName "$USERNAME" -passWord "$PASSWORD" -dbType "mssql" --verbose
+ ```
- > [!NOTE]
- > This utility uses client `beeline` to execute SQL scripts in `/usr/hdp/$STACK_VERSION/hive/scripts/metastore/upgrade/mssql/upgrade-*.mssql.sql`.
- >
- > SQL Syntax in these scripts is not necessarily compatible to other client tools. For example, [SSMS](/sql/ssms/download-sql-server-management-studio-ssms) and [Query Editor on Azure Portal](/azure/azure-sql/database/connect-query-portal) require keyword `GO` after each command.
- >
- > If any script fails due to resource capacity or transaction timeouts, scale up the SQL Database.
+ > [!NOTE]
+ > This utility uses client `beeline` to execute SQL scripts in `/usr/hdp/$STACK_VERSION/hive/scripts/metastore/upgrade/mssql/upgrade-*.mssql.sql`.
+ >
+ > SQL Syntax in these scripts is not necessarily compatible to other client tools. For example, [SSMS](/sql/ssms/download-sql-server-management-studio-ssms) and [Query Editor on Azure Portal](/azure/azure-sql/database/connect-query-portal) require keyword `GO` after each command.
+ >
+ > If any script fails due to resource capacity or transaction timeouts, scale up the SQL Database.
1. Verify the final version with query `select schema_version from dbo.version`.
- The output should match that of the following bash command from the HDInsight 4.0 cluster.
+ The output should match that of the following bash command from the HDInsight 4.0 cluster.
- ```bash
- grep . /usr/hdp/$(hdp-select --version)/hive/scripts/metastore/upgrade/mssql/upgrade.order.mssql | tail -n1 | rev | cut -d'-' -f1 | rev
- ```
+ ```bash
+ grep . /usr/hdp/$(hdp-select --version)/hive/scripts/metastore/upgrade/mssql/upgrade.order.mssql | tail -n1 | rev | cut -d'-' -f1 | rev
+ ```
1. Delete the temporary HDInsight 4.0 cluster.
Create a new HDInsight 4.0 cluster, [selecting the upgraded Hive metastore](../h
* If Hive jobs fail due to storage inaccessibility, verify that the table location is in a Storage Account added to the cluster.
- Use the following Hive command to identify table location:
+ Use the following Hive command to identify table location:
- ```sql
- SHOW CREATE TABLE ([db_name.]table_name|view_name);
- ```
+ ```sql
+ SHOW CREATE TABLE ([db_name.]table_name|view_name);
+ ```
### 5. Convert Tables for ACID Compliance
HDInsight optionally integrates with Azure Active Directory using HDInsight Ente
* `HiveCLI` is replaced with `Beeline`.
-Refer to [HDInsight 4.0 Announcement](../hdinsight-version-release.md) for additional changes.
+Refer to [HDInsight 4.0 Announcement](../hdinsight-version-release.md) for other changes.
## Troubleshooting guide
Refer to [HDInsight 4.0 Announcement](../hdinsight-version-release.md) for addit
* [HDInsight 4.0 Announcement](../hdinsight-version-release.md) * [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
-* [Hive 3 ACID Tables](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/using-hiveql/content/hive_3_internals.html)
+* [Hive 3 ACID Tables](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/using-hiveql/content/hive_3_internals.html)
hdinsight Interactive Query Troubleshoot Error Message Hive View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-error-message-hive-view.md
Title: Error message not shown in Apache Hive View - Azure HDInsight
description: Query fails in Apache Hive View without any details on Azure HDInsight cluster. Previously updated : 07/30/2019 Last updated : 06/24/2022 # Scenario: Query error message not displayed in Apache Hive View in Azure HDInsight
Check the Notifications tab in the top-right corner of the Hive View to see the
## Next steps
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
> Key Vault resource provider supports two resource types: **vaults** and **managed HSMs**. Access control described in this article only applies to **vaults**. To learn more about access control for managed HSM, see [Managed HSM access control](../managed-hsm/access-control.md). > [!NOTE]
-> Azure App Service certificate configuration does not support Key Vault RBAC permission model.
+> Azure App Service certificate configuration through the Azure portal does not support the Key Vault RBAC permission model. It is supported through Azure PowerShell, the Azure CLI, and ARM template deployments with **Key Vault Secrets User** and **Key Vault Reader** role assignments.
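For reference, a hedged Azure CLI sketch of those role assignments follows. The principal object ID and vault resource ID are placeholders that you would replace with the identity used by your App Service certificate configuration and your own vault.

```azurecli
# Hypothetical sketch: assign the Key Vault roles mentioned above at the vault scope.
VAULT_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"

az role assignment create --assignee "<principal-object-id>" --role "Key Vault Secrets User" --scope "$VAULT_ID"
az role assignment create --assignee "<principal-object-id>" --role "Key Vault Reader" --scope "$VAULT_ID"
```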
Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
logic-apps Custom Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/custom-connector-overview.md
ms.suite: integration Previously updated : 05/17/2022 Last updated : 06/10/2022 # As a developer, I want learn about the capability to create custom connectors with operations that I can use in my Azure Logic Apps workflows.
In [multi-tenant Azure Logic Apps](logic-apps-overview.md), you can create [cust
In [single-tenant Azure Logic Apps](logic-apps-overview.md), the redesigned Azure Logic Apps runtime powers Standard logic app workflows. This runtime differs from the multi-tenant Azure Logic Apps runtime that powers Consumption logic app workflows. The single-tenant runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md), which provides a key capability for you to create your own [built-in connectors](../connectors/built-in.md) for anyone to use in Standard workflows. In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
-When single-tenant Azure Logic Apps officially released, new built-in connectors included Azure Blob Storage, Azure Event Hubs, Azure Service Bus, and SQL Server. Over time, this list of built-in connectors continues to grow. However, if you need connectors that aren't available in Standard logic app workflows, you can [create your own built-in connectors](create-custom-built-in-connector-standard.md) using the same extensibility model that's used by built-in connectors in Standard workflows.
+When single-tenant Azure Logic Apps officially released, new built-in connectors included Azure Blob Storage, Azure Event Hubs, Azure Service Bus, and SQL Server. Over time, this list of built-in connectors continues to grow. However, if you need connectors that aren't available in Standard logic app workflows, you can [create your own built-in connectors](create-custom-built-in-connector-standard.md) using the same extensibility model that's used by *service provider-based* built-in connectors in Standard workflows.
<a name="service-provider-interface-implementation"></a>
-### Built-in connectors as service providers
+### Service provider-based built-in connectors
-In single-tenant Azure Logic Apps, a built-in connector that has the following attributes is called a *service provider*:
+In single-tenant Azure Logic Apps, a [built-in connector with specific attributes is informally known as a *service provider*](../connectors/built-in.md#service-provider-interface-implementation). For example, these connectors are based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md), which provides the capability for you to create your own custom built-in connectors to use in Standard logic app workflows.
-* Is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md).
-
-* Provides access from a Standard logic app workflow to a service, such as Azure Blob Storage, Azure Service Bus, Azure Event Hubs, SFTP, and SQL Server.
-
- Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity.
-
-* Runs in the same process as the redesigned Azure Logic Apps runtime.
-
-A built-in connector that's *not a service provider* has the following attributes:
+In contrast, non-service provider built-in connectors have the following attributes:
* Isn't based on the Azure Functions extensibility model.
This method has a default implementation, so you don't need to explicitly implem
When you're ready to start the implementation steps, continue to the following article:
-* [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md)
+* [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md)
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
The single-tenant model and **Logic App (Standard)** resource type include many
* Create logic apps and their workflows from [hundreds of managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
- * More managed connectors are now available as built-in connectors in Standard logic app workflows. The built-in versions run natively on the single-tenant Azure Logic Apps runtime. Some built-in connectors are also [*service provider-based* connectors](custom-connector-overview.md#service-provider-interface-implementation). For a list, review the [Built-in connectors for Standard logic apps](#built-connectors-standard) section later in this article.
+ * More managed connectors are now available as built-in connectors in Standard logic app workflows. The built-in versions run natively on the single-tenant Azure Logic Apps runtime. Some built-in connectors are also informally known as [*service provider* connectors](../connectors/built-in.md#service-provider-interface-implementation). For a list, review [Built-in connectors in Consumption and Standard](../connectors/built-in.md#built-in-connectors).
* You can create your own custom built-in connectors for any service that you need by using the single-tenant Azure Logic Apps extensibility framework. Similar to built-in connectors such as Azure Service Bus and SQL Server, custom built-in connectors provide higher throughput, low latency, and local connectivity because they run in the same process as the single-tenant runtime. However, custom built-in connectors aren't similar to [custom managed connectors](../connectors/apis-list.md#custom-connectors-and-apis), which aren't currently supported. For more information, review [Custom connector overview](custom-connector-overview.md#custom-connector-standard) and [Create custom built-in connectors for Standard logic apps in single-tenant Azure Logic Apps](create-custom-built-in-connector-standard.md).
A Standard logic app workflow has many of the same built-in connectors as a Cons
For example, a Standard logic app workflow has both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, SQL Server, and others. Although a Consumption logic app workflow doesn't have these same built-in connector versions, other built-in connectors such as Azure API Management, Azure App Services, and Batch, are available.
-In single-tenant Azure Logic Apps, [built-in connectors with specific attributes are informally known as *service providers*](custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity. All built-in connectors run in the same process as the redesigned Azure Logic Apps runtime. For more information, review the [built-in connector list for Standard logic app workflows](../connectors/built-in.md).
+In single-tenant Azure Logic Apps, [built-in connectors with specific attributes are informally known as *service providers*](../connectors/built-in.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity. All built-in connectors run in the same process as the redesigned Azure Logic Apps runtime. For more information, review the [built-in connector list for Standard logic app workflows](../connectors/built-in.md#built-in-connectors).
<a name="limited-unavailable-unsupported"></a>
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
When you select a node size for a managed compute resource in Azure Machine Lear
There are a few exceptions and limitations to choosing a VM size: * Some VM series aren't supported in Azure Machine Learning.
-* Some VM series are restricted. To use a restricted series, contact support and request a quota increase for the series. Please note that for GPUs and specialty SKUs, you would always have to request for quota due to high demand and limited supply. For information on how to contact support, see [Azure support options](https://azure.microsoft.com/support/options/).
-
-See the following table to learn more about supported series and restrictions.
+* There are some VM series, such as GPUs and other special SKUs, that might not initially appear in your list of available VMs. You can still use them once you request a quota increase. For more information about requesting quotas, see [Request quota increases](how-to-manage-quotas.md#request-quota-increases).
+See the following table to learn more about supported series.
| **Supported VM series** | **Category** | **Supported by** | |||||
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
When you create a compute instance or compute cluster, the following resources a
* A Network Security Group with required outbound rules. These rules allow __inbound__ access from the Azure Machine Learning (TCP on port 44224) and Azure Batch service (TCP on ports 29876-29877). > [!IMPORTANT]
- > If you usee a firewall to block internet access into the VNet, you must configure the firewall to allow this traffic. For example, with Azure Firewall you can create user-defined routes. For more information, see [How to use Azure Machine Learning with a firewall](how-to-access-azureml-behind-firewall.md#inbound-configuration).
+ > If you use a firewall to block internet access into the VNet, you must configure the firewall to allow this traffic. For example, with Azure Firewall you can create user-defined routes. For more information, see [How to use Azure Machine Learning with a firewall](how-to-access-azureml-behind-firewall.md#inbound-configuration).
* A load balancer with a public IP.
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-fpga-web-service.md
Title: Deploy ML models to FPGAs
+ Title: Deploy ML models to FPGAs
description: Learn about field-programmable gate arrays. You can deploy a web service on an FPGA with Azure Machine Learning for ultra-low latency inference.
Next, create a Docker image from the converted model and all dependencies. This
#### Deploy to a local edge server
-All [Azure Data Box Edge devices](../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
+All [Azure Data Box Edge devices](../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
### Consume the deployed model
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
In addition, the maximum **run time** is 30 days and the maximum number of **met
[Request a quota increase](#request-quota-increases) to raise the limits for various VM family core quotas, total subscription core quotas, cluster quota and resources in this section. Available resources:
-+ **Dedicated cores per region** have a default limit of 24 to 300, depending on your subscription offer type. You can increase the number of dedicated cores per subscription for each VM family. Specialized VM families like NCv2, NCv3, or ND series start with a default of zero cores.
++ **Dedicated cores per region** have a default limit of 24 to 300, depending on your subscription offer type. You can increase the number of dedicated cores per subscription for each VM family. Specialized VM families like NCv2, NCv3, or ND series start with a default of zero cores. GPUs also default to zero cores. + **Low-priority cores per region** have a default limit of 100 to 3,000, depending on your subscription offer type. The number of low-priority cores per subscription can be increased and is a single value across VM families.
The following table shows additional limits in the platform. Please reach out to
| **Resource or Action** | **Maximum limit** | | | | | Workspaces per resource group | 800 |
-| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** setup as a non communication-enabled pool (i.e. cannot run MPI jobs) | 100 nodes but configurable up to 65000 nodes |
-| Nodes in a single Parallel Run Step **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but configurable up to 65000 nodes if your cluster is setup to scale per above |
-| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** setup as a communication-enabled pool | 300 nodes but configurable up to 4000 nodes |
-| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** setup as a communication-enabled pool on an RDMA enabled VM Family | 100 nodes |
+| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a non communication-enabled pool (i.e. cannot run MPI jobs) | 100 nodes but configurable up to 65000 nodes |
+| Nodes in a single Parallel Run Step **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but configurable up to 65000 nodes if your cluster is set up to scale per above |
+| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a communication-enabled pool | 300 nodes but configurable up to 4000 nodes |
+| Nodes in a single Azure Machine Learning Compute (AmlCompute) **cluster** set up as a communication-enabled pool on an RDMA enabled VM Family | 100 nodes |
| Nodes in a single MPI **run** on an Azure Machine Learning Compute (AmlCompute) cluster | 100 nodes but can be increased to 300 nodes |
-| GPU MPI processes per node | 1-4 |
-| GPU workers per node | 1-4 |
| Job lifetime | 21 days<sup>1</sup> |
| Job lifetime on a low-priority node | 7 days<sup>2</sup> |
| Parameter servers per node | 1 |
You can't set a negative value or a value higher than the subscription-level quo
> [!NOTE]
> You need subscription-level permissions to set a quota at the workspace level.
-## View your usage and quotas
+## View quotas in the studio
-To view your quota for various Azure resources like virtual machines, storage, or network, use the Azure portal:
+1. When you create a new compute resource, by default you'll see only VM sizes that you already have quota to use. Switch the view to **Select from all options**.
+
+ :::image type="content" source="media/how-to-manage-quotas/select-all-options.png" alt-text="Screenshot shows select all options to see compute resources that need more quota":::
+
+1. Scroll down until you see the list of VM sizes you do not have quota for.
+
+ :::image type="content" source="media/how-to-manage-quotas/scroll-to-zero-quota.png" alt-text="Screenshot shows list of zero quota":::
+
+1. Use the link to go directly to the online customer support request for more quota.
+
+## View your usage and quotas in the Azure portal
+
+To view your quota for various Azure resources like virtual machines, storage, or network, use the [Azure portal](https://portal.azure.com):
1. On the left pane, select **All services** and then select **Subscriptions** under the **General** category.
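
If you prefer the command line, the underlying VM family core usage and limits for a region can also be listed with the Azure CLI; a small sketch (the region name is only an example):

```azurecli
# Show current core usage and limits per VM family in a region
az vm list-usage --location eastus --output table
```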
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
Previously updated : 03/17/2022 Last updated : 06/24/2022 # Use customer-managed keys with Azure Machine Learning
In the [customer-managed keys concepts article](concept-customer-managed-keys.md
* The customer-managed key for resources the workspace depends on can't be updated after workspace creation.
* Resources managed by Microsoft in your subscription can't transfer ownership to you.
* You can't delete Microsoft-managed resources used for customer-managed keys without also deleting your workspace.
+* The key vault that contains your customer-managed key must be in the same Azure subscription as the Azure Machine Learning workspace.
> [!IMPORTANT]
> When using a customer-managed key, the costs for your subscription will be higher because of the additional resources in your subscription. To estimate the cost, use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
In the [customer-managed keys concepts article](concept-customer-managed-keys.md
To create the key vault, see [Create a key vault](../key-vault/general/quick-create-portal.md). When creating Azure Key Vault, you must enable __soft delete__ and __purge protection__.
+> [!IMPORTANT]
+> The key vault must be in the same Azure subscription that will contain your Azure Machine Learning workspace.
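+
+As a rough sketch (the names here are placeholders, not values from this article), creating a vault with purge protection from the Azure CLI might look like the following; soft delete is enabled by default on newly created vaults:
+
+```azurecli
+az keyvault create --name <your-key-vault-name> --resource-group <your-resource-group> --location <region> --enable-purge-protection true
+```
+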
+ ### Create a key > [!TIP]
To create the key vault, see [Create a key vault](../key-vault/general/quick-cre
Create an Azure Machine Learning workspace. When creating the workspace, you must select the __Azure Key Vault__ and the __key__. Depending on how you create the workspace, you specify these resources in different ways:
+> [!WARNING]
+> The key vault that contains your customer-managed key must be in the same Azure subscription as the workspace.
+ * __Azure portal__: Select the key vault and key from a dropdown input box when configuring the workspace.
+ * __SDK, REST API, and Azure Resource Manager templates__: Provide the Azure Resource Manager ID of the key vault and the URL for the key. To get these values, use the [Azure CLI](/cli/azure/install-azure-cli) and the following commands:
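+An illustrative sketch of those commands (the vault and key names are placeholders, not values from this article):
+
+```azurecli
+# Azure Resource Manager ID of the key vault
+az keyvault show --name <your-key-vault-name> --query id --output tsv
+
+# URL (key identifier) of the key
+az keyvault key show --vault-name <your-key-vault-name> --name <your-key-name> --query key.kid --output tsv
+```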
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-managed.md
Indicate who should have management access to this managed application in each s
Complete the following steps for Global Azure and Azure Government Cloud, as applicable.
1. In the **Azure Active Directory Tenant ID** box, enter the Azure AD Tenant ID (also known as directory ID) containing the identities of the users, groups, or applications you want to grant permissions to.
-1. In the **Principal ID** box, provide the Azure AD object ID of the user, group, or application that you want to be granted permission to the managed resource group. Identify the user by their Principal ID, which can be found at the [Azure Active Directory users blade](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsersManagementMenuBlade/AllUsers) on the Azure portal.
+1. In the **Principal ID** box, provide the Azure AD object ID of the user, group, or application that you want to be granted permission to the managed resource group. Identify the user by their Principal ID, which can be found at the [Azure Active Directory users blade](https://portal.azure.com/#view/Microsoft_AAD_UsersAndTenants/UserManagementMenuBlade/~/AllUsers) on the Azure portal.
1. From the **Role definition** list, select an Azure AD built-in role. The role you select describes the permissions the principal will have on the resources in the customer subscription.
1. To add another authorization, select the **Add authorization (max 100)** link, and repeat steps 1 through 3.
marketplace Private Offers Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/private-offers-api.md
Sample absolute pricing resource:
"paymentOption": { "type": "month", "value": 1
- }
+ },
"billingTerm": { "type": "year", "value": 1
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md
You can specify the replication properties as follows.
Target VM size | Mandatory | Specify the Azure VM size to be used for the replicating VM by using (`TargetVMSize`) parameter. For instance, to migrate a VM to D2_v2 VM in Azure, specify the value for (`TargetVMSize`) as "Standard_D2_v2".
License | Mandatory | To use Azure Hybrid Benefit for your Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, specify the value for (`LicenseType`) parameter as **WindowsServer**. Otherwise, specify the value as **NoLicenseType**.
OS Disk | Mandatory | Specify the unique identifier of the disk that has the operating system bootloader and installer. The disk ID to be used is the unique identifier (UUID) property for the disk retrieved using the [Get-AzMigrateDiscoveredServer](/powershell/module/az.migrate/get-azmigratediscoveredserver) cmdlet.
- Disk Type | Mandatory | Specify the name of the load balancer to be created.
+ Disk Type | Mandatory | Specify the type of disk to be used.
Infrastructure redundancy | Optional | Specify infrastructure redundancy option as follows. <br/><br/> - **Availability Zone** to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. This option is only available if the target region selected for the migration supports Availability Zones. To use availability zones, specify the availability zone value for (`TargetAvailabilityZone`) parameter. <br/> - **Availability Set** to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets to use this option. To use availability set, specify the availability set ID for (`TargetAvailabilitySet`) parameter.
Boot Diagnostic Storage Account | Optional | To use a boot diagnostic storage account, specify the ID for (`TargetBootDiagnosticStorageAccount`) parameter. <br/> - The storage account used for boot diagnostics should be in the same subscription that you're migrating your VMs to. <br/> - By default, no value is set for this parameter.
Tags | Optional | Add tags to your migrated virtual machines, disks, and NICs. <br/> Use (`Tag`) to add tags to virtual machines, disks, and NICs. <br/> or <br/> Use (`VMTag`) for adding tags to your migrated virtual machines.<br/> Use (`DiskTag`) for adding tags to disks. <br/> Use (`NicTag`) for adding tags to network interfaces. <br/> For example, add the required tags to a variable $tags and pass the variable in the required parameter. $tags = @{Organization="Contoso"}
mysql Tutorial Archive Laravel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-archive-laravel.md
+
+ Title: 'Tutorial: Build a PHP (Laravel) app with Azure Database for MySQL Flexible Server'
+description: This tutorial explains how to build a PHP app with flexible server.
+++++
+ms.devlang: php
Last updated : 9/21/2020+++
+# Tutorial: Build a PHP (Laravel) and MySQL Flexible Server app in Azure App Service
+++
+[Azure App Service](../../app-service/overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a PHP app in Azure and connect it to a MySQL database. When you're finished, you'll have a [Laravel](https://laravel.com/) app running on Azure App Service on Linux.
+
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+>
+> * Set up a PHP (Laravel) app with local MySQL
+> * Create a MySQL Flexible Server
+> * Connect a PHP app to MySQL Flexible Server
+> * Deploy the app to Azure App Service
+> * Update the data model and redeploy the app
+> * Manage the app in the Azure portal
++
+## Prerequisites
+
+To complete this tutorial:
+
+1. [Install Git](https://git-scm.com/)
+2. [Install PHP 5.6.4 or above](https://php.net/downloads.php)
+3. [Install Composer](https://getcomposer.org/doc/00-intro.md)
+4. Enable the following PHP extensions that Laravel needs: OpenSSL, PDO-MySQL, Mbstring, Tokenizer, XML (a quick way to check which extensions are already enabled is shown after this list)
+5. [Install and start MySQL](https://dev.mysql.com/doc/refman/5.7/en/installing.html)
+
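+If you're unsure whether these extensions are already enabled, you can check from the terminal. This is a small sketch that assumes the PHP CLI is on your path:
+
+```bash
+# List loaded PHP modules and filter for the extensions Laravel needs
+php -m | grep -iE 'openssl|pdo_mysql|mbstring|tokenizer|xml'
+```
+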
+## Prepare local MySQL
+
+In this step, you create a database in your local MySQL server for your use in this tutorial.
+
+### Connect to local MySQL server
+
+In a terminal window, connect to your local MySQL server. You can use this terminal window to run all the commands in this tutorial.
+
+```bash
+mysql -u root -p
+```
+
+If you're prompted for a password, enter the password for the `root` account. If you don't remember your root account password, see [MySQL: How to Reset the Root Password](https://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html).
+
+If your command runs successfully, then your MySQL server is running. If not, make sure that your local MySQL server is started by following the [MySQL post-installation steps](https://dev.mysql.com/doc/refman/5.7/en/postinstallation.html).
+
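+On a Linux machine that uses systemd, for example, you can check and start the service like this (a sketch; the service name can vary by distribution, for example `mysqld` instead of `mysql`):
+
+```bash
+# Check whether the MySQL service is running, and start it if it isn't
+sudo systemctl status mysql
+sudo systemctl start mysql
+```
+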
+### Create a database locally
+
+At the `mysql` prompt, create a database.
+
+```sql
+CREATE DATABASE sampledb;
+```
+
+Exit your server connection by typing `quit`.
+
+```sql
+quit
+```
+
+<a name="step2"></a>
+
+## Create a PHP app locally
+
+In this step, you get a Laravel sample application, configure its database connection, and run it locally.
+
+### Clone the sample
+
+In the terminal window, navigate to an empty directory where you can clone the sample application. Run the following command to clone the sample repository.
+
+```bash
+git clone https://github.com/Azure-Samples/laravel-tasks
+```
+
+`cd` to your cloned directory.
+Install the required packages.
+
+```bash
+cd laravel-tasks
+composer install
+```
+
+### Configure MySQL connection
+
+In the repository root, create a file named *.env*. Copy the following variables into the *.env* file. Replace the _&lt;root_password>_ placeholder with the MySQL root user's password.
+
+```txt
+APP_ENV=local
+APP_DEBUG=true
+APP_KEY=
+
+DB_CONNECTION=mysql
+DB_HOST=127.0.0.1
+DB_DATABASE=sampledb
+DB_USERNAME=root
+DB_PASSWORD=<root_password>
+```
+
+For information on how Laravel uses the *.env* file, see [Laravel Environment Configuration](https://laravel.com/docs/5.4/configuration#environment-configuration).
+
+### Run the sample locally
+
+Run [Laravel database migrations](https://laravel.com/docs/5.4/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the *database/migrations* directory in the Git repository.
+
+```bash
+php artisan migrate
+```
+
+Generate a new Laravel application key.
+
+```bash
+php artisan key:generate
+```
+
+Run the application.
+
+```bash
+php artisan serve
+```
+
+Navigate to `http://localhost:8000` in a browser. Add a few tasks in the page.
++
+To stop PHP, type `Ctrl + C` in the terminal.
+
+## Create a MySQL Flexible Server
+
+In this step, you create a MySQL database in [Azure Database for MySQL Flexible Server](../index.yml). Later, you configure the PHP application to connect to this database. In the [Azure Cloud Shell](../../cloud-shell/overview.md), create a server with the [`az mysql flexible-server create`](/cli/azure/mysql/server#az-mysql-flexible-server-create) command.
+
+```azurecli-interactive
+az mysql flexible-server create --resource-group myResourceGroup --public-access <IP-Address>
+```
+
+> [!IMPORTANT]
+>
+>* Make a note of the **server name** and **connection string** to use them in the next step to connect and run the Laravel data migration.
> * For the **IP-Address** argument, provide the IP of your client machine (one way to find it is shown after this note). The server is locked down when created, and you need to permit access from your client machine to manage the server locally.
+
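+One quick way to look up the public IP of your client machine is to query an external service. This is just a sketch; any similar service works:
+
+```bash
+# Print the public IPv4 address of this machine
+curl -4 -s ifconfig.me
+```
+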
+### Configure server firewall to allow web app to connect to the server
+
+In the Cloud Shell, create a firewall rule for your MySQL flexible server to allow client connections by using the `az mysql flexible-server firewall-rule create` command. When both the starting IP and end IP are set to `0.0.0.0`, the firewall is only opened for other Azure services that don't have a static IP to connect to the server.
+
+```azurecli
+az mysql flexible-server firewall-rule create --name allanyAzureIPs --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+```
+
+### Connect to production MySQL server locally
+
+In the local terminal window, connect to the MySQL server in Azure. Use the values you specified previously for `<admin-user>` and `<mysql-server-name>`. When prompted for a password, use the password you specified when you created the server in Azure.
+
+```bash
+mysql -u <admin-user> -h <mysql-server-name>.mysql.database.azure.com -P 3306 -p
+```
+
+### Create a production database
+
+At the `mysql` prompt, create a database.
+
+```sql
+CREATE DATABASE sampledb;
+```
+
+### Create a user with permissions
+
+Create a database user called *phpappuser* and give it all privileges in the `sampledb` database. For simplicity of the tutorial, use *MySQLAzure2020* as the password.
+
+```sql
+CREATE USER 'phpappuser' IDENTIFIED BY 'MySQLAzure2020';
+GRANT ALL PRIVILEGES ON sampledb.* TO 'phpappuser';
+```
+
+Exit the server connection by typing `quit`.
+
+```sql
+quit
+```
+
+## Connect app to MySQL flexible server
+
+In this step, you connect the PHP application to the MySQL database you created in Azure Database for MySQL.
+
+<a name="devconfig"></a>
+
+### Configure the database connection
+
+In the repository root, create an *.env.production* file and copy the following variables into it. Replace the _&lt;mysql-server-name>_ placeholder in *DB_HOST*.
+
+```
+APP_ENV=production
+APP_DEBUG=true
+APP_KEY=
+
+DB_CONNECTION=mysql
+DB_HOST=<mysql-server-name>.mysql.database.azure.com
+DB_DATABASE=sampledb
+DB_USERNAME=phpappuser
+DB_PASSWORD=MySQLAzure2020
+MYSQL_SSL=true
+```
+
+Save the changes.
+
+> [!TIP]
+> To secure your MySQL connection information, this file is already excluded from the Git repository (See *.gitignore* in the repository root). Later, you learn how to configure environment variables in App Service to connect to your database in Azure Database for MySQL. With environment variables, you don't need the *.env* file in App Service.
+>
+
+### Configure TLS/SSL certificate
+
+By default, MySQL Flexible Server enforces TLS connections from clients. To connect to your MySQL database in Azure, you must use the [*.pem* certificate supplied by Azure Database for MySQL Flexible Server](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem). Download [this certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) and place it in the **SSL** folder in the local copy of the sample app repository.
+
+Open *config/database.php* and add the `sslmode` and `options` parameters to `connections.mysql`, as shown in the following code.
+
+```php
+'mysql' => [
+ ...
+ 'sslmode' => env('DB_SSLMODE', 'prefer'),
+ 'options' => (env('MYSQL_SSL') && extension_loaded('pdo_mysql')) ? [
+        PDO::MYSQL_ATTR_SSL_CA => '/ssl/DigiCertGlobalRootCA.crt.pem',
+ ] : []
+],
+```
+
+### Test the application locally
+
+Run Laravel database migrations with *.env.production* as the environment file to create the tables in your MySQL database in Azure Database for MySQL. Remember that _.env.production_ has the connection information to your MySQL database in Azure.
+
+```bash
+php artisan migrate --env=production --force
+```
+
+*.env.production* doesn't have a valid application key yet. Generate a new one for it in the terminal.
+
+```bash
+php artisan key:generate --env=production --force
+```
+
+Run the sample application with *.env.production* as the environment file.
+
+```bash
+php artisan serve --env=production
+```
+
+Navigate to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
+
+Add a few tasks in the page.
++
+To stop PHP, type `Ctrl + C` in the terminal.
+
+### Commit your changes
+
+Run the following Git commands to commit your changes:
+
+```bash
+git add .
+git commit -m "database.php updates"
+```
+
+Your app is ready to be deployed.
+
+## Deploy to Azure
+
+In this step, you deploy the MySQL-connected PHP application to Azure App Service.
+
+### Configure a deployment user
+
+FTP and local Git can deploy to an Azure web app by using a deployment user. Once you configure your deployment user, you can use it for all your Azure deployments. Your account-level deployment username and password are different from your Azure subscription credentials.
+
+To configure the deployment user, run the [az webapp deployment user set](/cli/azure/webapp/deployment/user#az-webapp-deployment-user-set) command in Azure Cloud Shell. Replace _&lt;username>_ and _&lt;password>_ with your deployment user username and password.
+
+The username must be unique within Azure, and for local Git pushes, must not contain the '@' symbol.
+The password must be at least eight characters long, with two of the following three elements: letters, numbers, and symbols.
+
+```azurecli
+az webapp deployment user set --user-name <username> --password <password>
+```
+
+The JSON output shows the password as null. If you get a 409 ('Conflict') error, change the username. If you get a 400 ('Bad Request') error, use a stronger password. **Record your username and password to use to deploy your web apps.**
+
+### Create an App Service plan
+
+In the Cloud Shell, create an App Service plan in the resource group with the [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create) command. The following example creates an App Service plan named myAppServicePlan in the Free pricing tier (--sku F1) and in a Linux container (--is-linux).
+
+az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku F1 --is-linux
+
+<a name="create"></a>
+
+### Create a web app
+
+Create a [web app](../../app-service/overview.md#app-service-on-linux) in the myAppServicePlan App Service plan.
+
+In the Cloud Shell, you can use the [az webapp create](/cli/azure/webapp#az-webapp-create) command. In the following example, replace _&lt;app-name>_ with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.3`. To see all supported runtimes, run [az webapp list-runtimes --os linux](/cli/azure/webapp#az-webapp-list-runtimes).
+
+```azurecli
+az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.3" --deployment-local-git
+```
+
+When the web app has been created, the Azure CLI shows output similar to the following example:
+
+```
+Local git is configured with url of 'https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git'
+{
+ "availabilityState": "Normal",
+ "clientAffinityEnabled": true,
+ "clientCertEnabled": false,
+ "cloningInfo": null,
+ "containerSize": 0,
+ "dailyMemoryTimeQuota": 0,
+ "defaultHostName": "<app-name>.azurewebsites.net",
+ "deploymentLocalGitUrl": "https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git",
+ "enabled": true,
+ < JSON data removed for brevity. >
+}
+```
+
+You've created an empty new web app, with git deployment enabled.
+
+> [!NOTE]
+> The URL of the Git remote is shown in the deploymentLocalGitUrl property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL as you need it later.
+
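+If you lose track of the URL, you can retrieve it again later. A sketch using the Azure CLI (the resource names match the ones used earlier in this tutorial):
+
+```azurecli
+# Show the Local Git deployment URL for the app
+az webapp deployment source config-local-git --name <app-name> --resource-group myResourceGroup
+```
+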
+### Configure database settings
+
+In App Service, you set environment variables as *app settings* by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command.
+
+The following command configures the app settings `DB_HOST`, `DB_DATABASE`, `DB_USERNAME`, and `DB_PASSWORD`. Replace the placeholders _&lt;app-name>_ and _&lt;mysql-server-name>_.
+
+```azurecli-interactive
+az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DB_HOST="<mysql-server-name>.mysql.database.azure.com" DB_DATABASE="sampledb" DB_USERNAME="phpappuser" DB_PASSWORD="MySQLAzure2020" MYSQL_SSL="true"
+```
+
+You can use the PHP [getenv](https://www.php.net/manual/en/function.getenv.php) function to access the settings. The Laravel code uses an [env](https://laravel.com/docs/5.4/helpers#method-env) wrapper over the PHP `getenv`. For example, the MySQL configuration in *config/database.php* looks like the following code:
+
+```php
+'mysql' => [
+ 'driver' => 'mysql',
+ 'host' => env('DB_HOST', 'localhost'),
+ 'database' => env('DB_DATABASE', 'forge'),
+ 'username' => env('DB_USERNAME', 'forge'),
+ 'password' => env('DB_PASSWORD', ''),
+ ...
+],
+```
+
+### Configure Laravel environment variables
+
+Laravel needs an application key in App Service. You can configure it with app settings.
+
+In the local terminal window, use `php artisan` to generate a new application key without saving it to *.env*.
+
+```bash
+php artisan key:generate --show
+```
+
+In the Cloud Shell, set the application key in the App Service app by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command. Replace the placeholders _&lt;app-name>_ and _&lt;outputofphpartisankey:generate>_.
+
+```azurecli-interactive
+az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings APP_KEY="<output_of_php_artisan_key:generate>" APP_DEBUG="true"
+```
+
+`APP_DEBUG="true"` tells Laravel to return debugging information when the deployed app encounters errors. When running a production application, set it to `false`, which is more secure.
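+
+For example, once the app is stable, you can turn debug output off with the same command and a different value:
+
+```azurecli-interactive
+az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings APP_DEBUG="false"
+```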
+
+### Set the virtual application path
+
+[Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the *public* directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. However, you can use `.htaccess` to rewrite all requests to point to */public* instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
+
+For more information, see [Change site root](../../app-service/configure-language-php.md?pivots=platform-linux#change-site-root).
+
+### Push to Azure from Git
+
+Back in the local terminal window, add an Azure remote to your local Git repository. Replace _&lt;deploymentLocalGitUrl-from-create-step>_ with the URL of the Git remote that you saved from [Create a web app](#create-a-web-app).
+
+```bash
+git remote add azure <deploymentLocalGitUrl-from-create-step>
+```
+
+Push to the Azure remote to deploy your app with the following command. When Git Credential Manager prompts you for credentials, make sure you enter the credentials you created in **Configure a deployment user**, not the credentials you use to sign in to the Azure portal.
+
+```bash
+git push azure main
+```
+
+This command may take a few minutes to run. While running, it displays information similar to the following example:
+
+<pre>
+Counting objects: 3, done.
+Delta compression using up to 8 threads.
+Compressing objects: 100% (3/3), done.
+Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
+Total 3 (delta 2), reused 0 (delta 0)
+remote: Updating branch 'main'.
+remote: Updating submodules.
+remote: Preparing deployment for commit id 'a5e076db9c'.
+remote: Running custom deployment command...
+remote: Running deployment command...
+...
+&lt; Output has been truncated for readability &gt;
+</pre>
+
+### Browse to the Azure app
+
+Browse to `http://<app-name>.azurewebsites.net` and add a few tasks to the list.
++
+Congratulations, you're running a data-driven PHP app in Azure App Service.
+
+## Update model locally and redeploy
+
+In this step, you make a simple change to the `task` data model and the webapp, and then publish the update to Azure.
+
+For the tasks scenario, you modify the application so that you can mark a task as complete.
+
+### Add a column
+
+In the local terminal window, navigate to the root of the Git repository.
+
+Generate a new database migration for the `tasks` table:
+
+```bash
+php artisan make:migration add_complete_column --table=tasks
+```
+
+This command shows you the name of the migration file that's generated. Find this file in *database/migrations* and open it.
+
+Replace the `up` method with the following code:
+
+```php
+public function up()
+{
+ Schema::table('tasks', function (Blueprint $table) {
+ $table->boolean('complete')->default(False);
+ });
+}
+```
+
+The preceding code adds a boolean column in the `tasks` table called `complete`.
+
+Replace the `down` method with the following code for the rollback action:
+
+```php
+public function down()
+{
+ Schema::table('tasks', function (Blueprint $table) {
+ $table->dropColumn('complete');
+ });
+}
+```
+
+In the local terminal window, run Laravel database migrations to make the change in the local database.
+
+```bash
+php artisan migrate
+```
+
+Based on the [Laravel naming convention](https://laravel.com/docs/5.4/eloquent#defining-models), the model `Task` (see *app/Task.php*) maps to the `tasks` table by default.
+
+### Update application logic
+
+Open the *routes/web.php* file. The application defines its routes and business logic here.
+
+At the end of the file, add a route with the following code:
+
+```php
+/**
+ * Toggle Task completeness
+ */
+Route::post('/task/{id}', function ($id) {
+ error_log('INFO: post /task/'.$id);
+ $task = Task::findOrFail($id);
+
+ $task->complete = !$task->complete;
+ $task->save();
+
+ return redirect('/');
+});
+```
+
+The preceding code makes a simple update to the data model by toggling the value of `complete`.
+
+### Update the view
+
+Open the *resources/views/tasks.blade.php* file. Find the `<tr>` opening tag and replace it with:
+
+```html
+<tr class="{{ $task->complete ? 'success' : 'active' }}" >
+```
+
+The preceding code changes the row color depending on whether the task is complete.
+
+In the next line, you have the following code:
+
+```html
+<td class="table-text"><div>{{ $task->name }}</div></td>
+```
+
+Replace the entire line with the following code:
+
+```html
+<td>
+ <form action="{{ url('task/'.$task->id) }}" method="POST">
+ {{ csrf_field() }}
+
+ <button type="submit" class="btn btn-xs">
+ <i class="fa {{$task->complete ? 'fa-check-square-o' : 'fa-square-o'}}"></i>
+ </button>
+ {{ $task->name }}
+ </form>
+</td>
+```
+
+The preceding code adds the submit button that references the route that you defined earlier.
+
+### Test the changes locally
+
+In the local terminal window, run the development server from the root directory of the Git repository.
+
+```bash
+php artisan serve
+```
+
+To see the task status change, navigate to `http://localhost:8000` and select the checkbox.
++
+To stop PHP, type `Ctrl + C` in the terminal.
+
+### Publish changes to Azure
+
+In the local terminal window, run Laravel database migrations with the production connection string to make the change in the Azure database.
+
+```bash
+php artisan migrate --env=production --force
+```
+
+Commit all the changes in Git, and then push the code changes to Azure.
+
+```bash
+git add .
+git commit -m "added complete checkbox"
+git push azure main
+```
+
+Once the `git push` is complete, navigate to the Azure app and test the new functionality.
++
+If you added any tasks, they are retained in the database. Updates to the data schema leave existing data intact.
+
+## Clean up resources
+
+In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
+
+```azurecli
+az group delete --name myResourceGroup
+```
+
+<a name="next"></a>
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to manage your resources in Azure portal](../../azure-resource-manager/management/manage-resources-portal.md) <br/>
+> [!div class="nextstepaction"]
+> [How to manage your server](how-to-manage-server-cli.md)
mysql Tutorial Php Database App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-php-database-app.md
Title: 'Tutorial: Build a PHP (Laravel) app with Azure Database for MySQL Flexible Server'
-description: This tutorial explains how to build a PHP app with flexible server.
--
+ Title: 'Tutorial: Build a PHP app with Azure Database for MySQL - Flexible Server'
+description: This tutorial explains how to build a PHP app with flexible server and deploy it on Azure App Service.
++ ms.devlang: php Previously updated : 9/21/2020 Last updated : 6/21/2022
-# Tutorial: Build a PHP (Laravel) and MySQL Flexible Server app in Azure App Service
+# Tutorial: Deploy a PHP and MySQL - Flexible Server app on Azure App Service
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
+[Azure App Service](../../app-service/overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system.
-[Azure App Service](../../app-service/overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a PHP app in Azure and connect it to a MySQL database. When you're finished, you'll have a [Laravel](https://laravel.com/) app running on Azure App Service on Linux.
+This tutorial shows how to build and deploy a sample PHP application to Azure App Service, and integrate it with Azure Database for MySQL - Flexible Server on the back end.
-In this tutorial, you learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"] >
-> * Setup a PHP (Laravel) app with local MySQL
-> * Create a MySQL Flexible Server
-> * Connect a PHP app to MySQL Flexible Server
+> * Create a MySQL flexible server
+> * Connect a PHP app to the MySQL flexible server
> * Deploy the app to Azure App Service
-> * Update the data model and redeploy the app
-> * Manage the app in the Azure portal
+> * Update and redeploy the app
[!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)] ## Prerequisites
-To complete this tutorial:
+- [Install Git](https://git-scm.com/).
+- The [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli).
+- An Azure subscription [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]
-1. [Install Git](https://git-scm.com/)
-2. [Install PHP 5.6.4 or above](https://php.net/downloads.php)
-3. [Install Composer](https://getcomposer.org/doc/00-intro.md)
-4. Enable the following PHP extensions Laravel needs: OpenSSL, PDO-MySQL, Mbstring, Tokenizer, XML
-5. [Install and start MySQL](https://dev.mysql.com/doc/refman/5.7/en/installing.html)
+## Create an Azure Database for MySQL flexible server
-## Prepare local MySQL
+First, we'll provision a MySQL flexible server with public access connectivity, configure firewall rules to allow the application to access the server, and create a production database.
-In this step, you create a database in your local MySQL server for your use in this tutorial.
+To learn how to use private access connectivity instead and isolate app and database resources in a virtual network, see [Tutorial: Connect an App Services Web app to an Azure Database for MySQL flexible server in a virtual network](tutorial-webapp-server-vnet.md).
-### Connect to local MySQL server
+### Create a resource group
-In a terminal window, connect to your local MySQL server. You can use this terminal window to run all the commands in this tutorial.
+An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group *rg-php-demo* using the [az group create](/cli/azure/group#az-group-create) command in the *centralus* location.
-```bash
-mysql -u root -p
-```
-
-If you're prompted for a password, enter the password for the `root` account. If you don't remember your root account password, see [MySQL: How to Reset the Root Password](https://dev.mysql.com/doc/refman/5.7/en/resetting-permissions.html).
-
-If your command runs successfully, then your MySQL server is running. If not, make sure that your local MySQL server is started by following the [MySQL post-installation steps](https://dev.mysql.com/doc/refman/5.7/en/postinstallation.html).
-
-### Create a database locally
-
-At the `mysql` prompt, create a database.
-
-```sql
-CREATE DATABASE sampledb;
-```
-
-Exit your server connection by typing `quit`.
-
-```sql
-quit
-```
-
-<a name="step2"></a>
-
-## Create a PHP app locally
-
-In this step, you get a Laravel sample application, configure its database connection, and run it locally.
-
-### Clone the sample
-
-In the terminal window, navigate to an empty directory where you can clone the sample application. Run the following command to clone the sample repository.
-
-```bash
-git clone https://github.com/Azure-Samples/laravel-tasks
-```
-
-`cd` to your cloned directory.
-Install the required packages.
-
-```bash
-cd laravel-tasks
-composer install
-```
-
-### Configure MySQL connection
+1. Open a command prompt.
+1. Sign in to your Azure account.
+ ```azurecli-interactive
+ az login
+ ```
+1. Choose your Azure subscription.
+ ```azurecli-interactive
+ az account set -s <your-subscription-ID>
+ ```
+1. Create the resource group.
+ ```azurecli-interactive
+ az group create --name rg-php-demo --location centralus
+ ```
-In the repository root, create a file named *.env*. Copy the following variables into the *.env* file. Replace the _&lt;root_password>_ placeholder with the MySQL root user's password.
+### Create a MySQL flexible server
-```txt
-APP_ENV=local
-APP_DEBUG=true
-APP_KEY=
+1. To create a MySQL flexible server with public access connectivity, run the following [`az mysql flexible-server create`](/cli/azure/mysql/server#az-mysql-flexible-server-create) command. Replace the server name, admin username, and password with your own values.
-DB_CONNECTION=mysql
-DB_HOST=127.0.0.1
-DB_DATABASE=sampledb
-DB_USERNAME=root
-DB_PASSWORD=<root_password>
-```
-
-For information on how Laravel uses the *.env* file, see [Laravel Environment Configuration](https://laravel.com/docs/5.4/configuration#environment-configuration).
-
-### Run the sample locally
-
-Run [Laravel database migrations](https://laravel.com/docs/5.4/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the *database/migrations* directory in the Git repository.
-
-```bash
-php artisan migrate
-```
-
-Generate a new Laravel application key.
-
-```bash
-php artisan key:generate
-```
-
-Run the application.
+ ```azurecli-interactive
+ az mysql flexible-server create \
+ --name <your-mysql-server-name> \
+ --resource-group rg-php-demo \
+ --location centralus \
+ --admin-user <your-mysql-admin-username> \
+ --admin-password <your-mysql-admin-password>
+ ```
-```bash
-php artisan serve
-```
-
-Navigate to `http://localhost:8000` in a browser. Add a few tasks in the page.
--
-To stop PHP, type `Ctrl + C` in the terminal.
-
-## Create a MySQL Flexible Server
+   You've now created a flexible server in the CentralUS region. The server is based on the Burstable B1MS compute SKU, with 32 GB storage, a 7-day backup retention period, and configured with public access connectivity.
-In this step, you create a MySQL database in [Azure Database for MySQL Flexible Server](../index.yml). Later, you configure the PHP application to connect to this database. In the [Azure Cloud Shell](../../cloud-shell/overview.md), create a server in with the [`az flexible-server create`](/cli/azure/mysql/server#az-mysql-flexible-server-create) command.
+1. Next, to create a firewall rule for your MySQL flexible server to allow client connections, run the following command. When both the starting IP and end IP are set to 0.0.0.0, only other Azure resources (like App Service apps, VMs, and AKS clusters) can connect to the flexible server.
-```azurecli-interactive
-az mysql flexible-server create --resource-group myResourceGroup --public-access <IP-Address>
-```
+ ```azurecli-interactive
+ az mysql flexible-server firewall-rule create \
+ --name <your-mysql-server-name> \
+ --resource-group rg-php-demo \
+ --rule-name AllowAzureIPs \
+ --start-ip-address 0.0.0.0 \
+ --end-ip-address 0.0.0.0
+ ```
-> [!IMPORTANT]
->
->* Make a note of the **servername** and **connection string** to use it in the next step to connect and run laravel data migration.
-> * For **IP-Address** argument, provide the IP of your client machine. The server is locked when created and you need to permit access to your client machine to manage the server locally.
+1. To create a new MySQL production database *sampledb* to use with the PHP application, run the following command:
-### Configure server firewall to allow web app to connect to the server
+ ```azurecli-interactive
+ az mysql flexible-server db create \
+ --resource-group rg-php-demo \
+ --server-name <your-mysql-server-name> \
+ --database-name sampledb
+ ```
-In the Cloud Shell, create a firewall rule for your MySQL server to allow client connections by using the az mysql server firewall-rule create command. When both starting IP and end IP are set to ```0.0.0.0```, the firewall is only opened for other Azure services that do not have a static IP to connect to the server.
-```azurecli
-az mysql flexible-server firewall-rule create --name allanyAzureIPs --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-```
+## Build your application
-### Connect to production MySQL server locally
+For the purposes of this tutorial, we'll use a sample PHP application that displays and manages a product catalog. The application provides basic functionality such as viewing the products in the catalog, adding new products, updating item prices, and removing products.
-In the local terminal window, connect to the MySQL server in Azure. Use the value you specified previously for ```<admin-user>``` and ```<mysql-server-name>``` . When prompted for a password, use the password you specified when you created the database in Azure.
+To learn more about the application code, go ahead and explore the app in the [GitHub repository](https://github.com/Azure-Samples/php-mysql-app-service). To learn how to connect a PHP app to MySQL flexible server, see [Quickstart: Connect using PHP](connect-php.md).
-```bash
-mysql -u <admin-user> -h <mysql-server-name>.mysql.database.azure.com -P 3306 -p
-```
+In this tutorial, we'll clone the sample app repository directly and learn how to deploy it on Azure App Service.
-### Create a production database
+1. To clone the sample application repository and change to the repository root, run the following commands:
-At the `mysql` prompt, create a database.
+ ```bash
+ git clone https://github.com/Azure-Samples/php-mysql-app-service.git
+ cd php-mysql-app-service
+ ```
-```sql
-CREATE DATABASE sampledb;
-```
+1. Run the following command to ensure that the default branch is `main`.
-### Create a user with permissions
+ ```bash
+ git branch -m main
+ ```
-Create a database user called *phpappuser* and give it all privileges in the `sampledb` database. For simplicity of the tutorial, use *MySQLAzure2020* as the password.
+## Create and configure an Azure App Service Web App
-```sql
-CREATE USER 'phpappuser' IDENTIFIED BY 'MySQLAzure2020';
-GRANT ALL PRIVILEGES ON sampledb.* TO 'phpappuser';
-```
+In Azure App Service (Web Apps, API Apps, or Mobile Apps), an app always runs in an App Service plan. An App Service plan defines a set of compute resources for a web app to run. In this step, we'll create an Azure App Service plan and an App Service web app within it, which will host the sample application.
-Exit the server connection by typing `quit`.
+1. To create an App Service plan in the Free pricing tier, run the following command:
-```sql
-quit
-```
+ ```azurecli-interactive
+ az appservice plan create --name plan-php-demo \
+ --resource-group rg-php-demo \
+ --location centralus \
+ --sku FREE --is-linux
+ ```
-## Connect app to MySQL flexible server
+1. If you want to deploy an application to an Azure web app using deployment methods like FTP or Local Git, you need to configure a deployment user with username and password credentials. After you configure your deployment user, you can use it for all your Azure App Service deployments.
-In this step, you connect the PHP application to the MySQL database you created in Azure Database for MySQL.
+ ```azurecli-interactive
+ az webapp deployment user set \
+ --user-name <your-deployment-username> \
+ --password <your-deployment-password>
+ ```
-<a name="devconfig"></a>
+1. To create an App Service web app with PHP 8.0 runtime and to configure the Local Git deployment option to deploy your app from a Git repository on your local computer, run the following command. Replace `<your-app-name>` with a globally unique app name (valid characters are a-z, 0-9, and -).
-### Configure the database connection
+ ```azurecli-interactive
+ az webapp create \
+ --resource-group rg-php-demo \
+ --plan plan-php-demo \
+ --name <your-app-name> \
+ --runtime "PHP|8.0" \
+ --deployment-local-git
+ ```
-In the repository root, create an *.env.production* file and copy the following variables into it. Replace the placeholder _&lt;mysql-server-name>_ in both *DB_HOST* and *DB_USERNAME*.
+ > [!IMPORTANT]
+ > In the Azure CLI output, the URL of the Git remote is displayed in the deploymentLocalGitUrl property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL, as you'll need it later.
-```
-APP_ENV=production
-APP_DEBUG=true
-APP_KEY=
-
-DB_CONNECTION=mysql
-DB_HOST=<mysql-server-name>.mysql.database.azure.com
-DB_DATABASE=sampledb
-DB_USERNAME=phpappuser
-DB_PASSWORD=MySQLAzure2017
-MYSQL_SSL=true
-```
+1. Next we'll configure the MySQL flexible server database connection settings on the Web App.
-Save the changes.
+ The `config.php` file in the sample PHP application retrieves the database connection information (server name, database name, server username and password) from environment variables using the `getenv()` function. In App Service, to set environment variables as **Application Settings** (*appsettings*), run the following command:
-> [!TIP]
-> To secure your MySQL connection information, this file is already excluded from the Git repository (See *.gitignore* in the repository root). Later, you learn how to configure environment variables in App Service to connect to your database in Azure Database for MySQL. With environment variables, you don't need the *.env* file in App Service.
->
+ ```azurecli-interactive
+ az webapp config appsettings set \
+ --name <your-app-name> \
+ --resource-group rg-php-demo \
+ --settings DB_HOST="<your-server-name>.mysql.database.azure.com" \
+ DB_DATABASE="sampledb" \
+ DB_USERNAME="<your-mysql-admin-username>" \
+ DB_PASSWORD="<your-mysql-admin-password>" \
+ MYSQL_SSL="true"
+ ```
+
+ Alternatively, you can use Service Connector to establish a connection between the App Service app and the MySQL flexible server. For more details, see [Integrate Azure Database for MySQL with Service Connector](../../service-connector/how-to-integrate-mysql.md).
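+
+   As a rough sketch only (the exact parameter names are an assumption here; check `az webapp connection create mysql-flexible --help` for the syntax in your CLI version), a Service Connector connection using secret-based authentication might look like this:
+
+   ```azurecli-interactive
+   az webapp connection create mysql-flexible \
+     --resource-group rg-php-demo \
+     --name <your-app-name> \
+     --target-resource-group rg-php-demo \
+     --server <your-mysql-server-name> \
+     --database sampledb \
+     --secret name=<your-mysql-admin-username> secret=<your-mysql-admin-password>
+   ```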
-### Configure TLS/SSL certificate
+## Deploy your application using Local Git
-By default, MySQL Flexible Server enforces TLS connections from clients. To connect to your MySQL database in Azure, you must use the [*.pem* certificate supplied by Azure Database for MySQL Flexible Server](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem). Download [this certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)) and place it in the **SSL** folder in the local copy of the sample app repository.
+Now, we'll deploy the sample PHP application to Azure App Service using the Local Git deployment option.
-Open *config/database.php* and add the `sslmode` and `options` parameters to `connections.mysql`, as shown in the following code.
+1. Since you're deploying the main branch, you need to set the default deployment branch for your App Service app to main. To set the DEPLOYMENT_BRANCH under **Application Settings**, run the following command:
-```php
-'mysql' => [
- ...
- 'sslmode' => env('DB_SSLMODE', 'prefer'),
- 'options' => (env('MYSQL_SSL') && extension_loaded('pdo_mysql')) ? [
- PDO::MYSQL_ATTR_SSL_KEY => '/ssl/DigiCertGlobalRootCA.crt.pem',
- ] : []
-],
-```
+ ```azurecli-interactive
+ az webapp config appsettings set \
+ --name <your-app-name> \
+ --resource-group rg-php-demo \
+ --settings DEPLOYMENT_BRANCH='main'
+ ```
-### Test the application locally
+1. Verify that you are in the application repository's root directory.
-Run Laravel database migrations with *.env.production* as the environment file to create the tables in your MySQL database in Azure Database for MySQL. Remember that _.env.production_ has the connection information to your MySQL database in Azure.
+1. To add an Azure remote to your local Git repository, run the following command.
-```bash
-php artisan migrate --env=production --force
-```
+ **Note:** Replace `<deploymentLocalGitUrl>` with the URL of the Git remote that you saved in the **Create an App Service web app** step.
-*.env.production* doesn't have a valid application key yet. Generate a new one for it in the terminal.
+ ```azurecli-interactive
+ git remote add azure <deploymentLocalGitUrl>
+ ```
-```bash
-php artisan key:generate --env=production --force
-```
+1. To deploy your app by performing a `git push` to the Azure remote, run the following command. When Git Credential Manager prompts you for credentials, enter the deployment credentials that you created earlier, not the credentials you use to sign in to the Azure portal.
-Run the sample application with *.env.production* as the environment file.
+ ```azurecli-interactive
+ git push azure main
+ ```
-```bash
-php artisan serve --env=production
-```
+The deployment may take a few minutes to succeed.
-Navigate to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
+## Test your application
-Add a few tasks in the page.
+Finally, test the application by browsing to `https://<app-name>.azurewebsites.net`, and then add, view, update or delete items from the product catalog.
-To stop PHP, type `Ctrl + C` in the terminal.
+Congratulations! You have successfully deployed a sample PHP application to Azure App Service and integrated it with Azure Database for MySQL - Flexible Server on the back end.
-### Commit your changes
+## Update and redeploy the app
-Run the following Git commands to commit your changes:
+To update the Azure app, make the necessary code changes, commit all the changes in Git, and then push the code changes to Azure.
```bash git add .
-git commit -m "database.php updates"
-```
-
-Your app is ready to be deployed.
-
-## Deploy to Azure
-
-In this step, you deploy the MySQL-connected PHP application to Azure App Service.
-
-### Configure a deployment user
-
-FTP and local Git can deploy to an Azure web app by using a deployment user. Once you configure your deployment user, you can use it for all your Azure deployments. Your account-level deployment username and password are different from your Azure subscription credentials.
-
-To configure the deployment user, run the [az webapp deployment user set](/cli/azure/webapp/deployment/user#az-webapp-deployment-user-set) command in Azure Cloud Shell. Replace _&lt;username>_ and _&lt;password>_ with your deployment user username and password.
-
-The username must be unique within Azure, and for local Git pushes, must not contain the '@' symbol.
-The password must be at least eight characters long, with two of the following three elements: letters, numbers, and symbols.
-
-```azurecli
-az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku F1 --is-linux
-```
-
-The JSON output shows the password as null. If you get a 'Conflict'. Details: 409 error, change the username. If you get a 'Bad Request'. Details: 400 error, use a stronger password. **Record your username and password to use to deploy your web apps.**
-
-### Create an App Service plan
-
-In the Cloud Shell, create an App Service plan in the resource group with the [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-create) command. The following example creates an App Service plan named myAppServicePlan in the Free pricing tier (--sku F1) and in a Linux container (--is-linux).
-
-az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku F1 --is-linux
-
-<a name="create"></a>
-
-### Create a web app
-
-Create a [web app](../../app-service/overview.md#app-service-on-linux) in the myAppServicePlan App Service plan.
-
-In the Cloud Shell, you can use the [az webapp create](/cli/azure/webapp#az-webapp-create) command. In the following example, replace _&lt;app-name>_ with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.0`. To see all supported runtimes, run [az webapp list-runtimes --os linux](/cli/azure/webapp#az-webapp-list-runtimes).
-
-```azurecli
-az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.3" --deployment-local-git
-```
-
-When the web app has been created, the Azure CLI shows output similar to the following example:
-
-```
-Local git is configured with url of 'https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git'
-{
- "availabilityState": "Normal",
- "clientAffinityEnabled": true,
- "clientCertEnabled": false,
- "cloningInfo": null,
- "containerSize": 0,
- "dailyMemoryTimeQuota": 0,
- "defaultHostName": "<app-name>.azurewebsites.net",
- "deploymentLocalGitUrl": "https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git",
- "enabled": true,
- < JSON data removed for brevity. >
-}
-```
-
-You've created an empty new web app, with git deployment enabled.
-
-> [!NOTE]
-> The URL of the Git remote is shown in the deploymentLocalGitUrl property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL as you need it later.
-
-### Configure database settings
-
-In App Service, you set environment variables as *app settings* by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command.
-
-The following command configures the app settings `DB_HOST`, `DB_DATABASE`, `DB_USERNAME`, and `DB_PASSWORD`. Replace the placeholders _&lt;app-name>_ and _&lt;mysql-server-name>_.
-
-```azurecli-interactive
-az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DB_HOST="<mysql-server-name>.mysql.database.azure.com" DB_DATABASE="sampledb" DB_USERNAME="phpappuser" DB_PASSWORD="MySQLAzure2017" MYSQL_SSL="true"
-```
-
-You can use the PHP [getenv](https://www.php.net/manual/en/function.getenv.php) method to access the settings. the Laravel code uses an [env](https://laravel.com/docs/5.4/helpers#method-env) wrapper over the PHP `getenv`. For example, the MySQL configuration in *config/database.php* looks like the following code:
-
-```php
-'mysql' => [
- 'driver' => 'mysql',
- 'host' => env('DB_HOST', 'localhost'),
- 'database' => env('DB_DATABASE', 'forge'),
- 'username' => env('DB_USERNAME', 'forge'),
- 'password' => env('DB_PASSWORD', ''),
- ...
-],
-```
-
-### Configure Laravel environment variables
-
-Laravel needs an application key in App Service. You can configure it with app settings.
-
-In the local terminal window, use `php artisan` to generate a new application key without saving it to *.env*.
-
-```bash
-php artisan key:generate --show
-```
-
-In the Cloud Shell, set the application key in the App Service app by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command. Replace the placeholders _&lt;app-name>_ and _&lt;outputofphpartisankey:generate>_.
-
-```azurecli-interactive
-az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings APP_KEY="<output_of_php_artisan_key:generate>" APP_DEBUG="true"
-```
-
-`APP_DEBUG="true"` tells Laravel to return debugging information when the deployed app encounters errors. When running a production application, set it to `false`, which is more secure.
-
-### Set the virtual application path
-
-[Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the *public* directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. However, you can use `.htaccess` to rewrite all requests to point to */public* instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
-
-For more information, see [Change site root](../../app-service/configure-language-php.md?pivots=platform-linux#change-site-root).
-
-### Push to Azure from Git
-
-Back in the local terminal window, add an Azure remote to your local Git repository. Replace _&lt;deploymentLocalGitUrl-from-create-step>_ with the URL of the Git remote that you saved from [Create a web app](#create-a-web-app).
-
-```bash
-git remote add azure <deploymentLocalGitUrl-from-create-step>
-```
-
-Push to the Azure remote to deploy your app with the following command. When Git Credential Manager prompts you for credentials, make sure you enter the credentials you created in **Configure a deployment user**, not the credentials you use to sign in to the Azure portal.
-
-```bash
+git commit -m "Update Azure app"
git push azure main
```
-This command may take a few minutes to run. While running, it displays information similar to the following example:
-
-<pre>
-Counting objects: 3, done.
-Delta compression using up to 8 threads.
-Compressing objects: 100% (3/3), done.
-Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
-Total 3 (delta 2), reused 0 (delta 0)
-remote: Updating branch 'main'.
-remote: Updating submodules.
-remote: Preparing deployment for commit id 'a5e076db9c'.
-remote: Running custom deployment command...
-remote: Running deployment command...
-...
-&lt; Output has been truncated for readability &gt;
-</pre>
-
-### Browse to the Azure app
-
-Browse to `http://<app-name>.azurewebsites.net` and add a few tasks to the list.
--
-Congratulations, you're running a data-driven PHP app in Azure App Service.
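If you prefer the command line over the browser, the Azure CLI can open the deployed site for you; this is an optional convenience rather than a required step:

```azurecli-interactive
# Optional: open the deployed app in your default browser.
az webapp browse --name <app-name> --resource-group myResourceGroup
```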
-
-## Update model locally and redeploy
-
-In this step, you make a simple change to the `task` data model and the webapp, and then publish the update to Azure.
-
-For the tasks scenario, you modify the application so that you can mark a task as complete.
-
-### Add a column
-
-In the local terminal window, navigate to the root of the Git repository.
-
-Generate a new database migration for the `tasks` table:
-
-```bash
-php artisan make:migration add_complete_column --table=tasks
-```
-
-This command shows you the name of the migration file that's generated. Find this file in *database/migrations* and open it.
-
-Replace the `up` method with the following code:
-
-```php
-public function up()
-{
- Schema::table('tasks', function (Blueprint $table) {
- $table->boolean('complete')->default(False);
- });
-}
-```
-
-The preceding code adds a boolean column in the `tasks` table called `complete`.
-
-Replace the `down` method with the following code for the rollback action:
-
-```php
-public function down()
-{
- Schema::table('tasks', function (Blueprint $table) {
- $table->dropColumn('complete');
- });
-}
-```
-
-In the local terminal window, run Laravel database migrations to make the change in the local database.
-
-```bash
-php artisan migrate
-```
-
-Based on the [Laravel naming convention](https://laravel.com/docs/5.4/eloquent#defining-models), the model `Task` (see *app/Task.php*) maps to the `tasks` table by default.
-
-### Update application logic
-
-Open the *routes/web.php* file. The application defines its routes and business logic here.
-
-At the end of the file, add a route with the following code:
-
-```php
-/**
- * Toggle Task completeness
- */
-Route::post('/task/{id}', function ($id) {
- error_log('INFO: post /task/'.$id);
- $task = Task::findOrFail($id);
-
- $task->complete = !$task->complete;
- $task->save();
-
- return redirect('/');
-});
-```
-
-The preceding code makes a simple update to the data model by toggling the value of `complete`.
-
-### Update the view
-
-Open the *resources/views/tasks.blade.php* file. Find the `<tr>` opening tag and replace it with:
-
-```html
-<tr class="{{ $task->complete ? 'success' : 'active' }}" >
-```
-
-The preceding code changes the row color depending on whether the task is complete.
-
-In the next line, you have the following code:
-
-```html
-<td class="table-text"><div>{{ $task->name }}</div></td>
-```
-
-Replace the entire line with the following code:
-
-```html
-<td>
- <form action="{{ url('task/'.$task->id) }}" method="POST">
- {{ csrf_field() }}
-
- <button type="submit" class="btn btn-xs">
- <i class="fa {{$task->complete ? 'fa-check-square-o' : 'fa-square-o'}}"></i>
- </button>
- {{ $task->name }}
- </form>
-</td>
-```
-
-The preceding code adds the submit button that references the route that you defined earlier.
-
-### Test the changes locally
-
-In the local terminal window, run the development server from the root directory of the Git repository.
-
-```bash
-php artisan serve
-```
-
-To see the task status change, navigate to `http://localhost:8000` and select the checkbox.
--
-To stop PHP, type `Ctrl + C` in the terminal.
-
-### Publish changes to Azure
-
-In the local terminal window, run Laravel database migrations with the production connection string to make the change in the Azure database.
-
-```bash
-php artisan migrate --env=production --force
-```
-
-Commit all the changes in Git, and then push the code changes to Azure.
-
-```bash
-git add .
-git commit -m "added complete checkbox"
-git push azure main
-```
-
-Once the `git push` is complete, navigate to the Azure app and test the new functionality.
--
-If you added any tasks, they are retained in the database. Updates to the data schema leave existing data intact.
+Once the `git push` is complete, navigate to or refresh the Azure app to test the new functionality.
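To double-check the schema change itself, one option is to describe the `tasks` table with the `mysql` client, using the host, database, and user configured earlier in this tutorial. This is a sketch only: depending on your MySQL deployment type, the user name may need an `@<server-name>` suffix, and the SSL flag depends on your client version.

```bash
# Sketch: confirm the new 'complete' column exists in the tasks table.
# Adjust the user name format and SSL options for your MySQL deployment.
mysql --host <mysql-server-name>.mysql.database.azure.com \
      --user phpappuser --password \
      --ssl-mode=REQUIRED \
      --execute "DESCRIBE sampledb.tasks;"
```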
## Clean up resources
-In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
+In this tutorial, you created all the Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
-```azurecli
-az group delete --name myResourceGroup
+```azurecli-interactive
+az group delete --name rg-php-demo
```
-<a name="next"></a>
- ## Next steps > [!div class="nextstepaction"]
-> [How to manage your resources in Azure portal](../../azure-resource-manager/management/manage-resources-portal.md) <br/>
+> [How to manage your resources in Azure portal](../../azure-resource-manager/management/manage-resources-portal.md)
+ > [!div class="nextstepaction"] > [How to manage your server](how-to-manage-server-cli.md)+
mysql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-webapp-server-vnet.md
Title: 'Tutorial: Create Azure Database for MySQL Flexible Server and Azure App Service Web App in same virtual network'
-description: Quickstart guide to create Azure Database for MySQL Flexible Server with Web App in a virtual network
+ Title: 'Tutorial: Connect an App Service web app to an Azure Database for MySQL flexible server in a virtual network'
+description: Tutorial to create and connect Web App to Azure Database for MySQL Flexible Server in a virtual network
Last updated 03/18/2021
-# Tutorial: Create an Azure Database for MySQL - Flexible Server with App Services Web App in virtual network
+# Tutorial: Connect an App Service web app to an Azure Database for MySQL flexible server in a virtual network
[[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-This tutorial shows you how create a Azure App Service Web App with MySQL Flexible Server inside a [Virtual network](../../virtual-network/virtual-networks-overview.md).
+This tutorial shows you how to create and connect an Azure App Service Web App to an Azure Database for MySQL flexible server isolated inside the same or different [virtual networks](../../virtual-network/virtual-networks-overview.md).
In this tutorial you will learn how to: >[!div class="checklist"]
+>
> * Create a MySQL flexible server in a virtual network
-> * Create a subnet to delegate to App Service
-> * Create a web app
+> * Create a subnet to delegate to App Service and create a web app
> * Add the web app to the virtual network
-> * Connect to Postgres from the web app
+> * Connect to MySQL flexible server from the web app
+> * Connect a Web app and MySQL flexible server isolated in different VNets
## Prerequisites
Configure the web app to allow all outbound connections from within the virtual
az webapp config set --name mywebapp --resource-group myresourcegroup --generic-configurations '{"vnetRouteAllEnabled": true}' ```
+## App Service Web app and MySQL flexible server in different virtual networks
+
+If you have created the App Service app and the MySQL flexible server in different virtual networks (VNets), you will need to use one of the following methods to establish a seamless connection:
+
+- **Connect the two VNets using VNet peering** (local or global). See the [Connect virtual networks with virtual network peering](../../virtual-network/tutorial-connect-virtual-networks-cli.md) guide.
+- **Link MySQL flexible server's Private DNS zone to the web app's VNet using virtual network links.** If you use the Azure portal or the Azure CLI to create MySQL flexible servers in a VNet, a new private DNS zone is auto-provisioned in your subscription using the server name provided. Navigate to the flexible server's private DNS zone and follow the [How to link the private DNS zone to a virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) guide to set up a virtual network link.
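As a sketch of the second option, the virtual network link can also be created with the Azure CLI. The zone and VNet names below are placeholders to replace with your own values:

```azurecli-interactive
# Sketch: link the flexible server's private DNS zone to the web app's VNet.
# <private-dns-zone-name> is the zone that was auto-provisioned with the server.
az network private-dns link vnet create \
  --resource-group myresourcegroup \
  --zone-name <private-dns-zone-name> \
  --name mysql-webapp-link \
  --virtual-network <webapp-vnet-name> \
  --registration-enabled false
```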
+ ## Clean up resources Clean up all resources you created in the tutorial using the following command. This command deletes all the resources in this resource group.
mysql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-powershell.md
Last updated 06/20/2022 + # Auto grow storage in Azure Database for MySQL server using PowerShell [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)]
mysql Quickstart Create Mysql Server Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-portal.md
Title: 'Quickstart: Create a server - Azure portal - Azure Database for MySQL'
description: This article walks you through using the Azure portal to create a sample Azure Database for MySQL server in about five minutes. + - Last updated 06/20/2022
object-anchors Unity Remoting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/quickstarts/unity-remoting.md
description: In this quickstart, you learn how to enable Unity Remoting in a pro
Previously updated : 04/04/2022 Last updated : 06/22/2022
To complete this quickstart, make sure you have:
|Component |Unity 2019 |Unity 2020 | |--|-|-| |Unity Editor | 2019.4.36f1 | 2020.3.30f1 |
-|Windows Mixed Reality XR Plugin | 2.9.2 | 4.6.2 |
+|Windows Mixed Reality XR Plugin | 2.9.3 | 4.6.3 |
|Holographic Remoting Player | 2.7.5 | 2.7.5 | |Azure Object Anchors SDK | 0.19.0 | 0.19.0 | |Mixed Reality WinRT Projections | 0.5.2009 | 0.5.2009 |
To complete this quickstart, make sure you have:
## One-time setup 1. On your HoloLens, install version 2.7.5 or newer of the [Holographic Remoting Player](https://www.microsoft.com/p/holographic-remoting-player/9nblggh4sv40) via the Microsoft Store. 1. In the <a href="/windows/mixed-reality/develop/unity/welcome-to-mr-feature-tool" target="_blank">Mixed Reality Feature Tool</a>, under the **Platform Support** section, install the **Mixed Reality WinRT Projections** feature package, version 0.5.2009 or newer, into your Unity project folder.
-1. In the Unity **Package Manager** window, ensure that the **Windows XR Plugin** is updated to version 2.9.2 or newer for Unity 2019, or version 4.6.2 or newer for Unity 2020.
-1. In the Unity **Project Settings** window, click on the **XR Plug-in Management** section, select the **PC Standalone** tab, and ensure that the box for **Windows Mixed Reality** is checked, as well as **Initialize XR on Startup**.
-1. Open the **Windows XR Plugin Remoting** window from the **Window/XR** menu, select **Remote to Device** from the drop-down, and enter your device's IP address in the **Remote Machine** box.
-1. Place .ou model files in `%USERPROFILE%\AppData\LocalLow\<companyname>\<productname>` where `<companyname>` and `<productname>` match the values in the **Player** section of your project's **Project Settings** (e.g. `Microsoft\AOABasicApp`). (See the **Windows Editor and Standalone Player** section of [Unity - Scripting API: Application.persistentDataPath](https://docs.unity3d.com/ScriptReference/Application-persistentDataPath.html).)
+1. In the Unity **Package Manager** window, ensure that the **Windows XR Plugin** is updated to version 2.9.3 or newer for Unity 2019, or version 4.6.3 or newer for Unity 2020.
+1. In the Unity **Project Settings** window, select the **XR Plug-in Management** section, select the **PC Standalone** tab, and ensure that the **Windows Mixed Reality** and **Initialize XR on Startup** checkboxes are checked.
+1. Place `.ou` model files in `%USERPROFILE%\AppData\LocalLow\<companyname>\<productname>` where `<companyname>` and `<productname>` match the values in the **Player** section of your project's **Project Settings** (for example, `Microsoft\AOABasicApp`). (See the **Windows Editor and Standalone Player** section of [Unity - Scripting API: Application.persistentDataPath](https://docs.unity3d.com/ScriptReference/Application-persistentDataPath.html).)
## Using Remoting with Object Anchors
+1. Launch the **Holographic Remoting Player** app on your HoloLens. Your device's IP address will be displayed for convenient reference.
1. Open your project in the Unity Editor.
-1. Launch the **Holographic Remoting Player** app on your HoloLens.
-1. *Before* entering **Play Mode** for the first time, *uncheck* the **Connect on Play** checkbox, and manually connect to the HoloLens by pressing **Connect**.
- 1. Enter **Play Mode** to finish initializing the connection.
- 1. After this, you may reenable **Connect on Play** for the remainder of the session.
-1. Enter and exit Play Mode as needed; iterate on changes in the Editor; use Visual Studio to debug script execution, and all the normal Unity development activities you're used to in Play Mode!
+1. Open the **Windows XR Plugin Remoting** window from the **Window/XR** menu, select **Remote to Device** from the drop-down, ensure your device's IP address is entered in the **Remote Machine** box, and make sure that the **Connect on Play** checkbox is checked.
+1. Enter and exit Play Mode as needed - Unity will connect to the Player app running on the device, and display your scene in real time! You can iterate on changes in the Editor, use Visual Studio to debug script execution, and do all the normal Unity development activities you're used to in Play Mode!
## Known limitations
-* Some Object Anchors SDK features are not supported since they rely on access to the HoloLens cameras which is not currently available via Remoting. These include <a href="/dotnet/api/microsoft.azure.objectanchors.objectobservationmode">Active Observation Mode</a> and <a href="/dotnet/api/microsoft.azure.objectanchors.objectinstancetrackingmode">High Accuracy Tracking Mode</a>.
-* The Object Anchors SDK currently only supports Unity Remoting while using the **Windows Mixed Reality XR Plugin**. If the **OpenXR XR Plugin** is used, <a href="/dotnet/api/microsoft.azure.objectanchors.objectobserver.issupported">`ObjectObserver.IsSupported`</a> will return `false` in **Play Mode** and other APIs may throw exceptions.
+* Some Object Anchors SDK features aren't supported since they rely on access to the HoloLens cameras, which isn't currently available via Remoting. These include <a href="/dotnet/api/microsoft.azure.objectanchors.objectobservationmode">Active Observation Mode</a> and <a href="/dotnet/api/microsoft.azure.objectanchors.objectinstancetrackingmode">High Accuracy Tracking Mode</a>.
+* The Object Anchors SDK currently only supports Unity Remoting while using the **Windows Mixed Reality XR Plugin**. If the **OpenXR XR Plugin** is used, <a href="/dotnet/api/microsoft.azure.objectanchors.objectobserver.issupported">`ObjectObserver.IsSupported`</a> will return `false` in **Play Mode** and other APIs may throw exceptions.
openshift Howto Configure Ovn Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-configure-ovn-kubernetes.md
+
+ Title: Configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters (preview)
+description: In this how-to article, learn how to configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters (preview).
++++ Last updated : 06/13/2022
+topic: how-to
+keywords: azure, openshift, aro, red hat, azure CLI, azure portal, ovn, ovn-kubernetes, CNI, Container Network Interface
+Customer intent: I need to configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
++
+# Configure OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters
+
+This article explains how to configure the OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters.
+
+## About the OVN-Kubernetes default Container Network Interface (CNI) network provider (preview)
+
+OVN-Kubernetes Container Network Interface (CNI) for Azure Red Hat OpenShift (ARO) clusters is now available in preview.
+
+The OpenShift Container Platform cluster uses a virtualized network for pod and service networks. The OVN-Kubernetes Container Network Interface (CNI) plug-in is a network provider for the default cluster network. OVN-Kubernetes, which is based on the Open Virtual Network (OVN), provides an overlay-based networking implementation.
+
+A cluster that uses the OVN-Kubernetes network provider also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.
+
+> [!IMPORTANT]
+> Currently, this Azure Red Hat OpenShift feature is being offered in preview only. Preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they are excluded from the service-level agreements and limited warranty. Azure Red Hat OpenShift previews are partially covered by customer support on a best-effort basis. As such, these features are not meant for production use.
+
+## OVN-Kubernetes features
+
+The OVN-Kubernetes CNI cluster network provider offers the following features:
+
+* Uses OVN to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution.
+* Implements Kubernetes network policy support, including ingress and egress rules.
+* Uses the Generic Network Virtualization Encapsulation (Geneve) protocol rather than the Virtual Extensible LAN (VXLAN) protocol to create an overlay network between nodes.
+
+For more information about OVN-Kubernetes CNI network provider, see [About the OVN-Kubernetes default Container Network Interface (CNI) network provider](https://docs.openshift.com/container-platform/4.10/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.html).
+
+## Prerequisites
+
+Complete the following prerequisites.
+### Install and use the preview Azure Command-Line Interface (CLI)
+
+> [!NOTE]
+> The Azure CLI extension is required for the preview feature only.
+
+If you choose to install and use the CLI locally, ensure you're running Azure CLI version 2.37.0 or later. Run `az --version` to find the version. For details on installing or upgrading Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+1. Use the following URL to download both the Python wheel and the CLI extension:
+
+ [https://aka.ms/az-aroext-latest.whl](https://aka.ms/az-aroext-latest.whl)
+
+2. Run the following command:
+
+```azurecli-interactive
+az extension add --upgrade -s <path to downloaded .whl file>
+```
+
+3. Verify the CLI extension is being used:
+
+```azurecli-interactive
+az extension list
+[
+ {
+ "experimental": false,
+ "extensionType": "whl",
+ "name": "aro",
+ "path": "<path may differ depending on system>",
+ "preview": true,
+ "version": "1.0.6"
+ }
+]
+```
+
+4. Run the following command:
+
+```azurecli-interactive
+az aro create --help
+```
+
+The result should show the `--sdn-type` option, as follows:
+
+```json
+--sdn-type --software-defined-network-type : SDN type either "OpenShiftSDN" (default) or "OVNKubernetes". Allowed values: OVNKubernetes, OpenShiftSDN
+```
+
+## Create an Azure Red Hat OpenShift cluster with OVN as the network provider
+
+The process to create an Azure Red Hat OpenShift cluster with OVN is exactly the same as the existing process explained in [Tutorial: Create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md), with the following exception: you must also pass in the SDN type of `OVNKubernetes` in step 5 below.
+
+The following high-level procedure outlines the steps to create an Azure Red Hat OpenShift cluster with OVN as the network provider:
+
+1. Install the preview Azure CLI extension.
+2. Verify your permissions.
+3. Register the resource providers.
+4. Create a virtual network containing two empty subnets.
+5. Create an Azure Red Hat OpenShift cluster by using OVN CNI network provider.
+6. Verify the Azure Red Hat OpenShift cluster is using OVN CNI network provider.
+
+## Verify your permissions
+
+Using the OVN CNI network provider for Azure Red Hat OpenShift clusters requires you to create a resource group, which will contain the virtual network for the cluster. You must have either Contributor and User Access Administrator permissions, or Owner permissions, either directly on the virtual network or on the resource group or subscription containing it.
+
+You'll also need sufficient Azure Active Directory permissions (either a member user of the tenant, or a guest user assigned with role Application administrator) for the tooling to create an application and service principal on your behalf for the cluster. For more information about user roles, see [Member and guest users](../active-directory/fundamentals/users-default-permissions.md#member-and-guest-users) and [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+
+## Register the resource providers
+
+If you have multiple Azure subscriptions, you must register the resource providers. For information about the registration procedure, see [Register the resource providers](tutorial-create-cluster.md#register-the-resource-providers).
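The registration itself is a handful of `az provider register` calls. The provider names below are taken from the linked tutorial; check it for the authoritative list:

```azurecli-interactive
# Sketch: register the resource providers used by Azure Red Hat OpenShift.
az provider register -n Microsoft.RedHatOpenShift --wait
az provider register -n Microsoft.Compute --wait
az provider register -n Microsoft.Storage --wait
az provider register -n Microsoft.Authorization --wait
```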
+
+## Create a virtual network containing two empty subnets
+
+If you have an existing virtual network that meets your needs, you can skip this step. For the steps to create a virtual network, see [Create a virtual network containing two empty subnets](tutorial-create-cluster.md#create-a-virtual-network-containing-two-empty-subnets).
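As a sketch, the subnet names below match the ones used by the `az aro create` command later in this article; the address ranges are illustrative assumptions you should adjust to your environment:

```azurecli-interactive
# Sketch: create a virtual network with two empty subnets for the cluster.
az network vnet create --resource-group $RESOURCEGROUP --name aro-vnet --address-prefixes 10.0.0.0/22

az network vnet subnet create --resource-group $RESOURCEGROUP --vnet-name aro-vnet \
  --name master-subnet --address-prefixes 10.0.0.0/23 --service-endpoints Microsoft.ContainerRegistry

az network vnet subnet create --resource-group $RESOURCEGROUP --vnet-name aro-vnet \
  --name worker-subnet --address-prefixes 10.0.2.0/23 --service-endpoints Microsoft.ContainerRegistry
```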
+
+## Create an Azure Red Hat OpenShift cluster by using OVN-Kubernetes CNI network provider
+
+Run the following command to create an Azure Red Hat OpenShift cluster that uses the OVN CNI network provider:
+
+```azurecli-interactive
+az aro create --resource-group $RESOURCEGROUP \
+ --name $CLUSTER \
+ --vnet aro-vnet \
+ --master-subnet master-subnet \
+ --worker-subnet worker-subnet \
+ --sdn-type OVNKubernetes \
+ --pull-secret @pull-secret.txt
+```
+
+## Verify an Azure Red Hat OpenShift cluster is using the OVN CNI network provider
+
+After the cluster is successfully configured to use the OVN CNI network provider, sign in to your account and run the following command:
+
+```
+oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
+```
+
+The value of `status.networkType` must be `OVNKubernetes`.
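Signing in typically means retrieving the cluster credentials and API server URL first. A sketch using commands from the cluster tutorial, with `<kubeadminPassword>` standing in for the password returned by the first command:

```azurecli-interactive
# Sketch: fetch credentials and sign in before running the oc command above.
az aro list-credentials --name $CLUSTER --resource-group $RESOURCEGROUP
apiServer=$(az aro show --name $CLUSTER --resource-group $RESOURCEGROUP --query apiserverProfile.url -o tsv)
oc login $apiServer -u kubeadmin -p <kubeadminPassword>
```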
partner-solutions Add Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/add-connectors.md
Title: Add connectors for Confluent Cloud - Azure partner solutions
-description: This article describes how to install connectors for Confluent Cloud that you use with Azure resources.
+ Title: Azure services and Confluent Cloud integration - Azure partner solutions
+description: This article describes how to use Azure services and install connectors for Confluent Cloud integration.
Previously updated : 09/03/2021 Last updated : 06/24/2022
-# Add connectors for Confluent Cloud
+# Azure services and Confluent Cloud integrations
-This article describes how to install connectors to Azure resources for Confluent Cloud.
+This article describes how to use Azure services, such as Azure Functions, and how to install connectors to Azure resources for Confluent Cloud.
-## Connector to Azure Cosmos DB
+## Azure Cosmos DB connector
**Azure Cosmos DB Sink Connector fully managed connector** is generally available within Confluent Cloud. The fully managed connector eliminates the need for the development and management of custom integrations, and reduces the overall operational burden of connecting your data between Confluent Cloud and Azure Cosmos DB. The Microsoft Azure Cosmos Sink Connector for Confluent Cloud reads from and writes data to a Microsoft Azure Cosmos database. The connector polls data from Kafka and writes to database containers. To set up your connector, see [Azure Cosmos DB Sink Connector for Confluent Cloud](https://docs.confluent.io/cloud/current/connectors/cc-azure-cosmos-sink.html).
-**Azure Cosmos DB Self Managed connector** must be installed manually. First download an uber JAR from the [Cosmos DB Releases page](https://github.com/microsoft/kafka-connect-cosmosdb/releases). Or, you can [build your own uber JAR directly from the source code](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/README_Sink.md#install-sink-connector). Complete the installation by following the guidance described in the Confluent documentation for [installing connectors manually](https://docs.confluent.io/home/connect/install.html#install-connector-manually).
+**Azure Cosmos DB Self Managed connector** must be installed manually. First download an uber JAR from the [Cosmos DB Releases page](https://github.com/microsoft/kafka-connect-cosmosdb/releases). Or, you can [build your own uber JAR directly from the source code](https://github.com/microsoft/kafka-connect-cosmosdb/blob/dev/doc/README_Sink.md#install-sink-connector). Complete the installation by following the guidance described in the Confluent documentation for [installing connectors manually](https://docs.confluent.io/home/connect/install.html#install-connector-manually).
+
+## Azure Functions
+
+**Azure Functions Kafka trigger extension** is used to run your function code in response to messages in Kafka topics. You can also use a Kafka output binding to write from your function to a topic. For information about setup and configuration details, see [Apache Kafka bindings for Azure Functions overview](../../azure-functions/functions-bindings-kafka.md).
## Next steps
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
This model of high availability deployment enables Flexible server to be highly
Automatic backups are performed periodically from the primary database server, while the transaction logs are continuously archived to the backup storage from the standby replica. If the region supports availability zones, then backup data is stored on zone-redundant storage (ZRS). In regions that don't support availability zones, backup data is stored on locally redundant storage (LRS). :::image type="content" source="./media/business-continuity/concepts-same-zone-high-availability-architecture.png" alt-text="Same-zone high availability":::
+>[!NOTE]
+> See the [HA limitation section](#high-availabilitylimitations) for a current restriction with same-zone HA deployment.
+ ## Components and workflow ### Transaction completion
Flexible servers that are configured with high availability, log data is replica
## High availability - limitations
+>[!NOTE]
+> New server creations with **Same-zone HA** are currently restricted when you choose the primary server's AZ. Workarounds are to (a) create your same-zone HA server without choosing a primary AZ, or (b) create a single-instance (non-HA) server and then enable same-zone HA after the server is created.
+ * High availability is not supported with burstable compute tier. * High availability is supported only in regions where multiple zones are available. * Due to synchronous replication to the standby server, especially with zone-redundant HA, applications can experience elevated write and commit latency.
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-csharp.md
Last updated 11/30/2021
# Quickstart: Use .NET (C#) to connect and query data in Azure Database for PostgreSQL - Flexible Server This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using C#, and that you are new to working with Azure Database for PostgreSQL. ## Prerequisites+ For this quickstart you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
For this quickstart you need:
- Install [Npgsql](https://www.nuget.org/packages/Npgsql/) NuGet package in Visual Studio. ## Get connection information+ Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials. 1. Log in to the [Azure portal](https://portal.azure.com/).
Get the connection information needed to connect to the Azure Database for Postg
:::image type="content" source="./media/connect-csharp/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name"::: ## Step 1: Connect and insert data+ Use the following code to connect and load the data using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database. - [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) sets the CommandText property.
namespace Driver
[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues) ## Step 2: Read data+ Use the following code to connect and read the data using a **SELECT** SQL statement. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL. - [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) and [ExecuteReader()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteReader) to run the database commands.
namespace Driver
## Step 3: Update data+ Use the following code to connect and update the data using an **UPDATE** SQL statement. The code uses NpgsqlCommand class with method: - [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL. - [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand), sets the CommandText property.
namespace Driver
[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues) ## Step 4: Delete data+ Use the following code to connect and delete data using a **DELETE** SQL statement. The code uses NpgsqlCommand class with method [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database. Then, the code uses the [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) method, sets the CommandText property, and calls the method [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
az group delete \
``` ## Next steps+ > [!div class="nextstepaction"] > [Manage Azure Database for PostgreSQL server using Portal](./how-to-manage-server-portal.md)<br/>
postgresql How To Manage High Availability Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-high-availability-portal.md
Previously updated : 06/16/2022 Last updated : 06/23/2022 # Manage high availability in Flexible Server
This section provides details specifically for HA-related fields. You can follow
4. If you chose the Availability zone in step 2 and if you chose zone-redundant HA, then you can choose the standby zone. :::image type="content" source="./media/how-to-manage-high-availability-portal/choose-standby-availability-zone.png" alt-text="Screenshot of Standby AZ selection.":::
-5. If you want to change the default compute and storage, click **Configure server**.
+>[!NOTE]
+> See the [HA limitation section](concepts-high-availability.md#high-availabilitylimitations) for a current restriction with same-zone HA deployment.
+
+1. If you want to change the default compute and storage, click **Configure server**.
:::image type="content" source="./media/how-to-manage-high-availability-portal/configure-server.png" alt-text="Screenshot of configure compute and storage screen.":::
-6. If high availability option is checked, the burstable tier will not be available to choose. You can choose either
+2. If high availability option is checked, the burstable tier will not be available to choose. You can choose either
**General purpose** or **Memory Optimized** compute tiers. Then you can select **compute size** for your choice from the dropdown. :::image type="content" source="./media/how-to-manage-high-availability-portal/select-compute.png" alt-text="Compute tier selection screen.":::
-7. Select **storage size** in GiB using the sliding bar and select the **backup retention period** between 7 days and 35 days.
+3. Select **storage size** in GiB using the sliding bar and select the **backup retention period** between 7 days and 35 days.
:::image type="content" source="./media/how-to-manage-high-availability-portal/storage-backup.png" alt-text="Screenshot of Storage Backup.":::
-8. Click **Save**.
+4. Click **Save**.
## Enable high availability post server creation
Follow these steps to perform a planned failover from your primary to the standb
> > * The overall end-to-end operation time may be longer than the actual downtime experienced by the application. Please measure the downtime from the application perspective.
+## Enabling Zone redundant HA after the region supports AZ
+
+There are Azure regions that don't support availability zones. If you deployed non-HA servers in such a region, you can't directly enable zone-redundant HA on those servers after the region starts supporting availability zones. Instead, you can perform a restore and enable HA on the restored server. The following steps show how to enable zone-redundant HA for that server.
+
+1. From the overview page of the server, click **Restore** to [perform a PITR](how-to-restore-server-portal.md#restoring-to-the-latest-restore-point). Choose **Latest restore point**.
+2. Choose a server name and an availability zone.
+3. Click **Review + create**.
+4. A new Flexible server will be created from the backup.
+5. Once the new server is created, from the overview page of the server, follow the [guide](#enable-high-availability-post-server-creation) to enable HA.
+6. After data verification, you can optionally [delete](how-to-manage-server-portal.md#delete-a-server) the old server.
+7. Make sure your clients' connection strings are modified to point to your new HA-enabled server.
+
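Step 1 above uses the portal; if you prefer the CLI, the restore can be sketched as follows. Treat the parameter names as assumptions to confirm against your installed CLI version:

```azurecli-interactive
# Sketch: restore the existing server to a new flexible server (step 1 above).
az postgres flexible-server restore \
  --resource-group myresourcegroup \
  --name mynewserver \
  --source-server myexistingserver
```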
## Next steps
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Previously updated : 06/15/2022 Last updated : 06/23/2022
$ New Zone-redundant high availability deployments are temporarily blocked in th
$$ New server deployments are temporarily blocked in these regions. Already provisioned servers are fully supported.
-** Zone-redundant high availability can now be deployed when you provision new servers in these regions. Pre-existing servers deployed in AZ with *no preference* (which you can check on the Azure portal), the standby will be provisioned in the same AZ. To configure zone-redundant high availability, perform a point-in-time restore of the server and enable HA on the restored server.
+** Zone-redundant high availability can now be deployed when you provision new servers in these regions. For existing servers that were deployed in an AZ with *no preference* (which you can check on the Azure portal) before the region started supporting AZs, the standby will be provisioned in the same AZ as the primary server (same-zone HA) even when you enable zone-redundant HA. To enable zone-redundant high availability, [follow these steps](how-to-manage-high-availability-portal.md#enabling-zone-redundant-ha-after-the-region-supports-az).
<!-- We continue to add more regions for flexible server. --> > [!NOTE]
postgresql Application Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/application-best-practices.md
description: Learn about best practices for building an app by using Azure Datab
- + Previously updated : 12/10/2020 Last updated : 06/24/2022 # Best practices for building an application with Azure Database for PostgreSQL
Here are a few tools and practices that you can use to help debug performance is
With connection pooling, a fixed set of connections is established at the startup time and maintained. This also helps reduce the memory fragmentation on the server that is caused by the dynamic new connections established on the database server. The connection pooling can be configured on the application side if the app framework or database driver supports it. If that is not supported, the other recommended option is to leverage a proxy connection pooler service like [PgBouncer](https://pgbouncer.github.io/) or [Pgpool](https://pgpool.net/mediawiki/index.php/Main_Page) running outside the application and connecting to the database server. Both PgBouncer and Pgpool are community based tools that work with Azure Database for PostgreSQL. ### Retry logic to handle transient errors+ Your application might experience transient errors where connections to the database are dropped or lost intermittently. In such situations, the server is up and running after one to two retries in 5 to 10 seconds. A good practice is to wait for 5 seconds before your first retry. Then follow each retry by increasing the wait gradually, up to 60 seconds. Limit the maximum number of retries at which point your application considers the operation failed, so you can then further investigate. See [How to troubleshoot connection errors](./concepts-connectivity.md) to learn more. ### Enable read replication to mitigate failovers
-You can use [Data-in Replication](./concepts-read-replicas.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs. You'll notice a lag between the source and the replica because the replication is asynchronous. Network lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
+You can use [Data-in Replication](./concepts-read-replicas.md) for failover scenarios. When you're using read replicas, no automated failover between source and replica servers occurs. You'll notice a lag between the source and the replica because the replication is asynchronous. Network lag can be influenced by many factors, like the size of the workload running on the source server and the latency between datacenters. In most cases, replica lag ranges from a few seconds to a couple of minutes.
## Database deployment ### Configure CI/CD deployment pipeline+ Occasionally, you need to deploy changes to your database. In such cases, you can use continuous integration (CI) through [GitHub Actions](https://github.com/Azure/postgresql/blob/master/README.md) for your PostgreSQL server to update the database by running a custom script against it. ### Define manual database deployment process+ During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment: - Create a copy of a production database on a new database by using pg_dump.
During manual database deployment, follow these steps to minimize downtime or re
> If the application is like an e-commerce app and you can't put it in read-only state, deploy the changes directly on the production database after making a backup. These changes should occur during off-peak hours with low traffic to the app to minimize the impact, because some users might experience failed requests. Make sure your application code also handles any failed requests. ## Database schema and queries+ Here are a few tips to keep in mind when you build your database schema and your queries. ### Use BIGINT or UUID for Primary Keys+ When building a custom application, some frameworks may use `INT` instead of `BIGINT` for primary keys. When you use ```INT```, you run the risk that values in your database will exceed the storage capacity of the ```INT``` data type. Making this change to an existing production application can be time consuming and cost more development time. Another option is to use [UUID](https://www.postgresql.org/docs/current/datatype-uuid.html) for primary keys. This identifier uses an auto-generated 128-bit string, for example ```a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11```. Learn more about [PostgreSQL data types](https://www.postgresql.org/docs/8.1/datatype.html). ### Use indexes
When building custom application or some frameworks they maybe using `INT` inste
There are many types of [indexes](https://www.postgresql.org/docs/9.1/indexes.html) in Postgres which can be used in different ways. Using an index helps the server find and retrieve specific rows much faster than it could without an index. But indexes also add overhead to the database server, so avoid having too many indexes. ### Use autovacuum+ You can optimize your server with autovacuum on an Azure Database for PostgreSQL server. PostgreSQL allows greater database concurrency, but every update results in an insert and a delete. For deletes, the records are soft-marked to be purged later. To carry out these tasks, PostgreSQL runs a vacuum job. If you don't vacuum from time to time, the dead tuples that accumulate can result in: - Data bloat, such as larger databases and tables.
You can optimize your server with autovacuum on an Azure Database for PostgreSQL
Learn more about [how to optimize with autovacuum](how-to-optimize-autovacuum.md). ### Use pg_stats_statements
-Pg_stat_statements is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. See [how to use pg_statement](how-to-optimize-query-stats-collection.md).
+Pg_stat_statements is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. See [how to use pg_statement](how-to-optimize-query-stats-collection.md).
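For instance, once the extension is active you can query its view directly. The server name and user below are placeholders, and the timing column is named `total_time` on the PostgreSQL versions offered by single server (newer versions use `total_exec_time`):

```bash
# Sketch: show the five statements with the highest total execution time.
psql "host=mydemoserver.postgres.database.azure.com dbname=postgres user=myadmin@mydemoserver sslmode=require" \
  -c "SELECT calls, total_time, left(query, 60) AS query FROM pg_stat_statements ORDER BY total_time DESC LIMIT 5;"
```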
### Use the Query Store+ The [Query Store](./concepts-query-store.md) feature in Azure Database for PostgreSQL provides a more effective method to track query statistics. We recommend this feature as an alternative to using pg_stat_statements. ### Optimize bulk inserts and use transient data+ If you have workload operations that involve transient data or that insert large datasets in bulk, consider using unlogged tables. It provides atomicity and durability, by default. Atomicity, consistency, isolation, and durability make up the ACID properties. See [how to optimize bulk inserts](how-to-optimize-bulk-inserts.md). ## Next Steps+ [Postgres Guide](http://postgresguide.com/)
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concept-reserved-pricing.md
Previously updated : 10/06/2021 Last updated : 06/24/2022 # Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
Last updated 10/06/2021
Azure Database for PostgreSQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for PostgreSQL reserved capacity, you make an upfront commitment on PostgreSQL server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for PostgreSQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. </br> ## How does the instance reservation work?+ You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. An already running Azure Database for PostgreSQL (or ones that are newly deployed) will automatically get the benefit of reserved pricing. By purchasing a reservation, you're pre-paying for the compute costs for a period of one or three years. As soon as you buy a reservation, the Azure database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation does not cover software, networking, or storage charges associated with the PostgreSQL Database servers. At the end of the reservation term, the billing benefit expires, and the Azure Database for PostgreSQL are billed at the pay-as-you go price. Reservations do not auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/). </br> > [!IMPORTANT]
The size of reservation should be based on the total amount of compute used by t
For example, let's suppose that you are running one general purpose Gen5 - 32 vCore PostgreSQL database, and two memory-optimized Gen5 - 16 vCore PostgreSQL databases. Further, let's suppose that you plan to deploy within the next month an additional general purpose Gen5 - 8 vCore database server, and one memory-optimized Gen5 - 32 vCore database server. Let's suppose that you know that you will need these resources for at least one year. In this case, you should purchase a 40 (32 + 8) vCore, one-year reservation for single database general purpose - Gen5 and a 64 (2x16 + 32) vCore one-year reservation for single database memory optimized - Gen5 - ## Buy Azure Database for PostgreSQL reserved capacity 1. Sign in to the [Azure portal](https://portal.azure.com/).
For example, let's suppose that you are running one general purpose Gen5 ΓÇô 32
3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases. 4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for PostgreSQL servers that get the discount depend on the scope and quantity selected. - :::image type="content" source="media/concepts-reserved-pricing/postgresql-reserved-price.png" alt-text="Overview of reserved pricing"::: - The following table describes required fields. | Field | Description |
Use Azure APIs to programmatically get information for your organization about A
- View and manage reservation access - Split or merge reservations - Change the scope of reservations
-
+ For more information, see [APIs for Azure reservation automation](../../cost-management-billing/reservations/reservation-apis.md). ## vCore size flexibility
postgresql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-aks.md
Previously updated : 07/14/2020 Last updated : 06/24/2022 # Connecting Azure Kubernetes Service and Azure Database for PostgreSQL - Single Server
Last updated 07/14/2020
Azure Kubernetes Service (AKS) provides a managed Kubernetes cluster you can use in Azure. Below are some options to consider when using AKS and Azure Database for PostgreSQL together to create an application. ## Accelerated networking+ Use accelerated networking-enabled underlying VMs in your AKS cluster. When accelerated networking is enabled on a VM, there is lower latency, reduced jitter, and decreased CPU utilization on the VM. Learn more about how accelerated networking works, the supported OS versions, and supported VM instances for [Linux](../../virtual-network/create-vm-accelerated-networking-cli.md). From November 2018, AKS supports accelerated networking on those supported VM instances. Accelerated networking is enabled by default on new AKS clusters that use those VMs.
az network nic list --resource-group nodeResourceGroup -o table
``` ## Connection pooling
-A connection pooler minimizes the cost and time associated with creating and closing new connections to the database. The pool is a collection of connections that can be reused.
-There are multiple connection poolers you can use with PostgreSQL. One of these is [PgBouncer](https://pgbouncer.github.io/). In the Microsoft Container Registry, we provide a lightweight containerized PgBouncer that can be used in a sidecar to pool connections from AKS to Azure Database for PostgreSQL. Visit the [docker hub page](https://hub.docker.com/r/microsoft/azureossdb-tools-pgbouncer/) to learn how to access and use this image.
+A connection pooler minimizes the cost and time associated with creating and closing new connections to the database. The pool is a collection of connections that can be reused.
+
+There are multiple connection poolers you can use with PostgreSQL. One of these is [PgBouncer](https://pgbouncer.github.io/). In the Microsoft Container Registry, we provide a lightweight containerized PgBouncer that can be used in a sidecar to pool connections from AKS to Azure Database for PostgreSQL. Visit the [docker hub page](https://hub.docker.com/r/microsoft/azureossdb-tools-pgbouncer/) to learn how to access and use this image.
## Next steps
-Create an AKS cluster [using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md).
+Create an AKS cluster [using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md).
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-audit.md
Previously updated : 01/28/2020 Last updated : 06/24/2022 # Audit logging in Azure Database for PostgreSQL - Single Server
To use the [portal](https://portal.azure.com):
1. On the left, under **Settings**, select **Server parameters**. 1. Search for **shared_preload_libraries**. 1. Select **PGAUDIT**.
-
+ :::image type="content" source="./media/concepts-audit/share-preload-parameter.png" alt-text="Screenshot that shows Azure Database for PostgreSQL enabling shared_preload_libraries for PGAUDIT."::: 1. Restart the server to apply the change. 1. Check that `pgaudit` is loaded in `shared_preload_libraries` by executing the following query in psql:
-
+ ```SQL show shared_preload_libraries; ``` You should see `pgaudit` in the query result that will return `shared_preload_libraries`. 1. Connect to your server by using a client like psql, and enable the pgAudit extension:
-
+ ```SQL CREATE EXTENSION pgaudit; ```
To configure pgAudit, in the [portal](https://portal.azure.com):
1. On the left, under **Settings**, select **Server parameters**. 1. Search for the **pgaudit** parameters. 1. Select appropriate settings parameters to edit. For example, to start logging, set **pgaudit.log** to **WRITE**.
-
+ :::image type="content" source="./media/concepts-audit/pgaudit-config.png" alt-text="Screenshot that shows Azure Database for PostgreSQL configuring logging with pgAudit."::: 1. Select **Save** to save your changes.
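Server parameters can also be changed without the portal. A sketch using the single server CLI, with placeholder server and resource group names:

```azurecli-interactive
# Sketch: set pgaudit.log through the CLI instead of the portal.
az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name pgaudit.log --value WRITE
```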
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-azure-ad-authentication.md
Previously updated : 07/23/2020 Last updated : 06/24/2022 # Use Azure Active Directory for authenticating with PostgreSQL
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-azure-advisor-recommendations.md
Previously updated : 04/08/2021 Last updated : 06/24/2022 + # Azure Advisor for PostgreSQL [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
postgresql Concepts Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-backup.md
Previously updated : 06/14/2022 Last updated : 06/24/2022 # Backup and restore in Azure Database for PostgreSQL - Single Server
These backup files cannot be exported. The backups can only be used for restore
For servers that support up to 4-TB maximum storage, full backups occur once every week. Differential backups occur twice a day. Transaction log backups occur every five minutes. - #### Servers with up to 16-TB storage
-In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support up to 16-TB storage. Backups on these large storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only. Differential snapshot backups do not occur on a fixed schedule. In a day, multiple differential snapshot backups are performed, but only 3 backups are retained. Transaction log backups occur every five minutes.
+In a subset of [Azure regions](./concepts-pricing-tiers.md#storage), all newly provisioned servers can support up to 16-TB storage. Backups on these large storage servers are snapshot-based. The first full snapshot backup is scheduled immediately after a server is created. That first full snapshot backup is retained as the server's base backup. Subsequent snapshot backups are differential backups only. Differential snapshot backups do not occur on a fixed schedule. In a day, multiple differential snapshot backups are performed, but only 3 backups are retained. Transaction log backups occur every five minutes.
> [!NOTE] > Automatic backups are performed for [replica servers](./concepts-read-replicas.md) that are configured with up to 4TB storage configuration. ### Backup retention
-Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](./how-to-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./how-to-restore-server-cli.md#set-backup-configuration).
+Backups are retained based on the backup retention period setting on the server. You can select a retention period of 7 to 35 days. The default retention period is 7 days. You can set the retention period during server creation or later by updating the backup configuration using [Azure portal](./how-to-restore-server-portal.md#set-backup-configuration) or [Azure CLI](./how-to-restore-server-cli.md#set-backup-configuration).
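For example, the retention period on an existing server can be changed from the CLI. The flag name below is an assumption to confirm with `az postgres server update --help`:

```azurecli-interactive
# Sketch: extend backup retention on an existing single server to 14 days.
az postgres server update --resource-group myresourcegroup --name mydemoserver --backup-retention 14
```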
The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it's based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in backup storage. For example - if the backup retention period is set to 7 days, the recovery window is considered last 7 days. In this scenario, all the backups required to restore the server in last 7 days are retained. With a backup retention window of seven days: - Servers with up to 4-TB storage will retain up to 2 full database backups, all the differential backups, and transaction log backups performed since the earliest full database backup.
Azure Database for PostgreSQL provides the flexibility to choose between locally
Azure Database for PostgreSQL provides up to 100% of your provisioned server storage as backup storage at no extra cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GB of storage, you have 250 GB of additional storage available for server backups at no extra cost. Storage consumed for backups more than 250 GB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/).
-You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available in the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
+You can use the [Backup Storage used](concepts-monitoring.md) metric in Azure Monitor available in the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the full database backups, differential backups, and log backups retained based on the backup retention period set for the server. The frequency of the backups is service managed and explained earlier. Heavy transactional activity on the server can cause backup storage usage to increase irrespective of the total database size. For geo-redundant storage, backup storage usage is twice that of the locally redundant storage.
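If you prefer the command line over the portal for checking this metric, a hedged sketch using `az monitor metrics list` is shown below; the metric name `backup_storage_used` and all resource names are assumptions for illustration only.

```bash
# Illustrative only: query the Backup Storage used metric for a single server.
# The metric name and resource names below are assumed placeholders.
RESOURCE_ID=$(az postgres server show \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --query id --output tsv)

az monitor metrics list \
  --resource "$RESOURCE_ID" \
  --metric backup_storage_used \
  --interval PT1H \
  --output table
```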
The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals. You can select a retention period from a range of 7 to 35 days. General Purpose and Memory Optimized servers can choose to have geo-redundant storage for backups. ## Restore
-In Azure Database for PostgreSQL, performing a restore creates a new server from the original server's backups.
+In Azure Database for PostgreSQL, performing a restore creates a new server from the original server's backups.
There are two types of restore available:
There are two types of restore available:
The estimated time of recovery depends on several factors including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time. The recovery time varies depending on the last data backup and the amount of recovery that needs to be performed. It is usually less than 12 hours. > [!NOTE]
-> If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
+> If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption-postgresql.md) for additional considerations.
> [!NOTE] > If you want to restore a deleted PostgreSQL server, follow the procedure documented [here](how-to-restore-dropped-server.md).
If you want to restore a dropped table,
5. You can optionally delete the restored server. >[!Note]
-> It is recommended not to create multiple restores for the same server at the same time.
+> It is recommended not to create multiple restores for the same server at the same time.
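As a hedged example of the point-in-time restore described above, run from the Azure CLI (the server names and UTC timestamp are placeholders):

```bash
# Sketch only: create a new server from mydemoserver's backups at an earlier point in time.
az postgres server restore \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2022-06-20T13:10:00Z"
```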
### Geo-restore
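A minimal sketch of a geo-restore with the Azure CLI, assuming the source server was provisioned with geo-redundant backup storage (names and the target region are placeholders):

```bash
# Illustrative only: create a new server in another region from geo-redundant backups.
az postgres server georestore \
  --resource-group myresourcegroup \
  --name mydemoserver-georestored \
  --source-server mydemoserver \
  --location westus2
```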
After a restore from either recovery mechanism, you should perform the following
- Learn how to restore using [the Azure portal](how-to-restore-server-portal.md). - Learn how to restore using [the Azure CLI](how-to-restore-server-cli.md).-- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).
+- To learn more about business continuity, see the [business continuity overview](concepts-business-continuity.md).
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-business-continuity.md
Previously updated : 08/07/2020 Last updated : 06/24/2022 # Overview of business continuity with Azure Database for PostgreSQL - Single Server
The following table compares RTO and RPO in a **typical workload** scenario:
| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO < 1 h | RTO - Varies <br/>RPO < 1 h | | Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
- \* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
+\* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
## Recover a server after a user or application error
The geo-restore feature restores the server using geo-redundant backups. The bac
> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. If you wish to switch from locally redundant to geo-redundant backups for an existing server, you must take a dump using pg_dump of your existing server and restore it to a newly created server configured with geo-redundant backups. ## Cross-region read replicas
-You can use cross region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag the primary. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
+
+You can use cross region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag the primary. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
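For example, a hedged sketch of creating a cross-region read replica with the Azure CLI (all names and the target region are placeholders) could look like this:

```bash
# Example only: create a read replica of mydemoserver in a different region.
az postgres server replica create \
  --name mydemoserver-replica \
  --source-server mydemoserver \
  --resource-group myresourcegroup \
  --location eastus
```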
## FAQ+ ### Where does Azure Database for PostgreSQL store customer data?
-By default, Azure Database for PostgreSQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally chose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
+By default, Azure Database for PostgreSQL doesn't move or store customer data out of the region it is deployed in. However, customers can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create a [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
## Next steps+ - Learn more about the [automated backups in Azure Database for PostgreSQL](concepts-backup.md). - Learn how to restore using [the Azure portal](how-to-restore-server-portal.md) or [the Azure CLI](how-to-restore-server-cli.md). - Learn about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
Previously updated : 09/02/2020 Last updated : 06/24/2022 # Understanding the changes in the Root CA change for Azure Database for PostgreSQL Single server
Azure database for PostgreSQL users can only use the predefined certificate to c
As per the industry's compliance requirements, CA vendors began revoking CA certificates for non-compliant CAs, requiring servers to use certificates issued by compliant CAs, and signed by CA certificates from those compliant CAs. Since Azure Database for PostgreSQL used one of these non-compliant certificates, we needed to rotate the certificate to the compliant version to minimize the potential threat to your PostgreSQL servers.
-The new certificate is rolled out and in effect starting February 15, 2021 (02/15/2021).
+The new certificate is rolled out and in effect starting February 15, 2021 (02/15/2021).
## What change was performed on February 15, 2021 (02/15/2021)?
There is no change required on client side. if you followed our previous recomme
Then replace the original keystore file with the newly generated one: * System.setProperty("javax.net.ssl.trustStore","path_to_truststore_file"); * System.setProperty("javax.net.ssl.trustStorePassword","password");
-
+ * For .NET (Npgsql) users on Windows, make sure **Baltimore CyberTrust Root** and **DigiCert Global Root G2** both exist in Windows Certificate Store, Trusted Root Certification Authorities. If any certificates do not exist, import the missing certificate. ![Azure Database for PostgreSQL .net cert](media/overview/netconnecter-cert.png)
There is no change required on client side. if you followed our previous recomme
* For other PostgreSQL client users, you can merge two CA certificate files like this format below </br>--BEGIN CERTIFICATE--
- </br>(Root CA1: BaltimoreCyberTrustRoot.crt.pem)
- </br>--END CERTIFICATE--
- </br>--BEGIN CERTIFICATE--
- </br>(Root CA2: DigiCertGlobalRootG2.crt.pem)
- </br>--END CERTIFICATE--
+</br>(Root CA1: BaltimoreCyberTrustRoot.crt.pem)
+</br>--END CERTIFICATE--
+</br>--BEGIN CERTIFICATE--
+</br>(Root CA2: DigiCertGlobalRootG2.crt.pem)
+</br>--END CERTIFICATE--
* Replace the original root CA pem file with the combined root CA file and restart your application/client. * In future, after the new certificate deployed on the server side, you can change your CA pem file to DigiCertGlobalRootG2.crt.pem. > [!NOTE]
-> Please do not drop or alter **Baltimore certificate** until the cert change is made. We will send a communication once the change is done, after which it is safe for them to drop the Baltimore certificate.
+> Please do not drop or alter the **Baltimore certificate** until the cert change is made. We will send a communication once the change is done, after which it is safe for you to drop the Baltimore certificate.
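To make the merge step above concrete, one plausible way to build the combined root CA file and verify the connection with psql is sketched below; the server and user names are placeholders, and the certificate URLs are the ones referenced in this article.

```bash
# Sketch only: download both root CAs, concatenate them, and verify the TLS chain with psql.
curl -O https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem
curl -O https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem

cat BaltimoreCyberTrustRoot.crt.pem DigiCertGlobalRootG2.crt.pem > combined-root-ca.pem

# "mydemoserver" and "myadmin" are hypothetical names.
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=verify-full sslrootcert=combined-root-ca.pem"
```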
## Why was BaltimoreCyberTrustRoot certificate not replaced to DigiCertGlobalRootG2 during this change on February 15, 2021?
-We evaluated the customer readiness for this change and realized many customers were looking for additional lead time to manage this change. In the interest of providing more lead time to customers for readiness, we have decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year providing sufficient lead time to the customers and end users.
+We evaluated the customer readiness for this change and realized many customers were looking for additional lead time to manage this change. In the interest of providing more lead time to customers for readiness, we have decided to defer the certificate change to DigiCertGlobalRootG2 for at least a year providing sufficient lead time to the customers and end users.
-Our recommendations to users is, use the aforementioned steps to create a combined certificate and connect to your server but do not remove BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
+Our recommendation to users is to use the aforementioned steps to create a combined certificate and connect to your server, but do not remove the BaltimoreCyberTrustRoot certificate until we send a communication to remove it.
## What if we removed the BaltimoreCyberTrustRoot certificate? You will start to see connectivity errors while connecting to your Azure Database for PostgreSQL server. You will need to configure SSL with the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate again to maintain connectivity. - ## Frequently asked questions ### 1. If I am not using SSL/TLS, do I still need to update the root CA?
-No actions required if you are not using SSL/TLS.
+
+No actions required if you are not using SSL/TLS.
### 2. If I am using SSL/TLS, do I need to restart my database server to update the root CA?+ No, you do not need to restart the database server to start using the new certificate. This is a client-side change and the incoming client connections need to use the new certificate to ensure that they can connect to the database server. ### 3. How do I know if I'm using SSL/TLS with root certificate verification? You can identify whether your connections verify the root certificate by reviewing your connection string.-- If your connection string includes `sslmode=verify-ca` or `sslmode=verify-full`, you need to update the certificate.-- If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you do not need to update certificates. -- If your connection string does not specify sslmode, you do not need to update certificates.
+- If your connection string includes `sslmode=verify-ca` or `sslmode=verify-full`, you need to update the certificate.
+- If your connection string includes `sslmode=disable`, `sslmode=allow`, `sslmode=prefer`, or `sslmode=require`, you do not need to update certificates.
+- If your connection string does not specify sslmode, you do not need to update certificates.
If you are using a client that abstracts the connection string away, review the client's documentation to understand whether it verifies certificates. To understand PostgreSQL sslmode review the [SSL mode descriptions](https://www.postgresql.org/docs/11/libpq-ssl.html#ssl-mode-descriptions) in PostgreSQL documentation. ### 4. What is the impact if using App Service with Azure Database for PostgreSQL?+ For Azure app services, connecting to Azure Database for PostgreSQL, we can have two possible scenarios and it depends on how you are using SSL with your application. * This new certificate has been added to App Service at platform level. If you are using the SSL certificates included on App Service platform in your application, then no action is needed. * If you are explicitly including the path to the SSL cert file in your code, then you would need to download the new cert and update the code to use the new cert. A good example of this scenario is when you use custom containers in App Service as shared in the [App Service documentation](../../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress) ### 5. What is the impact if using Azure Kubernetes Services (AKS) with Azure Database for PostgreSQL?+ If you are trying to connect to the Azure Database for PostgreSQL using Azure Kubernetes Services (AKS), it is similar to accessing it from a dedicated customer host environment. Refer to the steps [here](../../aks/ingress-own-tls.md). ### 6. What is the impact if using Azure Data Factory to connect to Azure Database for PostgreSQL?+ For a connector using Azure Integration Runtime, the connector leverages certificates in the Windows Certificate Store in the Azure-hosted environment. These certificates are already compatible with the newly applied certificates and therefore no action is needed. For a connector using Self-hosted Integration Runtime where you explicitly include the path to the SSL cert file in your connection string, you will need to download the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and update the connection string to use it. ### 7. Do I need to plan a database server maintenance downtime for this change?+ No. Since the change here is only on the client side to connect to the database server, there is no maintenance downtime needed for the database server for this change. ### 8. If I create a new server after February 15, 2021 (02/15/2021), will I be impacted?+ For servers created after February 15, 2021 (02/15/2021), you will continue to use the [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) for your applications to connect using SSL. ### 9. How often does Microsoft update their certificates or what is the expiry policy?+ These certificates used by Azure Database for PostgreSQL are provided by trusted Certificate Authorities (CA). So the support of these certificates is tied to the support of these certificates by the CA. The [BaltimoreCyberTrustRoot](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) certificate is scheduled to expire in 2025, so Microsoft will need to perform a certificate change before the expiry. Also, if there are unforeseen bugs in these predefined certificates, Microsoft will need to make the certificate rotation at the earliest, similar to the change performed on February 15, 2021, to ensure the service is secure and compliant at all times. ### 10.
If I am using read replicas, do I need to perform this update only on the primary server or the read replicas?
-Since this update is a client-side change, if the client used to read data from the replica server, you will need to apply the changes for those clients as well.
+
+Since this update is a client-side change, if the client is used to read data from the replica server, you will need to apply the changes for those clients as well.
### 11. Do we have a server-side query to verify if SSL is being used?+ To verify whether you are using an SSL connection to connect to the server, refer to [SSL verification](concepts-ssl-connection-security.md#applications-that-require-certificate-verification-for-tls-connectivity). ### 12. Is there an action needed if I already have the DigiCertGlobalRootG2 in my certificate file?+ No. There is no action needed if your certificate file already has the **DigiCertGlobalRootG2**. ### 13. What if you are using the docker image of the PgBouncer sidecar provided by Microsoft?
-A new docker image which supports both [**Baltimore**](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and [**DigiCert**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) is published to below [here](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) (Latest tag). You can pull this new image to avoid any interruption in connectivity starting February 15, 2021.
+
+A new docker image which supports both [**Baltimore**](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem) and [**DigiCert**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) certificates is published [here](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) (Latest tag). You can pull this new image to avoid any interruption in connectivity starting February 15, 2021.
### 14. What if I have further questions?+ If you have questions, get answers from community experts in [Microsoft Q&A](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com). If you have a support plan and you need technical help, [contact us](mailto:AzureDatabaseforPostgreSQL@service.microsoft.com)
postgresql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connection-libraries.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Connection libraries for Azure Database for PostgreSQL - Single Server
Most language client libraries used to connect to PostgreSQL server are external
| C++ | [libpqxx](http://pqxx.org/) | New-style C++ interface | [Download](http://pqxx.org/download/software/) | ## Next steps+ Read these quickstarts on how to connect to and query Azure Database for PostgreSQL by using your language of choice: [Python](./connect-python.md) | [Node.JS](./connect-nodejs.md) | [Java](./connect-java.md) | [Ruby](./connect-ruby.md) | [PHP](./connect-php.md) | [.NET (C#)](./connect-csharp.md) | [Go](./connect-go.md)
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md
Previously updated : 10/15/2021 Last updated : 06/24/2022 # Connectivity architecture in Azure Database for PostgreSQL
Connection to your Azure Database for PostgreSQL is established through a gatewa
:::image type="content" source="./media/concepts-connectivity-architecture/connectivity-architecture-overview-proxy.png" alt-text="Overview of the connectivity architecture"::: - As a client connects to the database, the connection string to the server resolves to the gateway IP address. The gateway listens on the IP address on port 5432. Inside the database cluster, traffic is forwarded to the appropriate Azure Database for PostgreSQL server. Therefore, in order to connect to your server, such as from corporate networks, it is necessary to open up the **client-side firewall to allow outbound traffic to be able to reach our gateways**. Below you can find a complete list of the IP addresses used by our gateways per region. ## Azure Database for PostgreSQL gateway IP addresses
-The gateway service is hosted on group of stateless compute nodes sitting behind an IP address, which your client would reach first when trying to connect to an Azure Database for PostgreSQL server.
+The gateway service is hosted on a group of stateless compute nodes sitting behind an IP address, which your client would reach first when trying to connect to an Azure Database for PostgreSQL server.
-As part of ongoing service maintenance, we will periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant connectivity experience. When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for PostgreSQL servers and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. The older gateway hardware continues serving existing servers but are planned for decommissioning in future. Before decommissioning a gateway hardware, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal, three months in advance before decommissioning. The decommissioning of gateways can impact the connectivity to your servers if
+As part of ongoing service maintenance, we will periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant connectivity experience. When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for PostgreSQL servers and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. The older gateway hardware continues serving existing servers but is planned for decommissioning in the future. Before a gateway hardware is decommissioned, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal three months in advance. The decommissioning of gateways can impact the connectivity to your servers if
* You hard code the gateway IP addresses in the connection string of your application. It is **not recommended**. You should use the fully qualified domain name (FQDN) of your server in the format `<servername>.postgres.database.azure.com`, in the connection string for your application. * You do not update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
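To illustrate the FQDN recommendation above, a quick check of which gateway IP the server name currently resolves to, followed by a connection that uses the FQDN, might look like the sketch below (the server and user names are placeholders):

```bash
# Example only: resolve the server FQDN to see which gateway IP it currently points to.
nslookup mydemoserver.postgres.database.azure.com

# Connect using the FQDN rather than a hard-coded gateway IP address.
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=require"
```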
The following table lists the gateway IP addresses of the Azure Database for Pos
* **Gateway IP addresses:** This column lists the current IP addresses of the gateways hosted on the latest generation of hardware. If you are provisioning a new server, we recommend that you open the client-side firewall to allow outbound traffic for the IP addresses listed in this column. * **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you are provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound rule for the firewall for these IP addresses as we have not decommissioned them yet. If you drop the firewall rules for these IP addresses, you may get connectivity errors. Instead, you are expected to proactively add the new IP addresses listed in the Gateway IP addresses column to the outbound firewall rule as soon as you receive the notification for decommissioning. This will ensure that when your server is migrated to the latest gateway hardware, there are no interruptions in connectivity to your server.
-* **Gateway IP addresses (decommissioned):** This columns lists the IP addresses of the gateway rings, which are decommissioned and are no longer in operations. You can safely remove these IP addresses from your outbound firewall rule.
-
+* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings that are decommissioned and no longer in operation. You can safely remove these IP addresses from your outbound firewall rule.
| **Region name** | **Gateway IP addresses** |**Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** | |:-|:-|:-|:|
The following table lists the gateway IP addresses of the Azure Database for Pos
| West US |13.86.216.212, 13.86.217.212 |104.42.238.205 | 23.99.34.75| | West US 2 | 13.66.226.202, 13.66.136.192,13.66.136.195 | | | | West US 3 | 20.150.184.2 | | |
-||||
## Frequently asked questions ### What you need to know about this planned maintenance?+ This is a DNS change only which makes it transparent to clients. While the IP address for the FQDN is changed in the DNS server, the local DNS cache will be refreshed within 5 minutes, and it is automatically done by the operating systems. After the local DNS refresh, all the new connections will connect to the new IP address, and all existing connections will remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address will roughly take three to four weeks before getting decommissioned; therefore, it should have no effect on the client applications. ### What are we decommissioning?+ Only Gateway nodes will be decommissioned. When users connect to their servers, the first stop of the connection is the gateway node, before the connection is forwarded to the server. We are decommissioning old gateway rings (not tenant rings where the server is running); refer to the [connectivity architecture](#connectivity-architecture) for more clarification. ### How can you validate if your connections are going to old gateway nodes or new gateway nodes?+ Ping your server's FQDN, for example ``ping xxx.postgres.database.azure.com``. If the returned IP address is one of the IPs listed under Gateway IP addresses (decommissioning) in the document above, it means your connection is going through the old gateway. Conversely, if the returned IP address is one of the IPs listed under Gateway IP addresses, it means your connection is going through the new gateway. You may also test by [PSPing](/sysinternals/downloads/psping) or TCPPing the database server from your client application with port 5432 and ensure that the returned IP address isn't one of the decommissioning IP addresses. ### How do I know when the maintenance is over and will I get another notification when old IP addresses are decommissioned?
-You will receive an email to inform you when we will start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in al regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
+
+You will receive an email to inform you when we will start the maintenance work. The maintenance can take up to one month depending on the number of servers we need to migrate in all regions. Please prepare your client to connect to the database server using the FQDN or using the new IP address from the table above.
### What do I do if my client applications are still connecting to the old gateway server?+ This indicates that your applications connect to the server using a static IP address instead of the FQDN. Review connection strings, connection pooling settings, AKS settings, or even the source code. ### Is there any impact for my application connections?+ This maintenance is just a DNS change, so it is transparent to the client. Once the DNS cache is refreshed in the client (done automatically by the operating system), all new connections will connect to the new IP address and all existing connections will keep working until the old IP address is fully decommissioned, which is usually several weeks later. Retry logic is not required for this case, but it is good to have retry logic configured in the application. Please either use the FQDN to connect to the database server or allowlist the new 'Gateway IP addresses' in your application connection string. This maintenance operation will not drop the existing connections. It only makes the new connection requests go to the new gateway ring. ### Can I request a specific time window for the maintenance? + As the migration should be transparent with no impact to customers' connectivity, we expect there will be no issue for the majority of users. Review your application proactively and ensure that you either use the FQDN to connect to the database server or allowlist the new 'Gateway IP addresses' in your application connection string. ### I am using private link, will my connections get affected?
-No, this is a gateway hardware decommission and have no relation to private link or private IP addresses, it will only affect public IP addresses mentioned under the decommissioning IP addresses.
+No, this is a gateway hardware decommission and has no relation to Private Link or private IP addresses; it will only affect the public IP addresses mentioned under the decommissioning IP addresses.
## Next steps * [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](./how-to-manage-firewall-using-portal.md)
-* [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule)
+* [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule)
postgresql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Handling transient connectivity errors for Azure Database for PostgreSQL - Single Server
postgresql Concepts Data Access And Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-access-and-security-private-link.md
Previously updated : 03/10/2020 Last updated : 06/24/2022 # Private Link for Azure Database for PostgreSQL-Single server
Private endpoints are required to enable Private Link. This can be done using th
* [CLI](./how-to-configure-privatelink-cli.md) ### Approval Process
-Once the network admin creates the private endpoint (PE), the PostgreSQL admin can manage the private endpoint Connection (PEC) to Azure Database for PostgreSQL. This separation of duties between the network admin and the DBA is helpful for management of the Azure Database for PostgreSQL connectivity.
+
+Once the network admin creates the private endpoint (PE), the PostgreSQL admin can manage the private endpoint Connection (PEC) to Azure Database for PostgreSQL. This separation of duties between the network admin and the DBA is helpful for management of the Azure Database for PostgreSQL connectivity.
* Navigate to the Azure Database for PostgreSQL server resource in the Azure portal. * Select the private endpoint connections in the left pane
Once the network admin creates the private endpoint (PE), the PostgreSQL admin c
## Use cases of Private Link for Azure Database for PostgreSQL - Clients can connect to the private endpoint from the same VNet, a [peered VNet](../../virtual-network/virtual-network-peering-overview.md) in the same region or across regions, or via [VNet-to-VNet connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases. :::image type="content" source="media/concepts-data-access-and-security-private-link/show-private-link-overview.png" alt-text="select the private endpoint overview"::: ### Connecting from an Azure VM in Peered Virtual Network (VNet)+ Configure [VNet peering](../../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for PostgreSQL - Single server from an Azure VM in a peered VNet. ### Connecting from an Azure VM in VNet-to-VNet environment+ Configure [VNet-to-VNet VPN gateway connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for PostgreSQL - Single server from an Azure VM in a different region or subscription. ### Connecting from an on-premises environment over VPN+ To establish connectivity from an on-premises environment to the Azure Database for PostgreSQL - Single server, choose and implement one of the options: * [Point-to-Site connection](../../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
The following situations and outcomes are possible when you use Private Link in
## Deny public access for Azure Database for PostgreSQL Single server
-If you want to rely only on private endpoints for accessing their Azure Database for PostgreSQL Single server, you can disable setting all public endpoints ([firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
+If you want to rely only on private endpoints for accessing your Azure Database for PostgreSQL Single server, you can disable all public endpoints ([firewall rules](concepts-firewall-rules.md) and [VNet service endpoints](concepts-data-access-and-security-vnet.md)) by setting the **Deny Public Network Access** configuration on the database server.
When this setting is set to *YES*, only connections via private endpoints are allowed to your Azure Database for PostgreSQL. When this setting is set to *NO*, clients can connect to your Azure Database for PostgreSQL based on your firewall or VNet service endpoint settings. Additionally, once the Private network access value is set, customers cannot add and/or update existing 'Firewall rules' and 'VNet service endpoint rules'.
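A hedged sketch of toggling this setting from the Azure CLI follows; the `--public-network-access` parameter is an assumption based on the server update command, and the resource names are placeholders.

```bash
# Illustrative only: deny public network access so that only private endpoints can connect.
# The --public-network-access parameter is assumed here, not confirmed by this article.
az postgres server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --public-network-access Disabled
```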
postgresql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-access-and-security-vnet.md
Previously updated : 07/17/2020 Last updated : 06/24/2022 # Use Virtual Network service endpoints and rules for Azure Database for PostgreSQL - Single Server
You can also consider using [Private Link](concepts-data-access-and-security-pri
**Subnet:** A virtual network contains **subnets**. Any Azure virtual machines (VMs) within the VNet is assigned to a subnet. A subnet can contain multiple VMs and/or other compute nodes. Compute nodes that are outside of your virtual network cannot access your virtual network unless you configure your security to allow access.
-**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article we are interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for PostgreSQL and MySQL services. It is important to note when applying the **Microsoft.Sql** service tag to a VNet service endpoint it will configure service endpoint traffic for Azure Database
+**Virtual Network service endpoint:** A [Virtual Network service endpoint][vm-virtual-network-service-endpoints-overview-649d] is a subnet whose property values include one or more formal Azure service type names. In this article we are interested in the type name of **Microsoft.Sql**, which refers to the Azure service named SQL Database. This service tag also applies to the Azure Database for PostgreSQL and MySQL services. It is important to note that when applying the **Microsoft.Sql** service tag to a VNet service endpoint, it will configure service endpoint traffic for Azure Database
**Virtual network rule:** A virtual network rule for your Azure Database for PostgreSQL server is a subnet that is listed in the access control list (ACL) of your Azure Database for PostgreSQL server. To be in the ACL for your Azure Database for PostgreSQL server, the subnet must contain the **Microsoft.Sql** type name.
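As an illustration (not an authoritative procedure), enabling the **Microsoft.Sql** service endpoint on a subnet and then adding that subnet as a virtual network rule might look like the following Azure CLI sketch; all resource names are placeholders.

```bash
# Sketch only: turn on the Microsoft.Sql service endpoint for a subnet,
# then add that subnet as a VNet rule on the PostgreSQL server.
az network vnet subnet update \
  --resource-group myresourcegroup \
  --vnet-name myvnet \
  --name mysubnet \
  --service-endpoints Microsoft.Sql

az postgres server vnet-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name allow-mysubnet \
  --vnet-name myvnet \
  --subnet mysubnet
```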
You can salvage the IP option by obtaining a *static* IP address for your VM. Fo
However, the static IP approach can become difficult to manage, and it is costly when done at scale. Virtual network rules are easier to establish and to manage. - <a name="anch-details-about-vnet-rules-38q"></a> ## Details about virtual network rules
Merely setting a VNet firewall rule does not help secure the server to the VNet.
You can set the **IgnoreMissingServiceEndpoint** flag by using the Azure CLI or portal. ## Related articles+ - [Azure virtual networks][vm-virtual-network-overview] - [Azure virtual network service endpoints][vm-virtual-network-service-endpoints-overview-649d] ## Next steps+ For articles on creating VNet rules, see: - [Create and manage Azure Database for PostgreSQL VNet rules using the Azure portal](how-to-manage-vnet-using-portal.md) - [Create and manage Azure Database for PostgreSQL VNet rules using Azure CLI](how-to-manage-vnet-using-cli.md) - <!-- Link references, to text, Within this same GitHub repo. --> [arm-deployment-model-568f]: ../../azure-resource-manager/management/deployment-models.md
For articles on creating VNet rules, see:
[expressroute-indexmd-744v]: ../../expressroute/index.yml
-[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
+[resource-manager-portal]: ../../azure-resource-manager/management/resource-providers-and-types.md
postgresql Concepts Data Encryption Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-encryption-postgresql.md
Previously updated : 01/13/2020 Last updated : 06/24/2022 # Azure Database for PostgreSQL Single server data encryption with a customer-managed key
Data encryption with customer-managed keys for Azure Database for PostgreSQL Sin
* Enabling encryption does not have any additional performance impact with or without a customer-managed key (CMK), as PostgreSQL relies on the Azure storage layer for data encryption in both scenarios; the only difference is that when a CMK is used, the **Azure Storage Encryption Key**, which performs the actual data encryption, is encrypted using the CMK. * Ability to implement separation of duties between security officers, DBAs, and system administrators. - ## Terminology and description **Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key.
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-extensions.md
Previously updated : 03/25/2021 Last updated : 06/24/2022 + # PostgreSQL extensions in Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
Azure Database for PostgreSQL supports a subset of key extensions as listed belo
## Postgres 11 extensions
-The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 11.
+The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 11.
> [!div class="mx-tableFixed"] > | **Extension**| **Extension version** | **Description** |
The following extensions are available in Azure Database for PostgreSQL servers
> |[unaccent](https://www.postgresql.org/docs/11/unaccent.html) | 1.1 | text search dictionary that removes accents| > |[uuid-ossp](https://www.postgresql.org/docs/11/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
-## Postgres 10 extensions
+## Postgres 10 extensions
The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 10.
The following extensions are available in Azure Database for PostgreSQL servers
> |[unaccent](https://www.postgresql.org/docs/10/unaccent.html) | 1.1 | text search dictionary that removes accents| > |[uuid-ossp](https://www.postgresql.org/docs/10/uuid-ossp.html) | 1.1 | generate universally unique identifiers (UUIDs)|
-## Postgres 9.6 extensions
+## Postgres 9.6 extensions
The following extensions are available in Azure Database for PostgreSQL servers which have Postgres version 9.6.
The following extensions are available in Azure Database for PostgreSQL servers
> |[unaccent](https://www.postgresql.org/docs/9.5/unaccent.html) | 1.0 | text search dictionary that removes accents| > |[uuid-ossp](https://www.postgresql.org/docs/9.5/uuid-ossp.html) | 1.0 | generate universally unique identifiers (UUIDs)| - ## pg_stat_statements+ The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Database for PostgreSQL server to provide you a means of tracking execution statistics of SQL statements. The setting `pg_stat_statements.track`, which controls what statements are counted by the extension, defaults to `top`, meaning all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`. This setting is configurable as a server parameter through the [Azure portal](./how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](./how-to-configure-server-parameters-using-cli.md). There is a tradeoff between the query execution information pg_stat_statements provides and the impact on server performance as it logs each SQL statement. If you are not actively using the pg_stat_statements extension, we recommend that you set `pg_stat_statements.track` to `none`. Note that some third party monitoring services may rely on pg_stat_statements to deliver query performance insights, so confirm whether this is the case for you or not. ## dblink and postgres_fdw+ [dblink](https://www.postgresql.org/docs/current/contrib-dblink-function.html) and [postgres_fdw](https://www.postgresql.org/docs/current/postgres-fdw.html) allow you to connect from one PostgreSQL server to another, or to another database in the same server. The receiving server needs to allow connections from the sending server through its firewall. When using these extensions to connect between Azure Database for PostgreSQL servers, this can be done by setting "Allow access to Azure services" to ON. This is also needed if you want to use the extensions to loop back to the same server. The "Allow access to Azure services" setting can be found in the Azure portal page for the Postgres server, under Connection Security. Turning "Allow access to Azure services" ON puts all Azure IPs on the allow list. > [!NOTE] > Currently, outbound connections from Azure Database for PostgreSQL via foreign data wrapper extensions such as postgres_fdw are not supported, except for connections to other Azure Database for PostgreSQL servers in the same Azure region. ## uuid+ If you are planning to use `uuid_generate_v4()` from the [uuid-ossp extension](https://www.postgresql.org/docs/current/uuid-ossp.html), consider comparing with `gen_random_uuid()` from the [pgcrypto extension](https://www.postgresql.org/docs/current/pgcrypto.html) for performance benefits. ## pgAudit
-The [pgAudit extension](https://github.com/pgaudit/pgaudit/blob/master/README.md) provides session and object audit logging. To learn how to use this extension in Azure Database for PostgreSQL, visit the [auditing concepts article](concepts-audit.md).
+
+The [pgAudit extension](https://github.com/pgaudit/pgaudit/blob/master/README.md) provides session and object audit logging. To learn how to use this extension in Azure Database for PostgreSQL, visit the [auditing concepts article](concepts-audit.md).
## pg_prewarm+ The pg_prewarm extension loads relational data into cache. Prewarming your caches means that your queries have better response times on their first run after a restart. In Postgres 10 and below, prewarming is done manually using the [prewarm function](https://www.postgresql.org/docs/10/pgprewarm.html).
-In Postgres 11 and above, you can configure prewarming to happen [automatically](https://www.postgresql.org/docs/current/pgprewarm.html). You need to include pg_prewarm in your `shared_preload_libraries` parameter's list and restart the server to apply the change. Parameters can be set from the [Azure portal](how-to-configure-server-parameters-using-portal.md), [CLI](how-to-configure-server-parameters-using-cli.md), REST API, or ARM template.
+In Postgres 11 and above, you can configure prewarming to happen [automatically](https://www.postgresql.org/docs/current/pgprewarm.html). You need to include pg_prewarm in your `shared_preload_libraries` parameter's list and restart the server to apply the change. Parameters can be set from the [Azure portal](how-to-configure-server-parameters-using-portal.md), [CLI](how-to-configure-server-parameters-using-cli.md), REST API, or ARM template.
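For instance, a hedged CLI sketch of adding pg_prewarm to `shared_preload_libraries` and restarting the server is shown below; the names are placeholders, and the exact value format for the parameter should be confirmed in the linked how-to articles.

```bash
# Illustrative only: set the shared_preload_libraries server parameter, then restart.
az postgres server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name shared_preload_libraries \
  --value pg_prewarm

az postgres server restart \
  --resource-group myresourcegroup \
  --name mydemoserver
```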
## TimescaleDB+ TimescaleDB is a time-series database that is packaged as an extension for PostgreSQL. TimescaleDB provides time-oriented analytical functions, optimizations, and scales Postgres for time-series workloads. [Learn more about TimescaleDB](https://docs.timescale.com/timescaledb/latest/), a registered trademark of [Timescale, Inc.](https://www.timescale.com/). Azure Database for PostgreSQL provides the TimescaleDB [Apache-2 edition](https://www.timescale.com/legal/licenses). ### Installing TimescaleDB+ To install TimescaleDB, you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md). Using the [Azure portal](https://portal.azure.com/):
Using the [Azure portal](https://portal.azure.com/):
4. Select **TimescaleDB**.
-5. Select **Save** to preserve your changes. You get a notification once the change is saved.
+5. Select **Save** to preserve your changes. You get a notification once the change is saved.
6. After the notification, **restart** the server to apply these changes. To learn how to restart a server, see [Restart an Azure Database for PostgreSQL server](how-to-restart-server-portal.md). - You can now enable TimescaleDB in your Postgres database. Connect to the database and issue the following command: ```sql CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE; ``` > [!TIP]
-> If you see an error, confirm that you [restarted your server](how-to-restart-server-portal.md) after saving shared_preload_libraries.
+> If you see an error, confirm that you [restarted your server](how-to-restart-server-portal.md) after saving shared_preload_libraries.
You can now create a TimescaleDB hypertable [from scratch](https://docs.timescale.com/getting-started/creating-hypertables) or migrate [existing time-series data in PostgreSQL](https://docs.timescale.com/getting-started/migrating-data). ### Restoring a Timescale database using pg_dump and pg_restore+ To restore a Timescale database using pg_dump and pg_restore, you need to run two helper procedures in the destination database: `timescaledb_pre_restore()` and `timescaledb_post_restore()`. First prepare the destination database:
SELECT timescaledb_post_restore();
``` For more details on the restore method with a Timescale-enabled database, see the [Timescale documentation](https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restore-your-entire-database-from-backup) - ### Restoring a Timescale database using timescaledb-backup
- While running `SELECT timescaledb_post_restore()` procedure listed above you may get permissions denied error updating timescaledb.restoring flag. This is due to limited ALTER DATABASE permission in Cloud PaaS database services. In this case you can perform alternative method using `timescaledb-backup` tool to backup and restore Timescale database. Timescaledb-backup is a program for making dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant.
- To do so you should do following
+While running the `SELECT timescaledb_post_restore()` procedure listed above, you may get a permission denied error updating the timescaledb.restoring flag. This is due to limited ALTER DATABASE permission in cloud PaaS database services. In this case you can use an alternative method, the `timescaledb-backup` tool, to back up and restore the Timescale database. Timescaledb-backup is a program for making dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant.
+To do so, follow these steps:
1. Install tools as detailed [here](https://github.com/timescale/timescaledb-backup#installing-timescaledb-backup) 2. Create target Azure Database for PostgreSQL server and database 3. Enable Timescale extension as shown above 4. Grant azure_pg_admin [role](https://www.postgresql.org/docs/11/database-roles.html) to user that will be used by [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) 5. Run [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) to restore database
- More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup).
+More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup).
> [!NOTE]
-> When using `timescale-backup` utilities to restore to Azure is that since database user names for non-flexible Azure Database for PostgresQL must use the `<user@db-name>` format, you need to replace `@` with `%40` character encoding.
+> When using `timescale-backup` utilities to restore to Azure, note that because database user names for non-flexible Azure Database for PostgreSQL must use the `<user@db-name>` format, you need to replace `@` with the `%40` character encoding.
## Next steps
-If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
+If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-firewall-rules.md
Previously updated : 07/17/2020 Last updated : 06/24/2022 # Firewall rules in Azure Database for PostgreSQL - Single Server
To configure your firewall, you create firewall rules that specify ranges of acc
**Firewall rules:** These rules enable clients to access your entire Azure Database for PostgreSQL Server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or using Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor. ## Firewall overview+ All access to your Azure Database for PostgreSQL server is blocked by the firewall by default. To access your server from another computer/client or application, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify allowed public IP address ranges. Access to the Azure portal website itself is not impacted by the firewall rules. Connection attempts from the internet and Azure must first pass through the firewall before they can reach your PostgreSQL Database, as shown in the following diagram: :::image type="content" source="../media/concepts-firewall-rules/1-firewall-concept.png" alt-text="Example flow of how the firewall works"::: ## Connecting from the Internet+ Server-level firewall rules apply to all databases on the same Azure Database for PostgreSQL server. If the source IP address of the request is within one of the ranges specified in the server-level firewall rules, the connection is granted; otherwise, it is rejected. For example, if your application connects with the JDBC driver for PostgreSQL, you may encounter this error attempting to connect when the firewall is blocking the connection. > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > org.postgresql.util.PSQLException: FATAL: no pg\_hba.conf entry for host "123.45.67.890", user "adminuser", database "postgresql", SSL ## Connecting from Azure
-It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
+
+It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
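As a hedged illustration, adding a rule for a single known client IP from the Azure CLI, plus the 0.0.0.0 rule that is equivalent to **Allow access to Azure services** described below, might look like this (all names and the IP address are placeholders):

```bash
# Example only: allow one client IP address through the server-level firewall.
az postgres server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowMyClientIP \
  --start-ip-address 203.0.113.4 \
  --end-ip-address 203.0.113.4

# Equivalent of the "Allow access to Azure services" setting (see the note below).
az postgres server firewall-rule create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name AllowAllAzureServices \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0
```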
If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow access to Azure services** option to **ON** from the **Connection security** pane and hitting **Save**. From the Azure CLI, a firewall rule setting with starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt is rejected by firewall rules, it does not reach the Azure Database for PostgreSQL server. > [!IMPORTANT] > The **Allow access to Azure services** option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
+>
:::image type="content" source="../media/concepts-firewall-rules/allow-azure-services.png" alt-text="Configure Allow access to Azure services in the portal"::: ## Connecting from a VNet
-To connect securely to your Azure Database for PostgreSQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md).
+
+To connect securely to your Azure Database for PostgreSQL server from a VNet, consider using [VNet service endpoints](./concepts-data-access-and-security-vnet.md).
## Programmatically managing firewall rules+ In addition to the Azure portal, firewall rules can be managed programmatically using Azure CLI. See also [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule) ## Troubleshooting firewall issues+ Consider the following points when access to the Microsoft Azure Database for PostgreSQL Server service does not behave as you expect: * **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for PostgreSQL Server firewall configuration to take effect.
Consider the following points when access to the Microsoft Azure Database for Po
* **Firewall rule is not available for IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, it will show the validation error. - ## Next steps+ * [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](how-to-manage-firewall-using-portal.md) * [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](quickstart-create-server-database-azure-cli.md#configure-a-server-based-firewall-rule) * [VNet service endpoints in Azure Database for PostgreSQL](./concepts-data-access-and-security-vnet.md)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-high-availability.md
Previously updated : 6/15/2020 Last updated : 06/24/2022 # High availability in Azure Database for PostgreSQL - Single Server
Last updated 6/15/2020
The Azure Database for PostgreSQL - Single Server service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/postgresql) uptime. Azure Database for PostgreSQL provides high availability during planned events, such as a user-initiated scale compute operation, and during unplanned events, such as underlying hardware, software, or network failures. Azure Database for PostgreSQL can quickly recover from most critical circumstances, ensuring virtually no application downtime when using this service.
-Azure Database for PostgreSQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
+Azure Database for PostgreSQL is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
## Components in Azure Database for PostgreSQL - Single Server
Azure Database for PostgreSQL is suitable for running mission critical databases
| <b>Gateway | The Gateway acts as a database proxy and routes all client connections to the database server. | ## Planned downtime mitigation
-Azure Database for PostgreSQL is architected to provide high availability during planned downtime operations.
+
+Azure Database for PostgreSQL is architected to provide high availability during planned downtime operations.
:::image type="content" source="./media/concepts-high-availability/azure-postgresql-elastic-scaling.png" alt-text="view of Elastic Scaling in Azure PostgreSQL":::
Here are some planned maintenance scenarios:
| <b>New Software Deployment (Azure) | New feature rollouts and bug fixes happen automatically as part of the service's planned maintenance. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).| | <b>Minor version upgrades | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of the service's planned maintenance. This incurs a short downtime of a few seconds, and the database server is automatically restarted with the new minor version. For more information, refer to the [documentation](./concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).| - ## Unplanned downtime mitigation
-Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. PostgreSQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for PostgreSQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
-
+Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. PostgreSQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for PostgreSQL mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
:::image type="content" source="./media/concepts-high-availability/azure-postgresql-built-in-high-availability.png" alt-text="view of High Availability in Azure PostgreSQL":::
Unplanned downtime can occur as a result of unforeseen failures, including under
2. Gateway that acts as a proxy to route client connections to the proper database server 3. Azure storage with three copies for reliability, availability, and redundancy. 4. Remote storage also enables fast detach/re-attach after the server failover.
-
+ ### Unplanned downtime: failure scenarios and service recovery+ Here are some failure scenarios and how Azure Database for PostgreSQL automatically recovers: | **Scenario** | **Automatic recovery** |
Here are some failure scenarios that require user action to recover:
| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](./concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/11/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/11/app-pgrestore.html) to restore those tables into your database. | - ## Summary Azure Database for PostgreSQL provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for PostgreSQL protects your databases from most common outages, and offers an industry leading, finance-backed [99.99% of uptime SLA](https://azure.microsoft.com/support/legal/sla/postgresql). All these availability and reliability capabilities enable Azure to be the ideal platform to run your mission-critical applications. ## Next steps+ - Learn about [Azure regions](../../availability-zones/az-overview.md) - Learn about [handling transient connectivity errors](concepts-connectivity.md)-- Learn how to [replicate your data with read replicas](how-to-read-replicas-portal.md)
+- Learn how to [replicate your data with read replicas](how-to-read-replicas-portal.md)
postgresql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-infrastructure-double-encryption.md
Previously updated : 6/30/2020 Last updated : 06/24/2022 # Azure Database for PostgreSQL Infrastructure double encryption
Infrastructure double encryption adds a second layer of encryption using service
> [!NOTE] > This feature is only supported for "General Purpose" and "Memory Optimized" pricing tiers in Azure Database for PostgreSQL.
-Infrastructure Layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for PostgreSQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it is very close to hardware that stores the data at rest. You can still optionally enable data encryption at rest using [customer managed key](concepts-data-encryption-postgresql.md) for the provisioned PostgreSQL server.
+Infrastructure Layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for PostgreSQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it is very close to hardware that stores the data at rest. You can still optionally enable data encryption at rest using [customer managed key](concepts-data-encryption-postgresql.md) for the provisioned PostgreSQL server.
-Implementation at the infrastructure layers also supports a diversity of keys. Infrastructure must be aware of different clusters of machine and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and a variety of hardware and network failures.
+Implementation at the infrastructure layers also supports a diversity of keys. Infrastructure must be aware of different clusters of machines and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and a variety of hardware and network failures.
> [!NOTE] > Using Infrastructure double encryption will have a performance impact on the Azure Database for PostgreSQL server due to the additional encryption process.
The encryption capabilities that are provided by Azure Database for PostgreSQL c
| 2 | *Yes* | *Yes* | *No* | | 3 | *Yes* | *No* | *Yes* | | 4 | *Yes* | *Yes* | *Yes* |
-| | | | |
> [!Important] > - Scenarios 2 and 4 will have a performance impact on the Azure Database for PostgreSQL server due to the additional layer of infrastructure encryption.
postgresql Concepts Known Issues Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-known-issues-limitations.md
Previously updated : 11/30/2021 Last updated : 06/24/2022 # Azure Database for PostgreSQL - Known issues and limitations
Applicable to Azure Database for PostgreSQL - Single Server.
| Applicable | Cause | Remediation| | -- | | - |
-| PostgreSQL 9.6, 10, 11 | Turning on the server parameter `pg_qs.replace_parameter_placeholders` might lead to a server shutdown in some rare scenarios. | Through Azure Portal, Server Parameters section, turn the parameter `pg_qs.replace_parameter_placeholders` value to `OFF` and save. |
-
+| PostgreSQL 9.6, 10, 11 | Turning on the server parameter `pg_qs.replace_parameter_placeholders` might lead to a server shutdown in some rare scenarios. | Through Azure Portal, Server Parameters section, turn the parameter `pg_qs.replace_parameter_placeholders` value to `OFF` and save. |
## Next steps+ - See Query Store [best practices](./concepts-query-store-best-practices.md)
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-limits.md
Previously updated : 01/28/2020 Last updated : 06/24/2022 + # Limits in Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
The following sections describe capacity and functional limits in the database s
## Maximum connections
-The maximum number of connections per pricing tier and vCores are shown below. The Azure system requires five connections to monitor the Azure Database for PostgreSQL server.
+The maximum number of connections for each pricing tier and vCore configuration is shown below. The Azure system requires five connections to monitor the Azure Database for PostgreSQL server.
|**Pricing Tier**| **vCore(s)**| **Max Connections** | **Max User Connections** | |||||
When connections exceed the limit, you may receive the following error:
A PostgreSQL connection, even idle, can occupy about 10 MB of memory. Also, creating new connections takes time. Most applications request many short-lived connections, which compounds this situation. The result is fewer resources available for your actual workload, leading to decreased performance. A connection pooler that decreases idle connections and reuses existing connections will help avoid this. To learn more, visit our [blog post](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/not-all-postgres-connection-pooling-is-equal/ba-p/825717). ## Functional limitations+ ### Scale operations+ - Dynamic scaling to and from the Basic pricing tiers is currently not supported. - Decreasing server storage size is currently not supported. ### Server version upgrades+ - Automated migration between major database engine versions is currently not supported. If you would like to upgrade to the next major version, perform a [dump and restore](./how-to-migrate-using-dump-and-restore.md) to a server that was created with the new engine version. > Note that prior to PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number (for example, 9.5 to 9.6 was considered a _major_ version upgrade). > As of version 10, only a change in the first number is considered a major version upgrade (for example, 10.0 to 10.1 is a _minor_ version upgrade, and 10 to 11 is a _major_ version upgrade). ### VNet service endpoints+ - Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. ### Restoring a server+ - When using the PITR feature, the new server is created with the same pricing tier configurations as the server it is based on. - The new server created during a restore does not have the firewall rules that existed on the original server. Firewall rules need to be set up separately for this new server. - Restoring a deleted server is not supported. ### UTF-8 characters on Windows+ - In some scenarios, UTF-8 characters are not supported fully in open source PostgreSQL on Windows, which affects Azure Database for PostgreSQL. Please see the thread on [Bug #15476 in the postgresql-archive](https://www.postgresql.org/message-id/2101.1541220270%40sss.pgh.pa.us) for more information. ### GSS error+ If you see an error related to **GSS**, you are likely using a newer client/driver version which Azure Postgres Single Server does not yet fully support. This error is known to affect [JDBC driver versions 42.2.15 and 42.2.16](https://github.com/pgjdbc/pgjdbc/issues/1868). - We plan to complete the update by the end of November. Consider using a working driver version in the meantime. - Or, consider disabling the GSS request. Use a connection parameter like `gssEncMode=disable`. ### Storage size reduction+ Storage size cannot be reduced. You have to create a new server with the desired storage size, perform a manual [dump and restore](./how-to-migrate-using-dump-and-restore.md), and migrate your database(s) to the new server. ## Next steps+ - Understand [what's available in each pricing tier](concepts-pricing-tiers.md) - Learn about [Supported PostgreSQL Database Versions](concepts-supported-versions.md) - Review [how to back up and restore a server in Azure Database for PostgreSQL using the Azure portal](how-to-restore-server-portal.md)
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-logical.md
Previously updated : 12/09/2020 Last updated : 06/24/2022 # Logical decoding
Last updated 12/09/2020
Logical decoding uses an output plugin to convert Postgres's write ahead log (WAL) into a readable format. Azure Database for PostgreSQL provides the output plugins [wal2json](https://github.com/eulerto/wal2json), [test_decoding](https://www.postgresql.org/docs/current/test-decoding.html), and pgoutput. pgoutput is available natively in PostgreSQL version 10 and later.
-For an overview of how Postgres logical decoding works, [visit our blog](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/change-data-capture-in-postgres-how-to-use-logical-decoding-and/ba-p/1396421).
+For an overview of how Postgres logical decoding works, [visit our blog](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/change-data-capture-in-postgres-how-to-use-logical-decoding-and/ba-p/1396421).
> [!NOTE] > Logical replication using PostgreSQL publication/subscription is not supported with Azure Database for PostgreSQL - Single Server. - ## Set up your server + Logical decoding and [read replicas](concepts-read-replicas.md) both depend on the Postgres write ahead log (WAL) for information. These two features need different levels of logging from Postgres. Logical decoding needs a higher level of logging than read replicas. To configure the right level of logging, use the Azure replication support parameter. Azure replication support has three setting options:
To configure the right level of logging, use the Azure replication support param
* **Replica** - More verbose than **Off**. This is the minimum level of logging needed for [read replicas](concepts-read-replicas.md) to work. This setting is the default on most servers. * **Logical** - More verbose than **Replica**. This is the minimum level of logging for logical decoding to work. Read replicas also work at this setting. - ### Using Azure CLI 1. Set azure.replication_support to `logical`. ```azurecli-interactive az postgres server configuration set --resource-group mygroup --server-name myserver --name azure.replication_support --value logical
- ```
+ ```
2. Restart the server to apply the change. ```azurecli-interactive az postgres server restart --resource-group mygroup --name myserver ```
-3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. To create a new firewall rule on the server, run the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command.
+3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. To create a new firewall rule on the server, run the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command.
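As a sketch of that step, the following command uses placeholder resource names and a placeholder client IP address to create a rule whose name contains **_replrule**:

```azurecli-interactive
# Sketch only: replace the IP address with the public IP of your replication client.
az postgres server firewall-rule create \
  --resource-group mygroup \
  --server-name myserver \
  --name test_replrule \
  --start-ip-address 203.0.113.25 \
  --end-ip-address 203.0.113.25
```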
### Using Azure portal
To configure the right level of logging, use the Azure replication support param
:::image type="content" source="./media/concepts-logical/confirm-restart.png" alt-text="Azure Database for PostgreSQL - Replication - Confirm restart":::
-3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. Then click **Save**.
+3. If you are running Postgres 9.5 or 9.6, and use public network access, add the firewall rule to include the public IP address of the client from where you will run the logical replication. The firewall rule name must include **_replrule**. For example, *test_replrule*. Then select **Save**.
:::image type="content" source="./media/concepts-logical/client-replrule-firewall.png" alt-text="Azure Database for PostgreSQL - Replication - Add firewall rule":::
To configure the right level of logging, use the Azure replication support param
Logical decoding can be consumed via streaming protocol or SQL interface. Both methods use [replication slots](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS). A slot represents a stream of changes from a single database.
-Using a replication slot requires Postgres's replication privileges. At this time, the replication privilege is only available for the server's admin user.
+Using a replication slot requires Postgres's replication privileges. At this time, the replication privilege is only available for the server's admin user.
### Streaming protocol
-Consuming changes using the streaming protocol is often preferable. You can create your own consumer / connector, or use a tool like [Debezium](https://debezium.io/).
-Visit the wal2json documentation for [an example using the streaming protocol with pg_recvlogical](https://github.com/eulerto/wal2json#pg_recvlogical).
+Consuming changes using the streaming protocol is often preferable. You can create your own consumer / connector, or use a tool like [Debezium](https://debezium.io/).
+Visit the wal2json documentation for [an example using the streaming protocol with pg_recvlogical](https://github.com/eulerto/wal2json#pg_recvlogical).
### SQL interface+ In the example below, we use the SQL interface with the wal2json plugin.
-
+ 1. Create a slot. ```SQL SELECT * FROM pg_create_logical_replication_slot('test_slot', 'wal2json'); ```
-
+ 2. Issue SQL commands. For example: ```SQL CREATE TABLE a_table (
In the example below, we use the SQL interface with the wal2json plugin.
item varchar(40), PRIMARY KEY (id) );
-
+ INSERT INTO a_table (id, item) VALUES ('id1', 'item1'); DELETE FROM a_table WHERE id='id1'; ```
In the example below, we use the SQL interface with the wal2json plugin.
SELECT pg_drop_replication_slot('test_slot'); ``` - ## Monitoring slots You must monitor logical decoding. Any unused replication slot must be dropped. Slots hold on to Postgres WAL logs and relevant system catalogs until changes have been read by a consumer. If your consumer fails or has not been properly configured, the unconsumed logs will pile up and fill your storage. Also, unconsumed logs increase the risk of transaction ID wraparound. Both situations can cause the server to become unavailable. Therefore, it is critical that logical replication slots are consumed continuously. If a logical replication slot is no longer used, drop it immediately.
The 'active' column in the pg_replication_slots view will indicate whether there
SELECT * FROM pg_replication_slots; ```
-[Set alerts](how-to-alert-on-metric.md) on *Storage used* and *Max lag across replicas* metrics to notify you when the values increase past normal thresholds.
+[Set alerts](how-to-alert-on-metric.md) on *Storage used* and *Max lag across replicas* metrics to notify you when the values increase past normal thresholds.
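For example, assuming the *Storage used* metric is exposed under the name `storage_used`, an alert could be created with the Azure CLI as sketched below; the names, subscription ID, and threshold are placeholders only.

```azurecli-interactive
# Sketch: alert when storage used exceeds ~90 GB (threshold is in bytes and is only an example).
az monitor metrics alert create \
  --name storage-used-alert \
  --resource-group mygroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/mygroup/providers/Microsoft.DBforPostgreSQL/servers/myserver" \
  --condition "avg storage_used > 96636764160" \
  --description "Storage used is above the expected threshold"
```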
> [!IMPORTANT] > You must drop unused replication slots. Failing to do so can lead to server unavailability. ## How to drop a slot+ If you are not actively consuming a replication slot, you should drop it. To drop a replication slot called `test_slot` using SQL:
SELECT pg_drop_replication_slot('test_slot');
``` > [!IMPORTANT]
-> If you stop using logical decoding, change azure.replication_support back to `replica` or `off`. The WAL details retained by `logical` are more verbose, and should be disabled when logical decoding is not in use.
+> If you stop using logical decoding, change azure.replication_support back to `replica` or `off`. The WAL details retained by `logical` are more verbose, and should be disabled when logical decoding is not in use.
-
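A sketch of reverting the setting with the Azure CLI (placeholder resource names), followed by the restart the change requires:

```azurecli-interactive
# Revert the logging level once logical decoding is no longer needed.
az postgres server configuration set --resource-group mygroup --server-name myserver --name azure.replication_support --value replica

# Restart the server to apply the change.
az postgres server restart --resource-group mygroup --name myserver
```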
## Next steps * Visit the Postgres documentation to [learn more about logical decoding](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html). * Reach out to [our team](mailto:AskAzureDBforPostgreSQL@service.microsoft.com) if you have questions about logical decoding. * Learn more about [read replicas](concepts-read-replicas.md).-
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-monitoring.md
Previously updated : 10/21/2020 Last updated : 06/24/2022 # Monitor and tune Azure Database for PostgreSQL - Single Server
Last updated 10/21/2020
Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for PostgreSQL provides various monitoring options to provide insight into the behavior of your server. ## Metrics+ Azure Database for PostgreSQL provides various metrics that give insight into the behavior of the resources supporting the PostgreSQL server. Each metric is emitted at a one-minute frequency, and has up to [93 days of history](../../azure-monitor/essentials/data-platform-metrics.md#retention-of-metrics). You can configure alerts on the metrics. For step by step guidance, see [How to set up alerts](how-to-alert-on-metric.md). Other tasks include setting up automated actions, performing advanced analytics, and archiving history. For more information, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md). ### List of metrics+ These metrics are available for Azure Database for PostgreSQL: |Metric|Metric Display Name|Unit|Description|
These metrics are available for Azure Database for PostgreSQL:
|pg_replica_log_delay_in_seconds|Replica Lag|Seconds|The time since the last replayed transaction. This metric is available for replica servers only.| ## Server logs+ You can enable logging on your server. These resource logs can be sent to [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md), Event Hubs, and a Storage Account. To learn more about logging, visit the [server logs](concepts-server-logs.md) page. ## Query Store+ [Query Store](concepts-query-store.md) keeps track of query performance over time including query runtime statistics and wait events. The feature persists query runtime performance information in a system database named **azure_sys** under the query_store schema. You can control the collection and storage of data via various configuration knobs. ## Query Performance Insight+ [Query Performance Insight](concepts-query-performance-insight.md) works in conjunction with Query Store to provide visualizations accessible from the Azure portal. These charts enable you to identify key queries that impact performance. Query Performance Insight is accessible from the **Support + troubleshooting** section of your Azure Database for PostgreSQL server's portal page. ## Performance Recommendations
-The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes.
+
+The [Performance Recommendations](concepts-performance-recommendations.md) feature identifies opportunities to improve workload performance. Performance Recommendations provides you with recommendations for creating new indexes that have the potential to improve the performance of your workloads. To produce index recommendations, the feature takes into consideration various database characteristics, including its schema and the workload as reported by Query Store. After implementing any performance recommendation, customers should test performance to evaluate the impact of those changes.
## Planned maintenance notification
The [Performance Recommendations](concepts-performance-recommendations.md) featu
Learn more about how to set up notifications in the [planned maintenance notifications](./concepts-planned-maintenance-notification.md) document. ## Next steps+ - See [how to set up alerts](how-to-alert-on-metric.md) for guidance on creating an alert on a metric. - For more information on how to access and export metrics using the Azure portal, REST API, or CLI, see the [Azure Metrics Overview](../../azure-monitor/data-platform.md) - Read our blog on [best practices for monitoring your server](https://azure.microsoft.com/blog/best-practices-for-alerting-on-metrics-with-azure-database-for-postgresql-monitoring/).-- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for PostgreSQL - Single Server.
+- Learn more about [planned maintenance notifications](./concepts-planned-maintenance-notification.md) in Azure Database for PostgreSQL - Single Server.
postgresql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-performance-recommendations.md
Previously updated : 08/21/2019 Last updated : 06/24/2022 + # Performance Recommendations in Azure Database for PostgreSQL - Single Server [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)] **Applies to:** Azure Database for PostgreSQL - Single Server versions 9.6, 10, 11
-The Performance Recommendations feature analyses your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics including schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes.
+The Performance Recommendations feature analyzes your databases to create customized suggestions for improved performance. To produce the recommendations, the analysis looks at various database characteristics, including schema. Enable [Query Store](concepts-query-store.md) on your server to fully utilize the Performance Recommendations feature. After implementing any performance recommendation, you should test performance to evaluate the impact of those changes.
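If Query Store is not enabled yet, it can be turned on through the server parameters. The following is a minimal Azure CLI sketch with placeholder resource names; the same parameter can also be set in the portal.

```azurecli-interactive
# Sketch: enable Query Store data collection so Performance Recommendations has workload data.
az postgres server configuration set \
  --resource-group mygroup \
  --server-name myserver \
  --name pg_qs.query_capture_mode \
  --value TOP
```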
## Permissions+ **Owner** or **Contributor** permissions are required to run analysis using the Performance Recommendations feature. ## Performance recommendations+ The [Performance Recommendations](concepts-performance-recommendations.md) feature analyzes workloads across your server to identify indexes with the potential to improve performance. Open **Performance Recommendations** from the **Intelligent Performance** section of the menu bar on the Azure portal page for your PostgreSQL server. :::image type="content" source="./media/concepts-performance-recommendations/performance-recommendations-page.png" alt-text="Performance Recommendations landing page":::
-Select **Analyze** and choose a database, which will begin the analysis. Depending on your workload, th analysis may take several minutes to complete. Once the analysis is done, there will be a notification in the portal. Analysis performs a deep examination of your database. We recommend you perform analysis during off-peak periods.
+Select **Analyze** and choose a database, which will begin the analysis. Depending on your workload, the analysis may take several minutes to complete. Once the analysis is done, there will be a notification in the portal. Analysis performs a deep examination of your database. We recommend you perform analysis during off-peak periods.
The **Recommendations** window will show a list of recommendations if any were found. :::image type="content" source="./media/concepts-performance-recommendations/performance-recommendations-result.png" alt-text="Performance Recommendations new page":::
-Recommendations are not automatically applied. To apply the recommendation, copy the query text and run it from your client of choice. Remember to test and monitor to evaluate the recommendation.
+Recommendations are not automatically applied. To apply the recommendation, copy the query text and run it from your client of choice. Remember to test and monitor to evaluate the recommendation.
## Recommendation types Currently, two types of recommendations are supported: *Create Index* and *Drop Index*. ### Create Index recommendations+ *Create Index* recommendations suggest new indexes to speed up the most frequently run or time-consuming queries in the workload. This recommendation type requires [Query Store](concepts-query-store.md) to be enabled. Query Store collects query information and provides the detailed query runtime and frequency statistics that the analysis uses to make the recommendation. ### Drop Index recommendations+ Besides detecting missing indexes, Azure Database for PostgreSQL analyzes the performance of existing indexes. If an index is either rarely used or redundant, the analyzer recommends dropping it. ## Considerations+ * Performance Recommendations is not available for [read replicas](concepts-read-replicas.md). ## Next steps-- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
postgresql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-planned-maintenance-notification.md
Previously updated : 2/17/2022 Last updated : 06/24/2022 # Planned maintenance notification in Azure Database for PostgreSQL - Single Server
You can utilize the planned maintenance notifications feature to receive alerts
### Planned maintenance notification - **Planned maintenance notifications** allow you to receive alerts for upcoming planned maintenance events for your Azure Database for PostgreSQL server. These notifications are integrated with [Service Health's](../../service-health/overview.md) planned maintenance and allow you to view all scheduled maintenance for your subscriptions in one place. They also help to scale the notification to the right audiences for different resource groups, as you may have different contacts responsible for different resources. You will receive the notification about the upcoming maintenance 72 calendar hours before the event. We will make every attempt to provide 72 hours' notice for all planned maintenance events. However, in cases of critical or security patches, notifications might be sent closer to the event or be omitted.
-You can either check the planned maintenance notification on Azure portal or configure alerts to receive notification.
+You can either check the planned maintenance notification on Azure portal or configure alerts to receive notification.
### Check planned maintenance notification from Azure portal 1. In the [Azure portal](https://portal.azure.com), select **Service Health**. 2. Select the **Planned Maintenance** tab.
-3. Select **Subscription**, **Region, and **Service** for which you want to check the planned maintenance notification.
-
+3. Select the **Subscription**, **Region**, and **Service** for which you want to check the planned maintenance notification.
+ ### To receive planned maintenance notification 1. In the [portal](https://portal.azure.com), select **Service Health**.
No, all the Azure regions are patched during the deployment-wise window timings.
A transient error, also known as a transient fault, is an error that will resolve itself. [Transient errors](./concepts-connectivity.md#transient-errors) can occur during maintenance. Most of these events are automatically mitigated by the system in less than 60 seconds. Transient errors should be handled using [retry logic](./concepts-connectivity.md#handling-transient-errors). - ## Next steps - For any questions or suggestions you might have about working with Azure Database for PostgreSQL, send an email to the Azure Database for PostgreSQL Team at AskAzureDBforPostgreSQL@service.microsoft.com
postgresql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-pricing-tiers.md
Previously updated : 10/14/2020 Last updated : 06/24/2022
Servers with less than or equal to 100 GB provisioned storage are marked read-only
For example, if you have provisioned 110 GB of storage, and the actual utilization goes over 105 GB, the server is marked read-only. Alternatively, if you have provisioned 5 GB of storage, the server is marked read-only when the free storage reaches less than 512 MB.
-When the server is set to read-only, all existing sessions are disconnected and uncommitted transactions are rolled back. Any subsequent write operations and transaction commits fail. All subsequent read queries will work uninterrupted.
+When the server is set to read-only, all existing sessions are disconnected and uncommitted transactions are rolled back. Any subsequent write operations and transaction commits fail. All subsequent read queries will work uninterrupted.
You can either increase the amount of provisioned storage to your server or start a new session in read-write mode and drop data to reclaim free storage. Running `SET SESSION CHARACTERISTICS AS TRANSACTION READ WRITE;` sets the current session to read write mode. In order to avoid data corruption, do not perform any write operations when the server is still in read-only status.
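As a sketch of the first option, provisioned storage can be increased (never decreased) from the Azure CLI. The names below are placeholders, and the size is specified in megabytes.

```azurecli-interactive
# Sketch: grow provisioned storage to 200 GB (value is in MB); storage size cannot be reduced.
az postgres server update \
  --resource-group mygroup \
  --name myserver \
  --storage-size 204800
```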
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-performance-insight.md
Previously updated : 08/21/2019 Last updated : 06/24/2022
-# Query Performance Insight
+# Query Performance Insight
[!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
Last updated 08/21/2019
Query Performance Insight helps you to quickly identify what your longest running queries are, how they change over time, and what waits are affecting them. ## Prerequisites+ For Query Performance Insight to function, data must exist in the [Query Store](concepts-query-store.md). ## Viewing performance insights
-The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
+
+The [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal will surface visualizations on key information from Query Store.
In the portal page of your Azure Database for PostgreSQL server, select **Query Performance Insight** under the **Intelligent Performance** section of the menu bar. The message **Query Text is no longer supported** is shown. However, the query text can still be viewed by connecting to azure_sys and querying 'query_store.query_texts_view'.
In the portal page of your Azure Database for PostgreSQL server, select **Query
The **Long running queries** tab shows the top five queries by average duration per execution, aggregated in 15-minute intervals. You can view more queries by selecting from the **Number of Queries** drop down. The chart colors may change for a specific Query ID when you do this.
-You can click and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger period of time respectively.
+You can select and drag in the chart to narrow down to a specific time window. Alternatively, use the zoom in and out icons to view a smaller or larger period of time respectively.
The table below the chart gives more details about the long-running queries in that time window.
Select the **Wait Statistics** tab to view the corresponding visualizations on w
:::image type="content" source="./media/concepts-query-performance-insight/query-performance-insight-wait-statistics.png" alt-text="Query Performance Insight waits statistics"::: ## Considerations+ * Query Performance Insight is not available for [read replicas](concepts-read-replicas.md). ## Next steps-- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.-
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
postgresql Concepts Query Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store-best-practices.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 # Best practices for Query Store
This article outlines best practices for using Query Store in Azure Database for
## Set the optimal query capture mode
-Let Query Store capture the data that matters to you.
+Let Query Store capture the data that matters to you.
|**pg_qs.query_capture_mode** | **Scenario**| |||
Let Query Store capture the data that matters to you.
|_Top_ |Focus your attention on top queries - those issued by clients. |_None_ |You've already captured a query set and time window that you want to investigate and you want to eliminate the distractions that other queries may introduce. _None_ is suitable for testing and bench-marking environments. _None_ should be used with caution as you might miss the opportunity to track and optimize important new queries. You can't recover data on those past time windows. |
-Query Store also includes a store for wait statistics. There is an additional capture mode query that governs wait statistics: **pgms_wait_sampling.query_capture_mode** can be set to _none_ or _all_.
+Query Store also includes a store for wait statistics. There is an additional capture mode query that governs wait statistics: **pgms_wait_sampling.query_capture_mode** can be set to _none_ or _all_.
> [!NOTE]
-> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is _none_, the pgms_wait_sampling.query_capture_mode setting has no effect.
-
+> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is _none_, the pgms_wait_sampling.query_capture_mode setting has no effect.
## Keep the data you need
-The **pg_qs.retention_period_in_days** parameter specifies in days the data retention period for Query Store. Older query and statistics data is deleted. By default, Query Store is configured to retain the data for 7 days. Avoid keeping historical data you do not plan to use. Increase the value if you need to keep data longer.
+The **pg_qs.retention_period_in_days** parameter specifies, in days, the data retention period for Query Store. Older query and statistics data is deleted. By default, Query Store is configured to retain the data for 7 days. Avoid keeping historical data you do not plan to use. Increase the value if you need to keep data longer.
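For example, retention could be raised to 14 days with the Azure CLI. This is a sketch with placeholder resource names.

```azurecli-interactive
# Sketch: keep Query Store data for 14 days instead of the default 7.
az postgres server configuration set \
  --resource-group mygroup \
  --server-name myserver \
  --name pg_qs.retention_period_in_days \
  --value 14
```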
## Set the frequency of wait stats sampling
-The **pgms_wait_sampling.history_period** parameter specifies how often (in milliseconds) wait events are sampled. The shorter the period, the more frequent the sampling. More information is retrieved, but that comes with the cost of greater resource consumption. Increase this period if the server is under load or you don't need the granularity
+The **pgms_wait_sampling.history_period** parameter specifies how often (in milliseconds) wait events are sampled. The shorter the period, the more frequent the sampling. More information is retrieved, but that comes at the cost of greater resource consumption. Increase this period if the server is under load or you don't need the granularity.
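As a sketch, the sampling interval can be raised the same way. The resource names are placeholders, and the value is in milliseconds.

```azurecli-interactive
# Sketch: sample wait events less frequently to reduce overhead on a busy server.
az postgres server configuration set \
  --resource-group mygroup \
  --server-name myserver \
  --name pgms_wait_sampling.history_period \
  --value 200
```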
## Get quick insights into Query Store+ You can use [Query Performance Insight](concepts-query-performance-insight.md) in the Azure portal to get quick insights into the data in Query Store. The visualizations surface the longest running queries and longest wait events over time. ## Next steps-- Learn how to get or set parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).+
+- Learn how to get or set parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
postgresql Concepts Query Store Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store-scenarios.md
Previously updated : 5/6/2019 Last updated : 06/24/2022 + # Usage scenarios for Query Store [!INCLUDE [applies-to-postgresql-single-server](../includes/applies-to-postgresql-single-server.md)]
You can use Query Store in a wide variety of scenarios in which tracking and mai
- Identifying and tuning top expensive queries - A/B testing - Keeping performance stable during upgrades -- Identifying and improving ad hoc workloads
+- Identifying and improving ad hoc workloads
-## Identify and tune expensive queries
+## Identify and tune expensive queries
### Identify longest running queries
-Use the [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal to quickly identify the longest running queries. These queries typically tend to consume a significant amount resources. Optimizing your longest running questions can improve performance by freeing up resources for use by other queries running on your system.
+
+Use the [Query Performance Insight](concepts-query-performance-insight.md) view in the Azure portal to quickly identify the longest running queries. These queries typically consume a significant amount of resources. Optimizing your longest running queries can improve performance by freeing up resources for use by other queries running on your system.
### Target queries with performance deltas + Query Store slices the performance data into time windows, so you can track a query's performance over time. This helps you identify exactly which queries are contributing to an increase in overall time spent. As a result you can do targeted troubleshooting of your workload. ### Tuning expensive queries + When you identify a query with suboptimal performance, the action you take depends on the nature of the problem: - Use [Performance Recommendations](concepts-performance-recommendations.md) to determine if there are any suggested indexes. If yes, create the index, and then use Query Store to evaluate query performance after creating the index. - Make sure that the statistics are up-to-date for the underlying tables used by the query.-- Consider rewriting expensive queries. For example, take advantage of query parameterization and reduce use of dynamic SQL. Implement optimal logic when reading data like applying data filtering on database side, not on application side. -
+- Consider rewriting expensive queries. For example, take advantage of query parameterization and reduce the use of dynamic SQL. Implement optimal logic when reading data, such as applying data filtering on the database side rather than on the application side.
## A/B testing + Use Query Store to compare workload performance before and after an application change you plan to introduce. Examples of scenarios for using Query Store to assess the impact of the environment or application change to workload performance: - Rolling out a new version of an application. - Adding additional resources to the server. -- Creating missing indexes on tables referenced by expensive queries.
-
+- Creating missing indexes on tables referenced by expensive queries.
+ In any of these scenarios, apply the following workflow: 1. Run your workload with Query Store before the planned change to generate a performance baseline. 2. Apply the application change(s) at a controlled moment in time. 3. Continue running the workload long enough to generate a performance image of the system after the change. 4. Compare results from before and after the change.
-5. Decide whether to keep the change or rollback.
-
+5. Decide whether to keep the change or roll it back.
## Identify and improve ad hoc workloads
-Some workloads do not have dominant queries that you can tune to improve overall application performance. Those workloads are typically characterized with a relatively large number of unique queries, each of them consuming a portion of system resources. Each unique query is executed infrequently, so individually their runtime consumption is not critical. On the other hand, given that the application is generating new queries all the time, a significant portion of system resources is spent on query compilation, which is not optimal. Usually, this situation happens if your application generates queries (instead of using stored procedures or parameterized queries) or if it relies on object-relational mapping frameworks that generate queries by default.
-
-If you are in control of the application code, you may consider rewriting the data access layer to use stored procedures or parameterized queries. However, this situation can be also improved without application changes by forcing query parameterization for the entire database (all queries) or for the individual query templates with the same query hash.
+
+Some workloads do not have dominant queries that you can tune to improve overall application performance. Those workloads are typically characterized by a relatively large number of unique queries, each of them consuming a portion of system resources. Each unique query is executed infrequently, so individually their runtime consumption is not critical. On the other hand, given that the application is generating new queries all the time, a significant portion of system resources is spent on query compilation, which is not optimal. Usually, this situation happens if your application generates queries (instead of using stored procedures or parameterized queries) or if it relies on object-relational mapping frameworks that generate queries by default.
+
+If you are in control of the application code, you may consider rewriting the data access layer to use stored procedures or parameterized queries. However, this situation can also be improved without application changes by forcing query parameterization for the entire database (all queries) or for the individual query templates with the same query hash.
## Next steps-- Learn more about the [best practices for using Query Store](concepts-query-store-best-practices.md)+
+- Learn more about the [best practices for using Query Store](concepts-query-store-best-practices.md)
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store.md
Previously updated : 07/01/2020 Last updated : 06/24/2022 # Monitor performance with the Query Store
The Query Store feature in Azure Database for PostgreSQL provides a way to track
> Do not modify the **azure_sys** database or its schemas. Doing so will prevent Query Store and related performance features from functioning correctly. ## Enabling Query Store+ Query Store is an opt-in feature, so it isn't active by default on a server. The store is enabled or disabled globally for all the databases on a given server and cannot be turned on or off per database. ### Enable Query Store using the Azure portal+ 1. Sign in to the Azure portal and select your Azure Database for PostgreSQL server. 2. Select **Server Parameters** in the **Settings** section of the menu. 3. Search for the `pg_qs.query_capture_mode` parameter.
To enable wait statistics in your Query Store:
1. Search for the `pgms_wait_sampling.query_capture_mode` parameter. 1. Set the value to `ALL` and **Save**. - Alternatively you can set these parameters using the Azure CLI. ```azurecli-interactive az postgres server configuration set --name pg_qs.query_capture_mode --resource-group myresourcegroup --server mydemoserver --value TOP
az postgres server configuration set --name pgms_wait_sampling.query_capture_mod
Allow up to 20 minutes for the first batch of data to persist in the azure_sys database. ## Information in Query Store+ Query Store has two stores: - A runtime stats store for persisting the query execution statistics information. - A wait stats store for persisting wait statistics information.
To minimize space usage, the runtime execution statistics in the runtime stats s
## Access Query Store information
-Query Store data is stored in the azure_sys database on your Postgres server.
+Query Store data is stored in the azure_sys database on your Postgres server.
The following query returns information about queries in Query Store: ```sql SELECT * FROM query_store.qs_view;
-```
+```
Or this query for wait stats: ```sql
SELECT * FROM query_store.pgms_wait_sampling_view;
``` ## Finding wait queries+ Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wa