Updates from: 10/21/2022 01:12:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Export Import Provisioning Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/export-import-provisioning-configuration.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/expression-builder.md
Previously updated : 06/02/2021 Last updated : 10/20/2022
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Previously updated : 04/13/2022 Last updated : 10/20/2022
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 02/03/2022 Last updated : 10/20/2022
active-directory Hr Attribute Retrieval Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-attribute-retrieval-issues.md
Previously updated : 10/27/2021 Last updated : 10/20/2022
active-directory Hr Manager Update Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-manager-update-issues.md
Previously updated : 10/27/2021 Last updated : 10/20/2022
active-directory Hr User Creation Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-user-creation-issues.md
Previously updated : 10/27/2021 Last updated : 10/20/2022
active-directory Hr User Update Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-user-update-issues.md
Previously updated : 10/27/2021 Last updated : 10/20/2022
active-directory Hr Writeback Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-writeback-issues.md
Previously updated : 10/27/2021 Last updated : 10/20/2022
active-directory Isv Automatic Provisioning Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/isv-automatic-provisioning-multi-tenant-apps.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
Previously updated : 11/18/2021 Last updated : 10/20/2022
active-directory On Premises Migrate Microsoft Identity Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-migrate-microsoft-identity-manager.md
Previously updated : 11/17/2021 Last updated : 10/20/2022
active-directory On Premises Sql Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-sql-connector-configure.md
Previously updated : 06/06/2021 Last updated : 10/20/2022
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
This article uses the following terms:
| - | - |
| On-demand webinars| [Manage your Enterprise Applications with Azure AD](https://info.microsoft.com/CO-AZUREPLAT-WBNR-FY18-03Mar-06-ManageYourEnterpriseApplicationsOption1-MCW0004438_02OnDemandRegistration-ForminBody.html)<br>Learn how Azure AD can help you achieve SSO to your enterprise SaaS applications and best practices for controlling access. |
| Videos| [What is user provisioning in Azure Active Directory?](https://youtu.be/_ZjARPpI6NI) <br> [How to deploy user provisioning in Azure Active Directory?](https://youtu.be/pKzyts6kfrw) <br> [Integrating Salesforce with Azure AD: How to automate User Provisioning](https://azure.microsoft.com/resources/videos/integrating-salesforce-with-azure-ad-how-to-automate-user-provisioning/) |
-| Online courses| SkillUp Online: [Managing Identities](https://skillup.online/courses/course-v1:Microsoft+AZ-100.5+2018_T3/about) <br> Learn how to integrate Azure AD with many SaaS applications and to secure user access to those applications. |
+| Online courses| SkillUp Online: [Managing Identities](https://skillup.online/courses/course-v1:Microsoft+AZ-100.5+2018_T3/) <br> Learn how to integrate Azure AD with many SaaS applications and to secure user access to those applications. |
| Books| [Modern Authentication with Azure Active Directory for Web Applications (Developer Reference) 1st Edition](https://www.amazon.com/Authentication-Directory-Applications-Developer-Reference/dp/0735696942/ref=sr_1_fkmr0_1?keywords=Azure+multifactor+authentication&qid=1550168894&s=gateway&sr=8-1-fkmr0). <br> This is an authoritative, deep-dive guide to building Active Directory authentication solutions for these new environments. |
| Tutorials| See the [list of tutorials on how to integrate SaaS apps with Azure AD](../saas-apps/tutorial-list.md). |
| FAQ| [Frequently asked questions](../app-provisioning/user-provisioning.md) on automated user provisioning |
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Previously updated : 07/13/2021 Last updated : 10/20/2022
active-directory Provisioning Agent Release Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provisioning-agent-release-version-history.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Sap Successfactors Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-attribute-reference.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
Previously updated : 10/11/2021 Last updated : 10/20/2022
active-directory Scim Graph Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/scim-graph-scenarios.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Previously updated : 08/24/2021 Last updated : 10/20/2022
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Previously updated : 12/08/2021 Last updated : 10/20/2022
active-directory What Is Hr Driven Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/what-is-hr-driven-provisioning.md
Previously updated : 10/30/2020 Last updated : 10/20/2022
active-directory Workday Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-attribute-reference.md
Previously updated : 05/11/2021 Last updated : 10/20/2022
active-directory Workday Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-integration-reference.md
Previously updated : 06/01/2021 Last updated : 10/20/2022
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Previously updated : 08/07/2022 Last updated : 09/12/2022
Once any errors have been addressed, the administrator then can activate each ke
Users may have a combination of up to five OATH hardware tokens or authenticator applications, such as the Microsoft Authenticator app, configured for use at any time. Hardware OATH tokens cannot be assigned to guest users in the resource tenant.

>[!IMPORTANT]
->The preview is not supported in Azure Government or sovereign clouds.
+>The preview is only supported in Azure Global and Azure Government clouds.
## Next steps
active-directory Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/feature-availability.md
Previously updated : 03/22/2022 Last updated : 09/15/2022
The following tables list Azure AD feature availability in Azure Government.
|**Authentication, single sign-on, and MFA**|Cloud authentication (Pass-through authentication, password hash synchronization) | &#x2705; |
|| Federated authentication (Active Directory Federation Services or federation with other identity providers) | &#x2705; |
|| Single sign-on (SSO) unlimited | &#x2705; |
-|| Multifactor authentication (MFA) | Hardware OATH tokens are not available. Instead, use Conditional Access policies with named locations to establish when multifactor authentication should and should not be required based off the user's current IP address. Microsoft Authenticator only shows GUID and not UPN for compliance reasons. |
+|| Multifactor authentication (MFA) <sup>1</sup>| &#x2705; |
|| Passwordless (Windows Hello for Business, Microsoft Authenticator, FIDO2 security key integrations) | &#x2705; |
|| Service-level agreement | &#x2705; |
|**Applications access**|SaaS apps with modern authentication (Azure AD application gallery apps, SAML, and OAUTH 2.0) | &#x2705; |
The following tables list Azure AD feature availability in Azure Government.
|| Identity Protection: vulnerabilities and risky accounts | &#x2705; |
|| Identity Protection: risk events investigation, SIEM connectivity | &#x2705; |
|**Frontline workers**|SMS sign-in | Feature not available. |
-|| Shared device sign-out | Enterprise state roaming for Windows 10 devices is not available. |
+|| Shared device sign-out | Enterprise state roaming for Windows 10 devices isn't available. |
|| Delegated user management portal (My Staff) | Feature not available. |
+<sup>1</sup>Microsoft Authenticator only shows GUID and not UPN for compliance reasons.
## Identity protection
active-directory Plan Cloud Sync Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/plan-cloud-sync-topologies.md
The information below should be kept in mind when selecting a solution.
- Users and groups must be uniquely identified across all forests
- Matching across forests doesn't occur with cloud sync
- A user or group must be represented only once across all forests
- The source anchor for objects is chosen automatically. It uses ms-DS-ConsistencyGuid if present, otherwise ObjectGUID is used.
- You can't change the attribute that is used for source anchor.
active-directory Howto Convert App To Be Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-convert-app-to-be-multi-tenant.md
Title: Build apps that sign in Azure AD users
-description: Shows how to build a multi-tenant application that can sign in a user from any Azure Active Directory tenant.
+ Title: Convert single-tenant app to multi-tenant on Azure AD
+description: Shows how to convert an existing single-tenant app to a multi-tenant app that can sign in a user from any Azure AD tenant.
Previously updated : 10/27/2020 Last updated : 10/20/2022
+#Customer intent: As an Azure user, I want to convert a single tenant app to an Azure AD multi-tenant app so any Azure AD user can sign in,
-# Sign in any Azure Active Directory user using the multi-tenant application pattern
-
-If you offer a Software as a Service (SaaS) application to many organizations, you can configure your application to accept sign-ins from any Azure Active Directory (Azure AD) tenant. This configuration is called *making your application multi-tenant*. Users in any Azure AD tenant will be able to sign in to your application after consenting to use their account with your application.
+# Making your application multi-tenant
-If you have an existing application that has its own account system, or supports other kinds of sign-ins from other cloud providers, adding Azure AD sign-in from any tenant is simple. Just register your app, add sign-in code via OAuth2, OpenID Connect, or SAML, and put a ["Sign in with Microsoft" button][AAD-App-Branding] in your application.
+If you offer a Software as a Service (SaaS) application to many organizations, you can configure your application to accept sign-ins from any Azure Active Directory (Azure AD) tenant by converting it to multi-tenant. Users in any Azure AD tenant will be able to sign in to your application after consenting to use their account with your application.
-> [!NOTE]
-> This article assumes you're already familiar with building a single-tenant application for Azure AD. If you're not, start with one of the quickstarts on the [developer guide homepage][AAD-Dev-Guide].
+For existing apps with their own account system (or other sign-ins from other cloud providers), you should add sign-in code via OAuth2, OpenID Connect, or SAML, and put a ["Sign in with Microsoft" button][AAD-App-Branding] in your application.
-There are four steps to convert your application into an Azure AD multi-tenant app:
+In this how-to guide, you'll undertake the four steps needed to convert a single tenant app into an Azure AD multi-tenant app:
1. [Update your application registration to be multi-tenant](#update-registration-to-be-multi-tenant)
-2. [Update your code to send requests to the /common endpoint](#update-your-code-to-send-requests-to-common)
+2. [Update your code to send requests to the `/common` endpoint](#update-your-code-to-send-requests-to-common)
3. [Update your code to handle multiple issuer values](#update-your-code-to-handle-multiple-issuer-values)
-4. [Understand user and admin consent and make appropriate code changes](#understand-user-and-admin-consent)
+4. [Understand user and admin consent and make appropriate code changes](#understand-user-and-admin-consent-and-make-appropriate-code-changes)
-Let's look at each step in detail. You can also jump straight to the sample [Build a multi-tenant SaaS web application that calls Microsoft Graph using Azure AD and OpenID Connect](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-3-Multi-Tenant/README.md).
+You can also refer to the sample: [Build a multi-tenant SaaS web application that calls Microsoft Graph using Azure AD and OpenID Connect](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/2-WebApp-graph-user/2-3-Multi-Tenant/README.md). This how-to assumes familiarity with building a single-tenant application for Azure AD. If not, start with one of the quickstarts on the [developer guide homepage][AAD-Dev-Guide].
## Update registration to be multi-tenant
-By default, web app/API registrations in Azure AD are single-tenant. You can make your registration multi-tenant by finding the **Supported account types** switch on the **Authentication** pane of your application registration in the [Azure portal][AZURE-portal] and setting it to **Accounts in any organizational directory**.
-
-Before an application can be made multi-tenant, Azure AD requires the App ID URI of the application to be globally unique. The App ID URI is one of the ways an application is identified in protocol messages. For a single-tenant application, it is sufficient for the App ID URI to be unique within that tenant. For a multi-tenant application, it must be globally unique so Azure AD can find the application across all tenants. Global uniqueness is enforced by requiring the App ID URI to have a host name that matches a verified domain of the Azure AD tenant.
-
-By default, apps created via the Azure portal have a globally unique App ID URI set on app creation, but you can change this value. For example, if the name of your tenant was contoso.onmicrosoft.com then a valid App ID URI would be `https://contoso.onmicrosoft.com/myapp`. If your tenant had a verified domain of `contoso.com`, then a valid App ID URI would also be `https://contoso.com/myapp`. If the App ID URI doesn't follow this pattern, setting an application as multi-tenant fails.
+By default, web app/API registrations in Azure AD are single-tenant upon creation. To make the registration multi-tenant, look for the **Supported account types** section on the **Authentication** pane of the application registration in the [Azure portal][AZURE-portal]. Change the setting to **Accounts in any organizational directory**.
-## Update your code to send requests to /common
+When a single-tenant application is created via the Azure portal, one of the items listed on the **Overview** page is the **Application ID URI**. This is one of the ways an application is identified in protocol messages, and can be added at any time. The App ID URI for single-tenant apps only needs to be unique within that tenant. In contrast, for multi-tenant apps it must be globally unique across all tenants, which ensures that Azure AD can find the app across all tenants.
-In a single-tenant application, sign-in requests are sent to the tenant's sign-in endpoint. For example, for contoso.onmicrosoft.com the endpoint would be: `https://login.microsoftonline.com/contoso.onmicrosoft.com`. Requests sent to a tenant's endpoint can sign in users (or guests) in that tenant to applications in that tenant.
+For example, if the name of your tenant was `contoso.onmicrosoft.com` then a valid App ID URI would be `https://contoso.onmicrosoft.com/myapp`. If the App ID URI doesn't follow this pattern, setting an application as multi-tenant fails.
-With a multi-tenant application, the application doesn't know up front what tenant the user is from, so you can't send requests to a tenant's endpoint. Instead, requests are sent to an endpoint that multiplexes across all Azure AD tenants: `https://login.microsoftonline.com/common`
+## Update your code to send requests to `/common`
-When the Microsoft identity platform receives a request on the /common endpoint, it signs the user in and, as a consequence, discovers which tenant the user is from. The /common endpoint works with all of the authentication protocols supported by the Azure AD: OpenID Connect, OAuth 2.0, SAML 2.0, and WS-Federation.
+With a multi-tenant application, because the application can't immediately tell which tenant the user is from, requests can't be sent to a tenant's endpoint. Instead, requests are sent to an endpoint that multiplexes across all Azure AD tenants: `https://login.microsoftonline.com/common`.
-The sign-in response to the application then contains a token representing the user. The issuer value in the token tells an application what tenant the user is from. When a response returns from the /common endpoint, the issuer value in the token corresponds to the user's tenant.
+Edit your code and change the value for your tenant to `/common`. It's important to note that this endpoint isn't a tenant or an issuer itself. When the Microsoft identity platform receives a request on the `/common` endpoint, it signs the user in, thereby discovering which tenant the user is from. This endpoint works with all of the authentication protocols supported by Azure AD (OpenID Connect, OAuth 2.0, SAML 2.0, WS-Federation).
-> [!IMPORTANT]
-> The /common endpoint is not a tenant and is not an issuer, it's just a multiplexer. When using /common, the logic in your application to validate tokens needs to be updated to take this into account.
+The sign-in response to the application then contains a token representing the user. The issuer value in the token tells an application what tenant the user is from. When a response returns from the `/common` endpoint, the issuer value in the token corresponds to the user's tenant.
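As a rough illustration of the change described above (not taken from the article, and assuming an MSAL-style configuration object whose `authority` setting controls the endpoint), the edit usually amounts to swapping a tenant-specific authority for `/common`:

```javascript
// Illustrative only: clientId is a placeholder, not a value from this article.
const singleTenantConfig = {
  auth: {
    clientId: "<your-application-client-id>",
    // Single-tenant: sign-in requests go to the tenant's own endpoint.
    authority: "https://login.microsoftonline.com/contoso.onmicrosoft.com",
  },
};

const multiTenantConfig = {
  auth: {
    clientId: "<your-application-client-id>",
    // Multi-tenant: sign-in requests go to the /common endpoint,
    // which multiplexes across all Azure AD tenants.
    authority: "https://login.microsoftonline.com/common",
  },
};
```

Remember that `/common` is only a multiplexer, so the token-validation logic discussed in the next section still has to decide which tenants are acceptable.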
## Update your code to handle multiple issuer values
-Web applications and web APIs receive and validate tokens from the Microsoft identity platform.
+Web applications and web APIs receive and validate tokens from the Microsoft identity platform. Native client applications don't validate access tokens and must treat them as opaque; they instead request and receive tokens from the Microsoft identity platform and send them to APIs, where the tokens are then validated. Multi-tenant applications can't validate tokens by matching the issuer value in the metadata with the `issuer` value in the token. A multi-tenant application needs logic to decide which issuer values are valid and which aren't, based on the tenant ID portion of the issuer value.
-> [!NOTE]
-> While native client applications request and receive tokens from the Microsoft identity platform, they do so to send them to APIs, where they are validated. Native applications do not validate access tokens and must treat them as opaque.
+For example, if a multi-tenant application only allows sign-in from specific tenants who have signed up for their service, then it must check either the `issuer` value or the `tid` claim value in the token to make sure that tenant is in their list of subscribers. If a multi-tenant application only deals with individuals and doesn't make any access decisions based on tenants, then it can ignore the issuer value altogether.
-Let's look at how an application validates tokens it receives from the Microsoft identity platform. A single-tenant application normally takes an endpoint value like:
+In the [multi-tenant samples][AAD-Samples-MT], issuer validation is disabled to enable any Azure AD tenant to sign in. Because the `/common` endpoint doesn't correspond to a tenant and isn't an issuer, when you examine the issuer value in the metadata for `/common`, it has a templated URL instead of an actual value:
```http
-https://login.microsoftonline.com/contoso.onmicrosoft.com
+https://sts.windows.net/{tenantid}/
```
+To ensure your app can support multiple tenants, modify the relevant section of your code so that the issuer value is set to `{tenantid}`.
-...and uses it to construct a metadata URL (in this case, OpenID Connect) like:
+In contrast, single-tenant applications normally take endpoint values to construct metadata URLs such as:
```http
https://login.microsoftonline.com/contoso.onmicrosoft.com/.well-known/openid-configuration
```
Each Azure AD tenant has a unique issuer value of the form:
```http
https://sts.windows.net/31537af4-6d77-4bb9-a681-d2394888ea26/
```
-...where the GUID value is the rename-safe version of the tenant ID of the tenant. If you select the preceding metadata link for `contoso.onmicrosoft.com`, you can see this issuer value in the document.
+...where the GUID value is the rename-safe version of the tenant ID of the tenant.
When a single-tenant application validates a token, it checks the signature of the token against the signing keys from the metadata document. This test allows it to make sure the issuer value in the token matches the one that was found in the metadata document.
-Because the /common endpoint doesn't correspond to a tenant and isn't an issuer, when you examine the issuer value in the metadata for /common it has a templated URL instead of an actual value:
-
-```http
-https://sts.windows.net/{tenantid}/
-```
-
-Therefore, a multi-tenant application can't validate tokens just by matching the issuer value in the metadata with the `issuer` value in the token. A multi-tenant application needs logic to decide which issuer values are valid and which are not based on the tenant ID portion of the issuer value.
-
-For example, if a multi-tenant application only allows sign-in from specific tenants who have signed up for their service, then it must check either the issuer value or the `tid` claim value in the token to make sure that tenant is in their list of subscribers. If a multi-tenant application only deals with individuals and doesn't make any access decisions based on tenants, then it can ignore the issuer value altogether.
-
-In the [multi-tenant samples][AAD-Samples-MT], issuer validation is disabled to enable any Azure AD tenant to sign in.
-
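The issuer check described above can be sketched as follows. This isn't code from the article or the linked sample; it assumes the token's signature has already been validated, that the claims expose `tid` and `iss` as Azure AD tokens normally do, and that `allowedTenants` is a hypothetical subscriber list:

```javascript
// Hypothetical allow-list of subscriber tenant IDs.
const allowedTenants = [
  "31537af4-6d77-4bb9-a681-d2394888ea26", // example tenant ID from the metadata above
];

function isIssuerAllowed(tokenClaims) {
  // Prefer the tid claim; otherwise take the tenant ID portion of the issuer URL,
  // for example https://sts.windows.net/{tenantid}/.
  const tenantId =
    tokenClaims.tid ??
    new URL(tokenClaims.iss).pathname.split("/").filter(Boolean)[0];
  return allowedTenants.includes(tenantId);
}
```

An app that makes no tenant-based access decisions can skip this check entirely, as noted above.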
-## Understand user and admin consent
+## Understand user and admin consent and make appropriate code changes
-For a user to sign in to an application in Azure AD, the application must be represented in the user's tenant. This allows the organization to do things like apply unique policies when users from their tenant sign in to the application. For a single-tenant application, this registration is easier; it's the one that happens when you register the application in the [Azure portal][AZURE-portal].
+For a user to sign in to an application in Azure AD, the application must be represented in the user's tenant. This allows the organization to do things like apply unique policies when users from their tenant sign in to the application. For a single-tenant application, this is the registration that happens when you register the application in the [Azure portal][AZURE-portal].
-For a multi-tenant application, the initial registration for the application lives in the Azure AD tenant used by the developer. When a user from a different tenant signs in to the application for the first time, Azure AD asks them to consent to the permissions requested by the application. If they consent, then a representation of the application called a *service principal* is created in the user's tenant, and sign-in can continue. A delegation is also created in the directory that records the user's consent to the application. For details on the application's Application and ServicePrincipal objects, and how they relate to each other, see [Application objects and service principal objects][AAD-App-SP-Objects].
+For a multi-tenant application, the initial registration for the application resides in the Azure AD tenant used by the developer. When a user from a different tenant signs in to the application for the first time, Azure AD asks them to consent to the permissions requested by the application. If they consent, then a representation of the application called a *service principal* is created in the user's tenant, and sign-in can continue. A delegation is also created in the directory that records the user's consent to the application. For details on the application's Application and ServicePrincipal objects, and how they relate to each other, see [Application objects and service principal objects][AAD-App-SP-Objects].
-![Illustrates consent to single-tier app][Consent-Single-Tier]
+![Diagram which illustrates a user's consent to a single-tier app.][Consent-Single-Tier]
This consent experience is affected by the permissions requested by the application. The Microsoft identity platform supports two kinds of permissions, app-only and delegated.
To learn more about user and admin consent, see [Configure the admin consent wor
App-only permissions always require a tenant administrator's consent. If your application requests an app-only permission and a user tries to sign in to the application, an error message is displayed saying the user isn't able to consent.
-Certain delegated permissions also require a tenant administrator's consent. For example, the ability to write back to Azure AD as the signed in user requires a tenant administrator's consent. Like app-only permissions, if an ordinary user tries to sign in to an application that requests a delegated permission that requires administrator consent, your application receives an error. Whether a permission requires admin consent is determined by the developer that published the resource, and can be found in the documentation for the resource. The permissions documentation for the [Microsoft Graph API][MSFT-Graph-permission-scopes] indicate which permissions require admin consent.
+Certain delegated permissions also require a tenant administrator's consent. For example, the ability to write back to Azure AD as the signed in user requires a tenant administrator's consent. Like app-only permissions, if an ordinary user tries to sign in to an application that requests a delegated permission that requires administrator consent, the app receives an error. Whether a permission requires admin consent is determined by the developer that published the resource, and can be found in the documentation for the resource. The permissions documentation for the [Microsoft Graph API][MSFT-Graph-permission-scopes] indicates which permissions require admin consent.
-If your application uses permissions that require admin consent, have a gesture such as a button or link where the admin can initiate the action. The request your application sends for this action is the usual OAuth2/OpenID Connect authorization request that also includes the `prompt=consent` query string parameter. Once the admin has consented and the service principal is created in the customer's tenant, subsequent sign-in requests do not need the `prompt=consent` parameter. Since the administrator has decided the requested permissions are acceptable, no other users in the tenant are prompted for consent from that point forward.
+If your application uses permissions that require admin consent, consider adding a button or link where the admin can initiate the action. The request your application sends for this action is the usual OAuth2/OpenID Connect authorization request that also includes the `prompt=consent` query string parameter. Once the admin has consented and the service principal is created in the customer's tenant, subsequent sign-in requests don't need the `prompt=consent` parameter. Since the administrator has decided the requested permissions are acceptable, no other users in the tenant are prompted for consent from that point forward.
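As a hedged sketch of what such an admin-initiated request might look like (the endpoint version, scopes, and redirect URI here are placeholders rather than values from the article), the only addition to a normal authorization request is the `prompt=consent` parameter:

```javascript
// Illustrative only: builds a standard OAuth2/OpenID Connect authorization URL
// with prompt=consent so a tenant admin can consent on behalf of the organization.
const adminConsentUrl =
  "https://login.microsoftonline.com/common/oauth2/v2.0/authorize" +
  "?client_id=<your-application-client-id>" +
  "&response_type=code" +
  "&redirect_uri=" + encodeURIComponent("https://localhost/auth") +
  "&scope=" + encodeURIComponent("openid profile User.Read") +
  "&prompt=consent";
```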
A tenant administrator can disable the ability for regular users to consent to applications. If this capability is disabled, admin consent is always required for the application to be used in the tenant. If you want to test your application with end-user consent disabled, you can find the configuration switch in the [Azure portal][AZURE-portal] in the **[User settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/UserSettings/menuId/)** section under **Enterprise applications**.
-The `prompt=consent` parameter can also be used by applications that request permissions that do not require admin consent. An example of when this would be used is if the application requires an experience where the tenant admin "signs up" one time, and no other users are prompted for consent from that point on.
+The `prompt=consent` parameter can also be used by applications that request permissions that don't require admin consent. An example of when this would be used is if the application requires an experience where the tenant admin "signs up" one time, and no other users are prompted for consent from that point on.
If an application requires admin consent and an admin signs in without the `prompt=consent` parameter being sent, when the admin successfully consents to the application it will apply **only for their user account**. Regular users will still not be able to sign in or consent to the application. This feature is useful if you want to give the tenant administrator the ability to explore your application before allowing other users access.
This can be a problem if your logical application consists of two or more applic
This is demonstrated in a multi-tier native client calling web API sample in the [Related content](#related-content) section at the end of this article. The following diagram provides an overview of consent for a multi-tier app registered in a single tenant.
-![Illustrates consent to multi-tier known client app][Consent-Multi-Tier-Known-Client]
+![Diagram which illustrates consent to multi-tier known client app.][Consent-Multi-Tier-Known-Client]
#### Multiple tiers in multiple tenants
If it's an API built by an organization other than Microsoft, the developer of t
The following diagram provides an overview of consent for a multi-tier app registered in different tenants.
-![Illustrates consent to multi-tier multi-party app][Consent-Multi-Tier-Multi-Party]
+![Diagram which illustrates consent to multi-tier multi-party app.][Consent-Multi-Tier-Multi-Party]
### Revoking consent
Users and administrators can revoke consent to your application at any time:
* Users revoke access to individual applications by removing them from their [Access Panel Applications][AAD-Access-Panel] list.
* Administrators revoke access to applications by removing them using the [Enterprise applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps) section of the [Azure portal][AZURE-portal].
-If an administrator consents to an application for all users in a tenant, users cannot revoke access individually. Only the administrator can revoke access, and only for the whole application.
+If an administrator consents to an application for all users in a tenant, users can't revoke access individually. Only the administrator can revoke access, and only for the whole application.
## Multi-tenant applications and caching access tokens
-Multi-tenant applications can also get access tokens to call APIs that are protected by Azure AD. A common error when using the Microsoft Authentication Library (MSAL) with a multi-tenant application is to initially request a token for a user using /common, receive a response, then request a subsequent token for that same user also using /common. Because the response from Azure AD comes from a tenant, not /common, MSAL caches the token as being from the tenant. The subsequent call to /common to get an access token for the user misses the cache entry, and the user is prompted to sign in again. To avoid missing the cache, make sure subsequent calls for an already signed in user are made to the tenant's endpoint.
+Multi-tenant applications can also get access tokens to call APIs that are protected by Azure AD. A common error when using the Microsoft Authentication Library (MSAL) with a multi-tenant application is to initially request a token for a user using `/common`, receive a response, then request a subsequent token for that same user also using `/common`. Because the response from Azure AD comes from a tenant, not `/common`, MSAL caches the token as being from the tenant. The subsequent call to `/common` to get an access token for the user misses the cache entry, and the user is prompted to sign in again. To avoid missing the cache, make sure subsequent calls for an already signed in user are made to the tenant's endpoint.
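A minimal sketch of that advice, assuming MSAL Node and a `PublicClientApplication` named `pca` (names and scopes are illustrative, not from the article):

```javascript
async function getTokenForSignedInUser(pca) {
  // First sign-in can go through /common because the tenant isn't known yet.
  const interactiveResult = await pca.acquireTokenInteractive({
    scopes: ["User.Read"],
    openBrowser: async (url) => console.log(`Open this URL to sign in: ${url}`),
  });
  const account = interactiveResult.account;

  // Subsequent requests for the same user target the user's home tenant
  // instead of /common, so MSAL finds the cached token.
  const silentResult = await pca.acquireTokenSilent({
    account,
    scopes: ["User.Read"],
    authority: `https://login.microsoftonline.com/${account.tenantId}`,
  });
  return silentResult.accessToken;
}
```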
## Related content
Multi-tenant applications can also get access tokens to call APIs that are prote
## Next steps
-In this article, you learned how to build an application that can sign in a user from any Azure AD tenant. After enabling Single Sign-On (SSO) between your app and Azure AD, you can also update your application to access APIs exposed by Microsoft resources like Microsoft 365. This lets you offer a personalized experience in your application, such as showing contextual information to the users, like their profile picture or their next calendar appointment.
+In this article, you learned how to convert a single tenant application to a multi-tenant application. After enabling single sign-on (SSO) between your app and Azure AD, update your app to access APIs exposed by Microsoft resources like Microsoft 365. This lets you offer a personalized experience in your application, such as showing contextual information to the users, for example, profile pictures and calendar appointments.
To learn more about making API calls to Azure AD and Microsoft 365 services like Exchange, SharePoint, OneDrive, OneNote, and more, visit [Microsoft Graph API][MSFT-Graph-overview].
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
>
> ### Requesting tokens
>
-> In the first leg of authorization code flow with PKCE, prepare and send an authorization code request with the appropriate parameters. Then, in the second leg of the flow, listen for the authorization code response. Once the code is obtained, exchange it to obtain a token.
+> You can use MSAL Node's acquireTokenInteractive public API to acquire tokens via an external user-agent such as the default system browser.
> > ```javascript
-> // The redirect URI you setup during app registration with a custom file protocol "msal"
-> const redirectUri = "msal://redirect";
->
-> const cryptoProvider = new CryptoProvider();
->
-> const pkceCodes = {
-> challengeMethod: "S256", // Use SHA256 Algorithm
-> verifier: "", // Generate a code verifier for the Auth Code Request first
-> challenge: "" // Generate a code challenge from the previously generated code verifier
-> };
->
-> /**
-> * Starts an interactive token request
-> * @param {object} authWindow: Electron window object
-> * @param {object} tokenRequest: token request object with scopes
-> */
-> async function getTokenInteractive(authWindow, tokenRequest) {
->
-> /**
-> * Proof Key for Code Exchange (PKCE) Setup
-> *
-> * MSAL enables PKCE in the Authorization Code Grant Flow by including the codeChallenge and codeChallengeMethod
-> * parameters in the request passed into getAuthCodeUrl() API, as well as the codeVerifier parameter in the
-> * second leg (acquireTokenByCode() API).
-> */
->
-> const {verifier, challenge} = await cryptoProvider.generatePkceCodes();
->
-> pkceCodes.verifier = verifier;
-> pkceCodes.challenge = challenge;
->
-> const authCodeUrlParams = {
-> redirectUri: redirectUri
-> scopes: tokenRequest.scopes,
-> codeChallenge: pkceCodes.challenge, // PKCE Code Challenge
-> codeChallengeMethod: pkceCodes.challengeMethod // PKCE Code Challenge Method
-> };
->
-> const authCodeUrl = await pca.getAuthCodeUrl(authCodeUrlParams);
->
-> // register the custom file protocol in redirect URI
-> protocol.registerFileProtocol(redirectUri.split(":")[0], (req, callback) => {
-> const requestUrl = url.parse(req.url, true);
-> callback(path.normalize(`${__dirname}/${requestUrl.path}`));
-> });
->
-> const authCode = await listenForAuthCode(authCodeUrl, authWindow); // see below
->
-> const authResponse = await pca.acquireTokenByCode({
-> redirectUri: redirectUri,
-> scopes: tokenRequest.scopes,
-> code: authCode,
-> codeVerifier: pkceCodes.verifier // PKCE Code Verifier
-> });
->
-> return authResponse;
-> }
->
-> /**
-> * Listens for auth code response from Azure AD
-> * @param {string} navigateUrl: URL where auth code response is parsed
-> * @param {object} authWindow: Electron window object
-> */
-> async function listenForAuthCode(navigateUrl, authWindow) {
->
-> authWindow.loadURL(navigateUrl);
->
-> return new Promise((resolve, reject) => {
-> authWindow.webContents.on('will-redirect', (event, responseUrl) => {
-> try {
-> const parsedUrl = new URL(responseUrl);
-> const authCode = parsedUrl.searchParams.get('code');
-> resolve(authCode);
-> } catch (err) {
-> reject(err);
-> }
-> });
-> });
+> const { shell } = require('electron');
+>
+> try {
+> const openBrowser = async (url) => {
+> await shell.openExternal(url);
+> };
+>
+> const authResponse = await pca.acquireTokenInteractive({
+> scopes: ["User.Read"],
+> openBrowser,
+> successTemplate: '<h1>Successfully signed in!</h1> <p>You can close this window now.</p>',
+> failureTemplate: '<h1>Oops! Something went wrong</h1> <p>Check the console for more information.</p>',
+> });
+>
+> return authResponse;
+> } catch (error) {
+> throw error;
> }
> ```
->
-> > |Where:| Description |
-> > |||
-> > | `authWindow` | Current Electron window in process. |
-> > | `tokenRequest` | Contains the scopes being requested, such as `"User.Read"` for Microsoft Graph or `"api://<Application ID>/access_as_user"` for custom web APIs. |
+>
>
> ## Next steps
>
active-directory Tutorial V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-desktop.md
First, complete the steps in [Register an application with the Microsoft identit
Use the following settings for your app registration:
- Name: `ElectronDesktopApp` (suggested)
-- Supported account types: **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)**
+- Supported account types: **Accounts in my organizational directory only (single tenant)**
- Platform type: **Mobile and desktop applications**
-- Redirect URI: `msal{Your_Application/Client_Id}://auth`
+- Redirect URI: `http://localhost`
## Create the project
Create a folder to host your application, for example *ElectronDesktopApp*.
```console
npm init -y
- npm install --save @azure/msal-node axios bootstrap dotenv jquery popper.js
- npm install --save-dev babel electron@18.2.3 webpack
+ npm install --save @azure/msal-node @microsoft/microsoft-graph-sdk isomorphic-fetch bootstrap jquery popper.js
+ npm install --save-dev electron@20.0.0
```
2. Then, create a folder named *App*. Inside this folder, create a file named *index.html* that will serve as UI. Add the following code there:
The renderer methods are exposed by the preload script found in the *preload.js*
:::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/preload.js":::
-This preload script exposes a renderer methods to give the renderer process controlled access to some `Node APIs` by applying IPC channels that have been configured for communication between the main and renderer processes.
+This preload script exposes a renderer API to give the renderer process controlled access to some `Node APIs` by applying IPC channels that have been configured for communication between the main and renderer processes.
-6. Next, create *UIManager.js* class inside the *App* folder and add the following code:
-
- :::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/UIManager.js":::
-
-7. After that, create *CustomProtocolListener.js* class and add the following code there:
-
- :::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/CustomProtocolListener.js":::
-
-*CustomProtocolListener* class can be instantiated in order to register and unregister a custom typed protocol on which MSAL Node can listen for Auth Code responses.
-
-8. Finally, create a file named *constants.js* that will store the strings constants for describing the application **events**:
+6. Finally, create a file named *constants.js* that will store the strings constants for describing the application **events**:
:::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/constants.js":::
ElectronDesktopApp/
├── App
│   ├── AuthProvider.js
│   ├── constants.js
-│   ├── CustomProtocolListener.js
-│   ├── fetch.js
+│   ├── graph.js
│   ├── index.html
│   ├── main.js
│   ├── preload.js
│   ├── renderer.js
-│   ├── UIManager.js
│   ├── authConfig.js
├── package.json
```
In *App* folder, create a file named *AuthProvider.js*. The *AuthProvider.js* fi
:::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/AuthProvider.js":::
-In the code snippet above, we first initialized MSAL Node `PublicClientApplication` by passing a configuration object (`msalConfig`). We then exposed `login`, `logout` and `getToken` methods to be called by main module (*main.js*). In `login` and `getToken`, we acquire ID and access tokens, respectively, by first requesting an authorization code and then exchanging this with a token using MSAL Node `acquireTokenByCode` public API.
+In the code snippet above, we first initialized MSAL Node `PublicClientApplication` by passing a configuration object (`msalConfig`). We then exposed `login`, `logout` and `getToken` methods to be called by main module (*main.js*). In `login` and `getToken`, we acquire ID and access tokens using MSAL Node `acquireTokenInteractive` public API.
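The actual *AuthProvider.js* is included from the sample repository above. As a rough sketch of the shape this paragraph describes (simplified error handling, and an assumed `openBrowser` callback that opens the system browser), it might look like:

```javascript
const { PublicClientApplication } = require('@azure/msal-node');
const { shell } = require('electron');
const { msalConfig } = require('./authConfig');

class AuthProvider {
    constructor() {
        this.msalApp = new PublicClientApplication(msalConfig);
        this.account = null;
    }

    async login() {
        // Interactive sign-in via the system browser; MSAL caches the account.
        const result = await this.msalApp.acquireTokenInteractive({
            scopes: ['User.Read'],
            openBrowser: async (url) => shell.openExternal(url),
        });
        this.account = result.account;
        return result.account;
    }

    async getToken(scopes) {
        // Try the token cache first; fall back to interactive sign-in.
        try {
            const result = await this.msalApp.acquireTokenSilent({
                account: this.account,
                scopes,
            });
            return result.accessToken;
        } catch {
            const result = await this.msalApp.acquireTokenInteractive({
                scopes,
                openBrowser: async (url) => shell.openExternal(url),
            });
            this.account = result.account;
            return result.accessToken;
        }
    }

    async logout() {
        // Remove the cached account so the next sign-in is interactive again.
        if (this.account) {
            await this.msalApp.getTokenCache().removeAccount(this.account);
            this.account = null;
        }
    }
}

module.exports = AuthProvider;
```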
-## Add a method to call a web API
+## Add Microsoft Graph SDK
-Create another file named *fetch.js*. This file will contain an Axios HTTP client for making REST calls to the Microsoft Graph API.
+Create a file named *graph.js*. The *graph.js* file will contain an instance of the Microsoft Graph SDK Client to facilitate accessing data on the Microsoft Graph API, using the access token obtained by MSAL Node:
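The file is pulled in from the sample repository, so its exact contents aren't shown here. A minimal sketch of the idea, assuming the `@microsoft/microsoft-graph-client` package (the package name and shape in the sample may differ):

```javascript
const { Client } = require('@microsoft/microsoft-graph-client');
require('isomorphic-fetch'); // fetch polyfill used by the Graph client

/**
 * Returns a Microsoft Graph client that authenticates requests with the
 * access token obtained from MSAL Node.
 */
function getGraphClient(accessToken) {
    return Client.init({
        authProvider: (done) => {
            // Hand the MSAL-acquired access token to the Graph SDK.
            done(null, accessToken);
        },
    });
}

module.exports = { getGraphClient };
```

A caller could then read the signed-in user's profile with something like `getGraphClient(token).api('/me').get()`.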
## Add app registration details
-Finally, create an environment file to store the app registration details that will be used when acquiring tokens. To do so, create a file named *authConfig.js* inside the root folder of the sample (*ElectronDesktopApp*), and add the following code:
+Create an environment file to store the app registration details that will be used when acquiring tokens. To do so, create a file named *authConfig.js* inside the root folder of the sample (*ElectronDesktopApp*), and add the following code:
:::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/authConfig.js":::
Fill in these details with the values you obtain from Azure app registration por
- `Enter_the_Cloud_Instance_Id_Here`: The Azure cloud instance in which your application is registered.
  - For the main (or *global*) Azure cloud, enter `https://login.microsoftonline.com/`.
  - For **national** clouds (for example, China), you can find appropriate values in [National clouds](authentication-national-cloud.md).
-- `Enter_the_Redirect_Uri_Here`: The Redirect Uri of the application you registered `msal{Your_Application/Client_Id}:///auth`.
- `Enter_the_Graph_Endpoint_Here` is the instance of the Microsoft Graph API the application should communicate with.
  - For the **global** Microsoft Graph API endpoint, replace both instances of this string with `https://graph.microsoft.com/`.
  - For endpoints in **national** cloud deployments, see [National cloud deployments](/graph/deployments) in the Microsoft Graph documentation.
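As a sketch of how those placeholders typically fit together (illustrative only; the placeholder names `Enter_the_Application_Id_Here` and `Enter_the_Tenant_Info_Here` are assumptions, and the sample's actual *authConfig.js* may expose additional values):

```javascript
// Replace the placeholder strings with your app registration details.
const AAD_ENDPOINT_HOST = 'Enter_the_Cloud_Instance_Id_Here'; // for example, https://login.microsoftonline.com/
const GRAPH_ENDPOINT_HOST = 'Enter_the_Graph_Endpoint_Here';  // for example, https://graph.microsoft.com/

const msalConfig = {
    auth: {
        clientId: 'Enter_the_Application_Id_Here',                   // assumed placeholder name
        authority: `${AAD_ENDPOINT_HOST}Enter_the_Tenant_Info_Here`, // assumed placeholder name
    },
};

const protectedResources = {
    graphMe: {
        endpoint: `${GRAPH_ENDPOINT_HOST}v1.0/me`,
        scopes: ['User.Read'],
    },
};

module.exports = { msalConfig, protectedResources };
```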
If you consent to the requested permissions, the web application displays your
## Test web API call
-After you sign in, select **See Profile** to view the user profile information returned in the response from the call to the Microsoft Graph API:
+After you sign in, select **See Profile** to view the user profile information returned in the response from the call to the Microsoft Graph API. After consent, you'll view the profile information returned in the response:
:::image type="content" source="media/tutorial-v2-nodejs-desktop/desktop-04-profile.png" alt-text="profile information from Microsoft Graph":::
-Select **Read Mails** to view the messages in user's account. You'll be presented with a consent screen:
--
-After consent, you'll view the messages returned in the response from the call to the Microsoft Graph API:
-
## How the application works
-When a user selects the **Sign In** button for the first time, get `getTokenInteractive` method of *AuthProvider.js* is called. This method redirects the user to sign-in with the Microsoft identity platform endpoint and validates the user's credentials, and then obtains an **authorization code**. This code is then exchanged for an access token using `acquireTokenByCode` public API of MSAL Node.
-
-At this point, a PKCE-protected authorization code is sent to the CORS-protected token endpoint and is exchanged for tokens. An ID token, access token, and refresh token are received by your application and processed by MSAL Node, and the information contained in the tokens is cached.
+When a user selects the **Sign In** button for the first time, the `acquireTokenInteractive` method of MSAL Node is called. This method redirects the user to sign in with the Microsoft identity platform endpoint, validates the user's credentials, obtains an **authorization code**, and then exchanges that code for an ID token, access token, and refresh token. MSAL Node also caches these tokens for future use.
The ID token contains basic information about the user, like their display name. The access token has a limited lifetime and expires after 24 hours. If you plan to use these tokens for accessing a protected resource, your back-end server *must* validate them to guarantee the token was issued to a valid user for your application.
active-directory Cross Cloud Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md
When Azure AD organizations in separate Microsoft Azure clouds need to collaborate, they can use Microsoft cloud settings to enable Azure AD B2B collaboration. B2B collaboration is available between the following global and sovereign Microsoft Azure clouds:
-- Microsoft Azure global cloud and Microsoft Azure Government
-- Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+- Microsoft Azure commercial cloud and Microsoft Azure Government
+- Microsoft Azure commercial cloud and Microsoft Azure China 21Vianet
To set up B2B collaboration between partner organizations in different Microsoft Azure clouds, each partner mutually agrees to configure B2B collaboration with each other. In each organization, an admin completes the following steps:
Follow these steps to add the tenant you want to collaborate with to your Organi
![Screenshot showing an organization added with default settings.](media/cross-cloud-settings/org-specific-settings-inherited.png)
-
1. If you want to change the cross-tenant access settings for this organization, select the **Inherited from default** link under the **Inbound access** or **Outbound access** column. Then follow the detailed steps in these sections:
   - [Modify inbound access settings](cross-tenant-access-settings-b2b-collaboration.md#modify-inbound-access-settings)
   - [Modify outbound access settings](cross-tenant-access-settings-b2b-collaboration.md#modify-outbound-access-settings)
+## Sign-in endpoints
+
+After enabling collaboration with an organization from a different Microsoft cloud, cross-cloud Azure AD guest users can now sign in to your multi-tenant or Microsoft first-party apps by using a [common endpoint](redemption-experience.md#redemption-and-sign-in-through-a-common-endpoint) (in other words, a general app URL that doesn't include your tenant context). During the sign-in process, the guest user chooses **Sign-in options**, and then selects **Sign in to an organization**. The user then types the name of your organization and continues signing in using their Azure AD credentials.
+
+Cross-cloud Azure AD guest users can also use application endpoints that include your tenant information, for example:
+
+ * `https://myapps.microsoft.com/?tenantid=<your tenant ID>`
+ * `https://myapps.microsoft.com/<your verified domain>.onmicrosoft.com`
+ * `https://contoso.sharepoint.com/sites/testsite`
+
+You can also give cross-cloud Azure AD guest users a direct link to an application or resource by including your tenant information, for example `https://myapps.microsoft.com/signin/Twitter/<application ID>?tenantId=<your tenant ID>`.
+
+## Supported scenarios with cross-cloud Azure AD guest users
+
+The following scenarios are supported when collaborating with an organization from a different Microsoft cloud:
+
+- Use B2B collaboration to invite a user in the partner tenant to access resources in your organization, including web line-of-business apps, SaaS apps, and SharePoint Online sites, documents, and files.
+- Use B2B collaboration to [share Power BI content to a user in the partner tenant](/power-bi/enterprise/service-admin-azure-ad-b2b#cross-cloud-b2b).
+- Apply Conditional Access policies to the B2B collaboration user and opt to trust multi-factor authentication or device claims (compliant claims and hybrid Azure AD joined claims) from the user's home tenant.
+
## Next steps
See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
You can configure organization-specific settings by adding an organization and m
Microsoft cloud settings let you collaborate with organizations from different Microsoft Azure clouds. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following clouds:
-- Microsoft Azure global cloud and Microsoft Azure Government
-- Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+- Microsoft Azure commercial cloud and Microsoft Azure Government
+- Microsoft Azure commercial cloud and Microsoft Azure China (operated by 21Vianet)
> [!NOTE]
> Microsoft Azure Government includes the Office GCC-High and DoD clouds.
To set up B2B collaboration, both organizations configure their Microsoft cloud
- Use B2B collaboration to invite a user in the partner tenant to access resources in your organization, including web line-of-business apps, SaaS apps, and SharePoint Online sites, documents, and files.
- Use B2B collaboration to [share Power BI content to a user in the partner tenant](/power-bi/enterprise/service-admin-azure-ad-b2b#cross-cloud-b2b).
-- Apply Conditional Access policies to the B2B collaboration user and opt to trust device claims (compliant claims and hybrid Azure AD joined claims) from the user's home tenant.
+- Apply Conditional Access policies to the B2B collaboration user and opt to trust multi-factor authentication or device claims (compliant claims and hybrid Azure AD joined claims) from the user's home tenant.
> [!NOTE]
> B2B direct connect is not supported for collaboration with Azure AD tenants in a different Microsoft cloud.
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
Title: Leave an organization as a guest user - Azure Active Directory
+ Title: Leave an organization - Azure Active Directory
+ description: Shows how an Azure AD B2B guest user can leave an organization by using the Access Panel.
adobe-target: true
# Leave an organization as an external user
-As an Azure Active Directory (Azure AD) B2B collaboration or B2B direct connect user, you can decide to leave an organization at any time if you no longer need to use apps from that organization or maintain any association.
+As an Azure Active Directory (Azure AD) [B2B collaboration](/articles/active-directory/external-identities/what-is-b2b.md) or [B2B direct connect](/articles/active-directory/external-identities/b2b-direct-connect-overview.md) user, you can leave an organization at any time if you no longer need to use apps from that organization, or maintain any association.
-You can usually leave an organization on your own without having to contact an administrator. However, in some cases this option won't be available and you'll need to contact your tenant admin, who can delete your account in the external organization.
+You can usually leave an organization on your own without having to contact an administrator. However, in some cases this option won't be available and you'll need to contact your tenant admin, who can delete your account in the external organization. This article is intended for administrators. If you're a user looking for information about how to manage and leave an organization, see the [Manage organizations article.](https://support.microsoft.com/account-billing/manage-organizations-for-a-work-or-school-account-in-the-my-account-portal-a9b65a70-fec5-4a1a-8e00-09f99ebdea17)
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-dsr-and-stp-note.md)]

## What organizations do I belong to?
-1. To view the organizations you belong to, first open your **My Account** page by doing one of the following:
+1. To view the organizations you belong to, first open your **My Account** page. You either have a work or school account created by an organization or a personal account such as for Xbox, Hotmail, or Outlook.com.
- If you're using a work or school account, go to https://myaccount.microsoft.com and sign in.
- If you're using a personal account or email one-time passcode, you'll need to use a My Account URL that includes your tenant name or tenant ID, for example: https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com or https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789.
You can usually leave an organization on your own without having to contact an a
![Screenshot showing the list of organizations you belong to.](media/leave-the-organization/organization-list.png)
- - **Home organization**: Your home organization is listed first. This is the organization that owns your work or school account. Because your account is managed by your administrator, you're not allowed to leave your home organization (you'll see there's no option to **Leave**). If you don't have an assigned home organization, you'll just see a single heading that says **Organizations** with the list of your associated organizations.
+ - **Home organization**: Your home organization is listed first. This organization owns your work or school account. Because your account is managed by your administrator, you're not allowed to leave your home organization. You'll see there's no link to **Leave**. If you don't have an assigned home organization, you'll just see a single heading that says **Organizations** with the list of your associated organizations.
- **Other organizations you collaborate with**: You'll also see the other organizations that you've signed in to previously using your work or school account. You can decide to leave any of these organizations at any time.
If your organization allows users to remove themselves from external organizatio
![Screenshot showing Leave organization option in the user interface.](media/leave-the-organization/leave-org.png) 1. When asked to confirm, select **Leave**.
-1. If you select **Leave** for an organization but you see the following message, it means you'll need to contact the organization's admin or privacy contact and ask them to remove you from their organization.
+1. If you select **Leave** for an organization but you see the following message, it means you'll need to contact the organization's admin, or privacy contact and ask them to remove you from their organization.
![Screenshot showing the message when you need permission to leave an organization.](media/leave-the-organization/need-permission-leave.png) ## Why can't I leave an organization?
-In the **Home organization** section, there's no option to **Leave** your organization. Only an administrator can remove your account from your home organization.
+In the **Home organization** section, there's no link to **Leave** your organization. Only an administrator can remove your account from your home organization.
For the external organizations listed under **Other organizations you collaborate with**, you might not be able to leave on your own, for example when:
In these cases, you can select **Leave**, but then you'll see a message saying y
## More information for administrators
-Administrators can use the **External user leave settings** to control whether external users can remove themselves from their organization. If you disallow the ability for external users to remove themselves from your organization, external users will need to contact your admin or privacy contact to be removed.
+Administrators can use the **External user leave settings** to control whether external users can remove themselves from their organization. If you disallow the ability for external users to remove themselves from your organization, external users will need to contact your admin, or privacy contact to be removed.
> [!IMPORTANT] > You can configure **External user leave settings** only if you have [added your privacy information](../fundamentals/active-directory-properties-area.md) to your Azure AD tenant. Otherwise, this setting will be unavailable. We recommend adding your privacy information to allow external users to review your policies and email your privacy contact when necessary.
Administrators can use the **External user leave settings** to control whether e
1. Under **External user leave** settings, choose whether to allow external users to leave your organization themselves: - **Yes**: Users can leave the organization themselves without approval from your admin or privacy contact.
- - **No**: Users can't leave your organization themselves. They'll see a message guiding them to contact your admin or privacy contact to request removal from your organization.
+ - **No**: Users can't leave your organization themselves. They'll see a message guiding them to contact your admin, or privacy contact to request removal from your organization.
- ![Screenshot showing External user leave settings in the portal.](media/leave-the-organization/external-user-leave-settings.png)
+
+ :::image type="content" source="media/leave-the-organization/external-user-leave-settings.png" alt-text="Screenshot showing External user leave settings in the portal.":::
### Account removal
If desired, a tenant administrator can permanently delete the account at any tim
1. Select the check box next to a deleted user, and then select **Delete permanently**.
-Once permanent deletion begins, whether it's initiated by the admin or the end of the soft deletion period, it can take up to an additional 30 days for data removal ([learn more](/compliance/regulatory/gdpr-dsr-azure#step-5-delete)).
+Permanent deletion can be initiated by the admin, or it happens at the end of the soft deletion period. Permanent deletion can take up to an extra 30 days for data removal ([learn more](/compliance/regulatory/gdpr-dsr-azure#step-5-delete)).
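Admins who prefer to script this cleanup can work against the directory deletedItems endpoints in Microsoft Graph. This is a minimal PowerShell sketch, not the documented portal procedure; the object ID is a placeholder, and it assumes your directory role together with the User.ReadWrite.All scope permits hard deletion.

```powershell
# Minimal sketch: list soft-deleted users, then permanently delete one (irreversible).
# The object ID below is a placeholder.
Connect-MgGraph -Scopes "User.ReadWrite.All"

# List recently soft-deleted users
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.user?`$select=id,displayName,deletedDateTime"

# Permanently delete a specific soft-deleted user
$deletedUserId = "00000000-0000-0000-0000-000000000000"   # placeholder
Invoke-MgGraphRequest -Method DELETE `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/$deletedUserId"
```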
> [!NOTE] > For B2B direct connect users, data removal begins as soon as the user selects **Leave** in the confirmation message and can take up to 30 days to complete ([learn more](/compliance/regulatory/gdpr-dsr-azure#delete-a-users-data-when-there-is-no-account-in-the-azure-tenant)). ## Next steps
-Learn more about [Azure AD B2B collaboration](what-is-b2b.md) and [Azure AD B2B direct connect](b2b-direct-connect-overview.md)
+- Learn more about [Azure AD B2B collaboration](what-is-b2b.md) and [Azure AD B2B direct connect](b2b-direct-connect-overview.md)
+- [Close your Microsoft account](/microsoft-365/commerce/close-your-account)
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
When you add a guest user to your directory, the guest user account has a consen
## Redemption and sign-in through a common endpoint
-Guest users can now sign in to your multi-tenant or Microsoft first-party apps through a common endpoint (URL), for example `https://myapps.microsoft.com`. Previously, a common URL would redirect a guest user to their home tenant instead of your resource tenant for authentication, so a tenant-specific link was required (for example `https://myapps.microsoft.com/?tenantid=<tenant id>`). Now the guest user can go to the application's common URL, choose **Sign-in options**, and then select **Sign in to an organization**. The user then types the name of your organization.
+Guest users can now sign in to your multi-tenant or Microsoft first-party apps through a common endpoint (URL), for example `https://myapps.microsoft.com`. Previously, a common URL would redirect a guest user to their home tenant instead of your resource tenant for authentication, so a tenant-specific link was required (for example `https://myapps.microsoft.com/?tenantid=<tenant id>`). Now the guest user can go to the application's common URL, choose **Sign-in options**, and then select **Sign in to an organization**. The user then types the domain name of your organization.
![Screenshots showing common endpoints used for signing in.](media/redemption-experience/common-endpoint-flow-small.png)
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
When a requirement exists to deploy IaaS workloads to Azure that require identit
![Diagram that shows Azure AD authentication to Azure VMs.](media/secure-with-azure-ad-resource-management/sign-into-vm.png)
-**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad).
+**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux).
**Credentials**: One of the key benefits of signing into virtual machines in Azure using Azure AD authentication is the ability to use the same federated or managed Azure AD credentials that you normally use for access to Azure AD services for sign-in to the virtual machine. >[!NOTE] >The Azure AD tenant that is used for sign-in in this scenario is the Azure AD tenant that is associated with the subscription that the virtual machine has been provisioned into. This Azure AD tenant can be one that has identities synchronized from on-premises AD DS. Organizations should make an informed choice that aligns with their isolation principles when choosing which subscription and Azure AD tenant they wish to use for sign-in to these servers.
-**Network Requirements**: These virtual machines will need to access Azure AD for authentication so you must ensure that the virtual machines network configuration permits outbound access to Azure AD endpoints on 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure-docs-archive-pr/virtual-machines/linux/login-using-aad) for more information.
+**Network Requirements**: These virtual machines need to access Azure AD for authentication, so you must ensure that the virtual machines' network configuration permits outbound access to Azure AD endpoints on port 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](/azure/active-directory/devices/howto-vm-sign-in-azure-ad-linux) for more information.
**Role-based Access Control (RBAC)**: Two RBAC roles are available to provide the appropriate level of access to these virtual machines. These RBAC roles can be configured via the Azure AD Portal or via the Azure Cloud Shell Experience. For more information, see [Configure role assignments for the VM](../devices/howto-vm-sign-in-azure-ad-windows.md).
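As a quick illustration of assigning one of those roles, the Az PowerShell sketch below grants a user the built-in Virtual Machine Administrator Login role scoped to a single VM. The user, resource group, and VM names are placeholders; scope the assignment in line with your isolation model.

```powershell
# Sketch: grant administrator-level Azure AD sign-in to one VM.
# The user, resource group, and VM names are placeholders.
Connect-AzAccount

$scope = (Get-AzVM -ResourceGroupName "rg-example" -Name "vm-example").Id
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Virtual Machine Administrator Login" `
    -Scope $scope
```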
For this isolated model, it's assumed that there's no connectivity to the VNet t
* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
-* [Best practices](secure-with-azure-ad-best-practices.md)
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
The set of default permissions depends on whether the user is a native member of
| | - | - Users and contacts | <ul><li>Enumerate the list of all users and contacts<li>Read all public properties of users and contacts</li><li>Invite guests<li>Change their own password<li>Manage their own mobile phone number<li>Manage their own photo<li>Invalidate their own refresh tokens</li></ul> | <ul><li>Read their own properties<li>Read display name, email, sign-in name, photo, user principal name, and user type properties of other users and contacts<li>Change their own password<li>Search for another user by object ID (if allowed)<li>Read manager and direct report information of other users</li></ul> | <ul><li>Read their own properties<li>Change their own password</li><li>Manage their own mobile phone number</li></ul> Groups | <ul><li>Create security groups<li>Create Microsoft 365 groups<li>Enumerate the list of all groups<li>Read all properties of groups<li>Read non-hidden group memberships<li>Read hidden Microsoft 365 group memberships for joined groups<li>Manage properties, ownership, and membership of groups that the user owns<li>Add guests to owned groups<li>Manage dynamic membership settings<li>Delete owned groups<li>Restore owned Microsoft 365 groups</li></ul> | <ul><li>Read properties of non-hidden groups, including membership and ownership (even non-joined groups)<li>Read hidden Microsoft 365 group memberships for joined groups<li>Search for groups by display name or object ID (if allowed)</li></ul> | <ul><li>Read object ID for joined groups<li>Read membership and ownership of joined groups in some Microsoft 365 apps (if allowed)</li></ul>
-Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications</li></ul> | <ul><li>Read properties of registered and enterprise applications</li></ul> | <ul><li>Read properties of registered and enterprise applications
+Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>List permissions granted to applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications</ul> | <ul><li>Read properties of registered and enterprise applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications</li><li>List permissions granted to applications</li></ul>
Devices</li></ul> | <ul><li>Enumerate the list of all devices<li>Read all properties of devices<li>Manage all properties of owned devices</li></ul> | No permissions | No permissions Organization | <ul><li>Read all company information<li>Read all domains<li>Read configuration of certificate-based authentication<li>Read all partner contracts</li></ul> | <ul><li>Read company display name<li>Read all domains<li>Read configuration of certificate-based authentication</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul> Roles and scopes | <ul><li>Read all administrative roles and memberships<li>Read all properties and membership of administrative units</li></ul> | No permissions | No permissions
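To see the member-user defaults (the first permission column) in practice, a non-admin member can sign in with Microsoft Graph PowerShell and enumerate basic user properties. This is only a sketch and assumes the delegated User.ReadBasic.All scope has been consented for the Graph PowerShell application; a guest signing in the same way is limited to the narrower property set described above.

```powershell
# Illustrative only: a member (non-admin) user reading public properties that
# default permissions allow. Assumes the delegated User.ReadBasic.All scope is consented.
Connect-MgGraph -Scopes "User.ReadBasic.All"

Get-MgUser -Top 5 -Property displayName,userPrincipalName |
    Select-Object DisplayName, UserPrincipalName
```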
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The following table shows the scheduling (trigger) relevant attributes and the m
|employeeLeaveDateTime|DateTimeOffset|Yes|Not currently|Not currently| > [!NOTE]
-> To take advantaged of leaver scenarios, you can set the employeeLeaveDateTime manually for cloud-only users. For more information, see: [Set employeeLeaveDateTime](set-employee-leave-date-time.md)
+> To take advantage of leaver scenarios, you can set the employeeLeaveDateTime manually for cloud-only users. For more information, see [Configure the employeeLeaveDateTime property for a user](/graph/tutorial-lifecycle-workflows-set-employeeleavedatetime).
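For a quick illustration, here's a minimal Microsoft Graph PowerShell sketch, adapted from the pattern used elsewhere in the Lifecycle Workflows docs, that sets the attribute on a cloud-only user. The object ID and leave date are placeholders, and the User-LifeCycleInfo.ReadWrite.All scope is required.

```powershell
# Minimal sketch: set employeeLeaveDateTime on a cloud-only user.
# The object ID and leave date below are placeholders.
Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"
Select-MgProfile -Name "beta"   # the property is exposed on the beta profile

$userId = "00000000-0000-0000-0000-000000000000"
$body   = '{"employeeLeaveDateTime": "2022-12-31T23:59:59Z"}'
Update-MgUser -UserId $userId -BodyParameter $body
```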
This document explains how to set up synchronization from on-premises Azure AD Connect cloud sync and Azure AD Connect for the required attributes.
active-directory Set Employee Leave Date Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/set-employee-leave-date-time.md
- Title: Set employeeLeaveDateTime
-description: Explains how to manually set employeeLeaveDateTime.
---- Previously updated : 09/07/2022---
-# Set employeeLeaveDateTime
-
-This article describes how to manually set the employeeLeaveDateTime attribute for a user. This attribute can be set as a trigger for leaver workflows created using Lifecycle Workflows.
-
-## Required permission and roles
-
-To set the employeeLeaveDateTime attribute, you must make sure the correct delegated roles and application permissions are set. They are as follows:
-
-### Delegated
-
-In delegated scenarios, the signed-in user needs the Global Administrator role to update the employeeLeaveDateTime attribute. One of the following delegated permissions is also required:
-- User-LifeCycleInfo.ReadWrite.All-- Directory.AccessAsUser.All-
-### Application
-
-Updating the employeeLeaveDateTime requires the User-LifeCycleInfo.ReadWrite.All application permission.
-
-## Set employeeLeaveDateTime via PowerShell
-To set the employeeLeaveDateTime for a user using PowerShell enter the following information:
-
- ```powershell
- Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"
- Select-MgProfile -Name "beta"
-
- $UserId = "<Object ID of the user>"
- $employeeLeaveDateTime = "<Leave date>"
-
- $Body = '{"employeeLeaveDateTime": "' + $employeeLeaveDateTime + '"}'
- Update-MgUser -UserId $UserId -BodyParameter $Body
-
- $User = Get-MgUser -UserId $UserId -Property employeeLeaveDateTime
- $User.AdditionalProperties
- ```
-
- This script is an example of a user who will leave on September 30, 2022 at 23:59.
-
- ```powershell
- Connect-MgGraph -Scopes "User-LifeCycleInfo.ReadWrite.All"
- Select-MgProfile -Name "beta"
-
- $UserId = "528492ea-779a-4b59-b9a3-b3773ef6da6d"
- $employeeLeaveDateTime = "2022-09-30T23:59:59Z"
-
- $Body = '{"employeeLeaveDateTime": "' + $employeeLeaveDateTime + '"}'
- Update-MgUser -UserId $UserId -BodyParameter $Body
-
- $User = Get-MgUser -UserId $UserId -Property employeeLeaveDateTime
- $User.AdditionalProperties
-```
--
-## Next steps
--- [How to synchronize attributes for Lifecycle workflows](how-to-lifecycle-workflow-sync-attributes.md)-- [Lifecycle Workflows templates](lifecycle-workflow-templates.md)
active-directory Tutorial Offboard Custom Workflow Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-graph.md
- Title: 'Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)'
-description: Tutorial for off-boarding users from an organization using Lifecycle workflows with Microsoft Graph (preview).
------- Previously updated : 08/18/2022----
-# Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)
-
-This tutorial provides a step-by-step guide on how to execute a real-time employee termination with Lifecycle workflows using the GRAPH API.
-
-This off-boarding scenario will run a workflow on-demand and accomplish the following tasks:
-
-1. Remove user from all groups
-2. Remove user from all Teams
-3. Delete user account
-
-You may learn more about running a workflow on-demand [here](on-demand-workflow.md).
-
-## Before you begin
-
-As part of the prerequisites for completing this tutorial, you will need an account that has group and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
-
-The leaver scenario can be broken down into the following:
-- **Prerequisite:** Create a user account that represents an employee leaving your organization-- **Prerequisite:** Prepare the user account with groups and Teams memberships-- Create the lifecycle management workflow-- Run the workflow on-demand-- Verify that the workflow was successfully executed--
-## Create a leaver workflow on-demand using Graph API
-
-Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation.
-
-|Parameter |Description |
-|||
-|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver and can support multiple strings. Category of workflow must also contain the category of its tasks. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
-|displayName | A unique string that identifies the workflow. |
-|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
-|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true" then the workflow will run. |
-|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. |
-|executionConditions | An argument that contains: <br><br>A time-based attribute and an integer parameter defining when a workflow will run between -60 and 60 <br><br>A scope attribute defining who the workflow runs for. |
-|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. <br><br>The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). |
-
-For the purpose of this tutorial, there are three tasks that will be introduced in this workflow:
-
-### Remove user from all groups task
-
-```Example
-"tasks":[
- {
- "continueOnError": true,
- "displayName": "Remove user from all groups",
- "description": "Remove user from all Azure AD groups memberships",
- "isEnabled": true,
- "taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc",
- "arguments": []
- }
- ]
-```
-
-> [!NOTE]
-> The task does not support removing users from Privileged Access Groups, Dynamic Groups, and synchronized Groups.
-
-### Remove user from all Teams task
-
-```Example
-"tasks":[
- {
- "continueOnError": true,
- "description": "Remove user from all Teams",
- "displayName": "Remove user from all Teams memberships",
- "isEnabled": true,
- "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
- "arguments": []
- }
- ]
-```
-### Delete user task
-
-```Example
-"tasks":[
- {
- "continueOnError": true,
- "displayName": "Delete user account",
- "description": "Delete user account in Azure AD",
- "isEnabled": true,
- "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
- "arguments": []
- }
- ]
-```
-### Leaver workflow on-demand
-
-The following POST API call will create a leaver workflow that can be executed on-demand for real-time employee terminations.
-
- ```http
-POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows
-Content-type: application/json
-
-{
- "category": "Leaver",
- "displayName": "Real-time employee termination",
- "description": "Execute real-time termination tasks for employees on their last day of work",
- "isEnabled": true,
- "isSchedulingEnabled": false,
- "executionConditions":{
- "@odata.type":"#microsoft.graph.identityGovernance.onDemandExecutionOnly"
- },
- "tasks": [
- {
- "continueOnError": false,
- "description": "Remove user from all Azure AD groups memberships",
- "displayName": "Remove user from all groups",
- "executionSequence": 1,
- "isEnabled": true,
- "taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc",
- "arguments": []
- },
- {
- "continueOnError": false,
- "description": "Remove user from all Teams memberships",
- "displayName": "Remove user from all Teams",
- "executionSequence": 2,
- "isEnabled": true,
- "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
- "arguments": []
- },
- {
- "continueOnError": false,
- "description": "Delete user account in Azure AD",
- "displayName": "Delete User Account",
- "executionSequence": 3,
- "isEnabled": true,
- "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
- "arguments": []
- }
- ]
-}
-```
-
-## Run the workflow
-
-Now that the workflow is created, it will automatically run the workflow every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
-
->[!NOTE]
->Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
-
-To run a workflow on-demand for users using the GRAPH API do the following steps:
-
-1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
-2. Make sure the top is still set to **POST**, and **beta** and `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the box. Change `<id>` to the ID of workflows.
- 3. Copy the code below in to the **Request body**
- 4. Replace `<userid>` in the code below with the value of the user's ID.
- 5. Select **Run query**
- ```json
- {
- "subjects":[
- {"id":"<userid>"}
-
- ]
-}
-
-```
-
-## Check tasks and workflow status
-
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user focused reports.
-
-To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response code of the POST API call that was used to create the workflow.
-
-This example will show you how to list the userProcessingResults for the last 7 days.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults
-```
-Furthermore, it is possible to get a summary of the userProcessingResults to get a quicker overview of large amounts of data, but for this a time span must be specified.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
-```
-You may also check the full details about the tasks of a given userProcessingResults. You will need to provide the workflow ID of the workflow, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults
-```
-
-## Next steps
-- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Execute employee offboarding tasks in real-time on their last day of work with Azure portal (preview)](tutorial-offboard-custom-workflow-portal.md)
active-directory Tutorial Offboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md
At any time, you may monitor the status of the workflows and the tasks. As a rem
## Next steps - [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)](tutorial-offboard-custom-workflow-graph.md)
+- [Complete employee offboarding tasks in real-time on their last day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-offboard-custom-workflow)
active-directory Tutorial Onboard Custom Workflow Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-graph.md
- Title: 'Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)'
-description: Tutorial for onboarding users to an organization using Lifecycle workflows with Microsoft Graph (preview).
------- Previously updated : 08/18/2022----
-# Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)
-
-This tutorial provides a step-by-step guide on how to automate pre-hire tasks with Lifecycle workflows using the GRAPH API.
-
-This pre-hire scenario will generate a temporary password for our new employee and send it via email to the user's new manager.
-
-## Before you begin
-
-Two accounts are required for the tutorial, one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set:
-- employeeHireDate must be set to today-- department must be set to sales-- manager attribute must be set, and the manager account should have a mailbox to receive an email.-
-For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial.
-
-Detailed breakdown of the relevant attributes:
-
- | Attribute | Description |Set on|
- |: |::|--|
- |mail|Used to notify manager of the new employees temporary access pass|Both|
- |manager|This attribute that is used by the lifecycle workflow|Employee|
- |employeeHireDate|Used to trigger the workflow|Both|
- |department|Used to provide the scope for the workflow|Both|
-
-The pre-hire scenario can be broken down into the following:
- - **Prerequisite:** Create two user accounts, one to represent an employee and one to represent a manager
- - **Prerequisite:** Edit the manager attribute for this scenario using Microsoft Graph Explorer
- - **Prerequisite:** Enabling and using Temporary Access Pass (TAP)
- - Creating the lifecycle management workflow
- - Triggering the workflow
- - Verifying the workflow was successfully executed
-
-## Create a pre-hire workflow using Graph API
-
-Now that the pre-hire workflow attributes have been updated and correctly populated, a pre-hire workflow can then be created to generate a Temporary Access Pass (TAP) and send it via email to a user's manager. Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation.
-
-|Parameter |Description |
-|||
-|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver and can support multiple strings. Category of workflow must also contain the category of its tasks. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
-|displayName | A unique string that identifies the workflow. |
-|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
-|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true" then the workflow will run. |
-|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. |
-|executionConditions | An argument that contains: <br><br> A time-based attribute and an integer parameter defining when a workflow will run between -60 and 60 <br><br>a scope attribute defining who the workflow runs for. |
-|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). |
-
-The following POST API call will create a pre-hire workflow that will generate a TAP and send it via email to the user's manager.
-
- ```http
- POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows
-Content-type: application/json
-
-{
- "displayName":"Onboard pre-hire employee",
- "description":"Configure pre-hire tasks for onboarding employees before their first day",
- "isEnabled":true,
- "isSchedulingEnabled": false,
- "executionConditions": {
- "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
- "scope": {
- "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
- "rule": "(department eq 'sales')"
- },
- "trigger": {
- "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
- "timeBasedAttribute": "employeeHireDate",
- "offsetInDays": -2
- }
- },
- "tasks":[
- {
- "isEnabled":true,
- "category": "Joiner",
- "taskDefinitionId":"1b555e50-7f65-41d5-b514-5894a026d10d",
- "displayName":"Generate TAP And Send Email",
- "description":"Generate Temporary Access Pass and send via email to user's manager",
- "arguments":[
- {
- "name": "tapLifetimeMinutes",
- "value": "480"
- },
- {
- "name": "tapIsUsableOnce",
- "value": "true"
- }
- ]
- }
- ]
-}
-```
-
-## Run the workflow
-Now that the workflow is created, it will automatically run the workflow every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
-
->[!NOTE]
->Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
-
-To run a workflow on-demand for users using the GRAPH API do the following steps:
-
-1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
-2. Make sure the top is still set to **POST**, and **beta** and `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the box. Change `<id>` to the ID of workflows.
- 3. Copy the code below in to the **Request body**
- 4. Replace `<userid>` in the code below with the value of the user's ID.
- 5. Select **Run query**
- ```json
- {
- "subjects":[
- {"id":"<userid>"}
-
- ]
-}
-
-```
-
-## Check tasks and workflow status
-
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user focused reports.
-
-To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response code of the POST API call that was used to create the workflow.
-
-This example will show you how to list the userProcessingResults for the last 7 days.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults
-```
-Furthermore, it is possible to get a summary of the userProcessingResults to get a quicker overview of large amounts of data, but for this a time span must be specified.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
-```
-You may also check the full details about the tasks of a given userProcessingResults. You will need to provide the workflow ID of the workflow, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults
-```
-
-## Enable the workflow schedule
-
-After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may run the following PATCH call.
-
-```http
-PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
-Content-type: application/json
-
-{
- "displayName":"Onboard pre-hire employee",
- "description":"Configure pre-hire tasks for onboarding employees before their first day",
- "isEnabled": true,
- "isSchedulingEnabled": true
-}
-
-```
-
-## Next steps
-- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Automate employee onboarding tasks before their first day of work with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md)
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
After running your workflow on-demand and checking that everything is working fi
## Next steps - [Tutorial: Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md)
+- [Automate employee onboarding tasks before their first day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-onboard-custom-workflow)
active-directory Tutorial Scheduled Leaver Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-graph.md
- Title: Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview)
-description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with Microsoft Graph (preview).
------- Previously updated : 08/18/2022----
-# Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview)
-
-This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the GRAPH API.
-
-This post off-boarding scenario will run a scheduled workflow and accomplish the following tasks:
-
-1. Remove all licenses for user
-2. Remove user from all Teams
-3. Delete user account
-
-## Before you begin
-
-As part of the prerequisites for completing this tutorial, you will need an account that has licenses and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
-
-The scheduled leaver scenario can be broken down into the following:
-- **Prerequisite:** Create a user account that represents an employee leaving your organization-- **Prerequisite:** Prepare the user account with licenses and Teams memberships-- Create the lifecycle management workflow-- Run the scheduled workflow after last day of work-- Verify that the workflow was successfully executed-
-## Create a scheduled leaver workflow using Graph API
-
-Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation.
-
-|Parameter |Description |
-|||
-|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver and can support multiple strings. Category of workflow must also contain the category of its tasks. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
-|displayName | A unique string that identifies the workflow. |
-|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
-|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true" then the workflow will run. |
-|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. |
-|executionConditions | An argument that contains: <br><br>a time-based attribute and an integer parameter defining when a workflow will run between -60 and 60 <br><br>A scope attribute defining who the workflow runs for. |
-|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). |
-
-For the purpose of this tutorial, there are three tasks that will be introduced in this workflow:
-
-### Remove all licenses for user
-
-```Example
-"tasks":[
- {
- "category": "leaver",
- "description": "Remove all licenses assigned to the user",
- "displayName": "Remove all licenses for user",
- "id": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e",
- "version": 1,
- "parameters": []
- }
- ]
-```
-### Remove user from all Teams task
-
-```Example
-"tasks":[
- {
- "category": "leaver",
- "description": "Remove user from all Teams memberships",
- "displayName": "Remove user from all Teams",
- "id": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
- "version": 1,
- "parameters": []
- }
- ]
-```
-### Delete user account
-
-```Example
-"tasks":[
- {
- "category": "leaver",
- "description": "Delete user account in Azure AD",
- "displayName": "Delete User Account",
- "id": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
- "version": 1,
- "parameters": []
- }
- ]
-```
-### Scheduled leaver workflow
-
-The following POST API call will create a scheduled leaver workflow to configure off-boarding tasks for employees after their last day of work.
-
-```http
-POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows
-Content-type: application/json
-
-{
- "category": "leaver",
- "displayName": "Post-Offboarding of an employee",
- "description": "Configure offboarding tasks for employees after their last day of work",
- "isEnabled": true,
- "isSchedulingEnabled": false,
- "executionConditions": {
- "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
- "scope": {
- "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
- "rule": "department eq 'Marketing'"
- },
- "trigger": {
- "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
- "timeBasedAttribute": "employeeLeaveDateTime",
- "offsetInDays": 7
- }
- },
- "tasks": [
- {
- "category": "leaver",
- "continueOnError": false,
- "description": "Remove all licenses assigned to the user",
- "displayName": "Remove all licenses for user",
- "executionSequence": 1,
- "isEnabled": true,
- "taskDefinitionId": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e",
- "arguments": []
- },
- {
- "category": "leaver",
- "continueOnError": false,
- "description": "Remove user from all Teams memberships",
- "displayName": "Remove user from all Teams",
- "executionSequence": 2,
- "isEnabled": true,
- "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
- "arguments": []
- },
- {
- "category": "leaver",
- "continueOnError": false,
- "description": "Delete user account in Azure AD",
- "displayName": "Delete User Account",
- "executionSequence": 3,
- "isEnabled": true,
- "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
- "arguments": []
- }
- ]
-}
-```
-
-## Run the workflow
-Now that the workflow is created, it will automatically run the workflow every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
-
->[!NOTE]
->Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
-
-To run a workflow on-demand for users using the GRAPH API do the following steps:
-
-1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
-2. Make sure the top is still set to **POST**, and **beta** and `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the box. Change `<id>` to the ID of workflows.
- 3. Copy the code below in to the **Request body**
- 4. Replace `<userid>` in the code below with the value of the user's ID.
- 5. Select **Run query**
- ```json
- {
- "subjects":[
- {"id":"<userid>"}
-
- ]
-}
-
-```
-
-## Check tasks and workflow status
-
-At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots, users runs, and tasks which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user focused reports.
-
-To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response code of the POST API call that was used to create the workflow.
-
-This example will show you how to list the userProcessingResults for the last 7 days.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults
-```
-Furthermore, it is possible to get a summary of the userProcessingResults to get a quicker overview of large amounts of data, but for this a time span must be specified.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
-```
-You may also check the full details about the tasks of a given userProcessingResults. You will need to provide the workflow ID of the workflow, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above.
-
-```http
-GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults
-```
-## Enable the workflow schedule
-
-After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may run the following PATCH call.
-
-```http
-PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
-Content-type: application/json
-
-{
- "displayName":"Post-Offboarding of an employee",
- "description":"Configure offboarding tasks for employees after their last day of work",
- "isEnabled": true,
- "isSchedulingEnabled": true
-}
-
-```
-
-## Next steps
-- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Automate employee offboarding tasks after their last day of work with Azure portal (preview)](tutorial-scheduled-leaver-portal.md)
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
After running your workflow on-demand and checking that everything is working fi
## Next steps - [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)-- [Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview)](tutorial-scheduled-leaver-graph.md)
+- [Automate employee offboarding tasks after their last day of work using Lifecycle Workflows APIs](/graph/tutorial-lifecycle-workflows-scheduled-leaver)
active-directory Add Application Portal Setup Oidc Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
To configure OIDC-based SSO for an application:
:::image type="content" source="media/add-application-portal-setup-oidc-sso/oidc-sso-configuration.png" alt-text="Complete the consent screen for an application.":::
-1. Select **Consent on behalf of your organization** and then select **Accept**. The application is added to your tenant and the application home page appears. To learn more about user and admin consent, see [Understand user and admin consent](../develop/howto-convert-app-to-be-multi-tenant.md#understand-user-and-admin-consent).
+1. Select **Consent on behalf of your organization** and then select **Accept**. The application is added to your tenant and the application home page appears. To learn more about user and admin consent, see [Understand user and admin consent](../develop/howto-convert-app-to-be-multi-tenant.md#understand-user-and-admin-consent-and-make-appropriate-code-changes).
## Next steps
active-directory Powershell Export All App Registrations Secrets And Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-all-app-registrations-secrets-and-certs.md
Title: PowerShell sample - Export secrets and certificates for app registrations in Azure Active Directory tenant. description: PowerShell example that exports all secrets and certificates for the specified app registrations in your Azure Active Directory tenant. -+ Last updated 03/09/2021-+
active-directory Powershell Export All Enterprise Apps Secrets And Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-all-enterprise-apps-secrets-and-certs.md
Title: PowerShell sample - Export secrets and certificates for enterprise apps in Azure Active Directory tenant. description: PowerShell example that exports all secrets and certificates for the specified enterprise apps in your Azure Active Directory tenant. -+ Last updated 03/09/2021-+
active-directory Powershell Export Apps With Expriring Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-expriring-secrets.md
Title: PowerShell sample - Export apps with expiring secrets and certificates in Azure Active Directory tenant. description: PowerShell example that exports all apps with expiring secrets and certificates for the specified apps in your Azure Active Directory tenant. -+ Last updated 03/09/2021-+
active-directory Powershell Export Apps With Secrets Beyond Required https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md
Title: PowerShell sample - Export apps with secrets and certificates expiring beyond the required date in Azure Active Directory tenant. description: PowerShell example that exports all apps with secrets and certificates expiring beyond the required date for the specified apps in your Azure Active Directory tenant. -+ Last updated 03/09/2021-+
active-directory Pim Create Azure Ad Roles And Resource Roles Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md
Previously updated : 10/07/2021 Last updated : 10/20/2022
The need for access to privileged Azure resource and Azure AD roles by employees
To create access reviews for Azure resources, you must be assigned to the [Owner](../../role-based-access-control/built-in-roles.md#owner) or the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role for the Azure resources. To create access reviews for Azure AD roles, you must be assigned to the [Global Administrator](../roles/permissions-reference.md#global-administrator) or the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-> [!Note]
-> In public preview, you can scope an access review to service principals with access to Azure AD and Azure resource roles with an Azure Active Directory Premium P2 edition active in your tenant. After general availability, additional licenses might be required.
- ## Create access reviews 1. Sign in to [Azure portal](https://portal.azure.com/) as a user that is assigned to one of the prerequisite role(s).
The need for access to privileged Azure resource and Azure AD roles by employees
3. For **Azure AD roles**, select **Azure AD roles** under **Privileged Identity Management**. For **Azure resources**, select **Azure resources** under **Privileged Identity Management**.
- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in Azure Portal screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png":::
+ :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in the Azure portal screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png":::
4. For **Azure AD roles**, select **Azure AD roles** again under **Manage**. For **Azure resources**, select the subscription you want to manage.
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
This section answers frequently asked questions and discusses known issues with
**Q: What SIEM tools are currently supported?**
-**A**: Currently, Azure Monitor is supported by [Splunk](./howto-integrate-activity-logs-with-splunk.md), IBM QRadar, [Sumo Logic](https://help.sumologic.com/Send-Dat).
+**A**: Currently, Azure Monitor is supported by [Splunk](./howto-integrate-activity-logs-with-splunk.md), IBM QRadar, [Sumo Logic](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure/), [ArcSight](./howto-integrate-activity-logs-with-arcsight.md), LogRhythm, and Logz.io. For more information about how the connectors work, see [Stream Azure monitoring data to an event hub for consumption by an external tool](../../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
This section answers frequently asked questions and discusses known issues with
**Q: How do I integrate Azure AD activity logs with Sumo Logic?**
-**A**: First, [route the Azure AD activity logs to an event hub](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Collect_Logs_for_Azure_Active_Directory), then follow the steps to [Install the Azure AD application and view the dashboards in SumoLogic](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Install_the_Azure_Active_Directory_App_and_View_the_Dashboards).
+**A**: First, [route the Azure AD activity logs to an event hub](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory), then follow the steps to [Install the Azure AD application and view the dashboards in SumoLogic](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards).
active-directory Howto Integrate Activity Logs With Sumologic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md
To use this feature, you need:
## Steps to integrate Azure AD logs with SumoLogic 1. First, [stream the Azure AD logs to an Azure event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-2. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Collect_Logs_for_Azure_Active_Directory).
-3. [Install the Azure AD SumoLogic app](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Install_the_Azure_Active_Directory_App_and_View_the_Dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment.
+2. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory).
+3. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment.
![Dashboard](./media/howto-integrate-activity-logs-with-sumologic/overview-dashboard.png)
To use this feature, you need:
* [Interpret audit logs schema in Azure Monitor](./overview-reports.md) * [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
-* [Frequently asked questions and known issues](concept-activity-logs-azure-monitor.md#frequently-asked-questions)
+* [Frequently asked questions and known issues](concept-activity-logs-azure-monitor.md#frequently-asked-questions)
active-directory Tutorial Azure Monitor Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md
After data is displayed in the event hub, you can access and read the data in tw
* **IBM QRadar**: The DSM and Azure Event Hubs Protocol are available for download at [IBM support](https://www.ibm.com/support). For more information about integration with Azure, go to the [IBM QRadar Security Intelligence Platform 7.3.0](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/c_dsm_guide_microsoft_azure_overview.html?cp=SS42VS_7.3.0) site.
- * **Sumo Logic**: To set up Sumo Logic to consume data from an event hub, see [Install the Azure AD app and view the dashboards](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure_Active_Directory/Install_the_Azure_Active_Directory_App_and_View_the_Dashboards).
+ * **Sumo Logic**: To set up Sumo Logic to consume data from an event hub, see [Install the Azure AD app and view the dashboards](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards).
* **Set up custom tooling**. If your current SIEM isn't supported in Azure Monitor diagnostics yet, you can set up custom tooling by using the Event Hubs API. To learn more, see the [Getting started receiving messages from an event hub](../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md).
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
![Single Sign-On](./media/atlassian-cloud-tutorial/configure.png)
- b. Copy **Azure AD Identifier** value from Azure portal, paste it in the **Identity Provider Entity ID** textbox in Atlassian.
+ b. Copy **Login URL** value from Azure portal, paste it in the **Identity Provider SSO URL** textbox in Atlassian.
- c. Copy **Login URL** value from Azure portal, paste it in the **Identity Provider SSO URL** textbox in Atlassian.
+ c. Copy **Azure AD Identifier** value from Azure portal, paste it in the **Identity Provider Entity ID** textbox in Atlassian.
![Identity Provider SSO URL](./media/atlassian-cloud-tutorial/configuration-azure.png)
active-directory Memo 22 09 Meet Identity Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-meet-identity-requirements.md
US executive order [14028, Improving the Nation's Cyber Security](https://www.wh
This series of articles offers guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles, as described in memorandum 22-09.
-The release of memorandum 22-09 is designed to support Zero Trust initiatives within federal agencies. It also provides regulatory guidance in supporting federal cybersecurity and data privacy laws. The memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://dodcio.defense.gov/Portals/0/Documents/Library/(U)ZT_RA_v1.1(U)_Mar21.pdf):
+The release of memorandum 22-09 is designed to support Zero Trust initiatives within federal agencies. It also provides regulatory guidance in supporting federal cybersecurity and data privacy laws. The memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://cloudsecurityalliance.org/artifacts/dod-zero-trust-reference-architecture/):
>"The foundational tenet of the Zero Trust Model is that no actor, system, network, or service operating outside or within the security perimeter is trusted. Instead, we must verify anything and everything attempting to establish access. It is a dramatic paradigm shift in philosophy of how we secure our infrastructure, networks, and data, from verify once at the perimeter to continual verification of each user, device, application, and transaction."
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
The Secrets Store CSI Driver allows for the following methods to access an Azure
Follow the instructions in [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods] for your chosen method.
+> [!NOTE]
+> The rest of the examples on this page require that you've followed the instructions in [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods], chosen one of the identity methods, and configured a SecretProviderClass. Come back to this page after you've completed those steps.
+ ## Validate the secrets After the pod starts, the mounted content at the volume path that you specified in your deployment YAML is available.
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
To learn more about AKS, and walk through a complete code to deployment example,
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
-[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
+[install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool
[az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
To learn more about AKS, and walk through a complete code to deployment example,
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
-[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
+[install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool
[az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
+
+ Title: Connect to Azure Kubernetes Service (AKS) cluster nodes
+description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks.
++ Last updated : 10/20/2022+++
+#Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
++
+# Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
+
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access AKS nodes using SSH, including Windows Server nodes. You can also [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp]. For security purposes, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
+
+This article shows you how to create a connection to an AKS node.
+
+## Before you begin
+
+This article assumes you have an SSH key. If not, you can create an SSH key using [macOS or Linux][ssh-nix] or [Windows][ssh-windows]. If you use PuTTYgen to create the key pair, save it in OpenSSH format rather than the default PuTTY private key format (.ppk file).
+
+You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
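+
+As a quick reference, the commands below show one way to generate an OpenSSH-format key pair and confirm your Azure CLI version; the key type, size, and comment are illustrative choices rather than values required by this article.
+
+```bash
+# Generate an OpenSSH-format key pair (accept the default path or choose your own)
+ssh-keygen -t rsa -b 4096 -C "aks-node-access"
+
+# Confirm the Azure CLI version (2.0.64 or later) and upgrade if needed
+az --version
+az upgrade
+```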
+
+## Create an interactive shell connection to a Linux node
+
+To create an interactive shell connection to a Linux node, use the `kubectl debug` command to run a privileged container on your node. To list your nodes, use the `kubectl get nodes` command:
+
+```bash
+kubectl get nodes -o wide
+```
+
+The following example resembles output from the command:
+
+```output
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+aks-nodepool1-12345678-vmss000000 Ready agent 13m v1.19.9 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aks-nodepool1-12345678-vmss000001 Ready agent 13m v1.19.9 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aksnpwin000000 Ready agent 87s v1.19.9 10.240.0.67 <none> Windows Server 2019 Datacenter 10.0.17763.1935 docker://19.3.1
+```
+
+Use the `kubectl debug` command to run a container image on the node to connect to it. The following command starts a privileged container on your node and connects to it:
+
+```bash
+kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=mcr.microsoft.com/dotnet/runtime-deps:6.0
+```
+
+The following example resembles output from the command:
+
+```output
+Creating debugging pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx with container debugger on node aks-nodepool1-12345678-vmss000000.
+If you don't see a command prompt, try pressing enter.
+root@aks-nodepool1-12345678-vmss000000:/#
+```
+
+This privileged container gives access to the node.
+
+> [!NOTE]
+> You can interact with the node session by running `chroot /host` from the privileged container.
+
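+For example, once the debugging pod is running, a session might look like the following sketch; `journalctl` is shown only as an illustration and assumes a systemd-based node image such as Ubuntu.
+
+```bash
+# Inside the debugging container, switch into the node's root filesystem
+chroot /host
+
+# Run host-level commands, for example view recent kubelet log entries (illustrative)
+journalctl -u kubelet --no-pager | tail -n 20
+
+# Leave the chroot when you're finished
+exit
+```
+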
+### Remove Linux node access
+
+When done, `exit` the interactive shell session. After the interactive container session closes, delete the pod used for access with `kubectl delete pod`.
+
+```bash
+kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
+```
+
+## Create the SSH connection to a Windows node
+
+At this time, you can't connect to a Windows Server node directly by using `kubectl debug`. Instead, you need to first connect to another node in the cluster, then connect to the Windows Server node from that node using SSH. Alternatively, you can [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp] instead of using SSH.
+
+To connect to another node in the cluster, use the `kubectl debug` command. For more information, see [Create an interactive shell connection to a Linux node][ssh-linux-kubectl-debug].
+
+To create the SSH connection to the Windows Server node from another node, use the SSH keys provided when you created the AKS cluster and the internal IP address of the Windows Server node.
+
+Open a new terminal window and use the `kubectl get pods` command to get the name of the pod started by `kubectl debug`.
+
+```bash
+kubectl get pods
+```
+
+The following example resembles output from the command:
+
+```output
+NAME READY STATUS RESTARTS AGE
+node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx 1/1 Running 0 21s
+```
+
+In the above example, *node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx* is the name of the pod started by `kubectl debug`.
+
+Use the `kubectl port-forward` command to open a connection to the deployed pod:
+
+```bash
+kubectl port-forward node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx 2022:22
+```
+
+The following example resembles output from the command:
+
+```output
+Forwarding from 127.0.0.1:2022 -> 22
+Forwarding from [::1]:2022 -> 22
+```
+
+The above example begins forwarding network traffic from port 2022 on your development computer to port 22 on the deployed pod. When using `kubectl port-forward` to open a connection and forward network traffic, the connection remains open until you stop the `kubectl port-forward` command.
+
+Open a new terminal and run the command `kubectl get nodes` to show the internal IP address of the Windows Server node:
+
+```bash
+kubectl get nodes -o wide
+```
+
+The following example resembles output from the command:
+
+```output
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+aks-nodepool1-12345678-vmss000000 Ready agent 13m v1.19.9 10.240.0.4 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aks-nodepool1-12345678-vmss000001 Ready agent 13m v1.19.9 10.240.0.35 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-azure containerd://1.4.4+azure
+aksnpwin000000 Ready agent 87s v1.19.9 10.240.0.67 <none> Windows Server 2019 Datacenter 10.0.17763.1935 docker://19.3.1
+```
+
+In the above example, *10.240.0.67* is the internal IP address of the Windows Server node.
+
+Create an SSH connection to the Windows Server node using the internal IP address, and connect to port 22 through port 2022 on your development computer. The default username for AKS nodes is *azureuser*. Accept the prompt to continue with the connection. You're then provided with the command prompt of your Windows Server node:
+
+```bash
+ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' azureuser@10.240.0.67
+```
+
+The following example resembles output from the command:
+
+```output
+The authenticity of host '10.240.0.67 (10.240.0.67)' can't be established.
+ECDSA key fingerprint is SHA256:1234567890abcdefghijklmnopqrstuvwxyzABCDEFG.
+Are you sure you want to continue connecting (yes/no)? yes
+
+[...]
+
+Microsoft Windows [Version 10.0.17763.1935]
+(c) 2018 Microsoft Corporation. All rights reserved.
+
+azureuser@aksnpwin000000 C:\Users\azureuser>
+```
+
+> [!NOTE]
+> If you prefer to use password authentication, include the parameter `-o PreferredAuthentications=password`. For example:
+>
+> ```console
+> ssh -o 'ProxyCommand ssh -p 2022 -W %h:%p azureuser@127.0.0.1' -o PreferredAuthentications=password azureuser@10.240.0.67
+> ```
+
+### Remove SSH access
+
+When done, `exit` the SSH session, stop any port forwarding, and then `exit` the interactive container session. After the interactive container session closes, delete the pod used for SSH access using the `kubectl delete pod` command.
+
+```bash
+kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
+```
+
+## Next steps
+
+If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
+
+<!-- INTERNAL LINKS -->
+[view-kubelet-logs]: kubelet-logs.md
+[view-master-logs]: monitor-aks-reference.md#resource-logs
+[install-azure-cli]: /cli/azure/install-azure-cli
+[aks-windows-rdp]: rdp.md
+[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md
+[ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md
+[ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
For AKS clusters that use Windows Server nodes, see [Upgrade a node pool in AKS]
<!-- LINKS - external --> [kured]: https://github.com/weaveworks/kured
-[kured-install]: https://github.com/weaveworks/kured/tree/master/charts/kured
+[kured-install]: https://github.com/kubereboot/kured/tree/main/cmd/kured
[kubectl-get-nodes]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get <!-- LINKS - internal -->
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
If you need more troubleshooting data, you can [view the Kubernetes primary node
[aks-quickstart-windows-cli]: ./learn/quick-windows-container-deploy-cli.md [aks-quickstart-windows-powershell]: ./learn/quick-windows-container-deploy-powershell.md [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
-[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
+[install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [az-vm-delete]: /cli/azure/vm#az_vm_delete
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md
Advance to the next tutorial to learn how to deploy an application to the cluste
[quotas-skus-regions]: quotas-skus-regions.md [azure-powershell-install]: /powershell/azure/install-az-ps [new-azakscluster]: /powershell/module/az.aks/new-azakscluster
-[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
+[install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool
[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
In this article, you deployed a Kubernetes cluster and configured it to use a wo
[az-feature-register]: /cli/azure/feature#az_feature_register [workload-identity-overview]: workload-identity-overview.md [create-key-vault-azure-cli]: ../key-vault/general/quick-create-cli.md
-[az-keyvault-list]: /cli/azure/keyvaultt#az-keyvault-list
+[az-keyvault-list]: /cli/azure/keyvault#az-keyvault-list
[aks-identity-concepts]: concepts-identity.md [az-account]: /cli/azure/account [az-aks-create]: /cli/azure/aks#az-aks-create
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
Title: Use an Azure AD workload identity (preview) on Azure Kubernetes Service
description: Learn about Azure Active Directory workload identity (preview) for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 09/29/2022 Last updated : 10/20/2022
This article helps you understand this new authentication feature, and reviews t
## Dependencies -- AKS supports Azure AD workload identities on version 1.24 and higher.
+- AKS supports Azure AD workload identities on version 1.22 and higher.
- The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
The following table summarizes our migration or deployment recommendations for w
[dotnet-azure-identity-client-library]: /dotnet/api/overview/azure/identity-readme [java-azure-identity-client-library]: /java/api/overview/azure/identity-readme [javascript-azure-identity-client-library]: /javascript/api/overview/azure/identity-readme
-[python-azure-identity-client-library]: /python/api/overview/azure/identity-readme
+[python-azure-identity-client-library]: /python/api/overview/azure/identity-readme
analysis-services Analysis Services Addservprinc Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-addservprinc-admins.md
Title: Add service principal to Azure Analysis Services admin role | Microsoft Docs description: Learn how to add an automation service principal to the Azure Analysis Services server admin role -+ Last updated 05/14/2021
analysis-services Analysis Services Async Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-async-refresh.md
Title: Asynchronous refresh for Azure Analysis Services models | Microsoft Docs description: Describes how to use the Azure Analysis Services REST API to code asynchronous refresh of model data. -+ Last updated 02/02/2022
analysis-services Analysis Services Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-backup.md
Title: Azure Analysis Services database backup and restore | Microsoft Docs description: This article describes how to backup and restore model metadata and data from an Azure Analysis Services database. -+ Last updated 03/29/2021
analysis-services Analysis Services Bcdr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-bcdr.md
Title: Azure Analysis Services high availability | Microsoft Docs description: This article describes how Azure Analysis Services provides high availability during service disruption. -+ Last updated 02/02/2022
analysis-services Analysis Services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-capacity-limits.md
Title: Azure Analysis Services resource and object limits | Microsoft Docs description: This article describes resource and object limits for an Azure Analysis Services server. -+ Last updated 03/29/2021
analysis-services Analysis Services Connect Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-excel.md
Title: Connect to Azure Analysis Services with Excel | Microsoft Docs description: Learn how to connect to an Azure Analysis Services server by using Excel. Once connected, users can create PivotTables to explore data. -+ Last updated 05/16/2022
analysis-services Analysis Services Connect Pbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-pbi.md
Title: Connect to Azure Analysis Services with Power BI | Microsoft Docs description: Learn how to connect to an Azure Analysis Services server by using Power BI. Once connected, users can explore model data. -+ Last updated 06/30/2021
analysis-services Analysis Services Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect.md
Title: Connecting to Azure Analysis Services servers| Microsoft Docs description: Learn how to connect to and get data from an Analysis Services server in Azure. -+ Last updated 02/02/2022
analysis-services Analysis Services Create Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-bicep-file.md
Title: Quickstart - Create an Azure Analysis Services server resource by using B
description: Quickstart showing how to an Azure Analysis Services server resource by using a Bicep file. Last updated 03/08/2022 -+ tags: azure-resource-manager, bicep
analysis-services Analysis Services Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-powershell.md
Last updated 10/12/2021 -+ #Customer intent: As a BI developer, I want to create an Azure Analysis Services server by using PowerShell.
analysis-services Analysis Services Create Sample Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-sample-model.md
Title: Tutorial - Add a sample model- Azure Analysis Services | Microsoft Docs description: In this tutorial, learn how to add a sample model in Azure Analysis Services. -+ Last updated 10/12/2021
analysis-services Analysis Services Create Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-server.md
Last updated 10/12/2021 -+ #Customer intent: As a BI developer, I want to create an Azure Analysis Services server by using the Azure portal.
analysis-services Analysis Services Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-template.md
Last updated 10/12/2021 -+ tags: azure-resource-manager #Customer intent: As a BI developer who is new to Azure, I want to use Azure Analysis Services to store and manage my organizations data models.
analysis-services Analysis Services Database Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-database-users.md
Title: Manage database roles and users in Azure Analysis Services | Microsoft Docs description: Learn how to manage database roles and users on an Analysis Services server in Azure. -+ Last updated 02/02/2022
analysis-services Analysis Services Datasource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-datasource.md
Title: Data sources supported in Azure Analysis Services | Microsoft Docs description: Describes data sources and connectors supported for tabular 1200 and higher data models in Azure Analysis Services. -+ Last updated 02/02/2022
analysis-services Analysis Services Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-deploy.md
Title: Deploy a model to Azure Analysis Services by using Visual Studio | Microsoft Docs description: Learn how to deploy a tabular model to an Azure Analysis Services server by using Visual Studio. -+ Last updated 12/01/2020
analysis-services Analysis Services Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway-install.md
Title: Install On-premises data gateway for Azure Analysis Services | Microsoft Docs description: Learn how to install and configure an On-premises data gateway to connect to on-premises data sources from an Azure Analysis Services server. -+ Last updated 01/31/2022
analysis-services Analysis Services Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway.md
Title: On-premises data gateway for Azure Analysis Services | Microsoft Docs description: An On-premises gateway is necessary if your Analysis Services server in Azure will connect to on-premises data sources. -+ Last updated 02/02/2022
analysis-services Analysis Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-logging.md
Title: Diagnostic logging for Azure Analysis Services | Microsoft Docs description: Describes how to setup up logging to monitoring your Azure Analysis Services server. -+ Last updated 04/27/2021
analysis-services Analysis Services Long Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-long-operations.md
Title: Best practices for long running operations in Azure Analysis Services | Microsoft Docs description: This article describes best practices for long running operations. -+ Last updated 04/27/2021
analysis-services Analysis Services Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-manage-users.md
Title: Azure Analysis Services authentication and user permissions| Microsoft Docs description: This article describes how Azure Analysis Services uses Azure Active Directory (Azure AD) for identity management and user authentication. -+ Last updated 02/02/2022
analysis-services Analysis Services Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-manage.md
Title: Manage Azure Analysis Services | Microsoft Docs description: This article describes the tools used to manage administration and management tasks for an Azure Analysis Services server. -+ Last updated 02/02/2022
analysis-services Analysis Services Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-monitor.md
Title: Monitor Azure Analysis Services server metrics | Microsoft Docs description: Learn how Analysis Services use Azure Metrics Explorer, a free tool in the portal, to help you monitor the performance and health of your servers. -+ Last updated 03/04/2020
analysis-services Analysis Services Odc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-odc.md
Title: Connect to Azure Analysis Services with an .odc file | Microsoft Docs description: Learn how to create an Office Data Connection file to connect to and get data from an Analysis Services server in Azure. -+ Last updated 04/27/2021
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-overview.md
Title: What is Azure Analysis Services? description: Learn about Azure Analysis Services, a fully managed platform as a service (PaaS) that provides enterprise-grade data models in the cloud. -+ Last updated 02/15/2022
analysis-services Analysis Services Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-powershell.md
Title: Manage Azure Analysis Services with PowerShell | Microsoft Docs description: Describes Azure Analysis Services PowerShell cmdlets for common administrative tasks such as creating servers, suspending operations, or changing service level. -+ Last updated 04/27/2021
analysis-services Analysis Services Qs Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-qs-firewall.md
Last updated 08/12/2020 -+ #Customer intent: As a BI developer, I want to secure my server by configuring a server firewall and create open IP address ranges for client computers in my organization.
analysis-services Analysis Services Refresh Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-refresh-azure-automation.md
Title: Refresh Azure Analysis Services models with Azure Automation | Microsoft Docs description: This article describes how to code model refreshes for Azure Analysis Services by using Azure Automation. -+ Last updated 12/01/2020
analysis-services Analysis Services Refresh Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-refresh-logic-app.md
Title: Refresh with Logic Apps for Azure Analysis Services models | Microsoft Docs description: This article describes how to code asynchronous refresh for Azure Analysis Services by using Azure Logic Apps. -+ Last updated 10/30/2019
analysis-services Analysis Services Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-samples.md
Title: Azure Analysis Services code, project, and database samples description: This article describes resources to learn about code, project, and database samples for Azure Analysis Services. -+ Last updated 04/27/2021
analysis-services Analysis Services Scale Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-scale-out.md
Title: Azure Analysis Services scale-out| Microsoft Docs description: Replicate Azure Analysis Services servers with scale-out. Client queries can then be distributed among multiple query replicas in a scale-out query pool. -+ Last updated 04/27/2021
analysis-services Analysis Services Server Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-server-admins.md
Title: Manage server admins in Azure Analysis Services | Microsoft Docs description: This article describes how to manage server administrators for an Azure Analysis Services server by using the Azure portal, PowerShell, or REST APIs. -+ Last updated 02/02/2022
analysis-services Analysis Services Server Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-server-alias.md
Title: Azure Analysis Services alias server names | Microsoft Docs description: Learn how to create Azure Analysis Services server name aliases. Users can then connect to your server with a shorter alias name instead of the server name. -+ Last updated 12/07/2021
analysis-services Analysis Services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-service-principal.md
Title: Automate Azure Analysis Services tasks with service principals | Microsoft Docs description: Learn how to create a service principal for automating Azure Analysis Services administrative tasks. -+ Last updated 02/02/2022
analysis-services Analysis Services Vnet Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-vnet-gateway.md
Title: Configure Azure Analysis Services for VNet data sources | Microsoft Docs description: Learn how to configure an Azure Analysis Services server to use a gateway for data sources on Azure Virtual Network (VNet). -+ Last updated 02/02/2022
analysis-services Move Between Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/move-between-regions.md
Title: Move Azure Analysis Services to a different region | Microsoft Docs description: Describes how to move an Azure Analysis Services resource to a different region. -+ Last updated 12/01/2020
analysis-services Analysis Services Tutorial Pbid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/tutorials/analysis-services-tutorial-pbid.md
Title: Tutorial - Connect Azure Analysis Services with Power BI Desktop | Microsoft Docs description: In this tutorial, learn how to get an Analysis Services server name from the Azure portal and then connect to the server by using Power BI Desktop.-+ Last updated 02/02/2022
analysis-services Analysis Services Tutorial Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/tutorials/analysis-services-tutorial-roles.md
Title: Tutorial - Configure Azure Analysis Services roles | Microsoft Docs description: In this tutorial, learn how to configure Azure Analysis Services administrator and user roles by using the Azure portal or SQL Server Management Studio. -+ Last updated 10/12/2021
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
Congratulations, you're running an API in Azure App Service with CORS support.
You can use your own CORS utilities instead of App Service CORS for more flexibility. For example, you may want to specify different allowed origins for different routes or methods. Since App Service CORS lets you specify one set of accepted origins for all API routes and methods, you would want to use your own CORS code. See how ASP.NET Core does it at [Enabling Cross-Origin Requests (CORS)](/aspnet/core/security/cors).
+The built-in App Service CORS feature does not have options to allow only specific HTTP methods or verbs for each origin that you specify. It will automatically allow all methods and headers for each origin defined. This behavior is similar to [ASP.NET Core CORS](/aspnet/core/security/cors) policies when you use the options `.AllowAnyHeader()` and `.AllowAnyMethod()` in the code.
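+
+As a minimal sketch of that behavior, the following Azure CLI commands manage App Service CORS at the origin level only; the resource group, app name, and origin values are placeholders.
+
+```bash
+# Allow an origin for the app; all methods and headers are then permitted for that origin
+az webapp cors add --resource-group <resource-group> --name <app-name> --allowed-origins 'https://contoso.example'
+
+# Review the origins currently allowed by App Service CORS
+az webapp cors show --resource-group <resource-group> --name <app-name>
+```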
+ > [!NOTE] > Don't try to use App Service CORS and your own CORS code together. When used together, App Service CORS takes precedence and your own CORS code has no effect. >
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 10/19/2022 Last updated : 10/20/2022 # Migrate to App Service Environment v3
Scenario: An existing app running on an App Service Environment v1 or App Servic
For any migration method that doesn't use the [migration feature](migrate.md), you'll need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that will involve new (and for internet-facing environments, additional) IP addresses. You'll need to update any infrastructure that relies on these IPs.
-Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
+Note that multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you'll need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) in the new environment after it gets created and configured. There will be application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
### Checklist before migrating apps
Note that multiple App Service Environments can't exist in a single subnet. If y
App Service Environment v3 uses Isolated v2 App Service plans that are priced and sized differently than those from Isolated plans. Review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/) to understand how your new environment will need to be sized and scaled to ensure appropriate capacity. There's no difference in how you create App Service plans for App Service Environment v3 compared to previous versions.
+## Back up and restore
+
+The [back up and restore](../manage-backup.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [details](../manage-backup.md#automatic-vs-custom-backups) of this feature.
+
+> [!IMPORTANT]
+> You must configure custom backups for your apps in order to restore them to an App Service Environment v3. Automatic backup doesn't support restoration on different App Service Environment versions. For more information on custom backups, see [Automatic vs custom backups](../manage-backup.md#automatic-vs-custom-backups).
+>
+
+The step-by-step instructions in the current documentation for [backup and restore](../manage-backup.md) should be sufficient to allow you to use this feature. You can select a custom backup and use that to restore the app to an App Service in your App Service Environment v3.
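+
+If you prefer to script the backup step, the sketch below uses the Azure CLI; the resource group, app name, and storage container SAS URL are placeholders, and the restore itself still follows the backup and restore instructions linked above.
+
+```bash
+# Create a custom backup of an app running in the old environment
+az webapp config backup create \
+  --resource-group <old-resource-group> \
+  --webapp-name <source-app> \
+  --backup-name pre-migration-backup \
+  --container-url "<storage-container-sas-url>"
+
+# Confirm the backup completed before restoring it to the app in your App Service Environment v3
+az webapp config backup list \
+  --resource-group <old-resource-group> \
+  --webapp-name <source-app> \
+  --output table
+```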
++
+|Benefits |Limitations |
+|||
+|Quick - should only take 5-10 minutes per app |Support is limited to [certain database types](../manage-backup.md#automatic-vs-custom-backups) |
+|Multiple apps can be restored at the same time (restoration needs to be configured for each app individually) |Old and new environments as well as supporting resources (for example apps, databases, storage accounts, and containers) must all be in the same subscription |
+|In-app MySQL databases are automatically backed up without any configuration |Backups can be up to 10 GB of app and database content, up to 4 GB of which can be the database backup. If the backup size exceeds this limit, you get an error. |
+|Can restore the app to a snapshot of a previous state |Using a [firewall enabled storage account](../../storage/common/storage-network-security.md) as the destination for your backups isn't supported |
+|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Using a [private endpoint enabled storage account](../../storage/common/storage-private-endpoints.md) for backup and restore isn't supported |
+|Can create empty web apps to restore to in your new environment before you start restoring to speed up the process | Only supports custom backups |
+ ## Clone your app to an App Service Environment v3 [Cloning your apps](../app-service-web-app-cloning.md) is another feature that can be used to get your **Windows** apps onto your App Service Environment v3. There are limitations with cloning apps. These limitations are the same as those for the App Service Backup feature, see [Back up an app in Azure App Service](../manage-backup.md#whats-included-in-an-automatic-backup).
Once your migration and any testing with your new environment is complete, delet
- **What properties of my App Service Environment will change?** You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you'll keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address will change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). - **Is backup and restore supported for moving apps from App Service Environment v2 to v3?**
- The [back up and restore](../manage-backup.md) feature doesn't support restoring apps between App Service Environment versions (an app running on App Service Environment v2 can't be restored on an App Service Environment v3).
+ The [back up and restore](../manage-backup.md) feature supports restoring apps between App Service Environment versions as long as a custom backup is used for the restoration. Automatic backup doesn't support restoration to different App Service Environment versions.
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?** After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
Once your migration and any testing with your new environment is complete, delet
> [Migrate to App Service Environment v3 using the migration feature](migrate.md) > [!div class="nextstepaction"]
-> [Custom domain suffix](./how-to-custom-domain-suffix.md)
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Provision Resource Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/provision-resource-terraform.md
description: Create your first app to Azure App Service in seconds using a Terra
Previously updated : 8/5/2022 Last updated : 10/20/2022 ms.tool: terraform
resource "azurerm_service_plan" "appserviceplan" {
location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name os_type = "Linux"
- sku_name = "F1"
+ sku_name = "B1"
} # Create the web app, pass in the App Service Plan ID
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
To create a custom probe, follow [these steps](./application-gateway-create-prob
### HTTP response body mismatch **Message:** Body of the backend's HTTP response did not match the
-probe setting. Received response body does not contain {string}.
+probe setting. Received response body doesn't contain {string}.
**Cause:** When you create a custom probe, you can mark a backend server as Healthy by matching a string from the response body. For example, you can configure Application Gateway to accept "unauthorized" as a string to match. If the backend server response for the probe request contains the string **unauthorized**, it will be marked as Healthy. Otherwise, it will be marked as Unhealthy with this message.
For more information about how to extract and upload Trusted Root Certificates i
### Trusted root certificate mismatch
-**Message:** The root certificate of the server certificate used by the backend does not match the trusted root certificate added to the application gateway. Ensure that you add the correct root certificate to whitelist the backend.
+**Message:** The root certificate of the server certificate used by the backend doesn't match the trusted root certificate added to the application gateway. Ensure that you add the correct root certificate to whitelist the backend.
**Cause:** End-to-end SSL with Application Gateway v2 requires the backend server's certificate to be verified in order to deem the server Healthy. For a TLS/SSL certificate to be trusted, the backend server certificate must be issued by a CA that's included in the trusted store of Application Gateway. If the certificate wasn't issued by a trusted CA (for example, a self-signed certificate was used), users should upload the issuer's certificate to Application Gateway.
If the output doesn't show the complete chain of the certificate being returned,
### Backend certificate invalid common name (CN)
-**Message:** The Common Name (CN) of the backend certificate does not match the host header of the probe.
+**Message:** The Common Name (CN) of the backend certificate doesn't match the host header of the probe.
**Cause:** Application Gateway checks whether the host name specified in the backend HTTP settings matches that of the CN presented by the backend serverΓÇÖs TLS/SSL certificate. This verification is Standard_v2 and WAF_v2 SKU (V2) behavior. The Standard and WAF SKU (v1) Server Name Indication (SNI) is set as the FQDN in the backend pool address. For more information on SNI behavior and differences between v1 and v2 SKU, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md).
This behavior can occur for one or more of the following reasons:
3. Default route advertised by the ExpressRoute/VPN connection to the virtual network over BGP:
- a. If you have an ExpressRoute/VPN connection to the virtual network over BGP, and if you are advertising a default route, you must make sure that the packet is routed back to the internet destination without modifying it. You can verify by using the **Connection Troubleshoot** option in the Application Gateway portal.
+ a. If you have an ExpressRoute/VPN connection to the virtual network over BGP, and if you're advertising a default route, you must make sure that the packet is routed back to the internet destination without modifying it. You can verify by using the **Connection Troubleshoot** option in the Application Gateway portal.
b. Choose the destination manually as any internet-routable IP address like 1.1.1.1. Set the destination port as anything, and verify the connectivity. c. If the next hop is virtual network gateway, there might be a default route advertised over ExpressRoute or VPN.
application-gateway Application Gateway Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-components.md
Title: Application gateway components description: This article provides information about the various components in an application gateway -+ Last updated 08/21/2020-+ # Application gateway components
A frontend IP address is the IP address associated with an application gateway.
The Azure Application Gateway V2 SKU can be configured to support either both static internal IP address and static public IP address, or only static public IP address. It cannot be configured to support only static internal IP address.
-The V1 SKU can be configured to support static or dynamic internal IP address and dynamic public IP address. The dynamic IP address of Application Gateway does not change on a running gateway. It can change only when you stop or start the Gateway. It does not change on system failures, updates, Azure host updates etc.
+The V1 SKU can be configured to support static or dynamic internal IP address and dynamic public IP address. The dynamic IP address of Application Gateway doesn't change on a running gateway. It can change only when you stop or start the Gateway. It doesn't change on system failures, updates, Azure host updates etc.
The DNS name associated with an application gateway doesn't change over the lifecycle of the gateway. As a result, you should use a CNAME alias and point it to the DNS address of the application gateway.
After you create a listener, you associate it with a request routing rule. This
## Request routing rules
-A request routing rule is a key component of an application gateway because it determines how to route traffic on the listener. The rule binds the listener, the back-end server pool, and the backend HTTP settings.
+A request routing rule is a key component of an application gateway because it determines how to route traffic on the listener. The rule binds the listener, the backend server pool, and the backend HTTP settings.
When a listener accepts a request, the request routing rule forwards the request to the backend or redirects it elsewhere. If the request is forwarded to the backend, the request routing rule defines which backend server pool to forward it to. The request routing rule also determines if the headers in the request are to be rewritten. One listener can be attached to one rule.
You can create different backend pools for different types of requests. For exam
By default, an application gateway monitors the health of all resources in its backend pool and automatically removes unhealthy ones. It then monitors unhealthy instances and adds them back to the healthy backend pool when they become available and respond to health probes.
-In addition to using default health probe monitoring, you can also customize the health probe to suit your application's requirements. Custom probes allow more granular control over the health monitoring. When using custom probes, you can configure a custom hostname, URL path, probe interval, and how many failed responses to accept before marking the back-end pool instance as unhealthy, custom status codes and response body match, etc. We recommend that you configure custom probes to monitor the health of each backend pool.
+In addition to using default health probe monitoring, you can also customize the health probe to suit your application's requirements. Custom probes allow more granular control over the health monitoring. When using custom probes, you can configure a custom hostname, URL path, probe interval, and how many failed responses to accept before marking the backend pool instance as unhealthy, custom status codes and response body match, etc. We recommend that you configure custom probes to monitor the health of each backend pool.
For more information, see [Monitor the health of your application gateway](../application-gateway/application-gateway-probe-overview.md).
application-gateway Application Gateway Configure Listener Specific Ssl Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-listener-specific-ssl-policy.md
Title: Configure listener-specific SSL policies on Azure Application Gateway through portal description: Learn how to configure listener-specific SSL policies on Application Gateway through portal -+ Last updated 02/18/2022-+ # Configure listener-specific SSL policies on Application Gateway through portal
Before you proceed, here are some important points related to listener-specific
- You don't have to configure client authentication on an SSL profile to associate it to a listener. You can have only client authentication or listener-specific SSL policy configured, or both configured in your SSL profile. - Using a new Predefined or Customv2 policy enhances SSL security and performance for the entire gateway (SSL Policy and SSL Profile). Therefore, you cannot have different listeners on both old as well as new SSL (predefined or custom) policies.
- Consider this example, you are currently using SSL Policy and SSL Profile with &#34;older&#34; policies/ciphers. To use a &#34;new&#34; Predefined or Customv2 policy for any one of them will also require you to upgrade the other configuration. You may use the new predefined policies, or customv2 policy, or combination of these across the gateway.
+ Consider this example, you're currently using SSL Policy and SSL Profile with &#34;older&#34; policies/ciphers. To use a &#34;new&#34; Predefined or Customv2 policy for any one of them will also require you to upgrade the other configuration. You may use the new predefined policies, or customv2 policy, or combination of these across the gateway.
To set up a listener-specific SSL policy, you'll need to first go to the **SSL settings** tab in the Portal and create a new SSL profile. When you create an SSL profile, you'll see two tabs: **Client Authentication** and **SSL Policy**. The **SSL Policy** tab is to configure a listener-specific SSL policy. The **Client Authentication** tab is where to upload a client certificate(s) for mutual authentication - for more information, check out [Configuring a mutual authentication](./mutual-authentication-portal.md).
Now that we've created an SSL profile with a listener-specific SSL policy, we ne
![Associate SSL profile to new listener](./media/mutual-authentication-portal/mutual-authentication-listener-portal.png) ### Limitations
-There is a limitation right now on Application Gateway where different listeners using the same port cannot have SSL policies (predefined or custom) with different TLS protocol versions. Choosing the same TLS version for different listeners will work for configuring cipher suite preference for each listener. However, to use different TLS protocol versions for separate listeners, you will need to use distinct ports for each.
+There is a limitation right now on Application Gateway that different listeners using the same port cannot have SSL policies (predefined or custom) with different TLS protocol versions. Choosing the same TLS version for different listeners will work for configuring cipher suite preference for each listener. However, to use different TLS protocol versions for separate listeners, you will need to use distinct ports for each.
## Next steps
application-gateway Application Gateway Configure Ssl Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-ssl-policy-powershell.md
Set-AzApplicationGateway -ApplicationGateway $gw
``` > [!IMPORTANT]
-> - If you are using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher &#34;TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256&#34; to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU. This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
-> - Cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" with TLSv1.3 are not customizable and included by default when setting a CustomV2 policy with a minimum TLS version of 1.2 or 1.3. These two cipher suites will not appear in the Get Details output, with an exception of Portal.
+> - If you're using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU. This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
+> - Cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" with TLSv1.3 aren't customizable and are included by default when setting a CustomV2 policy with a minimum TLS version of 1.2 or 1.3. These two cipher suites won't appear in the Get Details output, with the exception of the portal.
To set minimum protocol version to 1.3, you must use the following command:
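A minimal sketch of such a command, assuming the Az.Network module and an existing gateway object `$gw` (the gateway and resource group names are placeholders):

```powershell
# Sketch only: retrieve an existing gateway, then switch its SSL policy to CustomV2 with TLS 1.3 as the minimum version.
$gw = Get-AzApplicationGateway -Name "AppGw01" -ResourceGroupName "appgw-rg"   # hypothetical names

# CustomV2 is the policy type that supports TLSv1.3; the TLS 1.3 cipher suites are added implicitly.
Set-AzApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType CustomV2 -MinProtocolVersion TLSv1_3

# Persist the change to the gateway resource.
Set-AzApplicationGateway -ApplicationGateway $gw
```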
application-gateway Application Gateway Create Probe Classic Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-classic-ps.md
To create an application gateway:
### Create an application gateway resource with a custom probe
-To create the gateway, use the `New-AzureApplicationGateway` cmdlet, replacing the values with your own. Billing for the gateway does not start at this point. Billing begins in a later step, when the gateway is successfully started.
+To create the gateway, use the `New-AzureApplicationGateway` cmdlet, replacing the values with your own. Billing for the gateway doesn't start at this point. Billing begins in a later step, when the gateway is successfully started.
The following example creates an application gateway by using a virtual network called "testvnet1" and a subnet called "subnet-1".
Copy the following text to Notepad.
Edit the values between the parentheses for the configuration items. Save the file with extension .xml.
-The following example shows how to use a configuration file to set up the application gateway to load balance HTTP traffic on public port 80 and send network traffic to back-end port 80 between two IP addresses by using a custom probe.
+The following example shows how to use a configuration file to set up the application gateway to load balance HTTP traffic on public port 80 and send network traffic to backend port 80 between two IP addresses by using a custom probe.
> [!IMPORTANT] > The protocol item Http or Https is case-sensitive.
The configuration parameters are:
| **Host** and **Path** | Complete URL path that is invoked by the application gateway to determine the health of the instance. For example, if you have a website http:\//contoso.com/, then the custom probe can be configured for "http:\//contoso.com/path/custompath.htm" for probe checks to have a successful HTTP response.| | **Interval** | Configures the probe interval checks in seconds.| | **Timeout** | Defines the probe time-out for an HTTP response check.|
-| **UnhealthyThreshold** | The number of failed HTTP responses needed to flag the back-end instance as *unhealthy*.|
+| **UnhealthyThreshold** | The number of failed HTTP responses needed to flag the backend instance as *unhealthy*.|
-The probe name is referenced in the \<BackendHttpSettings\> configuration to assign which back-end pool uses custom probe settings.
+The probe name is referenced in the \<BackendHttpSettings\> configuration to assign which backend pool uses custom probe settings.
## Add a custom probe to an existing application gateway
application-gateway Application Gateway Create Probe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-portal.md
> * [Azure Resource Manager PowerShell](application-gateway-create-probe-ps.md) > * [Azure Classic PowerShell](application-gateway-create-probe-classic-ps.md)
-In this article, you add a custom health probe to an existing application gateway through the Azure portal. Azure Application Gateway uses these health probes to monitor the health of the resources in the back-end pool.
+In this article, you add a custom health probe to an existing application gateway through the Azure portal. Azure Application Gateway uses these health probes to monitor the health of the resources in the backend pool.
## Before you begin
Probes are configured in a two-step process through the portal. The first step i
|**Name**|customProbe|This value is a friendly name given to the probe that is accessible in the portal.| |**Protocol**|HTTP or HTTPS | The protocol that the health probe uses. | |**Host**|For example, contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to \<protocol\>://\<host name\>:\<port\>/\<urlPath\>. This can also be the private IP address of the server, the public IP address, or the DNS entry of the public IP address. When used with a file-based path entry, the probe attempts to access the server and validates that a specific file exists on the server as a health check.|
- |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name from the HTTP settings to which this probe is associated. Specially required for multi-tenant backends such as Azure app service. [Learn more](./configuration-http-settings.md#pick-host-name-from-back-end-address)|
+ |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name from the HTTP settings to which this probe is associated. Required especially for multi-tenant backends such as Azure App Service. [Learn more](./configuration-http-settings.md#pick-host-name-from-backend-address)|
|**Pick port from backend HTTP settings**| Yes or No|Sets the *port* of the health probe to the port from the HTTP settings to which this probe is associated. If you choose No, you can enter a custom destination port to use. | |**Port**| 1-65535 | Custom port to be used for the health probes. | |**Path**|/ or any valid path|The remainder of the full URL for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com, just use '/'. You can also input a server path to a file for a static health check instead of a web-based check. File paths should be used when using a public/private IP, or a public IP DNS entry, as the hostname entry.|
Probes are configured in a two-step process through the portal. The first step i
|**HTTP Settings**|selection from dropdown|Probe will get associated with the HTTP settings selected here and therefore, will monitor the health of that backend pool, which is associated with the selected HTTP setting. It will use the same port for the probe request as the one being used in the selected HTTP setting. You can only choose those HTTP settings, which aren't associated with any other custom probe. <br>The only HTTP settings that are available for association are those that have the same protocol as the protocol chosen in this probe configuration, and have the same state for the *Pick Host Name From Backend HTTP setting* switch.| > [!IMPORTANT]
- > The probe will monitor health of the backend only when it's associated with one or more HTTP settings. It will monitor back-end resources of those back-end pools which are associated to the HTTP settings to which this probe is associated with. The probe request will be sent as \<protocol\>://\<hostName\>:\<port\>/\<urlPath\>.
+ > The probe will monitor the health of the backend only when it's associated with one or more HTTP settings. It monitors the backend resources of the backend pools that are associated with the HTTP settings this probe is associated with. The probe request will be sent as \<protocol\>://\<hostName\>:\<port\>/\<urlPath\>.
### Test backend health with the probe
-After entering the probe properties, you can test the health of the back-end resources to verify that the probe configuration is correct and that the back-end resources are working as expected.
+After entering the probe properties, you can test the health of the backend resources to verify that the probe configuration is correct and that the backend resources are working as expected.
1. Select **Test** and note the result of the probe. The Application gateway tests the health of all the backend resources in the backend pools associated with the HTTP settings used for this probe.
After entering the probe properties, you can test the health of the back-end res
2. If there are any unhealthy backend resources, then check the **Details** column to understand the reason for unhealthy state of the resource. If the resource has been marked unhealthy due to an incorrect probe configuration, then select the **Go back to probe** link and edit the probe configuration. Otherwise, if the resource has been marked unhealthy due to an issue with the backend, then resolve the issues with the backend resource and then test the backend again by selecting the **Go back to probe** link and select **Test**. > [!NOTE]
- > You can choose to save the probe even with unhealthy backend resources, but it isn't recommended. This is because the Application Gateway will not forward requests to the backend servers from the backend pool, which are determined to be unhealthy by the probe. In case there are no healthy resources in a backend pool, you will not be able to access your application and will get a HTTP 502 error.
+ > You can choose to save the probe even with unhealthy backend resources, but it isn't recommended. This is because the Application Gateway won't forward requests to backend servers that the probe determines to be unhealthy. If there are no healthy resources in a backend pool, you won't be able to access your application and will get an HTTP 502 error.
![View probe result][6]
Probes are configured in a two-step process through the portal. The first step i
|**Name**|customProbe|This value is a friendly name given to the probe that is accessible in the portal.| |**Protocol**|HTTP or HTTPS | The protocol that the health probe uses. | |**Host**|For example, contoso.com|This value is the name of the virtual host (different from the VM host name) running on the application server. The probe is sent to (protocol)://(host name):(port from httpsetting)/urlPath. This is applicable when multi-site is configured on Application Gateway. If the Application Gateway is configured for a single site, then enter '127.0.0.1'. You can also input a server path to a file for a static health check instead of a web-based check. File paths should be used when using a public/private IP, or a public IP DNS entry, as the hostname entry.|
- |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name of the back-end resource in the back-end pool associated with the HTTP Setting to which this probe is associated. Specially required for multi-tenant backends such as Azure app service. [Learn more](./configuration-http-settings.md#pick-host-name-from-back-end-address)|
+ |**Pick host name from backend HTTP settings**|Yes or No|Sets the *host* header in the probe to the host name of the backend resource in the backend pool associated with the HTTP setting to which this probe is associated. Required especially for multi-tenant backends such as Azure App Service. [Learn more](./configuration-http-settings.md#pick-host-name-from-backend-address)|
|**Path**|/ or any valid path|The remainder of the full URL for the custom probe. A valid path starts with '/'. For the default path of http:\//contoso.com, just use '/'. You can also input a server path to a file for a static health check instead of a web-based check. File paths should be used when using a public/private IP, or a public IP DNS entry, as the hostname entry.| |**Interval (secs)**|30|How often the probe is run to check for health. It isn't recommended to set this lower than 30 seconds.| |**Timeout (secs)**|30|The amount of time the probe waits before timing out. If a valid response isn't received within this time-out period, the probe is marked as failed. The time-out interval needs to be high enough that an HTTP call can be made to ensure the backend health page is available. The time-out value shouldn't be more than the 'Interval' value used in this probe setting or the 'Request timeout' value in the HTTP setting that will be associated with this probe.|
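The same settings can also be scripted. A hedged PowerShell sketch, assuming the Az.Network module; the gateway and resource group names are hypothetical:

```powershell
# Sketch only: add a custom probe that inherits the host name from the backend HTTP settings,
# then save the change on an existing application gateway.
$gw = Get-AzApplicationGateway -Name "AdatumAppGateway" -ResourceGroupName "AdatumAppGatewayRG"

Add-AzApplicationGatewayProbeConfig -ApplicationGateway $gw `
    -Name "customProbe" `
    -Protocol Http `
    -Path "/" `
    -Interval 30 `
    -Timeout 30 `
    -UnhealthyThreshold 3 `
    -PickHostNameFromBackendHttpSettings

# Commit the configuration back to Azure.
Set-AzApplicationGateway -ApplicationGateway $gw
```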
Now that the probe has been created, it's time to add it to the gateway. Probe s
## Next steps
-View the health of the backend resources as determined by the probe using the [backend health view](./application-gateway-diagnostics.md#back-end-health).
+View the health of the backend resources as determined by the probe using the [backend health view](./application-gateway-diagnostics.md#backend-health).
[1]: ./media/application-gateway-create-probe-portal/figure1.png [2]: ./media/application-gateway-create-probe-portal/figure2.png
application-gateway Application Gateway Create Probe Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-ps.md
$vnet = New-AzVirtualNetwork -Name appgwvnet -ResourceGroupName appgw-rg -Locati
$subnet = $vnet.Subnets[0] ```
-### Create a public IP address for the front-end configuration
+### Create a public IP address for the frontend configuration
-Create a public IP resource **publicIP01** in resource group **appgw-rg** for the West US region. This example uses a public IP address for the front-end IP address of the application gateway. Application gateway requires the public IP address to have a dynamically created DNS name therefore the `-DomainNameLabel` cannot be specified during the creation of the public IP address.
+Create a public IP resource **publicIP01** in resource group **appgw-rg** for the West US region. This example uses a public IP address for the frontend IP address of the application gateway. Application gateway requires the public IP address to have a dynamically created DNS name; therefore, `-DomainNameLabel` can't be specified during the creation of the public IP address.
```powershell $publicip = New-AzPublicIpAddress -ResourceGroupName appgw-rg -Name publicIP01 -Location 'West US' -AllocationMethod Dynamic
You set up all configuration items before creating the application gateway. The
# Creates an application gateway Frontend IP configuration named gatewayIP01 $gipconfig = New-AzApplicationGatewayIPConfiguration -Name gatewayIP01 -Subnet $subnet
-#Creates a back-end IP address pool named pool01 with IP addresses 134.170.185.46, 134.170.188.221, 134.170.185.50.
+#Creates a backend IP address pool named pool01 with IP addresses 134.170.185.46, 134.170.188.221, 134.170.185.50.
$pool = New-AzApplicationGatewayBackendAddressPool -Name pool01 -BackendIPAddresses 134.170.185.46, 134.170.188.221, 134.170.185.50 # Creates a probe that will check health at http://contoso.com/path/path.htm
$poolSetting = New-AzApplicationGatewayBackendHttpSettings -Name poolsetting01 -
# Creates a frontend port for the application gateway to listen on port 80 that will be used by the listener. $fp = New-AzApplicationGatewayFrontendPort -Name frontendport01 -Port 80
-# Creates a frontend IP configuration. This associates the $publicip variable defined previously with the front-end IP that will be used by the listener.
+# Creates a frontend IP configuration. This associates the $publicip variable defined previously with the frontend IP that will be used by the listener.
$fipconfig = New-AzApplicationGatewayFrontendIPConfig -Name fipconfig01 -PublicIPAddress $publicip # Creates the listener. The listener is a combination of protocol and the frontend IP configuration $fipconfig and frontend port $fp created in previous steps.
Set-AzApplicationGateway -ApplicationGateway $getgw
## Get application gateway DNS name
-Once the gateway is created, the next step is to configure the front end for communication. When using a public IP, application gateway requires a dynamically assigned DNS name, which is not friendly. To ensure end users can hit the application gateway a CNAME record can be used to point to the public endpoint of the application gateway. [Configuring a custom domain name for in Azure](../cloud-services/cloud-services-custom-domain-name-portal.md). To do this, retrieve details of the application gateway and its associated IP/DNS name using the PublicIPAddress element attached to the application gateway. The application gateway's DNS name should be used to create a CNAME record, which points the two web applications to this DNS name. The use of A-records is not recommended since the VIP may change on restart of application gateway.
+Once the gateway is created, the next step is to configure the front end for communication. When you're using a public IP address, application gateway requires a dynamically assigned DNS name, which isn't user friendly. To ensure end users can reach the application gateway, you can use a CNAME record that points to the public endpoint of the application gateway. For more information, see [Configuring a custom domain name in Azure](../cloud-services/cloud-services-custom-domain-name-portal.md). To do this, retrieve details of the application gateway and its associated IP/DNS name by using the PublicIPAddress element attached to the application gateway. Use the application gateway's DNS name to create a CNAME record that points the two web applications to this DNS name. The use of A records isn't recommended because the VIP may change when the application gateway is restarted.
```powershell Get-AzPublicIpAddress -ResourceGroupName appgw-RG -Name publicIP01
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Title: Back-end health and diagnostic logs
+ Title: Backend health and diagnostic logs
description: Learn how to enable and manage access logs and performance logs for Azure Application Gateway
-# Back-end health and diagnostic logs for Application Gateway
+# Backend health and diagnostic logs for Application Gateway
You can monitor Azure Application Gateway resources in the following ways:
-* [Back-end health](#back-end-health): Application Gateway provides the capability to monitor the health of the servers in the back-end pools through the Azure portal and through PowerShell. You can also find the health of the back-end pools through the performance diagnostic logs.
+* [Backend health](#backend-health): Application Gateway provides the capability to monitor the health of the servers in the backend pools through the Azure portal and through PowerShell. You can also find the health of the backend pools through the performance diagnostic logs.
* [Logs](#diagnostic-logging): Logs allow for performance, access, and other data to be saved or consumed from a resource for monitoring purposes.
You can monitor Azure Application Gateway resources in the following ways:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-## Back-end health
+## Backend health
-Application Gateway provides the capability to monitor the health of individual members of the back-end pools through the portal, PowerShell, and the command-line interface (CLI). You can also find an aggregated health summary of back-end pools through the performance diagnostic logs.
+Application Gateway provides the capability to monitor the health of individual members of the backend pools through the portal, PowerShell, and the command-line interface (CLI). You can also find an aggregated health summary of backend pools through the performance diagnostic logs.
-The back-end health report reflects the output of the Application Gateway health probe to the back-end instances. When probing is successful and the back end can receive traffic, it's considered healthy. Otherwise, it's considered unhealthy.
+The backend health report reflects the output of the Application Gateway health probe to the backend instances. When probing is successful and the back end can receive traffic, it's considered healthy. Otherwise, it's considered unhealthy.
> [!IMPORTANT]
-> If there is a network security group (NSG) on an Application Gateway subnet, open port ranges 65503-65534 for v1 SKUs, and 65200-65535 for v2 SKUs on the Application Gateway subnet for inbound traffic. This port range is required for Azure infrastructure communication. They are protected (locked down) by Azure certificates. Without proper certificates, external entities, including the customers of those gateways, will not be able to initiate any changes on those endpoints.
+> If there is a network security group (NSG) on an Application Gateway subnet, open port ranges 65503-65534 for v1 SKUs, and 65200-65535 for v2 SKUs on the Application Gateway subnet for inbound traffic. This port range is required for Azure infrastructure communication. They are protected (locked down) by Azure certificates. Without proper certificates, external entities, including the customers of those gateways, won't be able to initiate any changes on those endpoints.
-### View back-end health through the portal
+### View backend health through the portal
-In the portal, back-end health is provided automatically. In an existing application gateway, select **Monitoring** > **Backend health**.
+In the portal, backend health is provided automatically. In an existing application gateway, select **Monitoring** > **Backend health**.
-Each member in the back-end pool is listed on this page (whether it's a NIC, IP, or FQDN). Back-end pool name, port, back-end HTTP settings name, and health status are shown. Valid values for health status are **Healthy**, **Unhealthy**, and **Unknown**.
+Each member in the backend pool is listed on this page (whether it's a NIC, IP, or FQDN). Backend pool name, port, backend HTTP settings name, and health status are shown. Valid values for health status are **Healthy**, **Unhealthy**, and **Unknown**.
> [!NOTE]
-> If you see a back-end health status of **Unknown**, ensure that access to the back end is not blocked by an NSG rule, a user-defined route (UDR), or a custom DNS in the virtual network.
+> If you see a backend health status of **Unknown**, ensure that access to the back end is not blocked by an NSG rule, a user-defined route (UDR), or a custom DNS in the virtual network.
-![Back-end health][10]
+![Backend health][10]
-### View back-end health through PowerShell
+### View backend health through PowerShell
-The following PowerShell code shows how to view back-end health by using the `Get-AzApplicationGatewayBackendHealth` cmdlet:
+The following PowerShell code shows how to view backend health by using the `Get-AzApplicationGatewayBackendHealth` cmdlet:
```powershell Get-AzApplicationGatewayBackendHealth -Name ApplicationGateway1 -ResourceGroupName Contoso ```
-### View back-end health through Azure CLI
+### View backend health through Azure CLI
```azurecli az network application-gateway show-backend-health --resource-group AdatumAppGatewayRG --name AdatumAppGateway
You can use different types of logs in Azure to manage and troubleshoot applicat
* **Activity log**: You can use [Azure activity logs](../azure-monitor/essentials/activity-log.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. Activity log entries are collected by default, and you can view them in the Azure portal. * **Access log**: You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property.
-* **Performance log**: You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, total requests served, failed request count, and healthy and unhealthy back-end instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data.
+* **Performance log**: You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, failed request count, and healthy and unhealthy backend instance count. A performance log is collected every 60 seconds. The performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data.
* **Firewall log**: You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds. > [!NOTE]
The access log is generated only if you've enabled it on each Application Gatewa
|clientPort | Originating port for the request. | |httpMethod | HTTP method used by the request. | |requestUri | URI of the received request. |
-|RequestQuery | **Server-Routed**: Back-end pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the back-end servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. |
+|RequestQuery | **Server-Routed**: Backend pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the backend servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. |
|UserAgent | User agent from the HTTP request header. | |httpStatus | HTTP status code returned to the client from Application Gateway. | |httpVersion | HTTP version of the request. | |receivedBytes | Size of packet received, in bytes. | |sentBytes| Size of packet sent, in bytes.| |timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
-|sslEnabled| Whether communication to the back-end pools used TLS/SSL. Valid values are on and off.|
+|sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.|
|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name will reflect that.| |originalHost| The hostname with which the request was received by the Application Gateway from the client.|
The access log is generated only if you've enabled it on each Application Gatewa
|WAFEvaluationTime| Length of time (in **seconds**) that it takes for the request to be processed by the WAF. | |WAFMode| Value can be either Detection or Prevention | |transactionId| Unique identifier to correlate the request received from the client |
-|sslEnabled| Whether communication to the back-end pools used TLS. Valid values are on and off.|
+|sslEnabled| Whether communication to the backend pools used TLS. Valid values are on and off.|
|sslCipher| Cipher suite being used for TLS communication (if TLS is enabled).| |sslProtocol| SSL/TLS protocol being used (if TLS is enabled).| |serverRouted| The backend server that application gateway routes the request to.|
The performance log is generated only if you have enabled it on each Application
|Value |Description | ||| |instanceId | Application Gateway instance for which performance data is being generated. For a multiple-instance application gateway, there is one row per instance. |
-|healthyHostCount | Number of healthy hosts in the back-end pool. |
-|unHealthyHostCount | Number of unhealthy hosts in the back-end pool. |
+|healthyHostCount | Number of healthy hosts in the backend pool. |
+|unHealthyHostCount | Number of unhealthy hosts in the backend pool. |
|requestCount | Number of requests served. | |latency | Average latency (in milliseconds) of requests from the instance to the back end that serves the requests. | |failedRequestCount| Number of failed requests.|
You can view and analyze activity log data by using any of the following methods
You can also connect to your storage account and retrieve the JSON log entries for access and performance logs. After you download the JSON files, you can convert them to CSV and view them in Excel, Power BI, or any other data-visualization tool. > [!TIP]
-> If you are familiar with Visual Studio and basic concepts of changing values for constants and variables in C#, you can use the [log converter tools](https://github.com/Azure-Samples/networking-dotnet-log-converter) available from GitHub.
+> If you're familiar with Visual Studio and basic concepts of changing values for constants and variables in C#, you can use the [log converter tools](https://github.com/Azure-Samples/networking-dotnet-log-converter) available from GitHub.
> >
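To pull the JSON log blobs down by script instead, here's a hedged sketch assuming the Az.Storage module; the resource group, storage account, and container names are placeholders you replace with the ones from your diagnostic settings:

```powershell
# Sketch only: list and download Application Gateway log blobs from the diagnostics storage account.
$rg      = "appgw-rg"       # hypothetical resource group
$account = "appgwlogs"      # hypothetical storage account used in the diagnostic setting
$key     = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name $account)[0].Value
$ctx     = New-AzStorageContext -StorageAccountName $account -StorageAccountKey $key

# Diagnostic logs land in per-category containers; list them to find the right one.
Get-AzStorageContainer -Context $ctx | Select-Object Name

# Download every blob in the chosen container (replace the name with one from the listing above).
Get-AzStorageBlob -Container "<access-log-container-name>" -Context $ctx |
    Get-AzStorageBlobContent -Destination "C:\AppGwLogs" -Context $ctx
```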
application-gateway Application Gateway End To End Ssl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-end-to-end-ssl-powershell.md
## Overview
-Azure Application Gateway supports end-to-end encryption of traffic. Application Gateway terminates the TLS/SSL connection at the application gateway. The gateway then applies the routing rules to the traffic, re-encrypts the packet, and forwards the packet to the appropriate back-end server based on the routing rules defined. Any response from the web server goes through the same process back to the end user.
+Azure Application Gateway supports end-to-end encryption of traffic. Application Gateway terminates the TLS/SSL connection at the application gateway. The gateway then applies the routing rules to the traffic, re-encrypts the packet, and forwards the packet to the appropriate backend server based on the routing rules defined. Any response from the web server goes through the same process back to the end user.
Application Gateway supports defining custom TLS options. It also supports disabling the following protocol versions: **TLSv1.0**, **TLSv1.1**, and **TLSv1.2**, as well as defining which cipher suites to use and the order of preference. To learn more about configurable TLS options, see the [TLS policy overview](application-gateway-SSL-policy-overview.md).
This scenario will:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-To configure end-to-end TLS with an application gateway, a certificate is required for the gateway and certificates are required for the back-end servers. The gateway certificate is used to derive a symmetric key as per TLS protocol specification. The symmetric key is then used encrypt and decrypt the traffic sent to the gateway. The gateway certificate needs to be in Personal Information Exchange (PFX) format. This file format allows you to export the private key that is required by the application gateway to perform the encryption and decryption of traffic.
+To configure end-to-end TLS with an application gateway, a certificate is required for the gateway and certificates are required for the backend servers. The gateway certificate is used to derive a symmetric key as per the TLS protocol specification. The symmetric key is then used to encrypt and decrypt the traffic sent to the gateway. The gateway certificate needs to be in Personal Information Exchange (PFX) format. This file format allows you to export the private key that is required by the application gateway to perform the encryption and decryption of traffic.
-For end-to-end TLS encryption, the back end must be explicitly allowed by the application gateway. Upload the public certificate of the back-end servers to the application gateway. Adding the certificate ensures that the application gateway only communicates with known back-end instances. This further secures the end-to-end communication.
+For end-to-end TLS encryption, the back end must be explicitly allowed by the application gateway. Upload the public certificate of the backend servers to the application gateway. Adding the certificate ensures that the application gateway only communicates with known backend instances. This further secures the end-to-end communication.
The configuration process is described in the following sections.
The following example creates a virtual network and two subnets. One subnet is u
> Subnets configured for an application gateway should be properly sized. An application gateway can be configured for up to 10 instances. Each instance takes one IP address from the subnet. Too small of a subnet can adversely affect scaling out an application gateway. >
-2. Assign an address range to be used for the back-end address pool.
+2. Assign an address range to be used for the backend address pool.
```powershell $nicSubnet = New-AzVirtualNetworkSubnetConfig -Name 'appsubnet' -AddressPrefix 10.0.2.0/24
The following example creates a virtual network and two subnets. One subnet is u
$nicSubnet = Get-AzVirtualNetworkSubnetConfig -Name 'appsubnet' -VirtualNetwork $vnet ```
-## Create a public IP address for the front-end configuration
+## Create a public IP address for the frontend configuration
Create a public IP resource to be used for the application gateway. This public IP address is used in one of the steps that follow.
$publicip = New-AzPublicIpAddress -ResourceGroupName appgw-rg -Name 'publicIP01'
``` > [!IMPORTANT]
-> Application Gateway does not support the use of a public IP address created with a defined domain label. Only a public IP address with a dynamically created domain label is supported. If you require a friendly DNS name for the application gateway, we recommend you use a CNAME record as an alias.
+> Application Gateway doesn't support the use of a public IP address created with a defined domain label. Only a public IP address with a dynamically created domain label is supported. If you require a friendly DNS name for the application gateway, we recommend you use a CNAME record as an alias.
## Create an application gateway configuration object All configuration items are set before creating the application gateway. The following steps create the configuration items that are needed for an application gateway resource.
-1. Create an application gateway IP configuration. This setting configures which of the subnets the application gateway uses. When application gateway starts, it picks up an IP address from the configured subnet and routes network traffic to the IP addresses in the back-end IP pool. Keep in mind that each instance takes one IP address.
+1. Create an application gateway IP configuration. This setting configures which of the subnets the application gateway uses. When application gateway starts, it picks up an IP address from the configured subnet and routes network traffic to the IP addresses in the backend IP pool. Keep in mind that each instance takes one IP address.
```powershell $gipconfig = New-AzApplicationGatewayIPConfiguration -Name 'gwconfig' -Subnet $gwSubnet ```
-2. Create a front-end IP configuration. This setting maps a private or public IP address to the front end of the application gateway. The following step associates the public IP address in the preceding step with the front-end IP configuration.
+2. Create a frontend IP configuration. This setting maps a private or public IP address to the front end of the application gateway. The following step associates the public IP address in the preceding step with the frontend IP configuration.
```powershell $fipconfig = New-AzApplicationGatewayFrontendIPConfig -Name 'fip01' -PublicIPAddress $publicip ```
-3. Configure the back-end IP address pool with the IP addresses of the back-end web servers. These IP addresses are the IP addresses that receive the network traffic that comes from the front-end IP endpoint. Replace the IP addresses in the sample with your own application IP address endpoints.
+3. Configure the backend IP address pool with the IP addresses of the backend web servers. These IP addresses are the IP addresses that receive the network traffic that comes from the frontend IP endpoint. Replace the IP addresses in the sample with your own application IP address endpoints.
```powershell $pool = New-AzApplicationGatewayBackendAddressPool -Name 'pool01' -BackendIPAddresses 1.1.1.1, 2.2.2.2, 3.3.3.3 ``` > [!NOTE]
- > A fully qualified domain name (FQDN) is also a valid value to use in place of an IP address for the back-end servers. You enable it by using the **-BackendFqdns** switch.
+ > A fully qualified domain name (FQDN) is also a valid value to use in place of an IP address for the backend servers. You enable it by using the **-BackendFqdns** switch.
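As an illustration, a hedged sketch of the FQDN variant of the same pool (the host names are placeholders):

```powershell
# Sketch only: an equivalent backend pool defined by FQDNs instead of IP addresses.
$pool = New-AzApplicationGatewayBackendAddressPool -Name 'pool01' -BackendFqdns 'app1.contoso.com', 'app2.contoso.com'
```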
-4. Configure the front-end IP port for the public IP endpoint. This port is the port that end users connect to.
+4. Configure the frontend IP port for the public IP endpoint. This port is the port that end users connect to.
```powershell $fp = New-AzApplicationGatewayFrontendPort -Name 'port01' -Port 443
All configuration items are set before creating the application gateway. The fol
> [!NOTE] > This sample configures the certificate used for the TLS connection. The certificate needs to be in .pfx format.
-6. Create the HTTP listener for the application gateway. Assign the front-end IP configuration, port, and TLS/SSL certificate to use.
+6. Create the HTTP listener for the application gateway. Assign the frontend IP configuration, port, and TLS/SSL certificate to use.
```powershell $listener = New-AzApplicationGatewayHttpListener -Name listener01 -Protocol Https -FrontendIPConfiguration $fipconfig -FrontendPort $fp -SSLCertificate $cert ```
-7. Upload the certificate to be used on the TLS-enabled back-end pool resources.
+7. Upload the certificate to be used on the TLS-enabled backend pool resources.
> [!NOTE]
- > The default probe gets the public key from the *default* TLS binding on the back-end's IP address and compares the public key value it receives to the public key value you provide here.
+ > The default probe gets the public key from the *default* TLS binding on the backend's IP address and compares the public key value it receives to the public key value you provide here.
>
- > If you are using host headers and Server Name Indication (SNI) on the back end, the retrieved public key might not be the intended site to which traffic flows. If you're in doubt, visit https://127.0.0.1/ on the back-end servers to confirm which certificate is used for the *default* TLS binding. Use the public key from that request in this section. If you are using host-headers and SNI on HTTPS bindings and you do not receive a response and certificate from a manual browser request to https://127.0.0.1/ on the back-end servers, you must set up a default TLS binding on the them. If you do not do so, probes fail and the back end is not allowed.
+ > If you're using host headers and Server Name Indication (SNI) on the back end, the retrieved public key might not be the intended site to which traffic flows. If you're in doubt, visit https://127.0.0.1/ on the backend servers to confirm which certificate is used for the *default* TLS binding. Use the public key from that request in this section. If you're using host headers and SNI on HTTPS bindings and you don't receive a response and certificate from a manual browser request to https://127.0.0.1/ on the backend servers, you must set up a default TLS binding on them. If you don't do so, probes fail and the backend isn't allowed.
For more information about SNI in Application Gateway, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md).
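A hedged sketch of this upload step on the v1 SKU, assuming a local .cer file exported from the backend server (the name and path are placeholders):

```powershell
# Sketch only (v1 SKU): create an authentication certificate object from the backend server's public .cer file.
$authcert = New-AzApplicationGatewayAuthenticationCertificate -Name 'allowlistcert01' -CertificateFile 'C:\certs\backend-public.cer'
```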
All configuration items are set before creating the application gateway. The fol
``` > [!NOTE]
- > The certificate provided in the previous step should be the public key of the .pfx certificate present on the back end. Export the certificate (not the root certificate) installed on the back-end server in Claim, Evidence, and Reasoning (CER) format and use it in this step. This step allows the back end with the application gateway.
+ > The certificate provided in the previous step should be the public key of the .pfx certificate present on the back end. Export the certificate (not the root certificate) installed on the backend server in CER format and use it in this step. This step allowlists the backend with the application gateway.
- If you are using the Application Gateway v2 SKU, then create a trusted root certificate instead of an authentication certificate. For more information, see [Overview of end to end TLS with Application Gateway](ssl-overview.md#end-to-end-tls-with-the-v2-sku):
+ If you're using the Application Gateway v2 SKU, then create a trusted root certificate instead of an authentication certificate. For more information, see [Overview of end to end TLS with Application Gateway](ssl-overview.md#end-to-end-tls-with-the-v2-sku):
```powershell $trustedRootCert01 = New-AzApplicationGatewayTrustedRootCertificate -Name "test1" -CertificateFile <path to root cert file>
For the v2 SKU, use the following command:
$appgw = New-AzApplicationGateway -Name appgateway -SSLCertificates $cert -ResourceGroupName "appgw-rg" -Location "West US" -BackendAddressPools $pool -BackendHttpSettingsCollection $poolSetting01 -FrontendIpConfigurations $fipconfig -GatewayIpConfigurations $gipconfig -FrontendPorts $fp -HttpListeners $listener -RequestRoutingRules $rule -Sku $sku -SSLPolicy $SSLPolicy -TrustedRootCertificate $trustedRootCert01 -Verbose ```
-## Apply a new certificate if the back-end certificate is expired
+## Apply a new certificate if the backend certificate is expired
-Use this procedure to apply a new certificate if the back-end certificate is expired.
+Use this procedure to apply a new certificate if the backend certificate is expired.
1. Retrieve the application gateway to update.
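A hedged sketch of one way to complete this on the v1 SKU, assuming the authentication-certificate name used when the gateway was created and a renewed .cer file (both placeholders):

```powershell
# Sketch only: replace an expired backend authentication certificate on a v1 gateway.
$gw = Get-AzApplicationGateway -Name appgateway -ResourceGroupName 'appgw-rg'

# Point the existing authentication certificate entry at the renewed public certificate (.cer).
Set-AzApplicationGatewayAuthenticationCertificate -ApplicationGateway $gw -Name 'allowlistcert01' -CertificateFile 'C:\certs\backend-public-new.cer'

# Commit the change.
Set-AzApplicationGateway -ApplicationGateway $gw
```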
application-gateway Application Gateway Ilb Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ilb-arm.md
This article walks you through the steps to configure a Standard v1 Application
## What is required to create an application gateway?
-* **Back-end server pool:** The list of IP addresses of the back-end servers. The IP addresses listed should either belong to the virtual network but in a different subnet for the application gateway or should be a public IP/VIP.
-* **Back-end server pool settings:** Every pool has settings like port, protocol, and cookie-based affinity. These settings are tied to a pool and are applied to all servers within the pool.
-* **Front-end port:** This port is the public port that is opened on the application gateway. Traffic hits this port, and then gets redirected to one of the back-end servers.
-* **Listener:** The listener has a front-end port, a protocol (Http or Https, these are case-sensitive), and the SSL certificate name (if configuring SSL offload).
-* **Rule:** The rule binds the listener and the back-end server pool and defines which back-end server pool the traffic should be directed to when it hits a particular listener. Currently, only the *basic* rule is supported. The *basic* rule is round-robin load distribution.
+* **Backend server pool:** The list of IP addresses of the backend servers. The IP addresses listed should either belong to the virtual network, in a subnet different from the application gateway's subnet, or be a public IP/VIP.
+* **Backend server pool settings:** Every pool has settings like port, protocol, and cookie-based affinity. These settings are tied to a pool and are applied to all servers within the pool.
+* **Frontend port:** This port is the public port that is opened on the application gateway. Traffic hits this port, and then gets redirected to one of the backend servers.
+* **Listener:** The listener has a frontend port, a protocol (Http or Https, these are case-sensitive), and the SSL certificate name (if configuring SSL offload).
+* **Rule:** The rule binds the listener and the backend server pool and defines which backend server pool the traffic should be directed to when it hits a particular listener. Currently, only the *basic* rule is supported. The *basic* rule is round-robin load distribution.
## Create an application gateway
This step assigns the subnet object to variable $subnet for the next steps.
$gipconfig = New-AzApplicationGatewayIPConfiguration -Name gatewayIP01 -Subnet $subnet ```
-This step creates an application gateway IP configuration named "gatewayIP01". When Application Gateway starts, it picks up an IP address from the subnet configured and route network traffic to the IP addresses in the back-end IP pool. Keep in mind that each instance takes one IP address.
+This step creates an application gateway IP configuration named "gatewayIP01". When Application Gateway starts, it picks up an IP address from the configured subnet and routes network traffic to the IP addresses in the backend IP pool. Keep in mind that each instance takes one IP address.
### Step 2
This step creates an application gateway IP configuration named "gatewayIP01". W
$pool = New-AzApplicationGatewayBackendAddressPool -Name pool01 -BackendIPAddresses 10.1.1.8,10.1.1.9,10.1.1.10 ```
-This step configures the back-end IP address pool named "pool01" with IP addresses "10.1.1.8, 10.1.1.9, 10.1.1.10". Those are the IP addresses that receive the network traffic that comes from the front-end IP endpoint. You replace the preceding IP addresses to add your own application IP address endpoints.
+This step configures the backend IP address pool named "pool01" with IP addresses "10.1.1.8, 10.1.1.9, 10.1.1.10". These are the IP addresses that receive the network traffic that comes from the frontend IP endpoint. Replace the preceding IP addresses with your own application IP address endpoints.
### Step 3
This step configures the back-end IP address pool named "pool01" with IP address
$poolSetting = New-AzApplicationGatewayBackendHttpSettings -Name poolsetting01 -Port 80 -Protocol Http -CookieBasedAffinity Disabled ```
-This step configures application gateway setting "poolsetting01" for the load balanced network traffic in the back-end pool.
+This step configures application gateway setting "poolsetting01" for the load balanced network traffic in the backend pool.
### Step 4
This step configures application gateway setting "poolsetting01" for the load ba
$fp = New-AzApplicationGatewayFrontendPort -Name frontendport01 -Port 80 ```
-This step configures the front-end IP port named "frontendport01" for the ILB.
+This step configures the frontend IP port named "frontendport01" for the ILB.
### Step 5
This step configures the front-end IP port named "frontendport01" for the ILB.
$fipconfig = New-AzApplicationGatewayFrontendIPConfig -Name fipconfig01 -Subnet $subnet ```
-This step creates the front-end IP configuration called "fipconfig01" and associates it with a private IP from the current virtual network subnet.
+This step creates the frontend IP configuration called "fipconfig01" and associates it with a private IP from the current virtual network subnet.
### Step 6
This step creates the front-end IP configuration called "fipconfig01" and associ
$listener = New-AzApplicationGatewayHttpListener -Name listener01 -Protocol Http -FrontendIPConfiguration $fipconfig -FrontendPort $fp ```
-This step creates the listener called "listener01" and associates the front-end port to the front-end IP configuration.
+This step creates the listener called "listener01" and associates the frontend port to the frontend IP configuration.
### Step 7
Get-AzApplicationGateway -Name appgwtest -ResourceGroupName appgw-rg
``` VERBOSE: 10:52:46 PM - Begin Operation: Get-AzureApplicationGateway
-Get-AzureApplicationGateway : ResourceNotFound: The gateway does not exist.
+Get-AzureApplicationGateway : ResourceNotFound: The gateway doesn't exist.
``` ## Next steps
application-gateway Application Gateway Key Vault Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-key-vault-common-errors.md
Title: Common key vault errors in Application Gateway description: This article identifies key vault-related problems, and helps you resolve them for smooth operations of Application Gateway. Last updated: 07/26/2022
application-gateway Application Gateway Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-probe-overview.md
Title: Health monitoring overview for Azure Application Gateway
-description: Azure Application Gateway monitors the health of all resources in its back-end pool and automatically removes any resource considered unhealthy from the pool.
+description: Azure Application Gateway monitors the health of all resources in its backend pool and automatically removes any resource considered unhealthy from the pool.
# Application Gateway health monitoring overview
-Azure Application Gateway by default monitors the health of all resources in its back-end pool and automatically removes any resource considered unhealthy from the pool. Application Gateway continues to monitor the unhealthy instances and adds them back to the healthy back-end pool once they become available and respond to health probes. By default, Application gateway sends the health probes with the same port that is defined in the back-end HTTP settings. A custom probe port can be configured using a custom health probe.
+Azure Application Gateway by default monitors the health of all resources in its backend pool and automatically removes any resource considered unhealthy from the pool. Application Gateway continues to monitor the unhealthy instances and adds them back to the healthy backend pool once they become available and respond to health probes. By default, Application Gateway sends the health probes on the same port that's defined in the backend HTTP settings. A custom probe port can be configured using a custom health probe.
-The source IP address Application Gateway uses for health probes depends on the backend pool:
+The source IP address that Application Gateway uses for health probes will depend on the backend pool:
- If the server address in the backend pool is a public endpoint, then the source address is the application gateway's frontend public IP address. - If the server address in the backend pool is a private endpoint, then the source IP address is from the application gateway subnet's private IP address space.
In addition to using default health probe monitoring, you can also customize the
## Default health probe
-An application gateway automatically configures a default health probe when you don't set up any custom probe configuration. The monitoring behavior works by making an HTTP GET request to the IP addresses or FQDN configured in the back-end pool. For default probes if the backend http settings are configured for HTTPS, the probe uses HTTPS to test health of the backend servers.
+An application gateway automatically configures a default health probe when you don't set up any custom probe configuration. The monitoring behavior works by making an HTTP GET request to the IP addresses or FQDN configured in the backend pool. For default probes, if the backend HTTP settings are configured for HTTPS, the probe uses HTTPS to test the health of the backend servers.
-For example: You configure your application gateway to use back-end servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe will look like `http://127.0.0.1/`.
+For example: You configure your application gateway to use backend servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response with a 30 second timeout for each request. A healthy HTTP response has a [status code](https://msdn.microsoft.com/library/aa287675.aspx) between 200 and 399. In this case, the HTTP GET request for the health probe will look like `http://127.0.0.1/`.
If the default probe check fails for server A, the application gateway stops forwarding requests to this server. The default probe still continues to check for server A every 30 seconds. When server A responds successfully to one request from a default health probe, application gateway starts forwarding the requests to the server again.
If the default probe check fails for server A, the application gateway stops for
| Probe URL |\<protocol\>://127.0.0.1:\<port\>/ |The protocol and port are inherited from the backend HTTP settings to which the probe is associated | | Interval |30 |The amount of time in seconds to wait before the next health probe is sent.| | Time-out |30 |The amount of time in seconds the application gateway waits for a probe response before marking the probe as unhealthy. If a probe returns as healthy, the corresponding backend is immediately marked as healthy.|
-| Unhealthy threshold |3 |Governs how many probes to send in case there's a failure of the regular health probe. In v1 SKU, these additional health probes are sent in quick succession to determine the health of the backend quickly and don't wait for the probe interval. In the case of v2 SKU, the health probes wait the interval. The back-end server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
+| Unhealthy threshold |3 |Governs how many probes to send in case there's a failure of the regular health probe. In v1 SKU, these additional health probes are sent in quick succession to determine the health of the backend quickly and don't wait for the probe interval. For v2 SKU, the health probes wait the interval. The backend server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
The default probe looks only at \<protocol\>:\//127.0.0.1:\<port\> to determine health status. If you need to configure the health probe to go to a custom URL or modify any other settings, you must use custom probes. For more information about HTTPS probes, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md#for-probe-traffic).
Also if there are multiple listeners, then each listener probes the backend inde
## Custom health probe
-Custom probes allow you to have more granular control over the health monitoring. When using custom probes, you can configure a custom hostname, URL path, probe interval, and how many failed responses to accept before marking the back-end pool instance as unhealthy, etc.
+Custom probes allow you to have more granular control over health monitoring. When using custom probes, you can configure a custom hostname, URL path, probe interval, how many failed responses to accept before marking the backend pool instance as unhealthy, and more.
### Custom health probe settings
The following table provides definitions for the properties of a custom health p
| Probe property | Description | | | |
-| Name |Name of the probe. This name is used to identify and refer to the probe in back-end HTTP settings. |
-| Protocol |Protocol used to send the probe. This has to match with the protocol defined in the back-end HTTP settings it is associated to|
+| Name |Name of the probe. This name is used to identify and refer to the probe in backend HTTP settings. |
+| Protocol |Protocol used to send the probe. This must match the protocol defined in the backend HTTP settings it's associated with|
| Host |Host name to send the probe with. In v1 SKU, this value will be used only for the host header of the probe request. In v2 SKU, it will be used both as host header as well as SNI | | Path |Relative path of the probe. A valid path starts with '/' | | Port |If defined, this is used as the destination port. Otherwise, it uses the same port as the HTTP settings that it is associated to. This property is only available in the v2 SKU | Interval |Probe interval in seconds. This value is the time interval between two consecutive probes | | Time-out |Probe time-out in seconds. If a valid response isn't received within this time-out period, the probe is marked as failed |
-| Unhealthy threshold |Probe retry count. The back-end server is marked down after the consecutive probe failure count reaches the unhealthy threshold |
+| Unhealthy threshold |Probe retry count. The backend server is marked down after the consecutive probe failure count reaches the unhealthy threshold |
### Probe matching
application-gateway Application Gateway Ssl Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ssl-policy-overview.md
Title: TLS policy overview for Azure Application Gateway
-description: Learn how to configure TLS policy for Azure Application Gateway and reduce encryption and decryption overhead from a back-end server farm.
+description: Learn how to configure TLS policy for Azure Application Gateway and reduce encryption and decryption overhead from a backend server farm.
-+ Last updated 12/17/2020-+ # Application Gateway TLS policy overview
-You can use Azure Application Gateway to centralize TLS/SSL certificate management and reduce encryption and decryption overhead from a back-end server farm. This centralized TLS handling also lets you specify a central TLS policy that's suited to your organizational security requirements. This helps you meet compliance requirements as well as security guidelines and recommended practices.
+You can use Azure Application Gateway to centralize TLS/SSL certificate management and reduce encryption and decryption overhead from a backend server farm. This centralized TLS handling also lets you specify a central TLS policy that's suited to your organizational security requirements. This helps you meet compliance requirements as well as security guidelines and recommended practices.
The TLS policy includes control of the TLS protocol version as well as the cipher suites and the order in which ciphers are used during a TLS handshake. Application Gateway offers two mechanisms for controlling TLS policy. You can use either a predefined policy or a custom policy.
If a TLS policy needs to be configured for your requirements, you can use a Custom TLS policy.
> The newer, stronger ciphers and TLSv1.3 support are only available with the **CustomV2 policy (Preview)**. It provides enhanced security and performance benefits.
> [!IMPORTANT]
-> - If you are using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU.
+> - If you're using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU.
> This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
-> - The cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" are mandatory for TLSv1.3. You need NOT mention these explicitly when setting a CustomV2 policy with minimum protocol version 1.2 or 1.3 through [PowerShell](application-gateway-configure-ssl-policy-powershell.md) or CLI. Accordingly, these ciphers suites will not appear in the Get Details output, with an exception of Portal.
+> - The cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" are mandatory for TLSv1.3. You need NOT mention these explicitly when setting a CustomV2 policy with minimum protocol version 1.2 or 1.3 through [PowerShell](application-gateway-configure-ssl-policy-powershell.md) or CLI. Accordingly, these cipher suites won't appear in the Get Details output, except in the Portal.
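For a v1 gateway that needs a custom policy including the mandatory cipher noted above, a minimal Azure PowerShell sketch might look like the following. The gateway and resource group names are placeholders, and the second cipher is shown only as an example of an additional suite.

```azurepowershell
# Sketch only: gateway and resource group names are placeholders.
$gw = Get-AzApplicationGateway -Name "MyAppGatewayV1" -ResourceGroupName "MyResourceGroup"

# Custom policy with a TLS 1.2 minimum; includes the cipher that the v1 SKU requires for metrics and logging.
Set-AzApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType Custom `
  -MinProtocolVersion TLSv1_2 `
  -CipherSuite "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"

Set-AzApplicationGateway -ApplicationGateway $gw
```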
### Cipher suites
Application Gateway supports the following cipher suites from which you can choo
- The connections to backend servers are always with minimum protocol TLS v1.0 and up to TLS v1.2. Therefore, only TLS versions 1.0, 1.1 and 1.2 are supported to establish a secured connection with backend servers.
- As of now, the TLS 1.3 implementation is not enabled with the "Zero Round Trip Time (0-RTT)" feature.
-- Application Gateway v2 does not support the following DHE ciphers. These won't be used for the TLS connections with clients even though they are mentioned in the predefined policies. Instead of DHE ciphers, secure and faster ECDHE ciphers are recommended.
+- Application Gateway v2 doesn't support the following DHE ciphers. These won't be used for the TLS connections with clients even though they are mentioned in the predefined policies. Instead of DHE ciphers, secure and faster ECDHE ciphers are recommended.
  - TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_DHE_RSA_WITH_AES_128_CBC_SHA
  - TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
application-gateway Application Gateway Troubleshooting 502 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-troubleshooting-502.md
Learn how to troubleshoot bad gateway (502) errors received when using Azure Application Gateway.
After you configure an application gateway, one of the errors that you may see is **Server Error: 502 - Web server received an invalid response while acting as a gateway or proxy server**. This error may happen for the following main reasons: * NSG, UDR, or Custom DNS is blocking access to backend pool members.
-* Back-end VMs or instances of virtual machine scale set aren't responding to the default health probe.
+* Backend VMs or instances of virtual machine scale set aren't responding to the default health probe.
* Invalid or improper configuration of custom health probes.
-* Azure Application Gateway's [back-end pool isn't configured or empty](#empty-backendaddresspool).
+* Azure Application Gateway's [backend pool isn't configured or empty](#empty-backendaddresspool).
* None of the VMs or instances in [virtual machine scale set are healthy](#unhealthy-instances-in-backendaddresspool).
* [Request time-out or connectivity issues](#request-time-out) with user requests.
Validate NSG, UDR, and DNS configuration by going through the following steps:
### Cause
-502 errors can also be frequent indicators that the default health probe can't reach back-end VMs.
+502 errors can also be frequent indicators that the default health probe can't reach backend VMs.
When an application gateway instance is provisioned, it automatically configures a default health probe to each BackendAddressPool using properties of the BackendHttpSetting. No user input is required to set this probe. Specifically, when a load-balancing rule is configured, an association is made between a BackendHttpSetting and a BackendAddressPool. A default probe is configured for each of these associations and the application gateway starts a periodic health check connection to each instance in the BackendAddressPool at the port specified in the BackendHttpSetting element.
The following table lists the values associated with the default health probe:
| Probe URL |`http://127.0.0.1/` |URL path |
| Interval |30 |Probe interval in seconds |
| Time-out |30 |Probe time-out in seconds |
-| Unhealthy threshold |3 |Probe retry count. The back-end server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
+| Unhealthy threshold |3 |Probe retry count. The backend server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
### Solution
The following table lists the values associated with the default health probe:
### Cause
-Custom health probes allow additional flexibility to the default probing behavior. When you use custom probes, you can configure the probe interval, the URL, the path to test, and how many failed responses to accept before marking the back-end pool instance as unhealthy.
+Custom health probes allow additional flexibility to the default probing behavior. When you use custom probes, you can configure the probe interval, the URL, the path to test, and how many failed responses to accept before marking the backend pool instance as unhealthy.
The following additional properties are added:
| Probe property | Description |
| - | - |
-| Name |Name of the probe. This name is used to refer to the probe in back-end HTTP settings. |
-| Protocol |Protocol used to send the probe. The probe uses the protocol defined in the back-end HTTP settings |
+| Name |Name of the probe. This name is used to refer to the probe in backend HTTP settings. |
+| Protocol |Protocol used to send the probe. The probe uses the protocol defined in the backend HTTP settings |
| Host |Host name to send the probe. Applicable only when multi-site is configured on the application gateway. This is different from VM host name. |
| Path |Relative path of the probe. The valid path starts from '/'. The probe is sent to \<protocol\>://\<host\>:\<port\>\<path\> |
| Interval |Probe interval in seconds. This is the time interval between two consecutive probes. |
| Time-out |Probe time-out in seconds. If a valid response isn't received within this time-out period, the probe is marked as failed. |
-| Unhealthy threshold |Probe retry count. The back-end server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
+| Unhealthy threshold |Probe retry count. The backend server is marked down after the consecutive probe failure count reaches the unhealthy threshold. |
### Solution
Validate that the Custom Health Probe is configured correctly, as described in the preceding table.
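If you want to check what the gateway currently reports for each backend, a quick Azure PowerShell sketch like the following can help; the gateway and resource group names are placeholders.

```azurepowershell
# Sketch only: replace the names with your own gateway and resource group.
$health = Get-AzApplicationGatewayBackendHealth -Name "MyAppGateway" -ResourceGroupName "MyResourceGroup"

# Each pool entry lists its servers and their reported health (Healthy, Unhealthy, or Unknown).
$health.BackendAddressPools | ConvertTo-Json -Depth 6
```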
### Cause
-When a user request is received, the application gateway applies the configured rules to the request and routes it to a back-end pool instance. It waits for a configurable interval of time for a response from the back-end instance. By default, this interval is **20** seconds. In Application Gateway v1, if the application gateway doesn't receive a response from back-end application in this interval, the user request gets a 502 error. In Application Gateway v2, if the application gateway doesn't receive a response from the back-end application in this interval, the request will be tried against a second back-end pool member. If the second request fails the user request gets a 502 error.
+When a user request is received, the application gateway applies the configured rules to the request and routes it to a backend pool instance. It waits for a configurable interval of time for a response from the backend instance. By default, this interval is **20** seconds. In Application Gateway v1, if the application gateway doesn't receive a response from the backend application in this interval, the user request gets a 502 error. In Application Gateway v2, if the application gateway doesn't receive a response from the backend application in this interval, the request will be tried against a second backend pool member. If the second request fails, the user request gets a 502 error.
### Solution
-Application Gateway allows you to configure this setting via the BackendHttpSetting, which can be then applied to different pools. Different back-end pools can have different BackendHttpSetting, and a different request time-out configured.
+Application Gateway allows you to configure this setting via the BackendHttpSetting, which can then be applied to different pools. Different backend pools can have different BackendHttpSettings, and therefore different request time-outs, configured.
```azurepowershell
New-AzApplicationGatewayBackendHttpSettings -Name 'Setting01' -Port 80 -Protocol Http -CookieBasedAffinity Enabled -RequestTimeout 60
```
Application Gateway allows you to configure this setting via the BackendHttpSett
### Cause
-If the application gateway has no VMs or virtual machine scale set configured in the back-end address pool, it can't route any customer request and sends a bad gateway error.
+If the application gateway has no VMs or virtual machine scale set configured in the backend address pool, it can't route any customer request and sends a bad gateway error.
### Solution
-Ensure that the back-end address pool isn't empty. This can be done either via PowerShell, CLI, or portal.
+Ensure that the backend address pool isn't empty. You can do this via PowerShell, the CLI, or the portal.
```azurepowershell
Get-AzApplicationGateway -Name "SampleGateway" -ResourceGroupName "ExampleResourceGroup"
```
-The output from the preceding cmdlet should contain non-empty back-end address pool. The following example shows two pools returned which are configured with an FQDN or an IP addresses for the backend VMs. The provisioning state of the BackendAddressPool must be 'Succeeded'.
+The output from the preceding cmdlet should contain a non-empty backend address pool. The following example shows two pools returned that are configured with an FQDN or IP addresses for the backend VMs. The provisioning state of the BackendAddressPool must be 'Succeeded'.
BackendAddressPoolsText:
BackendAddressPoolsText:
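If the pool does turn out to be empty, one way to populate it is shown in the following hedged Azure PowerShell sketch; the pool name and IP addresses are illustrative placeholders.

```azurepowershell
# Sketch only: the pool name and IP addresses are illustrative.
$gw = Get-AzApplicationGateway -Name "SampleGateway" -ResourceGroupName "ExampleResourceGroup"

# Add backend targets (here, two private IP addresses) to the pool.
Set-AzApplicationGatewayBackendAddressPool -ApplicationGateway $gw -Name "appGatewayBackendPool" `
  -BackendIPAddresses "10.0.1.4", "10.0.1.5"

Set-AzApplicationGateway -ApplicationGateway $gw
```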
### Cause
-If all the instances of BackendAddressPool are unhealthy, then the application gateway doesn't have any back-end to route user request to. This can also be the case when back-end instances are healthy but don't have the required application deployed.
+If all the instances of BackendAddressPool are unhealthy, then the application gateway doesn't have any backend to route the user request to. This can also be the case when backend instances are healthy but don't have the required application deployed.
### Solution
-Ensure that the instances are healthy and the application is properly configured. Check if the back-end instances can respond to a ping from another VM in the same VNet. If configured with a public end point, ensure a browser request to the web application is serviceable.
+Ensure that the instances are healthy and the application is properly configured. Check whether the backend instances can respond to a ping from another VM in the same VNet. If configured with a public endpoint, ensure a browser request to the web application is serviceable.
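As a quick, hedged check of reachability, you could test the TCP port the probe uses from another VM in the same VNet; the IP address and port below are placeholders, and this verifies TCP connectivity rather than an ICMP ping.

```azurepowershell
# Sketch only: run from another VM in the same VNet; the IP and port are placeholders.
Test-NetConnection -ComputerName 10.0.1.4 -Port 80
```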
## Next steps
application-gateway Application Gateway Websocket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-websocket.md
Title: WebSocket support in Azure Application Gateway description: Application Gateway provides native support for WebSocket across all gateway sizes. There are no user-configurable settings. -+
To establish a WebSocket connection, a specific HTTP-based handshake is exchanged between the client and the server.
![Diagram compares a client interacting with a web server, connecting twice to get two replies, with a WebSocket interaction, where a client connects to a server once to get multiple replies.](./media/application-gateway-websocket/websocket.png) > [!NOTE]
-> As described, the HTTP protocol is used only to perform a handshake when establishing a WebSocket connection. Once the handshake is completed, a WebSocket connection gets opened for transmitting the data, and the Web Application Firewall (WAF) cannot parse any contents. Therefore, WAF does not perform any inspections on such data.
+> As described, the HTTP protocol is used only to perform a handshake when establishing a WebSocket connection. Once the handshake is completed, a WebSocket connection gets opened for transmitting the data, and the Web Application Firewall (WAF) cannot parse any contents. Therefore, WAF doesn't perform any inspections on such data.
### Listener configuration element
Your backend must have a HTTP/HTTPS web server running on the configured port (u
Sec-WebSocket-Version: 13
```
-Another reason for this is that application gateway backend health probe supports HTTP and HTTPS protocols only. If the backend server does not respond to HTTP or HTTPS probes, it is taken out of backend pool.
+Another reason for this is that the application gateway backend health probe supports only the HTTP and HTTPS protocols. If the backend server doesn't respond to HTTP or HTTPS probes, it's taken out of the backend pool.
## Next steps
application-gateway Configuration Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md
+
+ Title: Azure Application Gateway frontend IP address configuration
+description: This article describes how to configure the Azure Application Gateway frontend IP address.
++++ Last updated : 09/09/2020+++
+# Application Gateway frontend IP address configuration
+
+You can configure the application gateway to have a public IP address, a private IP address, or both. A public IP address is required when you host a back end that clients must access over the Internet via an Internet-facing virtual IP (VIP).
+
+## Public and private IP address support
+
+Application Gateway v2 currently doesn't support a private-IP-only mode. It supports the following combinations:
+
+* Private IP address and public IP address
+* Public IP address only
+
+For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#how-do-i-use-application-gateway-v2-with-only-private-frontend-ip-address).
++
+A public IP address isn't required for an internal endpoint that's not exposed to the Internet. That's known as an *internal load-balancer* (ILB) endpoint or private frontend IP. An application gateway ILB is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers in a multi-tier application within a security boundary that aren't exposed to the Internet but that require round-robin load distribution, session stickiness, or TLS termination.
+
+Only one public IP address and one private IP address are supported. You choose the frontend IP when you create the application gateway.
+
+- For a public IP address, you can create a new public IP address or use an existing public IP in the same location as the application gateway. For more information, see [static vs. dynamic public IP address](./application-gateway-components.md#static-versus-dynamic-public-ip-address).
+
+- For a private IP address, you can specify a private IP address from the subnet where the application gateway is created. For Application Gateway v2 sku deployments, a static IP address must be defined when adding a private IP address to the gateway. For Application Gateway v1 sku deployments, if you don't specify an IP address, an available IP address is automatically selected from the subnet. The IP address type that you select (static or dynamic) can't be changed later. For more information, see [Create an application gateway with an internal load balancer](./application-gateway-ilb-arm.md).
+
+A frontend IP address is associated to a *listener*, which checks for incoming requests on the frontend IP.
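As a rough sketch of how the two frontend types are defined with Azure PowerShell, assuming `$vnet` and `$publicip` already exist and all names and addresses are placeholders:

```azurepowershell
# Sketch only: $vnet, $publicip, and all names/addresses are illustrative placeholders.
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "appgwSubnet" -VirtualNetwork $vnet

# Public frontend IP configuration.
$fipPublic = New-AzApplicationGatewayFrontendIPConfig -Name "publicFrontendIP" -PublicIPAddress $publicip

# Private frontend IP configuration with a static address from the gateway subnet (required for v2).
$fipPrivate = New-AzApplicationGatewayFrontendIPConfig -Name "privateFrontendIP" `
  -Subnet $subnet -PrivateIPAddress "10.0.0.10"
```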
+
+## Next steps
+
+- [Learn about listener configuration](configuration-listeners.md)
application-gateway Configuration Http Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-http-settings.md
# Application Gateway HTTP settings configuration
-The application gateway routes traffic to the back-end servers by using the configuration that you specify here. After you create an HTTP setting, you must associate it with one or more request-routing rules.
+The application gateway routes traffic to the backend servers by using the configuration that you specify here. After you create an HTTP setting, you must associate it with one or more request-routing rules.
## Cookie-based affinity
Azure Application Gateway uses gateway-managed cookies for maintaining user sess
This feature is useful when you want to keep a user session on the same server and when session state is saved locally on the server for a user session. If the application can't handle cookie-based affinity, you can't use this feature. To use it, make sure that the clients support cookies. > [!NOTE]
-> Some vulnerability scans may flag the Application Gateway affinity cookie because the Secure or HttpOnly flags are not set. These scans do not take into account that the data in the cookie is generated using a one-way hash. The cookie does not contain any user information and is used purely for routing.
+> Some vulnerability scans may flag the Application Gateway affinity cookie because the Secure or HttpOnly flags are not set. These scans do not take into account that the data in the cookie is generated using a one-way hash. The cookie doesn't contain any user information and is used purely for routing.
-The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) brought a mandate where HTTP cookies without [SameSite](https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc6265bis-03#rfc.section.5.3.7) attribute have to be treated as SameSite=Lax. In the case of CORS (Cross-Origin Resource Sharing) requests, if the cookie has to be sent in a third-party context, it has to use *SameSite=None; Secure* attributes and it should be sent over HTTPS only. Otherwise, in an HTTP only scenario, the browser doesn't send the cookies in the third-party context. The goal of this update from Chrome is to enhance security and to avoid Cross-Site Request Forgery (CSRF) attacks.
+The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) brought a mandate where HTTP cookies without [SameSite](https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc6265bis-03#rfc.section.5.3.7) attribute have to be treated as SameSite=Lax. For CORS (Cross-Origin Resource Sharing) requests, if the cookie has to be sent in a third-party context, it has to use *SameSite=None; Secure* attributes and it should be sent over HTTPS only. Otherwise, in an HTTP only scenario, the browser doesn't send the cookies in the third-party context. The goal of this update from Chrome is to enhance security and to avoid Cross-Site Request Forgery (CSRF) attacks.
To support this change, starting February 17 2020, Application Gateway (all the SKU types) will inject another cookie called *ApplicationGatewayAffinityCORS* in addition to the existing *ApplicationGatewayAffinity* cookie. The *ApplicationGatewayAffinityCORS* cookie has two more attributes added to it (*"SameSite=None; Secure"*) so that sticky sessions are maintained even for cross-origin requests.
Please refer to TLS offload and End-to-End TLS documentation for Application Gat
## Connection draining
-Connection draining helps you gracefully remove back-end pool members during planned service updates. You can apply this setting to all members of a back-end pool by enabling connection draining on the HTTP setting. It ensures that all deregistering instances of a back-end pool continue to maintain existing connections and serve on-going requests for a configurable timeout and don't receive any new requests or connections. The only exception to this are requests bound for deregistering instances because of gateway-managed session affinity and will continue to be forwarded to the deregistering instances. Connection draining applies to back-end instances that are explicitly removed from the back-end pool.
+Connection draining helps you gracefully remove backend pool members during planned service updates. You can apply this setting to all members of a backend pool by enabling connection draining on the HTTP setting. It ensures that all deregistering instances of a backend pool continue to maintain existing connections and serve ongoing requests for a configurable timeout and don't receive any new requests or connections. The only exception is requests bound for deregistering instances because of gateway-managed session affinity; those requests continue to be forwarded to the deregistering instances. Connection draining applies to backend instances that are explicitly removed from the backend pool.
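A minimal Azure PowerShell sketch of enabling connection draining on an existing HTTP setting follows; the gateway, resource group, and setting names are placeholders, and the 60-second drain timeout is only an example value.

```azurepowershell
# Sketch only: names and the timeout value are illustrative.
$gw = Get-AzApplicationGateway -Name "MyAppGateway" -ResourceGroupName "MyResourceGroup"
$settings = Get-AzApplicationGatewayBackendHttpSettings -ApplicationGateway $gw -Name "Setting01"

# Keep existing connections on deregistering instances alive for up to 60 seconds.
Set-AzApplicationGatewayConnectionDraining -BackendHttpSettings $settings -Enabled $true -DrainTimeoutInSec 60

Set-AzApplicationGateway -ApplicationGateway $gw
```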
## Protocol
-Application Gateway supports both HTTP and HTTPS for routing requests to the back-end servers. If you choose HTTP, traffic to the back-end servers is unencrypted. If unencrypted communication isn't acceptable, choose HTTPS.
+Application Gateway supports both HTTP and HTTPS for routing requests to the backend servers. If you choose HTTP, traffic to the backend servers is unencrypted. If unencrypted communication isn't acceptable, choose HTTPS.
-This setting combined with HTTPS in the listener supports [end-to-end TLS](ssl-overview.md). This allows you to securely transmit sensitive data encrypted to the back end. Each back-end server in the back-end pool that has end-to-end TLS enabled must be configured with a certificate to allow secure communication.
+This setting combined with HTTPS in the listener supports [end-to-end TLS](ssl-overview.md). This allows you to securely transmit sensitive data encrypted to the back end. Each backend server in the backend pool that has end-to-end TLS enabled must be configured with a certificate to allow secure communication.
## Port
-This setting specifies the port where the back-end servers listen to traffic from the application gateway. You can configure ports ranging from 1 to 65535.
+This setting specifies the port where the backend servers listen to traffic from the application gateway. You can configure ports ranging from 1 to 65535.
## Trusted root certificate
-If you select HTTPS as the back-end protocol, the Application Gateway requires a trusted root certificate to trust the back-end pool for end-to-end SSL. By default, the **Use well known CA certificate** option is set to **No**. If you plan to use a self-signed certificate, or a certificate signed by an internal Certificate Authority, then you must provide the Application Gateway the matching public certificate that the back-end pool will be using. This certificate must be uploaded directly to the Application Gateway in .CER format.
+If you select HTTPS as the backend protocol, the Application Gateway requires a trusted root certificate to trust the backend pool for end-to-end SSL. By default, the **Use well known CA certificate** option is set to **No**. If you plan to use a self-signed certificate, or a certificate signed by an internal Certificate Authority, then you must provide the Application Gateway with the matching public certificate that the backend pool will be using. This certificate must be uploaded directly to the Application Gateway in .CER format.
-If you plan to use a certificate on the back-end pool that is signed by a trusted public Certificate Authority, then you can set the **Use well known CA certificate** option to **Yes** and skip uploading a public certificate.
+If you plan to use a certificate on the backend pool that is signed by a trusted public Certificate Authority, then you can set the **Use well known CA certificate** option to **Yes** and skip uploading a public certificate.
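For the self-signed or internal-CA case, a hedged Azure PowerShell sketch of uploading the root certificate and referencing it from an HTTPS backend setting might look like this; all names and the certificate path are placeholders.

```azurepowershell
# Sketch only: names and the certificate path are illustrative.
$gw = Get-AzApplicationGateway -Name "MyAppGateway" -ResourceGroupName "MyResourceGroup"

# Upload the backend's root CA certificate (.CER) to the gateway.
Add-AzApplicationGatewayTrustedRootCertificate -ApplicationGateway $gw -Name "backendRootCA" `
  -CertificateFile "C:\certs\backend-root.cer"

# Reference it from an HTTPS backend HTTP setting.
$rootCert = Get-AzApplicationGatewayTrustedRootCertificate -ApplicationGateway $gw -Name "backendRootCA"
Set-AzApplicationGatewayBackendHttpSettings -ApplicationGateway $gw -Name "httpsSetting" `
  -Port 443 -Protocol Https -CookieBasedAffinity Disabled -TrustedRootCertificate $rootCert

Set-AzApplicationGateway -ApplicationGateway $gw
```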
## Request timeout
-This setting is the number of seconds that the application gateway waits to receive a response from the back-end server.
+This setting is the number of seconds that the application gateway waits to receive a response from the backend server.
-## Override back-end path
+## Override backend path
This setting lets you configure an optional custom forwarding path to use when the request is forwarded to the back end. Any part of the incoming path that matches the custom path in the **override backend path** field is copied to the forwarded path. The following table shows how this feature works:
- When the HTTP setting is attached to a basic request-routing rule:
- | Original request | Override back-end path | Request forwarded to back end |
+ | Original request | Override backend path | Request forwarded to back end |
| -- | -- | -- |
| /home/ | /override/ | /override/home/ |
| /home/secondhome/ | /override/ | /override/home/secondhome/ |
- When the HTTP setting is attached to a path-based request-routing rule:
- | Original request | Path rule | Override back-end path | Request forwarded to back end |
+ | Original request | Path rule | Override backend path | Request forwarded to back end |
| -- | -- | -- | -- |
| /pathrule/home/ | /pathrule* | /override/ | /override/home/ |
| /pathrule/home/secondhome/ | /pathrule* | /override/ | /override/home/secondhome/ |
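A hedged Azure PowerShell sketch of setting the override backend path on an HTTP setting follows; the setting name and override path are illustrative.

```azurepowershell
# Sketch only: the setting name and override path are illustrative.
$gw = Get-AzApplicationGateway -Name "MyAppGateway" -ResourceGroupName "MyResourceGroup"

# -Path sets the override backend path on the backend HTTP setting.
Set-AzApplicationGatewayBackendHttpSettings -ApplicationGateway $gw -Name "Setting01" `
  -Port 80 -Protocol Http -CookieBasedAffinity Disabled -Path "/override/"

Set-AzApplicationGateway -ApplicationGateway $gw
```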
This setting lets you configure an optional custom forwarding path to use when t
This setting associates a [custom probe](application-gateway-probe-overview.md#custom-health-probe) with an HTTP setting. You can associate only one custom probe with an HTTP setting. If you don't explicitly associate a custom probe, the [default probe](application-gateway-probe-overview.md#default-health-probe-settings) is used to monitor the health of the back end. We recommend that you create a custom probe for greater control over the health monitoring of your back ends. > [!NOTE]
-> The custom probe doesn't monitor the health of the back-end pool unless the corresponding HTTP setting is explicitly associated with a listener.
+> The custom probe doesn't monitor the health of the backend pool unless the corresponding HTTP setting is explicitly associated with a listener.
## Configuring the host name
There are two aspects of an HTTP setting that influence the [`Host`](https://dat
- "Pick host name from backend-address" - "Host name override"
-## Pick host name from back-end address
+## Pick host name from backend address
-This capability dynamically sets the *host* header in the request to the host name of the back-end pool. It uses an IP address or FQDN.
+This capability dynamically sets the *host* header in the request to the host name of the backend pool. It uses an IP address or FQDN.
This feature helps when the domain name of the back end is different from the DNS name of the application gateway, and the back end relies on a specific host header to resolve to the correct endpoint.
For a custom domain whose existing custom DNS name is mapped to the app service,
This capability replaces the *host* header in the incoming request on the application gateway with the host name that you specify.
-For example, if *www.contoso.com* is specified in the **Host name** setting, the original request *`https://appgw.eastus.cloudapp.azure.com/path1` is changed to *`https://www.contoso.com/path1` when the request is forwarded to the back-end server.
+For example, if *www.contoso.com* is specified in the **Host name** setting, the original request `https://appgw.eastus.cloudapp.azure.com/path1` is changed to `https://www.contoso.com/path1` when the request is forwarded to the backend server.
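A hedged Azure PowerShell sketch of both host-header options on a backend HTTP setting follows; the gateway, setting names, and host name are placeholders.

```azurepowershell
# Sketch only: names and the host name are illustrative placeholders.
$gw = Get-AzApplicationGateway -Name "MyAppGateway" -ResourceGroupName "MyResourceGroup"

# Option 1: pick the host header from the backend pool member's FQDN or IP address.
Set-AzApplicationGatewayBackendHttpSettings -ApplicationGateway $gw -Name "pickHostSetting" `
  -Port 443 -Protocol Https -CookieBasedAffinity Disabled -PickHostNameFromBackendAddress

# Option 2: override the host header with an explicit value.
Set-AzApplicationGatewayBackendHttpSettings -ApplicationGateway $gw -Name "overrideHostSetting" `
  -Port 443 -Protocol Https -CookieBasedAffinity Disabled -HostName "www.contoso.com"

Set-AzApplicationGateway -ApplicationGateway $gw
```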
## Next steps -- [Learn about the back-end pool](configuration-overview.md#back-end-pool)
+- [Learn about the backend pool](configuration-overview.md#backend-pool)
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
An application gateway is a dedicated deployment in your virtual network. Within
### Size of the subnet
-Application Gateway uses one private IP address per instance, plus another private IP address if a private front-end IP is configured.
+Application Gateway uses one private IP address per instance, plus another private IP address if a private frontend IP is configured.
-Azure also reserves five IP addresses in each subnet for internal use: the first four and the last IP addresses. For example, consider 15 application gateway instances with no private front-end IP. You need at least 20 IP addresses for this subnet: five for internal use and 15 for the application gateway instances.
+Azure also reserves five IP addresses in each subnet for internal use: the first four and the last IP addresses. For example, consider 15 application gateway instances with no private frontend IP. You need at least 20 IP addresses for this subnet: five for internal use and 15 for the application gateway instances.
-Consider a subnet that has 27 application gateway instances and an IP address for a private front-end IP. In this case, you need 33 IP addresses: 27 for the application gateway instances, one for the private front end, and five for internal use.
+Consider a subnet that has 27 application gateway instances and an IP address for a private frontend IP. In this case, you need 33 IP addresses: 27 for the application gateway instances, one for the private front end, and five for internal use.
Application Gateway (Standard or WAF) SKU can support up to 32 instances (32 instance IP addresses + 1 private frontend IP configuration + 5 Azure reserved), so a minimum subnet size of /26 is recommended.
Network security groups (NSGs) are supported on Application Gateway. But there a
For this scenario, use NSGs on the Application Gateway subnet. Put the following restrictions on the subnet in this order of priority: 1. Allow incoming traffic from a source IP or IP range with the destination as the entire Application Gateway subnet address range and destination port as your inbound access port, for example, port 80 for HTTP access.
-2. Allow incoming requests from source as **GatewayManager** service tag and destination as **Any** and destination ports as 65503-65534 for the Application Gateway v1 SKU, and ports 65200-65535 for v2 SKU for [back-end health status communication](./application-gateway-diagnostics.md). This port range is required for Azure infrastructure communication. These ports are protected (locked down) by Azure certificates. Without appropriate certificates in place, external entities can't initiate changes on those endpoints.
+2. Allow incoming requests from source as **GatewayManager** service tag and destination as **Any** and destination ports as 65503-65534 for the Application Gateway v1 SKU, and ports 65200-65535 for v2 SKU for [backend health status communication](./application-gateway-diagnostics.md). This port range is required for Azure infrastructure communication. These ports are protected (locked down) by Azure certificates. Without appropriate certificates in place, external entities can't initiate changes on those endpoints.
3. Allow incoming Azure Load Balancer probes (*AzureLoadBalancer* tag) on the [network security group](../virtual-network/network-security-groups-overview.md).
4. Allow expected inbound traffic to match your listener configuration (for example, if you have a listener configured for port 80, allow inbound traffic on port 80).
5. Block all other incoming traffic by using a deny-all rule.
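A hedged Azure PowerShell sketch of rules along these lines follows; the subnet prefix, priorities, location, and names are all illustrative, and the management port range assumes the v2 SKU.

```azurepowershell
# Sketch only: address prefixes, priorities, ports, and names are illustrative; port range assumes the v2 SKU.
$rules = @(
  New-AzNetworkSecurityRuleConfig -Name "Allow-Client-HTTP" -Priority 100 -Direction Inbound -Access Allow `
    -Protocol Tcp -SourceAddressPrefix "Internet" -SourcePortRange "*" `
    -DestinationAddressPrefix "10.0.0.0/24" -DestinationPortRange 80
  New-AzNetworkSecurityRuleConfig -Name "Allow-GatewayManager" -Priority 110 -Direction Inbound -Access Allow `
    -Protocol Tcp -SourceAddressPrefix "GatewayManager" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "65200-65535"
  New-AzNetworkSecurityRuleConfig -Name "Allow-AzureLoadBalancer" -Priority 120 -Direction Inbound -Access Allow `
    -Protocol "*" -SourceAddressPrefix "AzureLoadBalancer" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "*"
  New-AzNetworkSecurityRuleConfig -Name "Deny-All-Inbound" -Priority 4096 -Direction Inbound -Access Deny `
    -Protocol "*" -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "*"
)
New-AzNetworkSecurityGroup -Name "appgw-subnet-nsg" -ResourceGroupName "MyResourceGroup" `
  -Location "eastus" -SecurityRules $rules
```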
For this scenario, use NSGs on the Application Gateway subnet. Put the following
## Supported user-defined routes > [!IMPORTANT]
-> Using UDRs on the Application Gateway subnet might cause the health status in the [back-end health view](./application-gateway-diagnostics.md#back-end-health) to appear as **Unknown**. It also might cause generation of Application Gateway logs and metrics to fail. We recommend that you don't use UDRs on the Application Gateway subnet so that you can view the back-end health, logs, and metrics.
+> Using UDRs on the Application Gateway subnet might cause the health status in the [backend health view](./application-gateway-diagnostics.md#backend-health) to appear as **Unknown**. It also might cause generation of Application Gateway logs and metrics to fail. We recommend that you don't use UDRs on the Application Gateway subnet so that you can view the backend health, logs, and metrics.
- **v1**
For this scenario, use NSGs on the Application Gateway subnet. Put the following
## Next steps -- [Learn about front-end IP address configuration](configuration-front-end-ip.md).
+- [Learn about frontend IP address configuration](configuration-frontend-ip.md).
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
Last updated 09/09/2020-+
For the v1 SKU, requests are matched according to the order of the rules and the
For the v2 SKU, multi-site listeners are processed before basic listeners.
-## Front-end IP address
+## Frontend IP address
-Choose the front-end IP address that you plan to associate with this listener. The listener will listen to incoming requests on this IP.
+Choose the frontend IP address that you plan to associate with this listener. The listener will listen to incoming requests on this IP.
-## Front-end port
+## Frontend port
-Choose the front-end port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners.
+Choose the frontend port. Select an existing port or create a new one. Choose any value from the [allowed range of ports](./application-gateway-components.md#ports). You can use not only well-known ports, such as 80 and 443, but any allowed custom port that's suitable. A port can be used for public-facing listeners or private-facing listeners.
## Protocol
Choose HTTP or HTTPS:
- If you choose HTTP, the traffic between the client and the application gateway is unencrypted. -- Choose HTTPS if you want [TLS termination](features.md#secure-sockets-layer-ssltls-termination) or [end-to-end TLS encryption](./ssl-overview.md). The traffic between the client and the application gateway is encrypted. And the TLS connection terminates at the application gateway. If you want end-to-end TLS encryption, you must choose HTTPS and configure the **back-end HTTP** setting. This ensures that traffic is re-encrypted when it travels from the application gateway to the back end.
+- Choose HTTPS if you want [TLS termination](features.md#secure-sockets-layer-ssltls-termination) or [end-to-end TLS encryption](./ssl-overview.md). The traffic between the client and the application gateway is encrypted. And the TLS connection terminates at the application gateway. If you want end-to-end TLS encryption, you must choose HTTPS and configure the **backend HTTP** setting. This ensures that traffic is re-encrypted when it travels from the application gateway to the back end.
To configure TLS termination, a TLS/SSL certificate must be added to the listener. This allows the Application Gateway to decrypt incoming traffic and encrypt response traffic to the client. The certificate provided to the Application Gateway must be in Personal Information Exchange (PFX) format, which contains both the private and public keys.
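A hedged Azure PowerShell sketch of adding a PFX certificate to a gateway follows; the names, file path, and password are placeholders (in practice, prefer a Key Vault reference over an inline password).

```azurepowershell
# Sketch only: names, the PFX path, and the password are placeholders.
$gw = Get-AzApplicationGateway -Name "MyAppGateway" -ResourceGroupName "MyResourceGroup"

# Add a PFX certificate (private and public key) used by an HTTPS listener for TLS termination.
$pwd = ConvertTo-SecureString -String "P@ssw0rd!" -AsPlainText -Force
Add-AzApplicationGatewaySslCertificate -ApplicationGateway $gw -Name "listenerCert" `
  -CertificateFile "C:\certs\contoso.pfx" -Password $pwd

Set-AzApplicationGateway -ApplicationGateway $gw
```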
See [Overview of TLS termination and end to end TLS with Application Gateway](ss
### HTTP2 support
-HTTP/2 protocol support is available to clients that connect to application gateway listeners only. The communication to back-end server pools is over HTTP/1.1. By default, HTTP/2 support is disabled. The following Azure PowerShell code snippet shows how to enable this:
+HTTP/2 protocol support is available to clients that connect to application gateway listeners only. The communication to backend server pools is over HTTP/1.1. By default, HTTP/2 support is disabled. The following Azure PowerShell code snippet shows how to enable this:
```azurepowershell
$gw = Get-AzApplicationGateway -Name test -ResourceGroupName hm
$gw.EnableHttp2 = $true
Set-AzApplicationGateway -ApplicationGateway $gw
```
To configure a global custom error page, see [Azure PowerShell configuration](./
## TLS policy
-You can centralize TLS/SSL certificate management and reduce encryption-decryption overhead for a back-end server farm. Centralized TLS handling also lets you specify a central TLS policy that's suited to your security requirements. You can choose *default*, *predefined*, or *custom* TLS policy.
+You can centralize TLS/SSL certificate management and reduce encryption-decryption overhead for a backend server farm. Centralized TLS handling also lets you specify a central TLS policy that's suited to your security requirements. You can choose *default*, *predefined*, or *custom* TLS policy.
You configure TLS policy to control TLS protocol versions. You can configure an application gateway to use a minimum protocol version for TLS handshakes from TLS1.0, TLS1.1, and TLS1.2. By default, SSL 2.0 and 3.0 are disabled and aren't configurable. For more information, see [Application Gateway TLS policy overview](./application-gateway-ssl-policy-overview.md).
application-gateway Configuration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-overview.md
Last updated 09/09/2020-+ # Application Gateway configuration overview
For more information, see [Application Gateway infrastructure configuration](con
-## Front-end IP address
+## Frontend IP address
You can configure the application gateway to have a public IP address, a private IP address, or both. A public IP is required when you host a back end that clients must access over the Internet via an Internet-facing virtual IP (VIP).
-For more information, see [Application Gateway front-end IP address configuration](configuration-front-end-ip.md).
+For more information, see [Application Gateway frontend IP address configuration](configuration-frontend-ip.md).
## Listeners
For more information, see [Application Gateway listener configuration](configura
## Request routing rules
-When you create an application gateway by using the Azure portal, you create a default rule (*rule1*). This rule binds the default listener (*appGatewayHttpListener*) with the default back-end pool (*appGatewayBackendPool*) and the default back-end HTTP settings (*appGatewayBackendHttpSettings*). After you create the gateway, you can edit the settings of the default rule or create new rules.
+When you create an application gateway by using the Azure portal, you create a default rule (*rule1*). This rule binds the default listener (*appGatewayHttpListener*) with the default backend pool (*appGatewayBackendPool*) and the default backend HTTP settings (*appGatewayBackendHttpSettings*). After you create the gateway, you can edit the settings of the default rule or create new rules.
For more information, see [Application Gateway request routing rules](configuration-request-routing-rules.md). ## HTTP settings
-The application gateway routes traffic to the back-end servers by using the configuration that you specify here. After you create an HTTP setting, you must associate it with one or more request-routing rules.
+The application gateway routes traffic to the backend servers by using the configuration that you specify here. After you create an HTTP setting, you must associate it with one or more request-routing rules.
For more information, see [Application Gateway HTTP settings configuration](configuration-http-settings.md).
-## Back-end pool
+## Backend pool
-You can point a back-end pool to four types of backend members: a specific virtual machine, a virtual machine scale set, an IP address/FQDN, or an app service.
+You can point a backend pool to four types of backend members: a specific virtual machine, a virtual machine scale set, an IP address/FQDN, or an app service.
-After you create a back-end pool, you must associate it with one or more request-routing rules. You must also configure health probes for each back-end pool on your application gateway. When a request-routing rule condition is met, the application gateway forwards the traffic to the healthy servers (as determined by the health probes) in the corresponding back-end pool.
+After you create a backend pool, you must associate it with one or more request-routing rules. You must also configure health probes for each backend pool on your application gateway. When a request-routing rule condition is met, the application gateway forwards the traffic to the healthy servers (as determined by the health probes) in the corresponding backend pool.
## Health probes
-An application gateway monitors the health of all resources in its back end by default. But we strongly recommend that you create a custom probe for each back-end HTTP setting to get greater control over health monitoring. To learn how to configure a custom probe, see [Custom health probe settings](application-gateway-probe-overview.md#custom-health-probe-settings).
+An application gateway monitors the health of all resources in its back end by default. But we strongly recommend that you create a custom probe for each backend HTTP setting to get greater control over health monitoring. To learn how to configure a custom probe, see [Custom health probe settings](application-gateway-probe-overview.md#custom-health-probe-settings).
> [!NOTE]
-> After you create a custom health probe, you need to associate it to a back-end HTTP setting. A custom probe won't monitor the health of the back-end pool unless the corresponding HTTP setting is explicitly associated with a listener using a rule.
+> After you create a custom health probe, you need to associate it to a backend HTTP setting. A custom probe won't monitor the health of the backend pool unless the corresponding HTTP setting is explicitly associated with a listener using a rule.
## Next steps
application-gateway Configuration Request Routing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-request-routing-rules.md
Last updated 09/09/2020-+ # Application Gateway request routing rules
-When you create an application gateway using the Azure portal, you create a default rule (*rule1*). This rule binds the default listener (*appGatewayHttpListener*) with the default back-end pool (*appGatewayBackendPool*) and the default back-end HTTP settings (*appGatewayBackendHttpSettings*). After you create the gateway, you can edit the settings of the default rule or create new rules.
+When you create an application gateway using the Azure portal, you create a default rule (*rule1*). This rule binds the default listener (*appGatewayHttpListener*) with the default backend pool (*appGatewayBackendPool*) and the default backend HTTP settings (*appGatewayBackendHttpSettings*). After you create the gateway, you can edit the settings of the default rule or create new rules.
## Rule type
When you create a rule, you choose between [*basic* and *path-based*](./application-gateway-components.md#request-routing-rules).
-- Choose basic if you want to forward all requests on the associated listener (for example, *blog<i></i>.contoso.com/\*)* to a single back-end pool.
-- Choose path-based if you want to route requests from specific URL paths to specific back-end pools. The path pattern is applied only to the path of the URL, not to its query parameters.
+- Choose basic if you want to forward all requests on the associated listener (for example, *blog<i></i>.contoso.com/\*)* to a single backend pool.
+- Choose path-based if you want to route requests from specific URL paths to specific backend pools. The path pattern is applied only to the path of the URL, not to its query parameters.
### Order of processing rules
For the v1 and v2 SKU, pattern matching of incoming requests is processed in the
## Associated listener
-Associate a listener to the rule so that the *request-routing rule* that's associated with the listener is evaluated to determine the back-end pool to route the request to.
+Associate a listener to the rule so that the *request-routing rule* that's associated with the listener is evaluated to determine the backend pool to route the request to.
-## Associated back-end pool
+## Associated backend pool
-Associate to the rule the back-end pool that contains the back-end targets that serve requests that the listener receives.
+Associate to the rule the backend pool that contains the backend targets that serve requests that the listener receives.
+ - For a basic rule, only one backend pool is allowed. All requests on the associated listener are forwarded to that backend pool.
+ - For a path-based rule, add multiple backend pools that correspond to each URL path. The requests that match the URL path that's entered are forwarded to the corresponding backend pool. Also, add a default backend pool. Requests that don't match any URL path in the rule are forwarded to that pool.
-## Associated back-end HTTP setting
+## Associated backend HTTP setting
-Add a back-end HTTP setting for each rule. Requests are routed from the application gateway to the back-end targets by using the port number, protocol, and other information that's specified in this setting.
+Add a backend HTTP setting for each rule. Requests are routed from the application gateway to the backend targets by using the port number, protocol, and other information that's specified in this setting.
-For a basic rule, only one back-end HTTP setting is allowed. All requests on the associated listener are forwarded to the corresponding back-end targets by using this HTTP setting.
+For a basic rule, only one backend HTTP setting is allowed. All requests on the associated listener are forwarded to the corresponding backend targets by using this HTTP setting.
-For a path-based rule, add multiple back-end HTTP settings that correspond to each URL path. Requests that match the URL path in this setting are forwarded to the corresponding back-end targets by using the HTTP settings that correspond to each URL path. Also, add a default HTTP setting. Requests that don't match any URL path in this rule are forwarded to the default back-end pool by using the default HTTP setting.
+For a path-based rule, add multiple backend HTTP settings that correspond to each URL path. Requests that match the URL path in this setting are forwarded to the corresponding backend targets by using the HTTP settings that correspond to each URL path. Also, add a default HTTP setting. Requests that don't match any URL path in this rule are forwarded to the default backend pool by using the default HTTP setting.
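As a rough sketch, wiring a basic rule to a listener, pool, and HTTP setting with Azure PowerShell might look like the following; the component names reuse the defaults mentioned above, and the rule name is a placeholder.

```azurepowershell
# Sketch only: assumes the listener, pool, and HTTP setting already exist on the gateway.
$gw       = Get-AzApplicationGateway -Name "MyAppGateway" -ResourceGroupName "MyResourceGroup"
$listener = Get-AzApplicationGatewayHttpListener -ApplicationGateway $gw -Name "appGatewayHttpListener"
$pool     = Get-AzApplicationGatewayBackendAddressPool -ApplicationGateway $gw -Name "appGatewayBackendPool"
$settings = Get-AzApplicationGatewayBackendHttpSettings -ApplicationGateway $gw -Name "appGatewayBackendHttpSettings"

# A basic rule: one listener, one backend pool, one backend HTTP setting.
Add-AzApplicationGatewayRequestRoutingRule -ApplicationGateway $gw -Name "rule2" -RuleType Basic `
  -HttpListener $listener -BackendAddressPool $pool -BackendHttpSettings $settings

Set-AzApplicationGateway -ApplicationGateway $gw
```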
## Redirection setting
application-gateway Configure Alerts With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-alerts-with-templates.md
Title: Configure Azure Monitor alerts for Application Gateway description: Learn how to use ARM templates to configure Azure Monitor alerts for Application Gateway-+
application-gateway Configure Application Gateway With Private Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-application-gateway-with-private-frontend-ip.md
Title: Configure an internal load balancer (ILB) endpoint
description: This article provides information on how to configure Application Gateway Standard v1 with a private frontend IP address -+ Last updated 01/11/2022
In this example, you create a new virtual network. You can create a virtual netw
## Add backend pool
-The backend pool is used to route requests to the backend servers that serve the request. The backend can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you use virtual machines as the target backend. You can either use existing virtual machines or create new ones. In this example, you create two virtual machines that Azure uses as backend servers for the application gateway.
+The backend pool is used to route requests to the backend servers that serve the request. The backend can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you use virtual machines as the target backend. You can either use existing virtual machines or create new ones. In this example, you create two virtual machines that Azure uses as backend servers for the application gateway.
To do this, you:
The client virtual machine is used to connect to the application gateway backend
## Next steps
-If you want to monitor the health of your backend pool, see [Back-end health and diagnostic logs for Application Gateway](application-gateway-diagnostics.md).
+If you want to monitor the health of your backend pool, see [Backend health and diagnostic logs for Application Gateway](application-gateway-diagnostics.md).
application-gateway Configure Key Vault Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-key-vault-portal.md
Title: Configure TLS termination with Key Vault certificates - Portal
description: Learn how to use an Azure portal to integrate your key vault with your application gateway for TLS/SSL termination certificates. -+ Last updated 10/01/2021
At this point, your Azure account is the only one authorized to perform operatio
:::image type="content" source="media/configure-key-vault-portal/create-key-vault-certificate.png" alt-text="Screenshot of key vault certificate creation"::: > [!Important]
-> Issuance policies only affect certificates that will be issued in the future. Modifying this issuance policy will not affect any existing certificates.
+> Issuance policies only affect certificates that will be issued in the future. Modifying this issuance policy won't affect any existing certificates.
### Create a Virtual Network
You can configure the Frontend IP to be Public or Private as per your use case.
**Backends tab**
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
1. On the **Backends** tab, select **Add a backend pool**.
application-gateway Configure Keyvault Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-keyvault-ps.md
Title: Configure TLS termination with Key Vault certificates - PowerShell
-description: Learn how how to use an Azure PowerShell script to integrate your key vault with your application gateway for TLS/SSL termination certificates.
+description: Learn how to use an Azure PowerShell script to integrate your key vault with your application gateway for TLS/SSL termination certificates.
$publicip = New-AzPublicIpAddress -ResourceGroupName $rgname -name "AppGwIP" `
-location $location -AllocationMethod Static -Sku Standard ```
-### Create pool and front-end ports
+### Create pool and frontend ports
```azurepowershell $gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name "appgwSubnet" -VirtualNetwork $vnet
application-gateway Configure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-web-app.md
Title: Manage traffic to App Service
description: This article provides guidance on how to configure Application Gateway with Azure App Service -+ Last updated 02/17/2022-+ <!-- markdownlint-disable MD044 --> # Configure App Service with Application Gateway
-Application gateway allows you to have an App Service app or other multi-tenant service as a back-end pool member. In this article, you learn to configure an App Service app with Application Gateway. The configuration for Application Gateway will differ depending on how App Service will be accessed:
+Application gateway allows you to have an App Service app or other multi-tenant service as a backend pool member. In this article, you learn to configure an App Service app with Application Gateway. The configuration for Application Gateway will differ depending on how App Service will be accessed:
- The first option makes use of a **custom domain** on both Application Gateway and the App Service in the backend. - The second option is to have Application Gateway access App Service using its **default domain**, suffixed as ".azurewebsites.net".
Application gateway allows you to have an App Service app or other multi-tenant
This configuration is recommended for production-grade scenarios and meets the practice of not changing the host name in the request flow. You are required to have a custom domain (and associated certificate) available to avoid having to rely on the default ".azurewebsites" domain.
-By associating the same domain name to both Application Gateway and App Service in the backend pool, the request flow does not need to override the host name. The backend web application will see the original host as was used by the client.
+By associating the same domain name to both Application Gateway and App Service in the backend pool, the request flow doesn't need to override the host name. The backend web application will see the original host as was used by the client.
:::image type="content" source="media/configure-web-app/scenario-application-gateway-to-azure-app-service-custom-domain.png" alt-text="Scenario overview for Application Gateway to App Service using the same custom domain for both"::: ## [Default domain](#tab/defaultdomain)
-This configuration is the easiest and does not require a custom domain. As such it allows for a quick convenient setup.
+This configuration is the easiest and doesn't require a custom domain. As such it allows for a quick convenient setup.
> [!WARNING] > This configuration comes with limitations. We recommend reviewing the implications of using different host names between the client and Application Gateway and between Application Gateway and App Service in the backend. For more information, review the article in Architecture Center: [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation)
-When App Service does not have a custom domain associated with it, the host header on the incoming request on the web application will need to be set to the default domain, suffixed with ".azurewebsites.net" or else the platform will not be able to properly route the request.
+When App Service doesn't have a custom domain associated with it, the host header on the incoming request at the web application will need to be set to the default domain, suffixed with ".azurewebsites.net"; otherwise the platform won't be able to route the request properly.
The host header in the original request received by the Application Gateway will be different from the host name of the backend App Service.
We will connect to the backend using HTTPS.
1. Under **HTTP Settings**, select an existing HTTP setting or add a new one. 2. When creating a new HTTP Setting, give it a name. 3. Select HTTPS as the desired backend protocol, using port 443
-4. If the certificate is signed by a well known authority, select "Yes" for "User well known CA certificate". Alternatively [Add authentication/trusted root certificates of back-end servers](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-back-end-servers)
+4. If the certificate is signed by a well-known authority, select "Yes" for "Use well known CA certificate". Alternatively, [Add authentication/trusted root certificates of backend servers](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-backend-servers)
5. Make sure to set "Override with new host name" to "No" 6. Select the custom HTTPS health probe in the dropdown for "Custom probe". > [!Note]
An HTTP Setting is required that instructs Application Gateway to access the App
1. Under **HTTP Settings**, select an existing HTTP setting or add a new one. 2. When creating a new HTTP Setting, give it a name. 3. Select HTTPS as the desired backend protocol, using port 443
-4. If the certificate is signed by a well known authority, select "Yes" for "User well known CA certificate". Alternatively [Add authentication/trusted root certificates of back-end servers](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-back-end-servers)
+4. If the certificate is signed by a well-known authority, select "Yes" for "Use well known CA certificate". Alternatively, [Add authentication/trusted root certificates of backend servers](./end-to-end-ssl-portal.md#add-authenticationtrusted-root-certificates-of-backend-servers)
5. Make sure to set "Override with new host name" to "Yes" 6. Under "Host name override", select "Pick host name from backend target". This setting will cause the request towards App Service to use the "azurewebsites.net" host name, as is configured in the Backend Pool.
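For reference, a backend HTTP setting that picks the host name from the backend target (the App Service default domain configured in the backend pool) might be sketched in Azure PowerShell as below; the gateway and setting names are hypothetical.

```azurepowershell
$gw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"

# HTTPS to the backend on port 443; -PickHostNameFromBackendAddress corresponds to
# "Pick host name from backend target" in the portal.
Add-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $gw `
    -Name "https-to-appservice" -Port 443 -Protocol Https `
    -CookieBasedAffinity Disabled -PickHostNameFromBackendAddress

Set-AzApplicationGateway -ApplicationGateway $gw
```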
if ($listener -eq $null){
## Configure request routing rule
-Provided with the earlier configured Backend Pool and the HTTP Settings, the request routing rule can be set up to take traffic from a listener and route it to the Backend Pool using the HTTP Settings. For this, make sure you have an HTTP or HTTPS listener available that is not already bound to an existing routing rule.
+Using the earlier configured Backend Pool and HTTP Settings, the request routing rule can be set up to take traffic from a listener and route it to the Backend Pool by using the HTTP Settings. To do this, make sure you have an HTTP or HTTPS listener available that isn't already bound to an existing routing rule.
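A minimal Azure PowerShell sketch of such a rule, assuming the listener, backend pool, and HTTP setting from the earlier steps already exist (all names are hypothetical):

```azurepowershell
$gw       = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
$listener = Get-AzApplicationGatewayHttpListener -ApplicationGateway $gw -Name "myListener"
$pool     = Get-AzApplicationGatewayBackendAddressPool -ApplicationGateway $gw -Name "myBackendPool"
$setting  = Get-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $gw -Name "https-to-appservice"

# Bind the listener to the backend pool through the HTTP setting.
# -Priority applies to v2 SKUs; adjust or omit as your SKU requires.
Add-AzApplicationGatewayRequestRoutingRule -ApplicationGateway $gw -Name "appservice-rule" `
    -RuleType Basic -HttpListener $listener -BackendAddressPool $pool `
    -BackendHttpSettings $setting -Priority 100

Set-AzApplicationGateway -ApplicationGateway $gw
```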
### [Azure portal](#tab/azure-portal)
Pay attention to the following non-exhaustive list of potential symptoms when te
- domain-bound cookies not being passed on to the backend - this includes the use of the ["ARR affinity" setting](../app-service/configure-common.md#configure-general-settings) in App Service
-The above conditions (explained in more detail in [Architecture Center](/azure/architecture/best-practices/host-name-preservation)) would indicate that your web application does not deal well with rewriting the host name. This is very common to see. The recommended way to deal with this is to follow the instructions for configuration Application Gateway with App Service using a custom domain. Also see: [Troubleshoot App Service issues in Application Gateway](troubleshoot-app-service-redirection-app-service-url.md).
+The above conditions (explained in more detail in [Architecture Center](/azure/architecture/best-practices/host-name-preservation)) would indicate that your web application doesn't deal well with rewriting the host name. This is commonly seen. The recommended way to deal with this is to follow the instructions for configuring Application Gateway with App Service using a custom domain. Also see: [Troubleshoot App Service issues in Application Gateway](troubleshoot-app-service-redirection-app-service-url.md).
### [Azure portal](#tab/azure-portal/customdomain)
Pay attention to the following non-exhaustive list of potential symptoms when te
- domain-bound cookies not being passed on to the backend - this includes the use of the ["ARR affinity" setting](../app-service/configure-common.md#configure-general-settings) in App Service
-The above conditions (explained in more detail in [Architecture Center](/azure/architecture/best-practices/host-name-preservation)) would indicate that your web application does not deal well with rewriting the host name. This is very common to see. The recommended way to deal with this is to follow the instructions for configuration Application Gateway with App Service using a custom domain. Also see: [Troubleshoot App Service issues in Application Gateway](troubleshoot-app-service-redirection-app-service-url.md).
+The above conditions (explained in more detail in [Architecture Center](/azure/architecture/best-practices/host-name-preservation)) would indicate that your web application doesn't deal well with rewriting the host name. This is commonly seen. The recommended way to deal with this is to follow the instructions for configuring Application Gateway with App Service using a custom domain. Also see: [Troubleshoot App Service issues in Application Gateway](troubleshoot-app-service-redirection-app-service-url.md).
## Restrict access
-The web apps deployed in these examples use public IP addresses that can be accessed directly from the Internet. This helps with troubleshooting when you are learning about a new feature and trying new things. But if you intend to deploy a feature into production, you'll want to add more restrictions. Consider the following options:
+The web apps deployed in these examples use public IP addresses that can be accessed directly from the Internet. This helps with troubleshooting when you're learning about a new feature and trying new things. But if you intend to deploy a feature into production, you'll want to add more restrictions. Consider the following options:
- Configure [Access restriction rules based on service endpoints](../app-service/overview-access-restrictions.md#access-restriction-rules-based-on-service-endpoints). This allows you to lock down inbound access to the app making sure the source address is from Application Gateway. - Use [Azure App Service static IP restrictions](../app-service/app-service-ip-restrictions.md). For example, you can restrict the web app so that it only receives traffic from the application gateway. Use the app service IP restriction feature to list the application gateway VIP as the only address with access.
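As one possible sketch of the first option, an access restriction rule that only admits traffic arriving through the application gateway's subnet could look like this in Azure PowerShell; the resource names are hypothetical, and the subnet needs a Microsoft.Web service endpoint.

```azurepowershell
# Allow inbound traffic to the web app only from the application gateway's subnet.
# Assumes the Az.Websites module and an enabled Microsoft.Web service endpoint on that subnet.
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "myResourceGroupAG" -WebAppName "myWebApp" `
    -Name "AllowAppGwSubnet" -Priority 100 -Action Allow `
    -SubnetName "myAGSubnet" -VirtualNetworkName "myVNet"
```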
application-gateway Create Gateway Internal Load Balancer App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-gateway-internal-load-balancer-app-service-environment.md
Last updated 06/10/2022
-# Back-end server certificate isn't allow-listed for an application gateway using an Internal Load Balancer with an App Service Environment
+# Backend server certificate isn't allow-listed for an application gateway using an Internal Load Balancer with an App Service Environment
This article troubleshoots the following issue: A certificate isn't allow-listed when you create an application gateway by using an Internal Load Balancer (ILB) together with an App Service Environment (ASE) at the back end when using end-to-end TLS in Azure. ## Symptoms
-When you create an application gateway by using an ILB with an ASE at the back end, the back-end server may become unhealthy. This problem occurs if the authentication certificate of the application gateway doesn't match the configured certificate on the back-end server. See the following scenario as an example:
+When you create an application gateway by using an ILB with an ASE at the back end, the backend server may become unhealthy. This problem occurs if the authentication certificate of the application gateway doesn't match the configured certificate on the backend server. See the following scenario as an example:
**Application Gateway configuration:**
When you create an application gateway by using an ILB with an ASE at the back e
- **App Service:** test.appgwtestase.com - **SSL Binding:** SNI SSL – CN=test.appgwtestase.com
-When you access the application gateway, you receive the following error message because the back-end server is unhealthy:
+When you access the application gateway, you receive the following error message because the backend server is unhealthy:
**502 – Web server received an invalid response while acting as a gateway or proxy server.** ## Solution
-When you don't use a host name to access an HTTPS website, the back-end server will return the configured certificate on the default website, in case SNI is disabled. For an ILB ASE, the default certificate comes from the ILB certificate. If there are no configured certificates for the ILB, the certificate comes from the ASE App certificate.
+When you don't use a host name to access an HTTPS website, the backend server will return the configured certificate on the default website if SNI is disabled. For an ILB ASE, the default certificate comes from the ILB certificate. If there are no configured certificates for the ILB, the certificate comes from the ASE App certificate.
-When you use a fully qualified domain name (FQDN) to access the ILB, the back-end server will return the correct certificate that's uploaded in the HTTP settings. If that isn't the case , consider the following options:
+When you use a fully qualified domain name (FQDN) to access the ILB, the backend server will return the correct certificate that's uploaded in the HTTP settings. If that isn't the case, consider the following options:
-- Use FQDN in the back-end pool of the application gateway to point to the IP address of the ILB. This option only works if you have a private DNS zone or a custom DNS configured. Otherwise, you have to create an "A" record for a public DNS.
+- Use FQDN in the backend pool of the application gateway to point to the IP address of the ILB. This option only works if you have a private DNS zone or a custom DNS configured. Otherwise, you have to create an "A" record for a public DNS.
- Use the uploaded certificate on the ILB or the default certificate (ILB certificate) in the HTTP settings. The application gateway gets the certificate when it accesses the ILB's IP for the probe. -- Use a wildcard certificate on the ILB and the back-end server, so that for all the websites, the certificate is common. However, this solution is possible only for subdomains and not if each of the websites require different hostnames.
+- Use a wildcard certificate on the ILB and the backend server, so that for all the websites, the certificate is common. However, this solution is possible only for subdomains and not if each website requires different hostnames.
- Clear the **Use for App service** option for the application gateway if you're using the IP address of the ILB.
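For the first option, a hedged Azure PowerShell sketch might create a private DNS A record for the ILB and point the gateway's backend pool at that FQDN; the zone name follows the scenario above, while the IP, pool, and gateway names are hypothetical.

```azurepowershell
# Resolve the ILB's FQDN inside the VNet (requires a private DNS zone linked to the VNet).
New-AzPrivateDnsRecordSet -ResourceGroupName "myResourceGroupAG" -ZoneName "appgwtestase.com" `
    -Name "test" -RecordType A -Ttl 3600 `
    -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address "10.0.2.9")

# Use the FQDN, not the raw ILB IP, as the backend target.
$gw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
Set-AzApplicationGatewayBackendAddressPool -ApplicationGateway $gw -Name "aseBackendPool" `
    -BackendFqdns "test.appgwtestase.com"
Set-AzApplicationGateway -ApplicationGateway $gw
```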
application-gateway Create Multiple Sites Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-multiple-sites-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
1. On the **Backends** tab, select **Add a backend pool**.
application-gateway Create Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-ssl-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
1. On the **Backends** tab, select **Add a backend pool**.
application-gateway Create Url Route Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-url-route-portal.md
In this example, you create three virtual machines to be used as backend servers
### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, virtual machine scale sets, public IPs, internal IPs, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service.
1. On the **Backends** tab, select **Add a backend pool**.
application-gateway Custom Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/custom-error.md
For more information, see [Add-AzApplicationGatewayCustomError](/powershell/modu
## Next steps
-For information about Application Gateway diagnostics, see [Back-end health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md).
+For information about Application Gateway diagnostics, see [Backend health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md).
application-gateway Disabled Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/disabled-listeners.md
Title: Understanding disabled listeners description: The article explains the details of a disabled listener and ways to resolve the problem.-+ Last updated 02/22/2022-+
The SSL/TLS certificates for Azure Application Gateway's listeners can be referenced from a customer's Key Vault resource. Your application gateway must always have access to the linked key vault resource and its certificate object to ensure smooth operation of the TLS termination feature and the overall health of the gateway resource.
-It is important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. In case your application gateway is unable to access the associated key vault or locate its certificate object, it will automatically put that listener in a disabled state. The action is triggered only in the case of configuration errors. Transient connectivity problems do not have any impact on the listeners.
+It's important to consider any impact on your Application Gateway resource when making changes or revoking access to your Key Vault resource. If your application gateway is unable to access the associated key vault or locate its certificate object, it will automatically put that listener in a disabled state. The action is triggered only for configuration errors. Transient connectivity problems don't have any impact on the listeners.
A disabled listener doesn't affect traffic for other operational listeners on your Application Gateway. For example, HTTP listeners, or HTTPS listeners for which the PFX certificate file is uploaded directly to the Application Gateway resource, never go into a disabled state.
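If a listener was disabled because access to the key vault was revoked, restoring the gateway identity's read permission is usually the fix. A sketch, assuming a user-assigned managed identity and a vault that uses access policies rather than Azure RBAC (names are hypothetical):

```azurepowershell
$identity = Get-AzUserAssignedIdentity -ResourceGroupName "myResourceGroupAG" -Name "appgw-identity"

# The gateway needs at least "Get" on secrets to read the linked certificate.
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" `
    -ObjectId $identity.PrincipalId -PermissionsToSecrets Get
```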
application-gateway End To End Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/end-to-end-ssl-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Before you begin
-To configure end-to-end TLS with an application gateway, you need a certificate for the gateway. Certificates are also required for the back-end servers. The gateway certificate is used to derive a symmetric key in compliance with the TLS protocol specification. The symmetric key is then used to encrypt and decrypt the traffic sent to the gateway.
+To configure end-to-end TLS with an application gateway, you need a certificate for the gateway. Certificates are also required for the backend servers. The gateway certificate is used to derive a symmetric key in compliance with the TLS protocol specification. The symmetric key is then used to encrypt and decrypt the traffic sent to the gateway.
-For end-to-end TLS encryption, the right back-end servers must be allowed in the application gateway. To allow this access, upload the public certificate of the back-end servers, also known as Authentication Certificates (v1) or Trusted Root Certificates (v2), to the application gateway. Adding the certificate ensures that the application gateway communicates only with known back-end instances. This configuration further secures end-to-end communication.
+For end-to-end TLS encryption, the right backend servers must be allowed in the application gateway. To allow this access, upload the public certificate of the backend servers, also known as Authentication Certificates (v1) or Trusted Root Certificates (v2), to the application gateway. Adding the certificate ensures that the application gateway communicates only with known backend instances. This configuration further secures end-to-end communication.
To learn more, see [Overview of TLS termination and end to end TLS with Application Gateway](./ssl-overview.md). ## Create a new application gateway with end-to-end TLS
-To create a new application gateway with end-to-end TLS encryption, you'll need to first enable TLS termination while creating a new application gateway. This action enables TLS encryption for communication between the client and application gateway. Then, you'll need to put on the Safe Recipients list the certificates for the back-end servers in the HTTP settings. This configuration enables TLS encryption for communication between the application gateway and the back-end servers. That accomplishes end-to-end TLS encryption.
+To create a new application gateway with end-to-end TLS encryption, you'll need to first enable TLS termination while creating a new application gateway. This action enables TLS encryption for communication between the client and application gateway. Then, you'll need to add the certificates for the backend servers in the HTTP settings to the Safe Recipients list. This configuration enables TLS encryption for communication between the application gateway and the backend servers. That accomplishes end-to-end TLS encryption.
### Enable TLS termination while creating a new application gateway To learn more, see [enable TLS termination while creating a new application gateway](./create-ssl-portal.md).
-### Add authentication/root certificates of back-end servers
+### Add authentication/root certificates of backend servers
1. Select **All resources**, and then select **myAppGateway**.
To learn more, see [enable TLS termination while creating a new application gate
7. Select the certificate file in the **Upload CER certificate** box.
- For Standard and WAF (v1) application gateways, you should upload the public key of your back-end server certificate in .cer format.
+ For Standard and WAF (v1) application gateways, you should upload the public key of your backend server certificate in .cer format.
![Add certificate](./media/end-to-end-ssl-portal/addcert.png)
- For Standard_v2 and WAF_v2 application gateways, you should upload the root certificate of the back-end server certificate in .cer format. If the back-end certificate is issued by a well-known certificate authority (CA), you can select the **Use Well Known CA Certificate** check box, and then you don't have to upload a certificate.
+ For Standard_v2 and WAF_v2 application gateways, you should upload the root certificate of the backend server certificate in .cer format. If the backend certificate is issued by a well-known certificate authority (CA), you can select the **Use Well Known CA Certificate** check box, and then you don't have to upload a certificate.
![Add trusted root certificate](./media/end-to-end-ssl-portal/trustedrootcert-portal.png)
To learn more, see [enable TLS termination while creating a new application gate
## Enable end-to-end TLS for an existing application gateway
-To configure an existing application gateway with end-to-end TLS encryption, you must first enable TLS termination in the listener. This action enables TLS encryption for communication between the client and the application gateway. Then, put those certificates for back-end servers in the HTTP settings on the Safe Recipients list. This configuration enables TLS encryption for communication between the application gateway and the back-end servers. That accomplishes end-to-end TLS encryption.
+To configure an existing application gateway with end-to-end TLS encryption, you must first enable TLS termination in the listener. This action enables TLS encryption for communication between the client and the application gateway. Then, put those certificates for backend servers in the HTTP settings on the Safe Recipients list. This configuration enables TLS encryption for communication between the application gateway and the backend servers. That accomplishes end-to-end TLS encryption.
You'll need to use a listener with the HTTPS protocol and a certificate for enabling TLS termination. You can either use an existing listener that meets those conditions or create a new listener. If you choose the former option, you can ignore the following "Enable TLS termination in an existing application gateway" section and move directly to the "Add authentication/trusted root certificates for backend servers" section.
If you choose the latter option, apply the steps in the following procedure.
7. Select **OK** to save.
-### Add authentication/trusted root certificates of back-end servers
+### Add authentication/trusted root certificates of backend servers
1. Select **All resources**, and then select **myAppGateway**.
-2. Select **HTTP settings** from the left-side menu. You can either put certificates in an existing back-end HTTP setting on the Safe Recipients list or create a new HTTP setting. (In the next step, the certificate for the default HTTP setting, **appGatewayBackendHttpSettings**, is added to the Safe Recipients list.)
+2. Select **HTTP settings** from the left-side menu. You can either put certificates in an existing backend HTTP setting on the Safe Recipients list or create a new HTTP setting. (In the next step, the certificate for the default HTTP setting, **appGatewayBackendHttpSettings**, is added to the Safe Recipients list.)
3. Select **appGatewayBackendHttpSettings**.
If you choose the latter option, apply the steps in the following procedure.
7. Select the certificate file in the **Upload CER certificate** box.
- For Standard and WAF (v1) application gateways, you should upload the public key of your back-end server certificate in .cer format.
+ For Standard and WAF (v1) application gateways, you should upload the public key of your backend server certificate in .cer format.
![Add certificate](./media/end-to-end-ssl-portal/addcert.png)
- For Standard_v2 and WAF_v2 application gateways, you should upload the root certificate of the back-end server certificate in .cer format. If the back-end certificate is issued by a well-known CA, you can select the **Use Well Known CA Certificate** check box, and then you don't have to upload a certificate.
+ For Standard_v2 and WAF_v2 application gateways, you should upload the root certificate of the backend server certificate in .cer format. If the backend certificate is issued by a well-known CA, you can select the **Use Well Known CA Certificate** check box, and then you don't have to upload a certificate.
![Add trusted root certificate](./media/end-to-end-ssl-portal/trustedrootcert-portal.png)
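The same trusted root certificate flow can also be scripted with Azure PowerShell for Standard_v2/WAF_v2 gateways; a sketch with hypothetical names and a local path to the exported root CA certificate:

```azurepowershell
$gw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"

# Upload the backend's root CA certificate (.cer) as a trusted root certificate.
Add-AzApplicationGatewayTrustedRootCertificate -ApplicationGateway $gw `
    -Name "backendRootCA" -CertificateFile "C:\certs\rootCA.cer"
$rootCert = Get-AzApplicationGatewayTrustedRootCertificate -ApplicationGateway $gw -Name "backendRootCA"

# Reference it from the HTTPS backend setting so end-to-end TLS trusts the backend.
Set-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $gw `
    -Name "appGatewayBackendHttpSettings" -Port 443 -Protocol Https `
    -CookieBasedAffinity Disabled -TrustedRootCertificate $rootCert

Set-AzApplicationGateway -ApplicationGateway $gw
```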
application-gateway Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/features.md
For more information, see [Application Gateway Ingress Controller (AGIC)](ingres
## URL-based routing
-URL Path Based Routing allows you to route traffic to back-end server pools based on URL Paths of the request.
+URL Path Based Routing allows you to route traffic to backend server pools based on URL Paths of the request.
One of the scenarios is to route requests for different content types to different pools. For example, requests for `http://contoso.com/video/*` are routed to VideoServerPool, and `http://contoso.com/images/*` are routed to ImageServerPool. DefaultServerPool is selected if none of the path patterns match.
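A hedged Azure PowerShell sketch of that example, assuming the VideoServerPool, ImageServerPool, and DefaultServerPool pools and an HTTP setting already exist on the gateway (the path map still needs to be referenced from a path-based routing rule to take effect):

```azurepowershell
$gw       = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"
$settings = Get-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $gw -Name "myHTTPSetting"
$video    = Get-AzApplicationGatewayBackendAddressPool -ApplicationGateway $gw -Name "VideoServerPool"
$images   = Get-AzApplicationGatewayBackendAddressPool -ApplicationGateway $gw -Name "ImageServerPool"
$default  = Get-AzApplicationGatewayBackendAddressPool -ApplicationGateway $gw -Name "DefaultServerPool"

# Route /video/* and /images/* to their pools; everything else falls back to the default pool.
$videoRule = New-AzApplicationGatewayPathRuleConfig -Name "video" -Paths "/video/*" `
    -BackendAddressPool $video -BackendHttpSettings $settings
$imageRule = New-AzApplicationGatewayPathRuleConfig -Name "images" -Paths "/images/*" `
    -BackendAddressPool $images -BackendHttpSettings $settings

Add-AzApplicationGatewayUrlPathMapConfig -ApplicationGateway $gw -Name "contentPathMap" `
    -PathRules $videoRule, $imageRule `
    -DefaultBackendAddressPool $default -DefaultBackendHttpSettings $settings

Set-AzApplicationGateway -ApplicationGateway $gw
```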
HTTP headers allow the client and server to pass additional information with the
- Removing response header fields that can reveal sensitive information. - Stripping port information from X-Forwarded-For headers.
-Application Gateway and WAF v2 SKU supports the capability to add, remove, or update HTTP request and response headers, while the request and response packets move between the client and back-end pools. You can also rewrite URLs, query string parameters and host name. With URL rewrite and URL path-based routing, you can choose to either route requests to one of the backend pools based on the original path or the rewritten path, using the re-evaluate path map option.
+The Application Gateway and WAF v2 SKUs support the capability to add, remove, or update HTTP request and response headers while the request and response packets move between the client and backend pools. You can also rewrite URLs, query string parameters, and the host name. With URL rewrite and URL path-based routing, you can choose to either route requests to one of the backend pools based on the original path or the rewritten path, using the re-evaluate path map option.
It also provides you with the capability to add conditions to ensure the specified headers or URL are rewritten only when certain conditions are met. These conditions are based on the request and response information.
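As an illustrative sketch (not the only way to do it), a rewrite rule set that strips a response header which could reveal server details might look like this in Azure PowerShell; the rule set still needs to be attached to a routing rule, and all names are hypothetical.

```azurepowershell
$gw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"

# An empty header value removes the header from responses sent back to clients.
$headerCfg = New-AzApplicationGatewayRewriteRuleHeaderConfiguration -HeaderName "Server" -HeaderValue ""
$actionSet = New-AzApplicationGatewayRewriteRuleActionSet -ResponseHeaderConfiguration $headerCfg
$rule      = New-AzApplicationGatewayRewriteRule -Name "strip-server-header" -ActionSet $actionSet

Add-AzApplicationGatewayRewriteRuleSet -ApplicationGateway $gw -Name "securityRewrites" -RewriteRule $rule
Set-AzApplicationGateway -ApplicationGateway $gw
```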
For a complete list of application gateway limits, see [Application Gateway serv
The following table shows an average performance throughput for each application gateway v1 instance with SSL offload enabled:
-| Average back-end page response size | Small | Medium | Large |
+| Average backend page response size | Small | Medium | Large |
| -- | -- | -- | -- |
| 6 KB | 7.5 Mbps | 13 Mbps | 50 Mbps |
| 100 KB | 35 Mbps | 100 Mbps | 200 Mbps |
> [!NOTE]
-> These values are approximate values for an application gateway throughput. The actual throughput depends on various environment details, such as average page size, location of back-end instances, and processing time to serve a page. For exact performance numbers, you should run your own tests. These values are only provided for capacity planning guidance.
+> These values are approximate values for an application gateway throughput. The actual throughput depends on various environment details, such as average page size, location of backend instances, and processing time to serve a page. For exact performance numbers, you should run your own tests. These values are only provided for capacity planning guidance.
## Version feature comparison
application-gateway High Traffic Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/high-traffic-support.md
Title: Application Gateway high traffic volume support description: This article provides guidance to configure Azure Application Gateway in support of high network traffic volume scenarios. -+ Last updated 03/24/2020-+ # Application Gateway high traffic support
You can use Application Gateway with Web Application Firewall (WAF) for a scalable and secure way to manage traffic to your web applications.
-It is important that you scale your Application Gateway according to your traffic and with a bit of a buffer so that you are prepared for any traffic surges or spikes and minimizing the impact that it may have in your QoS. The following suggestions help you set up Application Gateway with WAF to handle extra traffic.
+It's important that you scale your Application Gateway according to your traffic, with a bit of a buffer, so that you're prepared for any traffic surges or spikes and can minimize the impact they may have on your QoS. The following suggestions help you set up Application Gateway with WAF to handle extra traffic.
Please check the [metrics documentation](./application-gateway-metrics.md) for the complete list of metrics offered by Application Gateway. See [visualize metrics](./application-gateway-metrics.md#metrics-visualization) in the Azure portal and the [Azure monitor documentation](../azure-monitor/alerts/alerts-metric.md) on how to set alerts for metrics. ## Scaling for Application Gateway v1 SKU (Standard/WAF SKU) ### Set your instance count based on your peak CPU usage
-If you are using a v1 SKU gateway, youΓÇÖll have the ability to set your Application Gateway up to 32 instances for scaling. Check your Application GatewayΓÇÖs CPU utilization in the past one month for any spikes above 80%, it is available as a metric for you to monitor. It is recommended that you set your instance count according to your peak usage and with a 10% to 20% additional buffer to account for any traffic spikes.
+If you're using a v1 SKU gateway, you can scale your Application Gateway up to 32 instances. Check your Application Gateway's CPU utilization over the past month for any spikes above 80%; it's available as a metric for you to monitor. We recommend that you set your instance count according to your peak usage, with an additional 10% to 20% buffer to account for any traffic spikes.
:::image type="content" source="./media/application-gateway-covid-guidelines/v1-cpu-utilization-inline.png" alt-text="V1 CPU utilization metrics" lightbox="./media/application-gateway-covid-guidelines/v1-cpu-utilization-exp.png":::
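If the metric shows sustained peaks, one way to raise the v1 instance count is with Azure PowerShell; a sketch with hypothetical names that sets a Standard Medium gateway to 10 instances (the v1 limit is 32):

```azurepowershell
$gw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"

# Peak usage plus a 10-20% buffer; capacity can be set up to 32 for v1 SKUs.
Set-AzApplicationGatewaySku -ApplicationGateway $gw -Name Standard_Medium -Tier Standard -Capacity 10
Set-AzApplicationGateway -ApplicationGateway $gw
```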
application-gateway How Application Gateway Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-application-gateway-works.md
This article explains how an [application gateway](overview.md) accepts incoming
Azure Application Gateway can be used as an internal application load balancer or as an internet-facing application load balancer. An internet-facing application gateway uses public IP addresses. The DNS name of an internet-facing application gateway is publicly resolvable to its public IP address. As a result, internet-facing application gateways can route client requests from the internet.
-Internal application gateways use only private IP addresses. If you are using a Custom or [Private DNS zone](../dns/private-dns-overview.md), the domain name should be internally resolvable to the private IP address of the Application Gateway. Therefore, internal load-balancers can only route requests from clients with access to a virtual network for the application gateway.
+Internal application gateways use only private IP addresses. If you're using a Custom or [Private DNS zone](../dns/private-dns-overview.md), the domain name should be internally resolvable to the private IP address of the Application Gateway. Therefore, internal load-balancers can only route requests from clients with access to a virtual network for the application gateway.
## How an application gateway routes a request
application-gateway How To Troubleshoot Application Gateway Session Affinity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-to-troubleshoot-application-gateway-session-affinity-issues.md
The problem in maintaining cookie-based session affinity may happen due to the f
- "Cookie-based Affinity" setting is not enabled - Your application cannot handle cookie-based affinity-- Application is using cookie-based affinity but requests still bouncing between back-end servers
+- Application is using cookie-based affinity but requests still bouncing between backend servers
### Check whether the "Cookie-based Affinity" setting is enabled
The application gateway can only perform session-based affinity by using a cooki
If the application cannot handle cookie-based affinity, you must use an external or internal Azure load balancer or another third-party solution.
-### Application is using cookie-based affinity but requests still bouncing between back-end servers
+### Application is using cookie-based affinity but requests still bouncing between backend servers
#### Symptom
-You have enabled the Cookie-based Affinity setting, when you access the Application Gateway by using a short name URL in Internet Explorer, for example: `http://website` , the request is still bouncing between back-end servers.
+You've enabled the Cookie-based Affinity setting. When you access the Application Gateway by using a short name URL in Internet Explorer, for example `http://website`, the request still bounces between backend servers.
To identify this issue, follow the instructions:
Use the web debugger of your choice. In this sample we will use Fiddler to captu
- **Example A:** You find a session log showing that the request is sent from the client and goes to the public IP address of the Application Gateway; select this log to view the details. On the right side, the data in the bottom box is what the Application Gateway returns to the client. Select the "RAW" tab and determine whether the client is receiving a "**Set-Cookie: ApplicationGatewayAffinity=** *ApplicationGatewayAffinityValue*." If there's no cookie, session affinity isn't set, or the Application Gateway isn't applying the cookie back to the client. > [!NOTE]
- > This ApplicationGatewayAffinity value is the cookie-id, that the Application Gateway sets for the client to be sent to a particular back-end server.
+ > This ApplicationGatewayAffinity value is the cookie-id, that the Application Gateway sets for the client to be sent to a particular backend server.
![Screenshot shows an example of details of a log entry with the Set-Cookie value highlighted.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-17.png) -- **Example B:** The next session log followed by the previous one is the client responding back to the Application Gateway, which has set the ApplicationGatewayAffinity. If the ApplicationGatewayAffinity cookie-id matches, the packet should be sent to the same back-end server that was used previously. Check the next several lines of http communication to see whether the client's ApplicationGatewayAffinity cookie is changing.
+- **Example B:** The next session log, which follows the previous one, is the client responding back to the Application Gateway that has set the ApplicationGatewayAffinity. If the ApplicationGatewayAffinity cookie-id matches, the packet should be sent to the same backend server that was used previously. Check the next several lines of HTTP communication to see whether the client's ApplicationGatewayAffinity cookie is changing.
![Screenshot shows an example of details of a log entry with a cookie highlighted.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-18.png)
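If you prefer not to use a web debugger, a quick PowerShell check of the affinity cookie is also possible; the URL below is a placeholder for your gateway's address.

```powershell
# Inspect the Set-Cookie header and the cookie jar returned by the gateway.
$uri      = "http://<your-app-gateway-ip-or-fqdn>/"
$response = Invoke-WebRequest -Uri $uri -SessionVariable webSession

$response.Headers["Set-Cookie"]            # should contain ApplicationGatewayAffinity=...
$webSession.Cookies.GetCookies($uri)       # shows the cookies the client would replay
```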
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
Azure Application Gateway shouldn't exhibit 500 response codes. Please open a su
HTTP 502 errors can have several root causes, for example: - NSG, UDR, or custom DNS is blocking access to backend pool members.-- Back-end VMs or instances of [virtual machine scale sets](../virtual-machine-scale-sets/overview.md) aren't responding to the default health probe.
+- Backend VMs or instances of [virtual machine scale sets](../virtual-machine-scale-sets/overview.md) aren't responding to the default health probe.
- Invalid or improper configuration of custom health probes.-- Azure Application Gateway's [back-end pool isn't configured or empty](application-gateway-troubleshooting-502.md#empty-backendaddresspool).
+- Azure Application Gateway's [backend pool isn't configured or empty](application-gateway-troubleshooting-502.md#empty-backendaddresspool).
- None of the VMs or instances in [virtual machine scale set are healthy](application-gateway-troubleshooting-502.md#unhealthy-instances-in-backendaddresspool). - [Request time-out or connectivity issues](application-gateway-troubleshooting-502.md#request-time-out) with user requests.
application-gateway Ingress Controller Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-annotations.md
Title: Application Gateway Ingress Controller annotations description: This article provides documentation on the annotations specific to the Application Gateway Ingress Controller. -+ Last updated 3/18/2022-+ # Annotations for Application Gateway Ingress Controller
application-gateway Ingress Controller Autoscale Pods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-autoscale-pods.md
Title: Autoscale AKS pods with Azure Application Gateway metrics description: This article provides instructions on how to scale your AKS backend pods using Application Gateway metrics and Azure Kubernetes Metric Adapter -+ Last updated 11/4/2019-+ # Autoscale your AKS pods using Application Gateway Metrics (Beta)
application-gateway Ingress Controller Cookie Affinity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-cookie-affinity.md
Title: Enable cookie based affinity with Application Gateway description: This article provides information on how to enable cookie-based affinity with an Application Gateway. -+ Last updated 11/4/2019-+ # Enable Cookie based affinity with an Application Gateway
application-gateway Ingress Controller Disable Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-disable-addon.md
Title: Disable and re-enable Application Gateway Ingress Controller add-on for Azure Kubernetes Service cluster description: This article provides information on how to disable and re-enable the AGIC add-on for your AKS cluster -+ Last updated 06/10/2020-+ # Disable and re-enable AGIC add-on for your AKS cluster
application-gateway Ingress Controller Expose Websocket Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-expose-websocket-server.md
Title: Expose a WebSocket server to Application Gateway description: This article provides information on how to expose a WebSocket server to Application Gateway with ingress controller for AKS clusters. -+ Last updated 11/4/2019-+ # Expose a WebSocket server to Application Gateway
curl -i -N -H "Connection: Upgrade" \
## WebSocket Health Probes
-If your deployment does not explicitly define health probes, Application Gateway would attempt an HTTP GET on your WebSocket server endpoint.
+If your deployment doesn't explicitly define health probes, Application Gateway would attempt an HTTP GET on your WebSocket server endpoint.
Depending on the server implementation ([here's one we love](https://github.com/gorilla/websocket/blob/master/examples/chat/main.go)), WebSocket-specific headers may be required (`Sec-Websocket-Version`, for instance).
-Since Application Gateway does not add WebSocket headers, the Application Gateway's health probe response from your WebSocket server will most likely be `400 Bad Request`.
+Since Application Gateway doesn't add WebSocket headers, the Application Gateway's health probe response from your WebSocket server will most likely be `400 Bad Request`.
As a result, Application Gateway will mark your pods as unhealthy, which will eventually result in a `502 Bad Gateway` for the consumers of the WebSocket server. To avoid this, you may need to add an HTTP GET handler for a health check to your server (`/health` for instance, which returns `200 OK`).
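If you manage the gateway's probe directly with Azure PowerShell (rather than letting AGIC or pod readiness probes drive it), a custom probe aimed at such a `/health` endpoint might be sketched as follows; names are hypothetical, and the probe must also be referenced from the backend HTTP setting.

```azurepowershell
$gw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"

# Probe a plain HTTP GET /health endpoint instead of the WebSocket path.
Add-AzApplicationGatewayProbeConfig -ApplicationGateway $gw -Name "ws-health" `
    -Protocol Http -Path "/health" -Interval 30 -Timeout 30 -UnhealthyThreshold 3 `
    -PickHostNameFromBackendHttpSettings

Set-AzApplicationGateway -ApplicationGateway $gw
```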
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md
Title: Create an ingress controller with an existing Application Gateway description: This article provides information on how to deploy an Application Gateway Ingress Controller with an existing Application Gateway. -+ Last updated 11/4/2019-+ # Install an Application Gateway Ingress Controller (AGIC) using an existing Application Gateway
kubectl get AzureIngressProhibitedTargets prohibit-all-targets -o yaml
``` The object `prohibit-all-targets`, as the name implies, prohibits AGIC from changing config for *any* host and path.
-Helm install with `appgw.shared=true` will deploy AGIC, but will not make any changes to Application Gateway.
+Helm install with `appgw.shared=true` will deploy AGIC, but won't make any changes to Application Gateway.
### Broaden permissions
application-gateway Ingress Controller Letsencrypt Certificate Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway.md
Follow the steps below to install [cert-manager](https://docs.cert-manager.io) o
# certificates, and issues related to your account. email: <YOUR.EMAIL@ADDRESS> # ACME server URL for Let's Encrypt's staging environment.
- # The staging environment will not issue trusted certificates but is
+ # The staging environment won't issue trusted certificates but is
# used to ensure that the verification process is working properly # before moving to production server: https://acme-staging-v02.api.letsencrypt.org/directory
application-gateway Ingress Controller Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-migration.md
Title: How to migrate from Azure Application Gateway Ingress Controller Helm to AGIC add-on description: This article provides instructions on how to migrate from AGIC deployed through Helm to AGIC deployed as an AKS add-on -+ Last updated 03/02/2021-+ # Migrate from AGIC Helm to AGIC add-on
appgwId=$(az network application-gateway show -n myApplicationGateway -g myResou
``` ## Delete AGIC Helm from your AKS cluster
-Through Azure CLI, delete your AGIC Helm deployment from your cluster. You'll need to delete the AGIC Helm deployment first before you can enable the AGIC AKS add-on. Please note that any changes that occur within your AKS cluster between the time of deleting your AGIC Helm deployment and the time you enable the AGIC add-on won't be reflected on your Application Gateway, and therefore this migration process should be done outside of business hours to minimize impact. Application Gateway will continue to have the last configuration applied by AGIC so existing routing rules will not be affected.
+Through Azure CLI, delete your AGIC Helm deployment from your cluster. You'll need to delete the AGIC Helm deployment before you can enable the AGIC AKS add-on. Any changes that occur within your AKS cluster between the time you delete your AGIC Helm deployment and the time you enable the AGIC add-on won't be reflected on your Application Gateway, so this migration should be done outside of business hours to minimize impact. Application Gateway will continue to have the last configuration applied by AGIC, so existing routing rules won't be affected.
## Enable AGIC add-on using your existing Application Gateway You can now enable the AGIC add-on in your AKS cluster to target your existing Application Gateway through Azure CLI or Portal. Run the following Azure CLI command to enable the AGIC add-on in your AKS cluster. The example enables the add-on in a cluster called *myCluster*, in a resource group called *myResourceGroup*, using the Application Gateway resource ID *appgwId* we saved above in the earlier step.
application-gateway Ingress Controller Multiple Namespace Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-multiple-namespace-support.md
Title: Enable multiple namespace supports for Application Gateway Ingress Controller description: This article provides information on how to enable multiple namespace support in a Kubernetes cluster with an Application Gateway Ingress Controller. -+ Last updated 11/4/2019-+ # Enable multiple Namespace support in an AKS cluster with Application Gateway Ingress Controller
application-gateway Ingress Controller Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-overview.md
Title: What is Azure Application Gateway Ingress Controller? description: This article provides an introduction to what Application Gateway Ingress Controller is. -+ Last updated 03/02/2021-+ # What is Application Gateway Ingress Controller?
The Application Gateway Ingress Controller (AGIC) is a Kubernetes application, w
The Ingress Controller runs in its own pod on the customer's AKS. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to Application Gateway-specific configuration and applied to the [Azure Resource Manager (ARM)](../azure-resource-manager/management/overview.md). ## Benefits of Application Gateway Ingress Controller
-AGIC helps eliminate the need to have another load balancer/public IP in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster. Application Gateway talks to pods using their private IP directly and does not require NodePort or KubeProxy services. This also brings better performance to your deployments.
+AGIC helps eliminate the need to have another load balancer/public IP in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster. Application Gateway talks to pods using their private IP directly and doesn't require NodePort or KubeProxy services. This also brings better performance to your deployments.
Ingress Controller is supported exclusively by Standard_v2 and WAF_v2 SKUs, which also brings you autoscaling benefits. Application Gateway can react in response to an increase or decrease in traffic load and scale accordingly, without consuming any resources from your AKS cluster.
application-gateway Ingress Controller Private Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-private-ip.md
Title: Use private IP address for internal routing for an ingress endpoint description: This article provides information on how to use private IPs for internal routing and thus exposing the Ingress endpoint within a cluster to the rest of the VNet. -+ Last updated 11/4/2019-+ # Use private IP for internal routing for an Ingress endpoint
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
When you're using a restricted Key Vault, use the following steps to configure A
> [!Note] > If you deploy the Application Gateway instance via an ARM template by using either the Azure CLI or PowerShell, or via an Azure application deployed from the Azure portal, the SSL certificate is stored in the Key Vault as a Base64-encoded PFX file. You must complete the steps in [Use Azure Key Vault to pass secure parameter value during deployment](../azure-resource-manager/templates/key-vault-parameter.md). >
-> It's particularly important to set `enabledForTemplateDeployment` to `true`. The certificate might or might not have a password. In the case of a certificate with a password, the following example shows a possible configuration for the `sslCertificates` entry in `properties` for the ARM template configuration for Application Gateway.
+> It's particularly important to set `enabledForTemplateDeployment` to `true`. The certificate might or might not have a password. For a certificate with a password, the following example shows a possible configuration for the `sslCertificates` entry in `properties` for the ARM template configuration for Application Gateway.
> > ``` > "sslCertificates": [
application-gateway Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/log-analytics.md
Once your Application Gateway WAF is operational, you can enable logs to inspect
## Import WAF logs
-To import your firewall logs into Log Analytics, see [Back-end health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md#diagnostic-logging). When you have the firewall logs in your Log Analytics workspace, you can view data, write queries, create visualizations, and add them to your portal dashboard.
+To import your firewall logs into Log Analytics, see [Backend health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md#diagnostic-logging). When you have the firewall logs in your Log Analytics workspace, you can view data, write queries, create visualizations, and add them to your portal dashboard.
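As a hedged sketch, sending the access and firewall logs to a Log Analytics workspace with Azure PowerShell could look like the following; the resource names are hypothetical, and newer Az.Monitor versions expose New-AzDiagnosticSetting instead.

```azurepowershell
$appGwId     = (Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG").Id
$workspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName "myResourceGroupAG" -Name "myWorkspace").ResourceId

# Route the selected log categories to the workspace.
Set-AzDiagnosticSetting -ResourceId $appGwId -WorkspaceId $workspaceId -Enabled $true `
    -Category ApplicationGatewayAccessLog, ApplicationGatewayFirewallLog
```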
## Explore data with examples
Once you create a query, you can add it to your dashboard. Select the **Pin to
## Next steps
-[Back-end health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md)
+[Backend health, diagnostic logs, and metrics for Application Gateway](application-gateway-diagnostics.md)
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
No. The script doesn't replicate this configuration for v2. You must add the lo
### Does this script support certificates uploaded to Azure Key Vault?
-No. Currently the script does not support certificates in KeyVault. However, this is being considered for a future version.
+No. Currently the script doesn't support certificates in KeyVault. However, this is being considered for a future version.
### I ran into some issues with using this script. How can I get help?
application-gateway Monitor Application Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway-reference.md
For reference, see a list of [all resource logs category types supported in Azur
> [!NOTE] > The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](#metrics) for performance data.
-For more information, see [Back-end health and diagnostic logs for Application Gateway](application-gateway-diagnostics.md#access-log)
+For more information, see [Backend health and diagnostic logs for Application Gateway](application-gateway-diagnostics.md#access-log)
Resource Provider and Type: [Microsoft.Network/applicationGateways](../azure-mon
|:--|:--|:--|
| **Activitylog** | Activity log | Activity log entries are collected by default. You can use [Azure activity logs](../azure-monitor/essentials/activity-log.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. |
| **ApplicationGatewayAccessLog** | Access log | You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP address, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property. |
-| **ApplicationGatewayPerformanceLog**|Performance log|You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, total requests served, failed request count, and healthy and unhealthy back-end instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](#metrics) for performance data.|
+| **ApplicationGatewayPerformanceLog**|Performance log|You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, total requests served, failed request count, and healthy and unhealthy backend instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](#metrics) for performance data.|
|**ApplicationGatewayFirewallLog**|Firewall log|You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds.|
application-gateway Monitor Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway.md
When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure Application Gateway. Azure Application Gateway uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+This article describes the monitoring data generated by Azure Application Gateway. Azure Application Gateway uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
Azure Monitor alerts proactively notify you when important conditions are found
-If you are creating or running an application which use Application Gateway [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer additional types of alerts.
+If you're creating or running an application that uses Application Gateway, [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer additional types of alerts.
The following tables list common and recommended alert rules for Application Gateway.
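As one hedged example of turning such a recommendation into an alert rule, the CLI sketch below creates a metric alert on the unhealthy host count; the names, scope, window, and threshold are placeholder assumptions rather than values from the article.

```azurecli
# Alert when any backend instance is reported unhealthy over a 5-minute window.
az monitor metrics alert create \
  --name appgw-unhealthy-hosts \
  --resource-group <resource-group> \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/applicationGateways/<appgw-name>" \
  --condition "avg UnhealthyHostCount > 0" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "Backend instances reported unhealthy"
```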
application-gateway Mutual Authentication Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-certificate-management.md
Title: Export trusted client CA certificate chain for client authentication
description: Learn how to export a trusted client CA certificate chain for client authentication on Azure Application Gateway -+ Last updated 03/31/2021-+ # Export a trusted client CA certificate chain to use with client authentication
application-gateway Mutual Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-overview.md
Title: Overview of mutual authentication on Azure Application Gateway description: This article is an overview of mutual authentication on Application Gateway. -+ Last updated 03/30/2021 -+ # Overview of mutual authentication with Application Gateway
application-gateway Mutual Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-portal.md
Title: Configure mutual authentication on Azure Application Gateway through portal description: Learn how to configure an Application Gateway to have mutual authentication through portal -+ Last updated 02/18/2022-+ # Configure mutual authentication with Application Gateway through portal
application-gateway Mutual Authentication Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-powershell.md
Title: Configure mutual authentication on Azure Application Gateway through PowerShell description: Learn how to configure an Application Gateway to have mutual authentication through PowerShell -+ Last updated 02/18/2022-+
application-gateway Mutual Authentication Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-troubleshooting.md
Title: Troubleshoot mutual authentication on Azure Application Gateway description: Learn how to troubleshoot mutual authentication on Application Gateway -+ Last updated 02/18/2022-+ # Troubleshooting mutual authentication errors in Application Gateway
There is certificate data that is missing. The certificate uploaded could have b
#### Solution
-Validate that the certificate file uploaded does not have any missing data.
+Validate that the certificate file uploaded doesn't have any missing data.
### Error code: ApplicationGatewayTrustedClientCertificateMustNotHavePrivateKey
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
The following table compares the features available with each SKU.
| Proxy NTLM authentication | &#x2713; | | > [!NOTE]
-> The autoscaling v2 SKU now supports [default health probes](application-gateway-probe-overview.md#default-health-probe) to automatically monitor the health of all resources in its back-end pool and highlight those backend members that are considered unhealthy. The default health probe is automatically configured for backends that don't have any custom probe configuration. To learn more, see [health probes in application gateway](application-gateway-probe-overview.md).
+> The autoscaling v2 SKU now supports [default health probes](application-gateway-probe-overview.md#default-health-probe) to automatically monitor the health of all resources in its backend pool and highlight those backend members that are considered unhealthy. The default health probe is automatically configured for backends that don't have any custom probe configuration. To learn more, see [health probes in application gateway](application-gateway-probe-overview.md).
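To see what the default (or a custom) probe currently reports for each backend member, you can query the backend health directly. A minimal sketch, assuming placeholder resource names:

```azurecli
# Show per-member backend health as reported by the configured health probes.
az network application-gateway show-backend-health \
  --resource-group <resource-group> \
  --name <appgw-name> \
  --output json
```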
## Differences from v1 SKU
application-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview.md
This type of routing is known as application layer (OSI layer 7) load balancing.
>[!NOTE] > Azure provides a suite of fully managed load-balancing solutions for your scenarios.
-> * If you are looking to do DNS based global routing and do **not** have requirements for Transport Layer Security (TLS) protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review [Traffic Manager](../traffic-manager/traffic-manager-overview.md).
+> * If you're looking to do DNS based global routing and do **not** have requirements for Transport Layer Security (TLS) protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review [Traffic Manager](../traffic-manager/traffic-manager-overview.md).
> * If you need to optimize global routing of your web traffic and optimize top-tier end-user performance and reliability through quick global failover, see [Front Door](../frontdoor/front-door-overview.md). > * To do transport layer load balancing, review [Load Balancer](../load-balancer/load-balancer-overview.md). >
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
A private endpoint is a network interface that uses a private IP address from th
1. Select **Create**. > [!Note]
-> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively utilizing the respected frontend IP configuration. Frontend IP configurations without an associated listener will not be shown as a _Target sub-resource_.
+> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively utilizing the respective frontend IP configuration. Frontend IP configurations without an associated listener won't be shown as a _Target sub-resource_.
> [!Note]
-> If you are provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID, along with sub-resource to your frontend configuration. For example, if the frontend configuration of the gateway was named _PrivateFrontendIp_, the resource ID would be as follows: _/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname/frontendIPConfigurations/PrivateFrontendIp_.
+> If you're provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID, along with the sub-resource name of your frontend configuration. For example, if the frontend configuration of the gateway was named _PrivateFrontendIp_, the resource ID would be as follows: _/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname/frontendIPConfigurations/PrivateFrontendIp_.
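For reference, a cross-tenant private endpoint targeting that frontend configuration could look roughly like the CLI sketch below. The endpoint names, virtual network, and manual-approval flow are assumptions, and the `--group-id` is the frontend configuration name from the resource ID above.

```azurecli
# Create a private endpoint in the consumer tenant/VNet that targets the gateway's private frontend.
az network private-endpoint create \
  --name appgw-private-endpoint \
  --resource-group <endpoint-resource-group> \
  --vnet-name <endpoint-vnet> \
  --subnet <endpoint-subnet> \
  --private-connection-resource-id "/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname" \
  --group-id PrivateFrontendIp \
  --connection-name appgw-plink-connection \
  --manual-request true
```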
# [Azure PowerShell](#tab/powershell)
application-gateway Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link.md
Four components are required to implement Private Link with Application Gateway:
- API version 2020-03-01 or later should be used to configure Private Link configurations.
- Static IP allocation method in the Private Link Configuration object isn't supported.
- The subnet used for PrivateLinkConfiguration cannot be the same as the Application Gateway subnet.
-- Private link configuration for Application Gateway does not expose the "Alias" property and must be referenced via resource URI.
-- Private Endpoint creation does not create a \*.privatelink DNS record/zone. All DNS records should be entered in existing zones used for your Application Gateway.
+- Private link configuration for Application Gateway doesn't expose the "Alias" property and must be referenced via resource URI.
+- Private Endpoint creation doesn't create a \*.privatelink DNS record/zone. All DNS records should be entered in existing zones used for your Application Gateway.
- Azure Front Door and Application Gateway do not support chaining via Private Link.
- Source IP address and x-forwarded-for headers will contain the Private link IP addresses.
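Putting those components together, a private link configuration can be added to an existing gateway frontend with the CLI. The configuration name, subnet, and address prefix below are placeholders, not values from the article; per the limitations above, the allocation method stays dynamic and the subnet must differ from the Application Gateway subnet.

```azurecli
# Add a private link configuration to the gateway's private frontend IP configuration.
az network application-gateway private-link add \
  --resource-group <resource-group> \
  --gateway-name <appgw-name> \
  --name appgw-private-link \
  --frontend-ip <frontend-ip-configuration-name> \
  --subnet <private-link-subnet-name> \
  --subnet-prefix 10.0.4.0/24
```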
application-gateway Proxy Buffers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/proxy-buffers.md
Title: Configure Request and Response Buffers description: Learn how to configure Request and Response buffers for your Azure Application Gateway. -+ Last updated 08/03/2022-+ #Customer intent: As a user, I want to know how can I disable/enable proxy buffers.
application-gateway Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-bicep.md
Title: 'Quickstart: Direct web traffic using Bicep'
description: In this quickstart, you learn how to use Bicep to create an Azure Application Gateway that directs web traffic to virtual machines in a backend pool. --++ Last updated 04/14/2022
In this quickstart, you use Bicep to create an Azure Application Gateway. Then y
## Review the Bicep file
-This Bicep file creates a simple setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+This Bicep file creates a simple setup with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-qs/)
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
In this quickstart, you use Azure CLI to create an application gateway. Then you test it to make sure it works correctly.
-The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
:::image type="content" source="media/quick-create-portal/application-gateway-qs-resources.png" alt-text="application gateway resources":::
az network public-ip create \
## Create the backend servers
-A backend can have NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you create two virtual machines to use as backend servers for the application gateway. You also install NGINX on the virtual machines to test the application gateway.
+A backend can have NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you create two virtual machines to use as backend servers for the application gateway. You also install NGINX on the virtual machines to test the application gateway.
#### Create two virtual machines
application-gateway Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md
Title: 'Quickstart: Direct web traffic using the portal'
description: In this quickstart, you learn how to use the Azure portal to create an Azure Application Gateway that directs web traffic to virtual machines in a backend pool. --++ Last updated 10/13/2022
# Quickstart: Direct web traffic with Azure Application Gateway - Azure portal
-In this quickstart, you use the Azure portal to create an [Azure Application Gateway](overview.md) and test it to make sure it works correctly. You will assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, a simple setup is used with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines (VMs) in the backend pool.
+In this quickstart, you use the Azure portal to create an [Azure Application Gateway](overview.md) and test it to make sure it works correctly. You will assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, a simple setup is used with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines (VMs) in the backend pool.
![Quickstart setup](./media/quick-create-portal/application-gateway-qs-resources.png)
You'll create the application gateway using the tabs on the **Create application
### Backends tab
-The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, Virtual Machine Scale Sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
+The backend pool is used to route requests to the backend servers that serve the request. Backend pools can be composed of NICs, Virtual Machine Scale Sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you'll create an empty backend pool with your application gateway and then add backend targets to the backend pool.
1. On the **Backends** tab, select **Add a backend pool**.
application-gateway Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-powershell.md
In this quickstart, you use Azure PowerShell to create an application gateway. Then you test it to make sure it works correctly.
-The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public front-end IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
:::image type="content" source="media/quick-create-portal/application-gateway-qs-resources.png" alt-text="application gateway resources":::
New-AzApplicationGateway `
### Backend servers
-Now that you have created the Application Gateway, create the backend virtual machines which will host the websites. A backend can be composed of NICs, virtual machine scale sets, public IP address, internal IP address, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service.
+Now that you have created the Application Gateway, create the backend virtual machines which will host the websites. A backend can be composed of NICs, virtual machine scale sets, public IP address, internal IP address, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service.
In this example, you create two virtual machines to use as backend servers for the application gateway. You also install IIS on the virtual machines to verify that Azure successfully created the application gateway.
application-gateway Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
## Review the template
-For the sake of simplicity, this template creates a simple setup with a public front-end IP, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
+For the sake of simplicity, this template creates a simple setup with a public frontend IP, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/ag-docs-qs/)
application-gateway Redirect External Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-external-site-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Redirect Internal Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Rewrite Http Headers Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-portal.md
Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account
## Configure header rewrite
-In this example, we'll modify a redirection URL by rewriting the location header in the HTTP response sent by a back-end application.
+In this example, we'll modify a redirection URL by rewriting the location header in the HTTP response sent by a backend application.
1. Select **All resources**, and then select your application gateway.
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
Application Gateway allows you to rewrite selected content of requests and respo
HTTP headers allow a client and server to pass additional information with a request or response. By rewriting these headers, you can accomplish important tasks, such as adding security-related header fields like HSTS/X-XSS-Protection, removing response header fields that might reveal sensitive information, and removing port information from X-Forwarded-For headers.
-Application Gateway allows you to add, remove, or update HTTP request and response headers while the request and response packets move between the client and back-end pools.
+Application Gateway allows you to add, remove, or update HTTP request and response headers while the request and response packets move between the client and backend pools.
To learn how to rewrite request and response headers with Application Gateway using Azure portal, see [here](rewrite-url-portal.md).
To capture a substring for later use, put parentheses around the subpattern that
* (\d)+ # Match a digit one or more times, capturing the last into group 1 > [!Note]
-> Use of */* to prefix and suffix the pattern should not be specified in the pattern to match value. For example, (\d)(\d) will match two digits. /(\d)(\d)/ will not match two digits.
+> Use of */* to prefix and suffix the pattern should not be specified in the pattern to match value. For example, (\d)(\d) will match two digits. /(\d)(\d)/ won't match two digits.
Once captured, you can reference them in the action set using the following format:
Once captured, you can reference them in the action set using the following form
* For a server variable, you must use {var_serverVariableName_groupNumber}. For example, {var_uri_path_1} or {var_uri_path_2} > [!Note]
-> The case of the condition variable needs to match case of the capture variable. For example, if my condition variable is User-Agent, my capture variable must be in the case of User-Agent (i.e. {http_req_User-Agent_2}). If my condition variable is defined as user-agent, my capture variable must be in the case of user-agent (i.e. {http_req_user-agent_2}).
+> The case of the condition variable needs to match the case of the capture variable. For example, if my condition variable is User-Agent, my capture variable must be for User-Agent (i.e. {http_req_User-Agent_2}). If my condition variable is defined as user-agent, my capture variable must be for user-agent (i.e. {http_req_user-agent_2}).
If you want to use the whole value, you should not mention the number. Simply use the format {http_req_headerName}, etc. without the groupNumber.
A rewrite rule set contains:
* Enabling 'Re-evaluate path map' isn't allowed for basic request routing rules. This is to prevent infinite evaluation loop for a basic routing rule.
-* There needs to be at least 1 conditional rewrite rule or 1 rewrite rule which does not have 'Re-evaluate path map' enabled for path-based routing rules to prevent infinite evaluation loop for a path-based routing rule.
+* There needs to be at least 1 conditional rewrite rule or 1 rewrite rule which doesn't have 'Re-evaluate path map' enabled for path-based routing rules to prevent infinite evaluation loop for a path-based routing rule.
* Incoming requests would be terminated with a 500 error code in case a loop is created dynamically based on client inputs. The Application Gateway will continue to serve other requests without any degradation in such a scenario.
Here, with only header rewrite configured, the WAF evaluation will be done on `"
#### Remove port information from the X-Forwarded-For header
-Application Gateway inserts an X-Forwarded-For header into all requests before it forwards the requests to the backend. This header is a comma-separated list of IP ports. There might be scenarios in which the back-end servers only need the headers to contain IP addresses. You can use header rewrite to remove the port information from the X-Forwarded-For header. One way to do this is to set the header to the add_x_forwarded_for_proxy server variable. Alternatively, you can also use the variable client_ip:
+Application Gateway inserts an X-Forwarded-For header into all requests before it forwards the requests to the backend. This header is a comma-separated list of IP ports. There might be scenarios in which the backend servers only need the headers to contain IP addresses. You can use header rewrite to remove the port information from the X-Forwarded-For header. One way to do this is to set the header to the add_x_forwarded_for_proxy server variable. Alternatively, you can also use the variable client_ip:
![Remove port](./media/rewrite-http-headers-url/remove-port.png)
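The same rewrite can be expressed with the CLI. A hedged sketch, assuming placeholder gateway and rule-set names, that sets X-Forwarded-For to the port-free `add_x_forwarded_for_proxy` server variable:

```azurecli
# Create a rewrite rule set and a rule that strips port numbers from X-Forwarded-For.
az network application-gateway rewrite-rule set create \
  --resource-group <resource-group> \
  --gateway-name <appgw-name> \
  --name strip-xff-port

az network application-gateway rewrite-rule create \
  --resource-group <resource-group> \
  --gateway-name <appgw-name> \
  --rule-set-name strip-xff-port \
  --name remove-xff-port \
  --sequence 100 \
  --request-headers "X-Forwarded-For={var_add_x_forwarded_for_proxy}"
```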
Modification of a redirect URL can be useful under certain circumstances. For e
> [!WARNING] > The need to modify a redirection URL sometimes comes up in the context of a configuration whereby Application Gateway is configured to override the hostname towards the backend. The hostname as seen by the backend is in that case different from the hostname as seen by the browser. In this situation, the redirect would not use the correct hostname. This configuration isn't recommended. >
-> The limitations and implications of such a configuration are described in [Preserve the original HTTP host name between a reverse proxy and its back-end web application](/azure/architecture/best-practices/host-name-preservation). The recommended setup for App Service is to follow the instructions for **"Custom Domain (recommended)"** in [Configure App Service with Application Gateway](configure-web-app.md). Rewriting the location header on the response as described in the below example should be considered a workaround and does not address the root cause.
+> The limitations and implications of such a configuration are described in [Preserve the original HTTP host name between a reverse proxy and its backend web application](/azure/architecture/best-practices/host-name-preservation). The recommended setup for App Service is to follow the instructions for **"Custom Domain (recommended)"** in [Configure App Service with Application Gateway](configure-web-app.md). Rewriting the location header on the response as described in the example below should be considered a workaround and doesn't address the root cause.
When the app service sends a redirection response, it uses the same hostname in the location header of its response as the one in the request it receives from the application gateway. So the client will make the request directly to `contoso.azurewebsites.net/path2` instead of going through the application gateway (`contoso.com/path2`). Bypassing the application gateway isn't desirable.
You can fix several security vulnerabilities by implementing necessary headers i
### Delete unwanted headers
-You might want to remove headers that reveal sensitive information from an HTTP response. For example, you might want to remove information like the back-end server name, operating system, or library details. You can use the application gateway to remove these headers:
+You might want to remove headers that reveal sensitive information from an HTTP response. For example, you might want to remove information like the backend server name, operating system, or library details. You can use the application gateway to remove these headers:
![Deleting header](./media/rewrite-http-headers-url/remove-headers.png)
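A rough CLI equivalent of that configuration is shown below; the specific header names (Server, X-Powered-By) and rule names are illustrative assumptions, and setting a header to an empty value removes it from the response.

```azurecli
# Remove response headers that could reveal backend implementation details.
az network application-gateway rewrite-rule create \
  --resource-group <resource-group> \
  --gateway-name <appgw-name> \
  --rule-set-name <rewrite-set-name> \
  --name strip-sensitive-headers \
  --sequence 110 \
  --response-headers "Server=" "X-Powered-By="
```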
To accomplish scenarios where you want to choose the backend pool based on the v
:::image type="content" source="./media/rewrite-http-headers-url/url-scenario1-3.png" alt-text="URL rewrite scenario 1-3.":::
-Now, if the user requests *contoso.com/listing?category=any*, then it will be matched with the default path since none of the path patterns in the path map (/listing1, /listing2, /listing3) will match. Since you associated the above rewrite set with this path, this rewrite set will be evaluated. As the query string will not match the condition in any of the 3 rewrite rules in this rewrite set, no rewrite action will take place and therefore, the request will be routed unchanged to the backend associated with the default path (which is *GenericList*).
+Now, if the user requests *contoso.com/listing?category=any*, then it will be matched with the default path since none of the path patterns in the path map (/listing1, /listing2, /listing3) will match. Since you associated the above rewrite set with this path, this rewrite set will be evaluated. As the query string won't match the condition in any of the 3 rewrite rules in this rewrite set, no rewrite action will take place and therefore, the request will be routed unchanged to the backend associated with the default path (which is *GenericList*).
If the user requests *contoso.com/listing?category=shoes*, then again the default path will be matched. However, in this case the condition in the first rule will match and therefore, the action associated with the condition will be executed which will rewrite the URL path to /*listing1* and reevaluate the path-map. When the path-map is reevaluated, the request will now match the path associated with pattern */listing1* and the request will be routed to the backend associated with this pattern, which is ShoesListBackendPool.
For a step-by-step guide to achieve the scenario described above, see [Rewrite U
### URL rewrite vs URL redirect
-In the case of a URL rewrite, Application Gateway rewrites the URL before the request is sent to the backend. This will not change what users see in the browser because the changes are hidden from the user.
+For a URL rewrite, Application Gateway rewrites the URL before the request is sent to the backend. This won't change what users see in the browser because the changes are hidden from the user.
-In the case of a URL redirect, Application Gateway sends a redirect response to the client with the new URL. That, in turn, requires the client to resend its request to the new URL provided in the redirect. The URL that the user sees in the browser will update to the new URL.
+For a URL redirect, Application Gateway sends a redirect response to the client with the new URL. That, in turn, requires the client to resend its request to the new URL provided in the redirect. The URL that the user sees in the browser will update to the new URL.
:::image type="content" source="./media/rewrite-http-headers-url/url-rewrite-vs-redirect.png" alt-text="Rewrite vs Redirect.":::
## Limitations
-- If a response has more than one header with the same name, then rewriting the value of one of those headers will result in dropping the other headers in the response. This can usually happen with Set-Cookie header since you can have more than one Set-Cookie header in a response. One such scenario is when you are using an app service with an application gateway and have configured cookie-based session affinity on the application gateway. In this case the response will contain two Set-Cookie headers: one used by the app service, for example: `Set-Cookie: ARRAffinity=ba127f1caf6ac822b2347cc18bba0364d699ca1ad44d20e0ec01ea80cda2a735;Path=/;HttpOnly;Domain=sitename.azurewebsites.net` and another for application gateway affinity, for example, `Set-Cookie: ApplicationGatewayAffinity=c1a2bd51lfd396387f96bl9cc3d2c516; Path=/`. Rewriting one of the Set-Cookie headers in this scenario can result in removing the other Set-Cookie header from the response.
+- If a response has more than one header with the same name, then rewriting the value of one of those headers will result in dropping the other headers in the response. This can usually happen with Set-Cookie header since you can have more than one Set-Cookie header in a response. One such scenario is when you're using an app service with an application gateway and have configured cookie-based session affinity on the application gateway. In this case the response will contain two Set-Cookie headers: one used by the app service, for example: `Set-Cookie: ARRAffinity=ba127f1caf6ac822b2347cc18bba0364d699ca1ad44d20e0ec01ea80cda2a735;Path=/;HttpOnly;Domain=sitename.azurewebsites.net` and another for application gateway affinity, for example, `Set-Cookie: ApplicationGatewayAffinity=c1a2bd51lfd396387f96bl9cc3d2c516; Path=/`. Rewriting one of the Set-Cookie headers in this scenario can result in removing the other Set-Cookie header from the response.
- Rewrites aren't supported when the application gateway is configured to redirect the requests or to show a custom error page.
- Request header names can contain alphanumeric characters and hyphens. Header names containing other characters will be discarded when a request is sent to the backend target.
- Response header names can contain any alphanumeric characters and specific symbols as defined in [RFC 7230](https://tools.ietf.org/html/rfc7230#page-27), with the exception of underscores (\_).
application-gateway Ssl Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-overview.md
To configure TLS termination, a TLS/SSL certificate must be added to the listene
> [!NOTE]
-> Application gateway does not provide any capability to create a new certificate or send a certificate request to a certification authority.
+> Application gateway doesn't provide any capability to create a new certificate or send a certificate request to a certification authority.
For the TLS connection to work, you need to ensure that the TLS/SSL certificate meets the following conditions:
Application gateway supports the following types of certificates:
- CA (Certificate Authority) certificate: A CA certificate is a digital certificate issued by a certificate authority (CA)
- EV (Extended Validation) certificate: An EV certificate is a certificate that conforms to industry standard certificate guidelines. This will turn the browser locator bar green and publish the company name as well.
-- Wildcard Certificate: This certificate supports any number of subdomains based on *.site.com, where your subdomain would replace the *. It doesn't, however, support site.com, so in case the users are accessing your website without typing the leading "www", the wildcard certificate will not cover that.
+- Wildcard Certificate: This certificate supports any number of subdomains based on *.site.com, where your subdomain would replace the *. It doesn't, however, support site.com, so in case the users are accessing your website without typing the leading "www", the wildcard certificate won't cover that.
- Self-Signed certificates: Client browsers don't trust these certificates and will warn the user that the virtual service's certificate isn't part of a trust chain. Self-signed certificates are good for testing or environments where administrators control the clients and can safely bypass the browser's security alerts. Production workloads should never use self-signed certificates. For more information, see [configure TLS termination with application gateway](./create-ssl-portal.md).
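Whichever certificate type is used, it's uploaded to the gateway as a PFX and referenced by the HTTPS listener. A minimal, hedged sketch with placeholder names and paths:

```azurecli
# Upload the TLS/SSL certificate used for TLS termination at the listener.
az network application-gateway ssl-cert create \
  --resource-group <resource-group> \
  --gateway-name <appgw-name> \
  --name contoso-tls-cert \
  --cert-file ./contoso.pfx \
  --cert-password "<pfx-password>"
```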
The [TLS policy](./application-gateway-ssl-policy-overview.md) applies only to t
Application Gateway only communicates with those backend servers that have either allow-listed their certificate with the Application Gateway or whose certificates are signed by well-known CA authorities and the certificate's CN matches the host name in the HTTP backend settings. These include the trusted Azure services such as Azure App Service/Web Apps and Azure API Management.
-If the certificates of the members in the backend pool aren't signed by well-known CA authorities, then each instance in the backend pool with end to end TLS enabled must be configured with a certificate to allow secure communication. Adding the certificate ensures that the application gateway only communicates with known back-end instances. This further secures the end-to-end communication.
+If the certificates of the members in the backend pool aren't signed by well-known CA authorities, then each instance in the backend pool with end to end TLS enabled must be configured with a certificate to allow secure communication. Adding the certificate ensures that the application gateway only communicates with known backend instances. This further secures the end-to-end communication.
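For the v2 SKU, allow-listing a backend whose certificate isn't issued by a well-known CA typically means uploading its root CA certificate and attaching it to the HTTPS backend settings. The following sketch assumes placeholder names and an existing HTTP settings object:

```azurecli
# Upload the backend's root CA certificate and reference it from the backend HTTP settings (v2 SKU).
az network application-gateway root-cert create \
  --resource-group <resource-group> \
  --gateway-name <appgw-name> \
  --name backend-root-ca \
  --cert-file ./backend-root-ca.cer

az network application-gateway http-settings update \
  --resource-group <resource-group> \
  --gateway-name <appgw-name> \
  --name <https-settings-name> \
  --protocol Https \
  --port 443 \
  --root-certs backend-root-ca
```

On the v1 SKU, the rough equivalent is an authentication certificate created with `az network application-gateway auth-cert create`.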
> [!NOTE] >
The following tables outline the differences in SNI between the v1 and v2 SKU in
| If the backend pool address is an IP address (v1) or if custom probe hostname is configured as IP address (v2) | SNI (server_name) won't be set. <br> **Note:** In this case, the backend server should be able to return a default/fallback certificate and this should be allow-listed in HTTP settings under authentication certificate. If there's no default/fallback certificate configured in the backend server and SNI is expected, the server might reset the connection and will lead to probe failures | In the order of precedence mentioned previously, if they have IP address as hostname, then SNI won't be set as per [RFC 6066](https://tools.ietf.org/html/rfc6066). <br> **Note:** SNI also won't be set in v2 probes if no custom probe is configured and no hostname is set on HTTP settings or backend pool | > [!NOTE]
-> If a custom probe isn't configured, then Application Gateway sends a default probe in this format - \<protocol\>://127.0.0.1:\<port\>/. For example, for a default HTTPS probe, it will be sent as https://127.0.0.1:443/. Note that, the 127.0.0.1 mentioned here is only used as HTTP host header and as per RFC 6066, will not be used as SNI header. For more information on health probe errors, check the [backend health troubleshooting guide](application-gateway-backend-health-troubleshooting.md).
+> If a custom probe isn't configured, then Application Gateway sends a default probe in this format - \<protocol\>://127.0.0.1:\<port\>/. For example, for a default HTTPS probe, it will be sent as https://127.0.0.1:443/. Note that the 127.0.0.1 mentioned here is only used as the HTTP host header and, as per RFC 6066, won't be used as the SNI header. For more information on health probe errors, check the [backend health troubleshooting guide](application-gateway-backend-health-troubleshooting.md).
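Configuring a custom probe with an explicit host name is one way to control the host header and SNI behavior described above. A hedged CLI sketch with placeholder values:

```azurecli
# Create a custom HTTPS probe with an explicit host so probes send the intended host header.
az network application-gateway probe create \
  --resource-group <resource-group> \
  --gateway-name <appgw-name> \
  --name backend-https-probe \
  --protocol Https \
  --host contoso.com \
  --path /healthz \
  --interval 30 \
  --timeout 30 \
  --threshold 3
```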
#### For live traffic
application-gateway Troubleshoot App Service Redirection App Service Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/troubleshoot-app-service-redirection-app-service-url.md
Title: Troubleshoot redirection to App Service URL
description: This article provides information on how to troubleshoot the redirection issue when Azure Application Gateway is used with Azure App Service -+ Last updated 04/15/2021-+ # Troubleshoot App Service issues in Application Gateway
-Learn how to diagnose and resolve issues you might encounter when Azure App Service is used as a back-end target with Azure Application Gateway.
+Learn how to diagnose and resolve issues you might encounter when Azure App Service is used as a backend target with Azure Application Gateway.
## Overview
application-gateway Tutorial Autoscale Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-autoscale-ps.md
New-AzWebApp -ResourceGroupName $rg -Name <site2-name> -Location $location -AppS
## Configure the infrastructure
-Configure the IP config, front-end IP config, back-end pool, HTTP settings, certificate, port, listener, and rule in an identical format to the existing Standard application gateway. The new SKU follows the same object model as the Standard SKU.
+Configure the IP config, frontend IP config, backend pool, HTTP settings, certificate, port, listener, and rule in an identical format to the existing Standard application gateway. The new SKU follows the same object model as the Standard SKU.
Replace your two web app FQDNs (for example: `mywebapp.azurewebsites.net`) in the $pool variable definition.
application-gateway Tutorial Http Header Rewrite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-http-header-rewrite-powershell.md
$gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name "AppGwSubnet" -VirtualNetwork
## Configure the infrastructure
-Configure the IP config, front-end IP config, back-end pool, HTTP settings, certificate, port, and listener in an identical format to the existing Standard application gateway. The new SKU follows the same object model as the Standard SKU.
+Configure the IP config, frontend IP config, backend pool, HTTP settings, certificate, port, and listener in an identical format to the existing Standard application gateway. The new SKU follows the same object model as the Standard SKU.
```azurepowershell $ipconfig = New-AzApplicationGatewayIPConfiguration -Name "IPConfig" -Subnet $gwSubnet
$rule01 = New-AzApplicationGatewayRequestRoutingRule -Name "Rule1" -RuleType bas
Now you can specify the autoscale configuration for the application gateway. Two autoscaling configuration types are supported:
-* **Fixed capacity mode**. In this mode, the application gateway does not autoscale and operates at a fixed Scale Unit capacity.
+* **Fixed capacity mode**. In this mode, the application gateway doesn't autoscale and operates at a fixed Scale Unit capacity.
```azurepowershell $sku = New-AzApplicationGatewaySku -Name Standard_v2 -Tier Standard_v2 -Capacity 2
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
Title: 'Tutorial: Enable ingress controller add-on for existing AKS cluster with existing Azure application gateway' description: Use this tutorial to enable the Ingress Controller Add-On for your existing AKS cluster with an existing Application Gateway -+ Last updated 07/15/2022-+
application-gateway Tutorial Ingress Controller Add On New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md
Title: 'Tutorial: Enable the Ingress Controller add-on for a new AKS cluster with a new Azure application gateway' description: Use this tutorial to learn how to enable the Ingress Controller add-on for your new AKS cluster with a new application gateway instance. -+ Last updated 07/15/2022-+
application-gateway Tutorial Manage Web Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Tutorial Url Redirect Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the PowerShell locally, this procedure requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this procedure requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
## Create a resource group
application-gateway Url Route Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/url-route-overview.md
# URL Path Based Routing overview
-URL Path Based Routing allows you to route traffic to back-end server pools based on URL Paths of the request.
+URL Path Based Routing allows you to route traffic to backend server pools based on URL Paths of the request.
One of the scenarios is to route requests for different content types to different backend server pools.
-In the following example, Application Gateway is serving traffic for contoso.com from three back-end server pools for example: VideoServerPool, ImageServerPool, and DefaultServerPool.
+In the following example, Application Gateway is serving traffic for contoso.com from three backend server pools for example: VideoServerPool, ImageServerPool, and DefaultServerPool.
![imageURLroute](./media/application-gateway-url-route-overview/figure1.png)
Requests for http\://contoso.com/video/* are routed to VideoServerPool, and http
## UrlPathMap configuration element
-The urlPathMap element is used to specify Path patterns to back-end server pool mappings. The following code example is the snippet of urlPathMap element from template file.
+The urlPathMap element is used to specify Path patterns to backend server pool mappings. The following code example is a snippet of the urlPathMap element from a template file.
```json "urlPathMaps": [{
availability-zones Az Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-overview.md
Title: Azure regions and availability zones description: Learn about regions and availability zones and how they work to help you achieve true resiliency. -++ Last updated 08/23/2022
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
Title: Azure services that support availability zones description: Learn what services are supported by availability zones and understand resiliency across all Azure services. -++ Previously updated : 08/23/2022 Last updated : 10/20/2022
In the Product Catalog, always-available services are listed as "non-regional" s
| | | | [Azure HPC Cache](../hpc-cache/hpc-cache-overview.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure NetApp Files](../azure-netapp-files/use-availability-zones.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| Azure Red Hat OpenShift | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Managed Instance for Apache Cassandra](../managed-instance-apache-cassandr) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | Azure Storage: Ultra Disk | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
availability-zones Business Continuity Management Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/business-continuity-management-program.md
Title: Business continuity management program in Azure description: Learn about one of the most mature business continuity management programs in the industry. -++ Last updated 10/21/2021
availability-zones Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/cross-region-replication-azure.md
Title: Cross-region replication in Azure description: Learn about Cross-region replication in Azure. -++ Last updated 3/01/2022
availability-zones Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/glossary.md
Title: Azure resiliency terminology description: Understanding terms -++ Last updated 10/01/2021
availability-zones Region Types Service Categories Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/region-types-service-categories-azure.md
Title: Azure services description: Learn about Region types and service categories in Azure. -++ Last updated 12/10/2021
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
Azure Arc-enabled Kubernetes follows the standard [semantic versioning scheme](h
While the schedule may vary, a new minor version of Azure Arc-enabled Kubernetes agents is released approximately once per month.
-The following command upgrades the agent to version 1.1.0:
+The following command upgrades the agent to version 1.8.14:
```azurecli
-az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.1.0
+az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.8.14
```
## Check agent version
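One way to confirm the version after the upgrade, reusing the resource names from the command above, is to query the connected cluster resource; shown here as an illustrative sketch:

```azurecli
az connectedk8s show \
  --resource-group AzureArcTest1 \
  --name AzureArcTest \
  --query agentVersion \
  --output tsv
```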
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
description: "Use Azure RBAC for authorization checks on Azure Arc-enabled Kubernetes clusters."
-# Integrate Azure Active Directory with Azure Arc-enabled Kubernetes clusters
+# Use Azure RBAC for Azure Arc-enabled Kubernetes clusters
Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. By using this feature, you can use Azure Active Directory (Azure AD) and role assignments in Azure to control authorization checks on the cluster. This implies that you can now use Azure role assignments to granularly control who can read, write, and delete Kubernetes objects like deployment, pod, and service.
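In practice, authorization is then granted through ordinary Azure role assignments against the connected cluster resource. A hedged sketch using one of the built-in Azure Arc Kubernetes roles; the object ID, names, and scope are placeholders, and narrower scopes (for example, appending `/namespaces/<namespace>`) are also possible:

```azurecli
# Grant read-only access to Kubernetes objects in the connected cluster.
az role assignment create \
  --role "Azure Arc Kubernetes Viewer" \
  --assignee <user-or-group-object-id> \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>"
```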
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
## Prerequisites
-- [Install or upgrade the Azure CLI](/cli/azure/install-azure-cli) to version 2.16.0 or later.
+- [Install or upgrade the Azure CLI](/cli/azure/install-azure-cli) to the latest version.
-- Install the `connectedk8s` Azure CLI extension, version 1.1.0 or later:
+- Install the latest version of `connectedk8s` Azure CLI extension:
```azurecli az extension add --name connectedk8s
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
- Connect an existing Azure Arc-enabled Kubernetes cluster: - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version 1.1.0 or later.
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
> [!NOTE] > You can't set up this feature for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to the API server of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](../../aks/manage-azure-rbac.md) and doesn't require the AKS cluster to be connected to Azure Arc. This feature isn't supported on AKS on Azure Stack HCI.
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
A conceptual overview of this feature is available in [Cluster connect - Azure A
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- [Install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) Azure CLI to version >= 2.16.0.
+- [Install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) Azure CLI to the latest version.
-- Install the `connectedk8s` Azure CLI extension of version >= 1.2.5:
+- Install the latest version of the `connectedk8s` Azure CLI extension:
```azurecli az extension add --name connectedk8s
A conceptual overview of this feature is available in [Cluster connect - Azure A
- An existing Azure Arc-enabled Kubernetes connected cluster. - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
- Enable the below endpoints for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md#meet-network-requirements):
A conceptual overview of this feature is available in [Cluster connect - Azure A
- An existing Azure Arc-enabled Kubernetes connected cluster. - If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
- Enable the below endpoints for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md#meet-network-requirements):
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
In this article, you learn how to:
## Prerequisites
-- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0.
+- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.
-- Install the following Azure CLI extensions:
- - `connectedk8s` (version 1.2.0 or later)
- - `k8s-extension` (version 1.0.0 or later)
- - `customlocation` (version 0.1.3 or later)
+- Install the latest versions of the following Azure CLI extensions:
+ - `connectedk8s`
+ - `k8s-extension`
+ - `customlocation`
```azurecli az extension add --name connectedk8s
In this article, you learn how to:
Once registered, the `RegistrationState` state will have the `Registered` value. - Verify you have an existing [Azure Arc-enabled Kubernetes connected cluster](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version 1.5.3 or later.
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
## Enable custom locations on your cluster
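At a high level, this amounts to enabling the cluster-connect and custom-locations features on the connected cluster. A minimal sketch with placeholder names; depending on your sign-in context, the custom locations object ID may also need to be passed explicitly:

```azurecli
az connectedk8s enable-features \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --features cluster-connect custom-locations
```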
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
A conceptual overview of this feature is available in [Cluster extensions - Azur
## Prerequisites
-* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0.
-* `connectedk8s` (version >= 1.2.0) and `k8s-extension` (version >= 1.0.0) Azure CLI extensions. Install the latest version of these Azure CLI extensions by running the following commands:
+* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.
+* Install the latest version of the `connectedk8s` and `k8s-extension` Azure CLI extensions by running the following commands:
```azurecli az extension add --name connectedk8s
A conceptual overview of this feature is available in [Cluster extensions - Azur
* An existing Azure Arc-enabled Kubernetes connected cluster. * If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md).
- * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version >= 1.5.3.
+ * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
## Currently available extensions
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
> * The identity must have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`). > * The [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-clusterazure-arc-onboarding) can be used for this identity. This role is useful for at-scale onboarding, as it has only the granular permissions required to connect clusters to Azure Arc, and doesn't have permission to update, delete, or modify any other clusters or other Azure resources.
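As an illustration of assigning that built-in role (not taken from this quickstart; the identity and scope values below are placeholders):

```azurecli
# Grant the onboarding identity the built-in role, scoped to a resource group (placeholder values)
az role assignment create \
    --assignee "<object-id-or-sign-in-name>" \
    --role "Kubernetes Cluster - Azure Arc Onboarding" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```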
-* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0
+* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.
-* Install the **connectedk8s** Azure CLI extension of version >= 1.2.0:
+* Install the latest version of the **connectedk8s** Azure CLI extension:
```azurecli az extension add --name connectedk8s
If your cluster is behind an outbound proxy server, requests must be routed via
For outbound proxy servers that require only a trusted certificate, without the proxy server endpoint inputs, `az connectedk8s connect` can be run with just the `--proxy-cert` parameter specified. If multiple trusted certificates are expected, provide the combined certificate chain in a single file through the `--proxy-cert` parameter.
+> [!NOTE]
+>
+> * `--custom-ca-cert` is an alias for `--proxy-cert`. Either parameter can be used interchangeably. If both parameters are passed in the same command, the one passed last is honored.
+ ### [Azure CLI](#tab/azure-cli) Run the connect command with the `--proxy-cert` parameter specified:
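A minimal sketch of that command, with placeholder values:

```azurecli
# Connect the cluster through an outbound proxy, supplying only the trusted certificate chain
az connectedk8s connect \
    --name <cluster-name> \
    --resource-group <resource-group> \
    --proxy-cert <path-to-cert-file>
```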
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6 | | VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 | | Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) |
-| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.2.4](https://github.com/rancher/rke/releases/tag/v1.2.4); Kubernetes versions: [1.19.6](https://github.com/kubernetes/kubernetes/releases/tag/v1.19.6)), [1.18.14](https://github.com/kubernetes/kubernetes/releases/tag/v1.18.14)), [1.17.16](https://github.com/kubernetes/kubernetes/releases/tag/v1.17.16)) |
-| Nutanix | [Karbon](https://www.nutanix.com/products/karbon) | Version 2.2.1 |
+| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 |
+| Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 |
| Platform9 | [Platform9 Managed Kubernetes (PMK)](https://platform9.com/managed-kubernetes/) | PMK Version [5.3.0](https://platform9.com/docs/kubernetes/release-notes#platform9-managed-kubernetes-version-53-release-notes); Kubernetes versions: v1.20.5, v1.19.6, v1.18.10 | | Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.22.10 <br> Upstream K8s Version: 1.21.3 |
-| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version 3.5.1 <br> MKE Version 3.4.7 |
+| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) |
| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 | The Azure Arc team also ran the conformance tests and validated Azure Arc-enabled Kubernetes scenarios on the following public cloud providers:
azure-arc Support Matrix For Arc Enabled Vmware Vsphere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md
The following firewall URL exceptions are needed for the Azure Arc resource brid
| Azure Arc Identity service | 443 | https://*.his.arc.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Manages identity and access control for Azure resources | | Azure Arc configuration service | 443 | https://*.dp.kubernetesconfiguration.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Used for Kubernetes cluster configuration. | | Cluster connect service | 443 | https://*.servicebus.windows.net | Appliance VM IP and control plane endpoint need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
-| Guest Notification service | 443 | https://guestnotificationservice.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Used to connect on-premises resources to Azure. |
+| Guest Notification service | 443 | `https://guestnotificationservice.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used to connect on-premises resources to Azure. |
| SFS API endpoint | 443 | msk8s.api.cdp.microsoft.com | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. | | Resource bridge (appliance) Dataplane service | 443 | https://*.dp.prod.appliances.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Communicate with resource provider in Azure. |
-| Resource bridge (appliance) container image download | 443 | *.blob.core.windows.net, https://ecpacr.azurecr.io | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
+| Resource bridge (appliance) container image download | 443 | *.blob.core.windows.net, `https://ecpacr.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
| Resource bridge (appliance) image download | 80 | *.dl.delivery.mp.microsoft.com | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Download the Arc resource bridge OS images. |
-| Azure Arc for K8s container image download | 443 | https://azurearcfork8sdev.azurecr.io | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
+| Azure Arc for K8s container image download | 443 | `https://azurearcfork8sdev.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
| ADHS telemetry service | 443 | adhs.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. Runs inside the appliance/mariner OS. | Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming from Mariner, which covers any K8s control plane. | | Microsoft events data service | 443 | v20.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming from Windows, such as Windows Server or HCI. | | vCenter Server | 443 | URL of the vCenter server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the vCenter server to communicate with the Appliance VM and the control plane.|
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
For a more complete example of using custom middleware in your function app, see
A function can accept a [CancellationToken](/dotnet/api/system.threading.cancellationtoken) parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn't terminate unexpectedly in a way that leaves data in an inconsistent state.
-Cancellation tokens are supported in .NET functions when running in an isolated process. The following example shows how to use a cancellation token in a function:
+Cancellation tokens are supported in .NET functions when running in an isolated process. The following example raises an exception when a cancellation request has been received:
+
+The following example performs clean-up actions if a cancellation request has been received:
+ ## ReadyToRun
Because your isolated process app runs outside the Functions runtime, you need t
[ILogger]: /dotnet/api/microsoft.extensions.logging.ilogger [ILogger&lt;T&gt;]: /dotnet/api/microsoft.extensions.logging.ilogger-1 [GetLogger]: /dotnet/api/microsoft.azure.functions.worker.functioncontextloggerextensions.getlogger?view=azure-dotnet&preserve-view=true
-[BlobClient]: /dotnet/api/azure.storage.blobs.blobclient?view=azure-dotnet
+[BlobClient]: /dotnet/api/azure.storage.blobs.blobclient?view=azure-dotnet&preserve-view=true
[DocumentClient]: /dotnet/api/microsoft.azure.documents.client.documentclient [BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage [HttpRequestData]: /dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md
The Event Grid output binding is only available for Functions 2.x and higher. Ev
## Next steps
-* If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-functions-eventgrid-extension/issues)
+* If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-sdk-for-net/issues)
* [Event Grid trigger][trigger] * [Event Grid output binding][binding] * [Run a function when an Event Grid event is dispatched](./functions-bindings-event-grid-trigger.md)
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
The following example shows how to update a dataset, create a new tileset, and d
[tileset]: /rest/api/maps/v20220901preview/tileset [style-picker-control]: choose-map-style.md#add-the-style-picker-control [style-how-to]: how-to-create-custom-styles.md
-[map-config-api]: /rest/api/maps/v20220901preview/mapconfiguration
+[map-config-api]: /rest/api/maps/v20220901preview/map-configuration
[instantiate-indoor-manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager [style editor]: https://azure.github.io/Azure-Maps-Style-Editor
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Learn more about how to add more data to your map:
> [!div class="nextstepaction"] > [Code samples](/samples/browse/?products=azure-maps)
-[mapConfiguration]: /rest/api/maps/v20220901preview/mapconfiguration
+[mapConfiguration]: /rest/api/maps/v20220901preview/map-configuration
[tutorial]: tutorial-creator-indoor-maps.md [geos]: geographic-scope.md [visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor/
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
This article walks you through enabling Application Insights monitoring using th
Auto-instrumentation is easy to enable with no advanced configuration required.
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
> [!NOTE] > Auto-instrumentation is available for ASP.NET and ASP.NET Core IIS-hosted applications, and for Java. Use an SDK to instrument Node.js and Python applications hosted on Azure virtual machines and virtual machine scale sets.
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
You can apply additional configurations, and then based on your specific scenari
You can turn on monitoring for your Java apps running in Azure App Service with just one selection, no code change required. The integration adds [Application Insights Java 3.x](./java-in-process-agent.md), and telemetry is collected automatically.
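For automated deployments, a commonly used alternative to the portal (an assumption here, not taken from this article) is to set the App Service application settings that the integration reads; the setting names below are the standard ones, and the values are placeholders that depend on your platform:

```azurecli
# Point the app at an Application Insights resource and opt in to the agent-based integration (placeholder values)
az webapp config appsettings set \
    --name <app-name> \
    --resource-group <resource-group> \
    --settings "APPLICATIONINSIGHTS_CONNECTION_STRING=<connection-string>" \
               "ApplicationInsightsAgent_EXTENSION_VERSION=<version-for-your-platform>"
```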
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ 1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**. :::image type="content"source="./media/azure-web-apps/enable.png" alt-text="Screenshot of Application Insights tab with enable selected.":::
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core based web applications running on [Azur
## Enable auto-instrumentation monitoring
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ # [Windows](#tab/Windows) > [!IMPORTANT]
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Enabling monitoring on your ASP.NET based web applications running on [Azure App
## Enable auto-instrumentation monitoring
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ > [!NOTE] > The combination of APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression is not supported. For more info see the explanation in the [troubleshooting section](#appinsights_javascript_enabled-and-urlcompression-isnt-supported).
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md
Turning on application monitoring in Azure portal will automatically instrument
### Auto-instrumentation through Azure portal
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ You can turn on monitoring for your Node.js apps running in Azure App Service with just one selection, no code change required. Application Insights for Node.js is integrated with Azure App Service on Linux - both code-based and custom containers - and with App Service on Windows for code-based apps. The integration is in public preview. The integration adds the Node.js SDK, which is in GA.
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
There are two ways to enable monitoring for applications hosted on App Service:
This method is the easiest to enable, and no code change or advanced configurations are required. It's often referred to as "runtime" monitoring. For App Service, we recommend that at a minimum you enable this level of monitoring. Based on your specific scenario, you can evaluate whether more advanced monitoring through manual instrumentation is needed.
+ For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ The following platforms are supported for auto-instrumentation monitoring: - [.NET Core](./azure-web-apps-net-core.md)
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Title: Monitor your apps without code changes - auto-instrumentation for Azure Monitor Application Insights | Microsoft Docs description: Overview of auto-instrumentation for Azure Monitor Application Insights - codeless application performance management Previously updated : 08/31/2021 Last updated : 10/19/2022
-# What is auto-instrumentation for Azure Monitor application insights?
+# What is auto-instrumentation for Azure Monitor Application Insights?
-Auto-instrumentation allows you to enable application monitoring with Application Insights without changing your code.
-
-Application Insights is integrated with various resource providers and works on different environments. In essence, all you have to do is enable and - in some cases - configure the agent, which will collect the telemetry automatically. In no time, you'll see the metrics, requests, and dependencies in your Application Insights resource. This telemetry will allow you to spot the source of potential problems before they occur, and analyze the root cause with end-to-end transaction view.
-
-> [!NOTE]
-> Auto-instrumentation used to be known as "codeless attach" before October 2021.
+Auto-instrumentation collects [Application Insights](app-insights-overview.md) [telemetry](data-model.md).
+> [!div class="checklist"]
+> - No code changes required
+> - [SDK update](sdk-support-guidance.md) overhead is eliminated
+> - Recommended when available
## Supported environments, languages, and resource providers
-As we're adding new integrations, the auto-instrumentation capability matrix becomes complex. The table below shows you the current state of the matter as far as support for various resource providers, languages, and environments go.
-
-|Environment/Resource Provider | .NET | .NET Core | Java | Node.js | Python |
-||--|--|--|--|--|
-|Azure App Service on Windows - Publish as Code | GA, OnBD* | GA | GA | GA, OnBD* | Not supported |
-|Azure App Service on Windows - Publish as Docker | Public Preview | Public Preview | Public Preview | Not supported | Not supported |
-|Azure App Service on Linux | N/A | Public Preview | GA | GA | Not supported |
-|Azure Functions - basic | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* |
-|Azure Functions - dependencies | Not supported | Not supported | Public Preview | Not supported | Through [extension](monitor-functions.md#distributed-tracing-for-python-function-apps) |
-|Azure Spring Cloud | Not supported | Not supported | GA | Not supported | Not supported |
-|Azure Kubernetes Service (AKS) | N/A | Not supported | Through agent | Not supported | Not supported |
-|Azure VMs Windows | Public Preview | Public Preview | Through agent | Not supported | Not supported |
-|On-Premises VMs Windows | GA, opt-in | Public Preview | Through agent | Not supported | Not supported |
-|Standalone agent - any env. | Not supported | Not supported | GA | Not supported | Not supported |
-
-*OnBD is short for On by Default - the Application Insights will be enabled automatically once you deploy your app in supported environments.
-
-## Azure App Service
-
-### Windows
-
-Application monitoring on Azure App Service on Windows is available for **[ASP.NET](./azure-web-apps-net.md)** (enabled by default), **[ASP.NET Core](./azure-web-apps-net-core.md)**, **[Java](./azure-web-apps-java.md)** (in public preview), and **[Node.js](./azure-web-apps-nodejs.md)** applications. To monitor a Python app, add the [SDK](./opencensus-python.md) to your code.
-
-> [!NOTE]
-> Application monitoring for apps on Windows Containers on App Service [is in public preview for .NET Core, .NET Framework, and Java](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html).
+The table below displays the current state of auto-instrumentation availability.
-### Linux
-You can enable monitoring for **[Java](./azure-web-apps-java.md?)**, **[Node.js](./azure-web-apps-nodejs.md?tabs=linux)**, and **[ASP.NET Core](./azure-web-apps-net-core.md?tabs=linux)(Preview)** apps running on Linux in App Service through the portal.
+Links are provided to additional information for each supported scenario.
-For [Python](./opencensus-python.md), use the SDK.
+|Environment/Resource Provider | .NET Framework | .NET Core / .NET | Java | Node.js | Python |
+|-||||-|-|
+|Azure App Service on Windows - Publish as Code | [ :white_check_mark: :link: ](azure-web-apps-net.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-net-core.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md) <sup>[1](#OnBD)</sup> | :x: |
+|Azure App Service on Windows - Publish as Docker | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | [ :white_check_mark: ](https://azure.github.io/AppService/2022/04/11/windows-containers-app-insights-preview.html) <sup>[2](#Preview)</sup> | :x: | :x: |
+|Azure App Service on Linux | :x: | [ :white_check_mark: :link: ](azure-web-apps-net-core.md?tabs=linux) <sup>[2](#Preview)</sup> | [ :white_check_mark: :link: ](azure-web-apps-java.md) | [ :white_check_mark: :link: ](azure-web-apps-nodejs.md?tabs=linux) | :x: |
+|Azure Functions - basic | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[1](#OnBD)</sup> |
+|Azure Functions - dependencies | :x: | :x: | [ :white_check_mark: :link: ](monitor-functions.md) <sup>[2](#Preview)</sup> | :x: | [ :white_check_mark: :link: ](monitor-functions.md#distributed-tracing-for-python-function-apps) |
+|Azure Spring Cloud | :x: | :x: | [ :white_check_mark: :link: ](azure-web-apps-java.md) | :x: | :x: |
+|Azure Kubernetes Service (AKS) | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|Azure VMs Windows | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](azure-vm-vmss-apps.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|On-premises VMs Windows | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](status-monitor-v2-overview.md) <sup>[2](#Preview)</sup> <sup>[3](#Agent)</sup> | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
+|Standalone agent - any environment | :x: | :x: | [ :white_check_mark: :link: ](java-in-process-agent.md) | :x: | :x: |
-## Azure Functions
+**Footnotes**
+- <a name="OnBD">1</a>: Application Insights is on by default and enabled automatically.
+- <a name="Preview">2</a>: This feature is in public preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+- <a name="Agent">3</a>: An agent must be deployed and configured.
-The basic monitoring for Azure Functions is enabled by default to collect log, performance, error data, and HTTP requests. For Java applications, you can enable richer monitoring with distributed tracing and get the end-to-end transaction details. This functionality for Java is in public preview for Windows and you can [enable it in Azure portal](./monitor-functions.md).
-
-## Azure Spring Cloud
-
-### Java
-Application monitoring for Java apps running in Azure Spring Cloud is integrated into the portal, you can enable Application Insights directly from the Azure portal, both for the existing and newly created Azure Spring Cloud resources.
-
-## Azure Kubernetes Service (AKS)
-
-Codeless instrumentation of Azure Kubernetes Service (AKS) is currently available for Java applications through the [standalone agent](./java-in-process-agent.md).
-
-## Azure Windows VMs and virtual machine scale set
-
-Auto-instrumentation for Azure VMs and virtual machine scale set is available for [.NET](./azure-vm-vmss-apps.md) and [Java](./java-in-process-agent.md) - this experience isn't integrated into the portal. The monitoring is enabled through a few steps with a stand-alone solution and doesn't require any code changes.
-
-## On-premises servers
-You can easily enable monitoring for your [on-premises Windows servers for .NET applications](./status-monitor-v2-overview.md) and for [Java apps](./java-in-process-agent.md).
-
-## Other environments
-The versatile Java standalone agent works on any environment, there's no need to instrument your code. [Follow the guide](./java-in-process-agent.md) to enable Application Insights and read about the amazing capabilities of the Java agent. The agent is in public preview and available on all regions.
+> [!NOTE]
+> Auto-instrumentation was known as "codeless attach" before October 2021.
## Next steps
-* [Application Insights Overview](./app-insights-overview.md)
-* [Application map](./app-map.md)
-* [End-to-end performance monitoring](../app/tutorial-performance.md)
+* [Application Insights Overview](app-insights-overview.md)
+* [Application Insights Overview dashboard](overview-dashboard.md)
+* [Application map](app-map.md)
azure-monitor Java In Process Agent Redirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent-redirect.md
Whether you are deploying on-premises or in the cloud, you can use Microsoft's O
For more information, see [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md#azure-monitor-opentelemetry-based-auto-instrumentation-for-java-applications).
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ ## Next steps - [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md#azure-monitor-opentelemetry-based-auto-instrumentation-for-java-applications)
azure-monitor Kubernetes Codeless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md
## Application monitoring without instrumenting the code Currently, only Java lets you enable application monitoring without instrumenting the code. To monitor applications in other languages, use the SDKs.
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ ## Java Once enabled, the Java agent will automatically collect a multitude of requests, dependencies, logs, and metrics from the most widely used libraries and frameworks.
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Live Metrics custom filters allow you to control which of your application's tel
- Recommended: Secure Live Metrics channel using [Azure AD authentication](./azure-ad-authentication.md#configuring-and-enabling-azure-ad-based-authentication) - Legacy (no longer recommended): Set up an authenticated channel by configuring a secret API key as explained below
+> [!NOTE]
+> On 30 September 2025, API keys used to stream live metrics telemetry into Application Insights will be retired. After that date, applications that use API keys will no longer be able to send live metrics data to your Application Insights resource. Authenticated telemetry ingestion for live metrics streaming to Application Insights will need to use [Azure AD authentication for Application Insights](./azure-ad-authentication.md).
+ It's possible to try custom filters without having to set up an authenticated channel. Select any of the filter icons and authorize the connected servers. If you choose this option, you'll have to authorize the connected servers at the start of every new session or when a new server comes online. > [!WARNING]
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Application Insights collects log, performance, and error data, and automaticall
The required Application Insights instrumentation is built into Azure Functions. The only thing you need is a valid instrumentation key to connect your function app to an Application Insights resource. The instrumentation key should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have this key, you can set it manually. For more information, see [monitoring Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd).
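For example, one way to set the key manually is through the Azure CLI; this is a sketch that assumes the standard `APPINSIGHTS_INSTRUMENTATIONKEY` application setting, with placeholder names:

```azurecli
# Add the Application Insights instrumentation key to an existing function app (placeholder values)
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<instrumentation-key>"
```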
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Distributed tracing for Java applications (public preview)
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Application Insights Agent (formerly named Status Monitor V2) is a PowerShell mo
It replaces Status Monitor. Telemetry is sent to the Azure portal, where you can [monitor](./app-insights-overview.md) your app.
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ > [!NOTE] > The module currently supports codeless instrumentation of ASP.NET and ASP.NET Core web apps hosted with IIS. Use an SDK to instrument Java and Node.js applications.
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na Previously updated : 10/18/2022 Last updated : 10/20/2022 # Create an SMB volume for Azure NetApp Files
Before creating an SMB volume, you need to create an Active Directory connection
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
+ * **Availability zone**
+ This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
+ * If you want to apply an existing snapshot policy to the volume, click **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu. For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
You can set permissions for a file or folder by using the **Security** tab of th
## Next steps
+* [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md)
* [Mount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) * [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
na Previously updated : 10/18/2022 Last updated : 10/20/2022 # Create an NFS volume for Azure NetApp Files
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
+ * **Availability zone**
+ This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
+ * If you want to apply an existing snapshot policy to the volume, click **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu. For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
This article shows you how to create an NFS volume. For SMB volumes, see [Create
## Next steps
+* [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md)
* [Configure NFSv4.1 default domain for Azure NetApp Files](azure-netapp-files-configure-nfsv41-domain.md) * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) * [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
azure-netapp-files Backup Configure Policy Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-policy-based.md
na Previously updated : 01/05/2022 Last updated : 09/30/2022 # Configure policy-based backups for Azure NetApp Files
To enable a policy-based (scheduled) backup:
2. Select your Azure NetApp Files account. 3. Select **Backups**.
- ![Screenshot that shows how to navigate to Backups option.](../media/azure-netapp-files/backup-navigate.png)
+ :::image type="content" source="../media/azure-netapp-files/backup-navigate.png" alt-text="Screenshot that shows how to navigate to Backups option." lightbox="../media/azure-netapp-files/backup-navigate.png":::
4. Select **Backup Policies**. 5. Select **Add**. 6. In the **Backup Policy** page, specify the backup policy name. Enter the number of backups that you want to keep for daily, weekly, and monthly backups. Click **Save**.
- ![Screenshot that shows the Backup Policy window.](../media/azure-netapp-files/backup-policy-window-daily.png)
-
+ :::image type="content" source="../media/azure-netapp-files/backup-policy-window-daily.png" alt-text="Screenshot that shows the Backup Policy window." lightbox="../media/azure-netapp-files/backup-policy-window-daily.png":::
+
* If you configure and attach a backup policy to the volume without attaching a snapshot policy, the backup does not function properly. Only a baseline snapshot will be transferred to Azure storage. * For each backup policy that you configure (for example, daily backups), ensure that you have a corresponding snapshot policy configuration (for example, daily snapshots). * Backup policy has a dependency on snapshot policy. If you haven't created a snapshot policy yet, you can configure both policies at the same time by selecting the **Create snapshot policy** checkbox on the Backup Policy window.
- ![Screenshot that shows the Backup Policy window with Snapshot Policy selected.](../media/azure-netapp-files/backup-policy-snapshot-policy-option.png)
+ :::image type="content" source="../media/azure-netapp-files/backup-policy-snapshot-policy-option.png" alt-text="Screenshot that shows the Backup Policy window with Snapshot Policy selected." lightbox="../media/azure-netapp-files/backup-policy-snapshot-policy-option.png":::
+ ### Example of a valid configuration
To enable the backup functionality for a volume:
The Vault information is pre-populated.
- ![Screenshot that shows Configure Backups window.](../media/azure-netapp-files/backup-configure-window.png)
+ :::image type="content" source="../media/azure-netapp-files/backup-configure-window.png" alt-text="Screenshot that shows Configure Backups window." lightbox="../media/azure-netapp-files/backup-configure-window.png":::
## Next steps
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
na Previously updated : 10/18/2022 Last updated : 10/20/2022 # Create a dual-protocol volume for Azure NetApp Files
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* Create a reverse lookup zone on the DNS server and then add a pointer (PTR) record of the AD host machine in that reverse lookup zone. Otherwise, the dual-protocol volume creation will fail. * The **Allow local NFS users with LDAP** option in Active Directory connections is intended to provide occasional and temporary access to local users. When this option is enabled, user authentication and lookup from the LDAP server stop working, and the number of group memberships that Azure NetApp Files will support will be limited to 16. As such, you should keep this option *disabled* on Active Directory connections, except when a local user needs to access LDAP-enabled volumes. In that case, you should disable this option as soon as local user access is no longer required for the volume. See [Allow local NFS users with LDAP to access a dual-protocol volume](#allow-local-nfs-users-with-ldap-to-access-a-dual-protocol-volume) for information about managing local user access. * Ensure that the NFS client is up to date and running the latest updates for the operating system.
-* Dual-protocol volumes support both Active Directory Domain Services (ADDS) and Azure Active Directory Domain Services (AADDS).
+* Dual-protocol volumes support both Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (AADDS).
* Dual-protocol volumes do not support the use of LDAP over TLS with AADDS. See [LDAP over TLS considerations](configure-ldap-over-tls.md#considerations). * The NFS version used by a dual-protocol volume can be NFSv3 or NFSv4.1. The following considerations apply: * Dual protocol does not support the Windows ACLS extended attributes `set/get` from NFS clients.
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
* **Network features** In supported regions, you can specify whether you want to use **Basic** or **Standard** network features for the volume. See [Configure network features for a volume](configure-network-features.md) and [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md) for details.
+ * **Availability zone**
+ This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
+ * If you want to apply an existing snapshot policy to the volume, click **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu. For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
## Next steps
+* [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md)
* [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) * [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md) * [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
azure-netapp-files Faq Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-backup.md
Previously updated : 10/11/2021 Last updated : 09/10/2022 # Azure NetApp Files backup FAQs
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
+
+ Title: Manage availability zone volume placement for Azure NetApp Files | Microsoft Docs
+description: Describes how to create a volume with an availability zone by using Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 10/20/2022++
+# Manage availability zone volume placement for Azure NetApp Files
+
+Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice. To better understand availability zones, refer to [Using availability zones for high availability](use-availability-zones.md).
+
+## Requirements and considerations
+
+* The availability zone volume placement feature is supported only on newly created volumes. It is not currently supported on existing volumes.
+
+* This feature does not guarantee free capacity in the availability zone. For example, even if you can deploy a VM in availability zone 3 of the East US region, free Azure NetApp Files capacity in that zone isn't guaranteed. If sufficient capacity isn't available, volume creation will fail.
+
+* After a volume is created with an availability zone, the specified availability zone can't be modified. Volumes can't be moved between availability zones.
+
+* NetApp accounts and capacity pools are not bound by the availability zone. A capacity pool can contain volumes in different availability zones.
+
+* This feature provides zonal volume placement, with latency within the zonal latency envelopes. It does not provide proximity placement towards compute. As such, it doesn't guarantee the lowest latency.
+
+* Each data center is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription. Azure subscriptions are automatically assigned this mapping at the time a subscription is created. This feature aligns with the generic logical-to-physical availability zone mapping for the subscription.
+
+* To create zone alignment between VMs and Azure NetApp Files, VMs and Azure NetApp Files volumes must be deployed separately, within the same logical availability zone. The availability zone volume placement feature does not create zonal VMs upon volume creation, or vice versa.
+
+## Register the feature
+
+The availability zone volume placement feature is currently in preview. If you're using this feature for the first time, you need to register it first.
+
+1. Register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAvailabilityZone
+ ```
+
+2. Check the status of the feature registration:
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAvailabilityZone
+ ```
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
+
+You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
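For example, the equivalent Azure CLI calls might look like the following sketch:

```azurecli
# Register the availability zone volume placement feature
az feature register --namespace Microsoft.NetApp --name ANFAvailabilityZone

# Check the registration status
az feature show --namespace Microsoft.NetApp --name ANFAvailabilityZone
```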
+
+## Create a volume with an availability zone
+
+1. Select **Volumes** from your capacity pool. Then select **+ Add volume** to create a volume.
+
+ For details about volume creation, see:
+ * [Create an NFS volume](azure-netapp-files-create-volumes.md)
+ * [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
+ * [Create a dual-protocol volume](create-volumes-dual-protocol.md)
+
+2. In the **Create a Volume** page, under the **Basic** tab, select the **Availability Zone** pulldown to specify an availability zone where Azure NetApp Files resources are present.
+
+ > [!IMPORTANT]
+ > Logical availability zones in which the subscription has no Azure NetApp Files presence are marked `(Unavailable)` and are grayed out.
+
+ [ ![Screenshot that shows the Availability Zone menu.](../media/azure-netapp-files/availability-zone-menu-drop-down.png) ](../media/azure-netapp-files/availability-zone-menu-drop-down.png#lightbox)
+
+
+3. Follow the UI to create the volume. The **Review + Create** page shows the selected availability zone you specified.
+
+ [ ![Screenshot that shows the Availability Zone review.](../media/azure-netapp-files/availability-zone-display-down.png) ](../media/azure-netapp-files/availability-zone-display-down.png#lightbox)
+
+4. After you create the volume, the **Volume Overview** page includes availability zone information for the volume.
+
+ [ ![Screenshot that shows the Availability Zone volume overview.](../media/azure-netapp-files/availability-zone-volume-overview.png) ](../media/azure-netapp-files/availability-zone-volume-overview.png#lightbox)
+
+> [!IMPORTANT]
+> Once the volume is created using the availability zone volume placement feature, the volume has the same level of support as other volumes deployed in the subscription without this feature enabled. For example, if there is an issue with backup and restore on the volume, it will be supported because the problem is not with the availability zone volume placement feature itself.
+
+## Next steps
+
+* [Use availability zones for high availability](use-availability-zones.md)
+* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
+* [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
+* [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
+
+ Title: Use availability zones for high availability in Azure NetApp Files | Microsoft Docs
+description: Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 10/20/2022++
+# Use availability zones for high availability in Azure NetApp Files
+
+Azure [availability zones](../availability-zones/az-overview.md#availability-zones) are physically separate locations within each supporting Azure region that are tolerant to local failures. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved because of redundancy and logical isolation of Azure services. To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions.
+
+>[!IMPORTANT]
+> Availability zones are referred to as _logical zones_. Each data center is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription, and the mapping will be different with different subscriptions. Azure subscriptions are automatically assigned this mapping when a subscription is created. Azure NetApp Files aligns with the generic logical-to-physical availability zone mapping for all Azure services for the subscription.
+
+Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Azure availability zones let you design and operate applications and databases that automatically transition between zones without interruption. You can design resilient solutions by using Azure services that use availability zones.
+
+The use of high availability (HA) architectures with availability zones is now a default and best practice recommendation in [Azure's Well-Architected Framework](/architecture/framework/resiliency/app-design#use-availability-zones-within-a-region). Enterprise applications and resources are increasingly deployed into multiple availability zones to achieve this level of high availability (HA) or failure domain (zone) isolation.
+
+Azure NetApp Files lets you deploy volumes in availability zones. The Azure NetApp Files [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy volumes in the logical availability zone of your choice, in alignment with Azure compute and other services in the same zone.
+
+Azure NetApp Files deployments will occur in the availability zone of choice if Azure NetApp Files is present in that availability zone and has sufficient capacity. All VMs within the region in (peered) VNets can access all Azure NetApp Files resources.
+
+>[!IMPORTANT]
+>Azure NetApp Files availability zone volume placement provides zonal placement. It doesn't provide proximity placement towards compute. As such, it doesn't guarantee the lowest latency. VM-to-storage latencies are within the availability zone latency envelopes.
+
+You can co-locate your compute, storage, networking, and data resources across an availability zone, and replicate this arrangement in other availability zones. Many applications are built for HA across multiple availability zones using application-based replication and failover technologies, like [SQL Server Always-On Availability Groups (AOAG)](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server), [SAP HANA with HANA System Replication (HSR)](../virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md), and [Oracle with Data Guard](../virtual-machines/workloads/oracle/oracle-reference-architecture.md#high-availability-for-oracle-databases).
+
+Latency is subject to the availability zone latency envelope for access within an availability zone, and to the regional latency envelope for access across availability zones.
+
+## Azure regions with availability zones
+
+For a list of regions that currently support availability zones, refer to [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
+
+## Next steps
+
+* [Manage availability zone volume placement](manage-availability-zone-volume-placement.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 10/13/2022 Last updated : 10/20/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
## October 2022
+* [Availability zone volume placement](manage-availability-zone-volume-placement.md) (Preview)
+
+ Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Using Azure availability zones lets you design and operate applications and databases that automatically transition between zones without interruption. Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice to support enterprise, mission-critical HA deployments across multiple AZs. Azure's push towards the use of [availability zones (AZs)](../availability-zones/az-overview.md#availability-zones) has increased, and the use of high availability (HA) deployments with availability zones is now a default and best practice recommendation in Azure's [Well-Architected Framework](/architecture/framework/resiliency/design-best-practices#use-zone-aware-services).
+ * [Application volume group for SAP HANA](application-volume-group-introduction.md) now generally available (GA) The application volume group for SAP HANA feature is now generally available. You no longer need to register the feature to use it.
azure-percept Concept Security Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/concept-security-configuration.md
Title: Azure Percept security recommendations description: Learn more about Azure Percept firewall configuration and security recommendations--++ Last updated 10/04/2022
azure-percept Overview Percept Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-percept-security.md
Title: Azure Percept security description: Learn more about Azure Percept security--++ Last updated 10/06/2022
azure-resource-manager Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md
Title: Extension resource types description: Lists the Azure resource types are used to extend the capabilities of other resource types. Previously updated : 08/31/2022 Last updated : 10/20/2022 # Resource types that extend capabilities of other resources
An extension resource is a resource that adds to another resource's capabilities
* Reportconfigs * Reports * ScheduledActions
+* Settings
* Views ## Microsoft.CustomProviders
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.PolicyInsights * attestations
+* componentPolicyStates
* eventGridFilters * policyEvents * policyStates
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.Resources
+* deploymentStacks
* links
+* snapshots
* tags ## Microsoft.Security
An extension resource is a resource that adds to another resource's capabilities
* automationRules * bookmarks * cases
+* contentPackages
+* contentTemplates
* dataConnectorDefinitions * dataConnectors * enrichment
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 08/31/2022 Last updated : 10/20/2022 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
* galleries * galleries/images * galleries/images/versions
+* galleries/serviceArtifacts
* images * snapshots * virtualMachines
Some resources have a limit on the number of instances per region. This limit is di
* dnszones/SRV * dnszones/TXT * expressRouteCrossConnections
+* loadBalancers - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
* networkIntentPolicies * networkInterfaces * networkSecurityGroups
Some resources have a limit on the number of instances per region. This limit is di
## Microsoft.Security * assignments
+* securityConnectors
## Microsoft.ServiceBus
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 08/31/2022 Last updated : 10/20/2022 # Tag support for Azure resources
To get the same data as a file of comma-separated values, download [tag-support.
> | farmBeats | Yes | Yes | > | farmBeats / eventGridFilters | No | No | > | farmBeats / extensions | No | No |
+> | farmBeats / solutions | No | No |
> | farmBeatsExtensionDefinitions | No | No |
+> | farmBeatsSolutionDefinitions | No | No |
## Microsoft.AlertsManagement
To get the same data as a file of comma-separated values, download [tag-support.
> | automationAccounts / privateEndpointConnections | No | No | > | automationAccounts / privateLinkResources | No | No | > | automationAccounts / runbooks | Yes | Yes |
+> | automationAccounts / runtimes | Yes | Yes |
> | automationAccounts / softwareUpdateConfigurationMachineRuns | No | No | > | automationAccounts / softwareUpdateConfigurationRuns | No | No | > | automationAccounts / softwareUpdateConfigurations | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | catalogs / products | No | No | > | catalogs / products / devicegroups | No | No |
-## Microsoft.AzureSphereGen2
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | catalogs | Yes | Yes |
-> | catalogs / certificates | No | No |
-> | catalogs / deviceRegistrations | Yes | Yes |
-> | catalogs / provisioningPackages | Yes | Yes |
-
-## Microsoft.AzureSphereV2
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Supports tags | Tag in cost report |
-> | - | -- | -- |
-> | catalogs | Yes | Yes |
-> | catalogs / certificates | No | No |
-> | catalogs / deviceRegistrations | Yes | Yes |
-> | catalogs / provisioningPackages | Yes | Yes |
- ## Microsoft.AzureStack > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | galleryImages | Yes | Yes | > | marketplaceGalleryImages | Yes | Yes | > | networkinterfaces | Yes | Yes |
+> | registeredSubscriptions | No | No |
> | storageContainers | Yes | Yes | > | virtualharddisks | Yes | Yes | > | virtualmachines | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | billingAccounts / enrollmentAccounts / billingRoleDefinitions | No | No | > | billingAccounts / enrollmentAccounts / billingSubscriptions | No | No | > | billingAccounts / invoices | No | No |
+> | billingAccounts / invoices / summary | No | No |
> | billingAccounts / invoices / transactions | No | No | > | billingAccounts / invoices / transactionSummary | No | No | > | billingAccounts / invoiceSections | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | billingAccounts / policies | No | No | > | billingAccounts / products | No | No | > | billingAccounts / promotionalCredits | No | No |
+> | billingAccounts / reservationOrders | No | No |
+> | billingAccounts / reservationOrders / reservations | No | No |
> | billingAccounts / reservations | No | No | > | billingAccounts / savingsPlanOrders | No | No | > | billingAccounts / savingsPlanOrders / savingsPlans | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | galleries / applications / versions | Yes | No | > | galleries / images | Yes | No | > | galleries / images / versions | Yes | No |
+> | galleries / serviceArtifacts | Yes | Yes |
> | hostGroups | Yes | Yes | > | hostGroups / hosts | Yes | Yes | > | images | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | Ledgers | Yes | Yes |
+> | ManagedCCF | Yes | Yes |
## Microsoft.Confluent
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | CacheNodes | Yes | Yes | > | enterpriseCustomers | Yes | Yes |
+> | ispCustomers | Yes | Yes |
+> | ispCustomers / ispCacheNodes | Yes | Yes |
## microsoft.connectedopenstack
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | Clusters | Yes | Yes |
-> | Datastores | Yes | Yes |
-> | Hosts | Yes | Yes |
-> | ResourcePools | Yes | Yes |
+> | clusters | Yes | Yes |
+> | datastores | Yes | Yes |
+> | hosts | Yes | Yes |
+> | resourcepools | Yes | Yes |
> | VCenters | Yes | Yes |
-> | VCenters / InventoryItems | No | No |
-> | VirtualMachines | Yes | Yes |
+> | vcenters / inventoryitems | No | No |
+> | virtualmachines | Yes | Yes |
> | VirtualMachines / AssessPatches | No | No |
-> | VirtualMachines / Extensions | Yes | Yes |
-> | VirtualMachines / GuestAgents | No | No |
-> | VirtualMachines / HybridIdentityMetadata | No | No |
+> | virtualmachines / extensions | Yes | Yes |
+> | virtualmachines / guestagents | No | No |
+> | virtualmachines / hybrididentitymetadata | No | No |
> | VirtualMachines / InstallPatches | No | No | > | VirtualMachines / UpgradeExtensions | No | No |
-> | VirtualMachineTemplates | Yes | Yes |
-> | VirtualNetworks | Yes | Yes |
+> | virtualmachinetemplates | Yes | Yes |
+> | virtualnetworks | Yes | Yes |
## Microsoft.Consumption
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | jobs | Yes | Yes |
+> | jobs / eventGridFilters | No | No |
## Microsoft.DataBoxEdge
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | ElasticPools | Yes | Yes |
-> | ElasticPools / IotHubTenants | Yes | Yes |
-> | ElasticPools / IotHubTenants / securitySettings | No | No |
> | IotHubs | Yes | Yes | > | IotHubs / eventGridFilters | No | No | > | IotHubs / failover | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | domains / topics | No | No | > | eventSubscriptions | No | No | > | extensionTopics | No | No |
+> | namespaces | Yes | Yes |
> | partnerConfigurations | Yes | Yes | > | partnerDestinations | Yes | Yes | > | partnerNamespaces | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | fluidRelayServers | Yes | Yes | > | fluidRelayServers / fluidRelayContainers | No | No |
+## Microsoft.Graph
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | AzureADApplication | Yes | Yes |
+> | AzureADApplicationPrototype | Yes | Yes |
+> | registeredSubscriptions | No | No |
+ ## Microsoft.GuestConfiguration > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | services / privateLinkResources | No | No | > | validateMedtechMappings | No | No | > | workspaces | Yes | Yes |
+> | workspaces / analyticsconnectors | Yes | Yes |
> | workspaces / dicomservices | Yes | Yes | > | workspaces / eventGridFilters | No | No | > | workspaces / fhirservices | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | instances | Yes | Yes | > | instances / chambers | Yes | Yes | > | instances / chambers / accessProfiles | Yes | Yes |
+> | instances / chambers / fileRequests | No | No |
+> | instances / chambers / files | No | No |
> | instances / chambers / workloads | Yes | Yes | > | instances / consortiums | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | provisionedClusters | Yes | Yes | > | provisionedClusters / agentPools | Yes | Yes | > | provisionedClusters / hybridIdentityMetadata | No | No |
+> | provisionedClusters / upgradeProfiles | No | No |
> | storageSpaces | Yes | Yes | > | virtualNetworks | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | networkFunctionPublishers / networkFunctionDefinitionGroups | No | No | > | networkFunctionPublishers / networkFunctionDefinitionGroups / publisherNetworkFunctionDefinitionVersions | No | No | > | networkfunctions | Yes | Yes |
-> | networkfunctions / components | No | No |
+> | networkFunctions / components | No | No |
> | networkFunctionVendors | No | No | > | publishers | Yes | Yes | > | publishers / artifactStores | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | actiongroups | Yes | Yes |
+> | actiongroups / networkSecurityPerimeterAssociationProxies | No | No |
+> | actiongroups / networkSecurityPerimeterConfigurations | No | No |
> | activityLogAlerts | Yes | Yes | > | alertrules | Yes | Yes | > | autoscalesettings | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | privateLinkScopes / scopedResources | No | No | > | rollbackToLegacyPricingModel | No | No | > | scheduledqueryrules | Yes | Yes |
+> | scheduledqueryrules / networkSecurityPerimeterAssociationProxies | No | No |
+> | scheduledqueryrules / networkSecurityPerimeterConfigurations | No | No |
> | topology | No | No | > | transactions | No | No | > | webtests | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | loadtests | Yes | Yes |
+> | loadtests / outboundNetworkDependenciesEndpoints | No | No |
+> | registeredSubscriptions | No | No |
## Microsoft.Logic
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | aisysteminventories | Yes | Yes | > | registries | Yes | Yes |
+> | registries / codes | No | No |
+> | registries / codes / versions | No | No |
+> | registries / components | No | No |
+> | registries / components / versions | No | No |
+> | registries / environments | No | No |
+> | registries / environments / versions | No | No |
+> | registries / models | No | No |
+> | registries / models / versions | No | No |
> | virtualclusters | Yes | Yes | > | workspaces | Yes | Yes | > | workspaces / batchEndpoints | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / models / versions | No | No | > | workspaces / onlineEndpoints | Yes | Yes | > | workspaces / onlineEndpoints / deployments | Yes | Yes |
-> | workspaces / registries | Yes | Yes |
> | workspaces / schedules | No | No | > | workspaces / services | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | mediaservices / accountFilters | No | No | > | mediaservices / assets | No | No | > | mediaservices / assets / assetFilters | No | No |
+> | mediaservices / assets / tracks | No | No |
> | mediaservices / contentKeyPolicies | No | No | > | mediaservices / eventGridFilters | No | No | > | mediaservices / graphInstances | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | cloudServicesNetworks | Yes | Yes | > | clusterManagers | Yes | Yes | > | clusters | Yes | Yes |
+> | clusters / admissions | No | No |
> | defaultCniNetworks | Yes | Yes | > | disks | Yes | Yes | > | hybridAksClusters | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
-> | Pki | Yes | Yes |
-> | Pkis | Yes | Yes |
-> | Pkis / certificateAuthorities | Yes | Yes |
-> | Pkis / enrollmentPolicies | Yes | Yes |
+> | pkis | Yes | Yes |
+> | pkis / certificateAuthorities | No | No |
+> | pkis / enrollmentPolicies | Yes | Yes |
## Microsoft.PlayFab
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | attestations | No | No |
+> | componentPolicyStates | No | No |
> | eventGridFilters | No | No | > | policyEvents | No | No | > | policyMetadata | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | appliances | Yes | Yes |
+> | telemetryconfig | No | No |
## Microsoft.ResourceGraph
To get the same data as a file of comma-separated values, download [tag-support.
> | deploymentStacks / snapshots | No | No | > | links | No | No | > | resourceGroups | Yes | No |
+> | snapshots | No | No |
> | subscriptions | Yes | No | > | tags | No | No | > | templateSpecs | Yes | Yes | > | templateSpecs / versions | Yes | Yes | > | tenants | No | No |
+> | validateResources | No | No |
## Microsoft.SaaS
To get the same data as a file of comma-separated values, download [tag-support.
> | azureDevOpsConnectors / orgs | No | No | > | azureDevOpsConnectors / orgs / projects | No | No | > | azureDevOpsConnectors / orgs / projects / repos | No | No |
+> | azureDevOpsConnectors / repos | No | No |
+> | azureDevOpsConnectors / stats | No | No |
> | gitHubConnectors | Yes | Yes | > | gitHubConnectors / gitHubRepos | No | No | > | gitHubConnectors / owners | No | No | > | gitHubConnectors / owners / repos | No | No |
+> | gitHubConnectors / repos | No | No |
+> | gitHubConnectors / stats | No | No |
## Microsoft.SecurityInsights
To get the same data as a file of comma-separated values, download [tag-support.
> | automationRules | No | No | > | bookmarks | No | No | > | cases | No | No |
+> | contentPackages | No | No |
+> | contentTemplates | No | No |
> | dataConnectorDefinitions | No | No | > | dataConnectors | No | No | > | enrichment | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | clusters | Yes | Yes | > | clusters / applications | No | No |
+> | clusters / applications / services | No | No |
+> | clusters / applicationTypes | No | No |
+> | clusters / applicationTypes / versions | No | No |
> | edgeclusters | Yes | Yes | > | edgeclusters / applications | No | No | > | managedclusters | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | testBaseAccounts / externalTestTools | No | No | > | testBaseAccounts / externalTestTools / testCases | No | No | > | testBaseAccounts / featureUpdateSupportedOses | No | No |
+> | testBaseAccounts / firstPartyApps | No | No |
> | testBaseAccounts / flightingRings | No | No | > | testBaseAccounts / packages | Yes | Yes | > | testBaseAccounts / packages / favoriteProcesses | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | imageTemplates | Yes | Yes | > | imageTemplates / runOutputs | No | No |
+> | imageTemplates / triggers | No | No |
## microsoft.visualstudio
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 08/31/2022 Last updated : 10/20/2022 # Deletion of Azure resources for complete mode deployments
The resources are listed by resource provider namespace. To match a resource pro
> | farmBeats | Yes | > | farmBeats / eventGridFilters | No | > | farmBeats / extensions | No |
+> | farmBeats / solutions | No |
> | farmBeatsExtensionDefinitions | No |
+> | farmBeatsSolutionDefinitions | No |
## Microsoft.AlertsManagement
The resources are listed by resource provider namespace. To match a resource pro
> | automationAccounts / privateEndpointConnections | No | > | automationAccounts / privateLinkResources | No | > | automationAccounts / runbooks | Yes |
+> | automationAccounts / runtimes | Yes |
> | automationAccounts / softwareUpdateConfigurationMachineRuns | No | > | automationAccounts / softwareUpdateConfigurationRuns | No | > | automationAccounts / softwareUpdateConfigurations | No |
The resources are listed by resource provider namespace. To match a resource pro
> | catalogs / products | No | > | catalogs / products / devicegroups | No |
-## Microsoft.AzureSphereGen2
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | catalogs | Yes |
-> | catalogs / certificates | No |
-> | catalogs / deviceRegistrations | Yes |
-> | catalogs / provisioningPackages | Yes |
-
-## Microsoft.AzureSphereV2
-
-> [!div class="mx-tableFixed"]
-> | Resource type | Complete mode deletion |
-> | - | -- |
-> | catalogs | Yes |
-> | catalogs / certificates | No |
-> | catalogs / deviceRegistrations | Yes |
-> | catalogs / provisioningPackages | Yes |
- ## Microsoft.AzureStack > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | galleryImages | Yes | > | marketplaceGalleryImages | Yes | > | networkinterfaces | Yes |
+> | registeredSubscriptions | No |
> | storageContainers | Yes | > | virtualharddisks | Yes | > | virtualmachines | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | billingAccounts / enrollmentAccounts / billingRoleDefinitions | No | > | billingAccounts / enrollmentAccounts / billingSubscriptions | No | > | billingAccounts / invoices | No |
+> | billingAccounts / invoices / summary | No |
> | billingAccounts / invoices / transactions | No | > | billingAccounts / invoices / transactionSummary | No | > | billingAccounts / invoiceSections | No |
The resources are listed by resource provider namespace. To match a resource pro
> | billingAccounts / policies | No | > | billingAccounts / products | No | > | billingAccounts / promotionalCredits | No |
+> | billingAccounts / reservationOrders | No |
+> | billingAccounts / reservationOrders / reservations | No |
> | billingAccounts / reservations | No | > | billingAccounts / savingsPlanOrders | No | > | billingAccounts / savingsPlanOrders / savingsPlans | No |
The resources are listed by resource provider namespace. To match a resource pro
> | galleries / applications / versions | Yes | > | galleries / images | Yes | > | galleries / images / versions | Yes |
+> | galleries / serviceArtifacts | Yes |
> | hostGroups | Yes | > | hostGroups / hosts | Yes | > | images | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | Ledgers | Yes |
+> | ManagedCCF | Yes |
## Microsoft.Confluent
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | CacheNodes | Yes | > | enterpriseCustomers | Yes |
+> | ispCustomers | Yes |
+> | ispCustomers / ispCacheNodes | Yes |
## microsoft.connectedopenstack
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | Clusters | Yes |
-> | Datastores | Yes |
-> | Hosts | Yes |
-> | ResourcePools | Yes |
+> | clusters | Yes |
+> | datastores | Yes |
+> | hosts | Yes |
+> | resourcepools | Yes |
> | VCenters | Yes |
-> | VCenters / InventoryItems | No |
-> | VirtualMachines | Yes |
+> | vcenters / inventoryitems | No |
+> | virtualmachines | Yes |
> | VirtualMachines / AssessPatches | No |
-> | VirtualMachines / Extensions | Yes |
-> | VirtualMachines / GuestAgents | No |
-> | VirtualMachines / HybridIdentityMetadata | No |
+> | virtualmachines / extensions | Yes |
+> | virtualmachines / guestagents | No |
+> | virtualmachines / hybrididentitymetadata | No |
> | VirtualMachines / InstallPatches | No | > | VirtualMachines / UpgradeExtensions | No |
-> | VirtualMachineTemplates | Yes |
-> | VirtualNetworks | Yes |
+> | virtualmachinetemplates | Yes |
+> | virtualnetworks | Yes |
## Microsoft.Consumption
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | jobs | Yes |
+> | jobs / eventGridFilters | No |
## Microsoft.DataBoxEdge
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | ElasticPools | Yes |
-> | ElasticPools / IotHubTenants | Yes |
-> | ElasticPools / IotHubTenants / securitySettings | No |
> | IotHubs | Yes | > | IotHubs / eventGridFilters | No | > | IotHubs / failover | No |
The resources are listed by resource provider namespace. To match a resource pro
> | domains / topics | No | > | eventSubscriptions | No | > | extensionTopics | No |
+> | namespaces | Yes |
> | partnerConfigurations | Yes | > | partnerDestinations | Yes | > | partnerNamespaces | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | fluidRelayServers | Yes | > | fluidRelayServers / fluidRelayContainers | No |
+## Microsoft.Graph
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | AzureADApplication | Yes |
+> | AzureADApplicationPrototype | Yes |
+> | registeredSubscriptions | No |
+ ## Microsoft.GuestConfiguration > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | services / privateLinkResources | No | > | validateMedtechMappings | No | > | workspaces | Yes |
+> | workspaces / analyticsconnectors | Yes |
> | workspaces / dicomservices | Yes | > | workspaces / eventGridFilters | No | > | workspaces / fhirservices | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | instances | Yes | > | instances / chambers | Yes | > | instances / chambers / accessProfiles | Yes |
+> | instances / chambers / fileRequests | No |
+> | instances / chambers / files | No |
> | instances / chambers / workloads | Yes | > | instances / consortiums | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | provisionedClusters | Yes | > | provisionedClusters / agentPools | Yes | > | provisionedClusters / hybridIdentityMetadata | No |
+> | provisionedClusters / upgradeProfiles | No |
> | storageSpaces | Yes | > | virtualNetworks | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | networkFunctionPublishers / networkFunctionDefinitionGroups | No | > | networkFunctionPublishers / networkFunctionDefinitionGroups / publisherNetworkFunctionDefinitionVersions | No | > | networkfunctions | Yes |
-> | networkfunctions / components | No |
+> | networkFunctions / components | No |
> | networkFunctionVendors | No | > | publishers | Yes | > | publishers / artifactStores | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | actiongroups | Yes |
+> | actiongroups / networkSecurityPerimeterAssociationProxies | No |
+> | actiongroups / networkSecurityPerimeterConfigurations | No |
> | activityLogAlerts | Yes | > | alertrules | Yes | > | autoscalesettings | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | privateLinkScopes / scopedResources | No | > | rollbackToLegacyPricingModel | No | > | scheduledqueryrules | Yes |
+> | scheduledqueryrules / networkSecurityPerimeterAssociationProxies | No |
+> | scheduledqueryrules / networkSecurityPerimeterConfigurations | No |
> | topology | No | > | transactions | No | > | webtests | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | loadtests | Yes |
+> | loadtests / outboundNetworkDependenciesEndpoints | No |
+> | registeredSubscriptions | No |
## Microsoft.Logic
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | aisysteminventories | Yes | > | registries | Yes |
+> | registries / codes | No |
+> | registries / codes / versions | No |
+> | registries / components | No |
+> | registries / components / versions | No |
+> | registries / environments | No |
+> | registries / environments / versions | No |
+> | registries / models | No |
+> | registries / models / versions | No |
> | virtualclusters | Yes | > | workspaces | Yes | > | workspaces / batchEndpoints | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | workspaces / models / versions | No | > | workspaces / onlineEndpoints | Yes | > | workspaces / onlineEndpoints / deployments | Yes |
-> | workspaces / registries | Yes |
> | workspaces / schedules | No | > | workspaces / services | No |
The resources are listed by resource provider namespace. To match a resource pro
> | mediaservices / accountFilters | No | > | mediaservices / assets | No | > | mediaservices / assets / assetFilters | No |
+> | mediaservices / assets / tracks | No |
> | mediaservices / contentKeyPolicies | No | > | mediaservices / eventGridFilters | No | > | mediaservices / graphInstances | No |
The resources are listed by resource provider namespace. To match a resource pro
> | cloudServicesNetworks | Yes | > | clusterManagers | Yes | > | clusters | Yes |
+> | clusters / admissions | No |
> | defaultCniNetworks | Yes | > | disks | Yes | > | hybridAksClusters | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
-> | Pki | Yes |
-> | Pkis | Yes |
-> | Pkis / certificateAuthorities | Yes |
-> | Pkis / enrollmentPolicies | Yes |
+> | pkis | Yes |
+> | pkis / certificateAuthorities | No |
+> | pkis / enrollmentPolicies | Yes |
## Microsoft.PlayFab
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | attestations | No |
+> | componentPolicyStates | No |
> | eventGridFilters | No | > | policyEvents | No | > | policyMetadata | No |
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | appliances | Yes |
+> | telemetryconfig | No |
## Microsoft.ResourceGraph
The resources are listed by resource provider namespace. To match a resource pro
> | deploymentStacks / snapshots | No | > | links | No | > | resourceGroups | No |
+> | snapshots | No |
> | subscriptions | No | > | tags | No | > | templateSpecs | Yes | > | templateSpecs / versions | Yes | > | tenants | No |
+> | validateResources | No |
## Microsoft.SaaS
The resources are listed by resource provider namespace. To match a resource pro
> | azureDevOpsConnectors / orgs | No | > | azureDevOpsConnectors / orgs / projects | No | > | azureDevOpsConnectors / orgs / projects / repos | No |
+> | azureDevOpsConnectors / repos | No |
+> | azureDevOpsConnectors / stats | No |
> | gitHubConnectors | Yes | > | gitHubConnectors / gitHubRepos | No | > | gitHubConnectors / owners | No | > | gitHubConnectors / owners / repos | No |
+> | gitHubConnectors / repos | No |
+> | gitHubConnectors / stats | No |
## Microsoft.SecurityInsights
The resources are listed by resource provider namespace. To match a resource pro
> | automationRules | No | > | bookmarks | No | > | cases | No |
+> | contentPackages | No |
+> | contentTemplates | No |
> | dataConnectorDefinitions | No | > | dataConnectors | No | > | enrichment | No |
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | clusters | Yes | > | clusters / applications | No |
+> | clusters / applications / services | No |
+> | clusters / applicationTypes | No |
+> | clusters / applicationTypes / versions | No |
> | edgeclusters | Yes | > | edgeclusters / applications | No | > | managedclusters | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | testBaseAccounts / externalTestTools | No | > | testBaseAccounts / externalTestTools / testCases | No | > | testBaseAccounts / featureUpdateSupportedOses | No |
+> | testBaseAccounts / firstPartyApps | No |
> | testBaseAccounts / flightingRings | No | > | testBaseAccounts / packages | Yes | > | testBaseAccounts / packages / favoriteProcesses | No |
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | imageTemplates | Yes | > | imageTemplates / runOutputs | No |
+> | imageTemplates / triggers | No |
## microsoft.visualstudio
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
When creating a new paid account, you need to connect the Azure Video Indexer ac
## Limited access features
-This section talks about limited access features in Azure Video Indexer.
-
-|When did I create the account?|Trial account (free)| Paid account <br/>(classic or ARM-based)|
-||||
-|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period.|
-|New VI accounts <br/><br/>created after June 21, 2022 |Not able the access face identification, customization and celebrities recognition as of today. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition). Based on the eligibility criteria we will enable the features (after max 10 days).|Azure Video Indexer disables the access face identification, customization and celebrities recognition as of today by default, but gives the option to enable it. <br/><br/>**Recommended**: Fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features (after max 10 days).|
-
-\*In Brazil South we also disabled the face detection.
For more information, see [Azure Video Indexer limited access features](limited-access-features.md).
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
Azure Video Indexer supports embedding widgets in your apps. For more informatio
## Next steps - [overview](video-indexer-overview.md)-- [Insights](video-indexer-output-json-v2.md)
+- Once you [set up](video-indexer-get-started.md), start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
azure-video-indexer Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/insights-overview.md
+
+ Title: Azure Video Indexer insights overview
+description: This article gives a brief overview of Azure Video Indexer insights.
+ Last updated : 10/19/2022+++
+# Azure Video Indexer insights
+
+Insights contain an aggregated view of the data: faces, topics, emotions. Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. For more information about available models, see [overview](video-indexer-overview.md).
++
+The [Azure Video Indexer](https://www.videoindexer.ai/) website enables you to use your video's deep insights to find the right media content, locate the parts that you're interested in, and use the results to create an entirely new project. Once created, the project can be rendered and downloaded from Azure Video Indexer and be used in your own editing applications or downstream workflows. For more information, see [Use editor to create projects](use-editor-create-project.md).
+
+Once you are [set up](video-indexer-get-started.md) with Azure Video Indexer, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
The Azure Video Indexer service is made available to customers and partners unde
## Limited access features
-This section talks about limited access features in Azure Video Indexer.
-
-|When did I create the account?|Trial Account (Free)| Paid Account <br/>(classic or ARM-based)|
-||||
-|Existing VI accounts <br/><br/>created before June 21, 2022|Able to access face identification, customization and celebrities recognition till June 2023. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period. |Able to access face identification, customization and celebrities recognition till June 2023\*.<br/><br/>**Recommended**: fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features also after the grace period.|
-|New VI accounts <br/><br/>created after June 21, 2022 |Not able the access face identification, customization and celebrities recognition as of today. <br/><br/>**Recommended**: Move to a paid account and afterward fill in the [intake form](https://aka.ms/facerecognition). Based on the eligibility criteria we will enable the features (after max 10 days).|Azure Video Indexer disables the access face identification, customization and celebrities recognition as of today by default, but gives the option to enable it. <br/><br/>**Recommended**: Fill in the [intake form](https://aka.ms/facerecognition) and based on the eligibility criteria we will enable the features (after max 10 days).|
-
-\*In Brazil South we also disabled the face detection.
## Help and support
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
See the [input container/file formats](/azure/media-services/latest/encode-media
After you upload and index a video, you can continue using [Azure Video Indexer website](video-indexer-view-edit.md) or [Azure Video Indexer Developer Portal](video-indexer-use-apis.md) to see the insights of the video (see [Examine the Azure Video Indexer output](video-indexer-output-json-v2.md)).
-For more details, see [Upload and index videos](upload-index-videos.md).
+## Start using insights
-To start using the APIs, see [use APIs](video-indexer-use-apis.md)
+For more details, see [Upload and index videos](upload-index-videos.md) and check out other **How to guides**.
## Next steps
-* For the API integration, seeΓÇ»[Use Azure Video Indexer REST API](video-indexer-use-apis.md).
* To embed widgets, see [Embed visual widgets in your application](video-indexer-embed-widgets.md).
-* Also, check out our [introduction lab](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/IntroToVideoIndexer.md).
+* For the API integration, see [Use Azure Video Indexer REST API](video-indexer-use-apis.md).
+* Check out our [introduction lab](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/IntroToVideoIndexer.md).
At the end of the workshop, you'll have a good understanding of the kind of information that can be extracted from video and audio content, and you'll be more prepared to identify opportunities related to content intelligence, pitch video AI on Azure, and demo several scenarios on Azure Video Indexer.
azure-video-indexer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-overview.md
When indexing by one channel, partial result for those models will be available.
Learn how to [get started with Azure Video Indexer](video-indexer-get-started.md).
+Once you're set up, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides**.
+ ## Next steps You're ready to get started with Azure Video Indexer. For more information, see the following articles:
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
Title: Troubleshoot backup errors with Azure VMs
description: In this article, learn how to troubleshoot errors encountered with backup and restore of Azure virtual machines. Previously updated : 09/07/2022 Last updated : 10/20/2022
Error code: UserErrorRequestDisallowedByPolicy <BR> Error message: An invalid p
If you have an Azure Policy that [governs tags within your environment](../governance/policy/tutorials/govern-tags.md), either consider changing the policy from a [Deny effect](../governance/policy/concepts/effects.md#deny) to a [Modify effect](../governance/policy/concepts/effects.md#modify), or create the resource group manually according to the [naming schema required by Azure Backup](./backup-during-vm-creation.md#azure-backup-resource-group-for-virtual-machines).
+### UserErrorUnableToOpenMount
+
+**Error code**: UserErrorUnableToOpenMount
+
+**Cause**: Backups failed because the backup extensions on the VM were unable to open the mount points in the VM.
+
+**Recommended action**: The backup extension on the VM must be able to access all mount points in the VM to determine the underlying disks, take a snapshot, and calculate the size. Ensure that all mount points are accessible.
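
As a minimal sketch of that check, assuming a Linux VM and the `/proc/mounts` mount table (neither is named in the article), the following Python flags any mount point the current user can't open:

```python
import os

# Linux-only sketch: read the mount table and flag mount points that the
# current user cannot open, which is the condition behind UserErrorUnableToOpenMount.
with open("/proc/mounts") as mount_table:
    mount_points = [line.split()[1] for line in mount_table if len(line.split()) > 1]

for mount_point in mount_points:
    accessible = os.access(mount_point, os.R_OK | os.X_OK)
    print(f"{mount_point}: {'accessible' if accessible else 'NOT accessible'}")
```

Any mount point reported as not accessible is a likely cause of the failure described above.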
+ ## Jobs | Error details | Workaround |
cognitive-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-get.md
By default, the results are stored in a container managed by Microsoft. When the
::: zone pivot="rest-api"
-The [GetTranscriptionsFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionsFiles) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
+The [GetTranscriptionFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
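
As an illustrative sketch, the GET request might look like the following in Python. The `speechtotext/v3.0/transcriptions/{id}/files` path and the response fields (`values`, `kind`, `links.contentUrl`) are assumptions about the v3.0 API shape; in practice, use the "files" URI returned when the transcription was created.

```python
import requests

# Placeholders from the article.
transcription_id = "YourTranscriptionId"
subscription_key = "YourSubscriptionKey"
service_region = "YourServiceRegion"

# Assumed v3.0 "files" URI; prefer the "files" link from the transcription's
# own response body.
url = (
    f"https://{service_region}.api.cognitive.microsoft.com"
    f"/speechtotext/v3.0/transcriptions/{transcription_id}/files"
)

response = requests.get(url, headers={"Ocp-Apim-Subscription-Key": subscription_key})
response.raise_for_status()

# Each entry is either the transcription report or a per-audio-file result.
for file_info in response.json().get("values", []):
    print(file_info.get("kind"), file_info.get("links", {}).get("contentUrl"))
```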
Depending in part on the request parameters set when you created the transcripti
- [Batch transcription overview](batch-transcription.md) - [Locate audio files for batch transcription](batch-transcription-audio-data.md)-- [Create a batch transcription](batch-transcription-create.md)
+- [Create a batch transcription](batch-transcription-create.md)
cognitive-services Devices Sdk Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md
The following sections list changes in the most recent releases.
## Speech Devices SDK 1.11.0: - Support for arbitrary microphone array geometries and setting the working angle through a [configuration file](https://aka.ms/sdsdk-micarray-json).-- Support for [Urbetter DDK](http://www.urbetter.com/products_56/278.html).
+- Support for [Urbetter DDK](https://urbetters.com/collections).
- Released binaries for the [GGEC Speaker](https://aka.ms/sdsdk-download-speaker) used in our [Voice Assistant sample](https://aka.ms/sdsdk-speaker). - Released binaries for [Linux ARM32](https://aka.ms/sdsdk-download-linux-arm32) and [Linux ARM 64](https://aka.ms/sdsdk-download-linux-arm64) for Raspberry Pi and similar devices. - Updated the [Speech SDK](./speech-sdk.md) component to version 1.11.0. For more information, see its [release notes](./releasenotes.md).
cognitive-services How To Custom Speech Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-continuous-integration-continuous-deployment.md
Title: CI/CD for Custom Speech - Speech service
description: Apply DevOps with Custom Speech and CI/CD workflows. Implement an existing DevOps solution for your own project. -+ Last updated 05/08/2022-+ # CI/CD for Custom Speech
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
Training with plain text or structured text usually finishes within a few minute
> > Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
-If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [CopyModelToSubscriptionToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) REST API.
+If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) REST API.
## Consider datasets by scenario
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
Copying a model directly to a project in another region is not supported with th
::: zone pivot="rest-api"
-To copy a model to another Speech resource, use the [CopyModelToSubscriptionToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To copy a model to another Speech resource, use the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the required `targetSubscriptionKey` property to the key of the destination Speech resource.
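
As a sketch only, the request might be built like this in Python. The `/models/{id}/copyto` path and the placeholder IDs and keys are assumptions; the required `targetSubscriptionKey` body property comes from the instruction above.

```python
import requests

# Hypothetical placeholders: the model in the source region and the key of
# the destination Speech resource.
source_region = "YourServiceRegion"
source_key = "YourSubscriptionKey"
model_id = "YourModelId"
target_key = "YourTargetSubscriptionKey"

# Required property described above: the key of the destination resource.
body = {"targetSubscriptionKey": target_key}

# Assumed v3.0 copy endpoint for the CopyModelToSubscription operation.
url = (
    f"https://{source_region}.api.cognitive.microsoft.com"
    f"/speechtotext/v3.0/models/{model_id}/copyto"
)

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": source_key},
    json=body,
)
response.raise_for_status()

# The response describes the copied model; its ID is needed to connect it
# to a project in the destination resource.
print(response.json().get("self"))
```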
To connect a new model to a project of the Speech resource where the model was c
- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
-Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModelToSubscriptionToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
```azurecli-interactive curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
cognitive-services Resiliency And Recovery Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md
You should create Speech Service resources in both a main and a secondary region
Custom Speech Service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails. 1. Create your custom model in one main region (Primary).
-2. Run the [CopyModelToSubscriptionToSubscription](https://eastus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) operation to replicate the custom model to all prepared regions (Secondary).
+2. Run the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) operation to replicate the custom model to all prepared regions (Secondary).
3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Deploy a Custom Speech model](./how-to-custom-speech-deploy-model.md). - If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md). 4. Configure your client to fail over on persistent errors as with the default endpoints usage.
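
To make step 4 of this preparation concrete, here is a minimal, hypothetical sketch of client-side failover between a primary and a secondary region. The region list, keys, endpoint IDs, and the `recognize` callable are all placeholders you would replace with your own Speech client code.

```python
# Hypothetical per-region configuration: replace with your real resource keys
# and the custom endpoint IDs you deployed in each region.
REGIONS = [
    {"name": "primary", "region": "eastus", "key": "PrimaryKey", "endpoint_id": "PrimaryEndpointId"},
    {"name": "secondary", "region": "westus", "key": "SecondaryKey", "endpoint_id": "SecondaryEndpointId"},
]


def recognize_with_failover(recognize, max_attempts_per_region=3):
    """Try the primary region first; on persistent errors, move to the secondary."""
    last_error = None
    for region in REGIONS:
        for _ in range(max_attempts_per_region):
            try:
                # `recognize` is your own function that calls the Speech service
                # using the given region's key and custom endpoint ID.
                return recognize(region)
            except Exception as error:  # treat transport/service errors as retryable
                last_error = error
    raise RuntimeError("Recognition failed in all configured regions") from last_error
```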
Check the [public voices available](language-support.md?tabs=stt-tts). You can a
Speaker Recognition uses [Azure paired regions](../../availability-zones/cross-region-replication-azure.md) to automatically fail over operations. Speaker enrollments and voice signatures are backed up regularly to prevent data loss and to be used if there's an outage.
-During an outage, Speaker Recognition service will automatically fail over to a paired region and use the backed-up data to continue processing requests until the main region is back online.
+During an outage, Speaker Recognition service will automatically fail over to a paired region and use the backed-up data to continue processing requests until the main region is back online.
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md
The following limits are observed for the conversational language understanding.
|Item|Lower Limit| Upper Limit | | | | |
-|Count of utterances per project | 1 | 15,000|
+|Count of utterances per project | 1 | 25,000|
|Utterance length in characters | 1 | 500 | |Count of intents per project | 1 | 500| |Count of entities per project | 1 | 500|
cognitive-services Entity Resolutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/concepts/entity-resolutions.md
A resolution is a standard format for an entity. Entities can be expressed in va
You can use NER resolutions to implement actions or retrieve further information. For example, your service can use datetime entities to extract dates and times that are then provided to a meeting scheduling system.
+> [!NOTE]
+> Entity resolution responses are only supported starting from **_api-version=2022-10-01-preview_** and **_"modelVersion": "2022-10-01-preview"_**.
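
As an illustrative sketch, a request that opts in to resolutions might look like the following in Python. The `language/:analyze-text` route, the `kind`/`parameters` body shape, and the endpoint placeholder are assumptions about the Language REST API; the two `2022-10-01-preview` values match the note above.

```python
import requests

# Hypothetical Language resource endpoint and key.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
key = "YourLanguageKey"

# Both the api-version query parameter and modelVersion must be
# 2022-10-01-preview for resolutions to be returned.
url = f"{endpoint}/language/:analyze-text?api-version=2022-10-01-preview"
body = {
    "kind": "EntityRecognition",
    "parameters": {"modelVersion": "2022-10-01-preview"},
    "analysisInput": {
        "documents": [{"id": "1", "language": "en", "text": "The meeting lasted 30 minutes."}]
    },
}

response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": key}, json=body)
response.raise_for_status()

# Print each recognized entity with any attached resolution objects.
for document in response.json()["results"]["documents"]:
    for entity in document["entities"]:
        print(entity["text"], entity.get("resolutions", []))
```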
+ This article documents the resolution objects returned for each entity category or subcategory. ## Age
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md
Use this article to learn which natural languages are supported by the NER featu
> [!NOTE] > * Languages are added as new [model versions](how-to-call.md#specify-the-ner-model) are released.
-> * Only "Person", "Location" and "Organization" entities are returned for languages marked with *.
-> * The current model version for NER is `2021-06-01`.
+> * The language support below is for model version `2022-10-01-preview`.
## NER language support
-| Language | Language code | Starting with model version: | Supports entity resolution | Notes |
-|:-|:-:|:-:|:--:|::|
-| Arabic* | `ar` | 2019-10-01 | | |
-| Chinese-Simplified | `zh-hans` | 2021-01-15 | Γ£ô | `zh` also accepted |
-| Chinese-Traditional* | `zh-hant` | 2019-10-01 | | |
-| Czech* | `cs` | 2019-10-01 | | |
-| Danish* | `da` | 2019-10-01 | | |
-| Dutch* | `nl` | 2019-10-01 | Γ£ô | |
-| English | `en` | 2019-10-01 | Γ£ô | |
-| Finnish* | `fi` | 2019-10-01 | | |
-| French | `fr` | 2021-01-15 | Γ£ô | |
-| German | `de` | 2021-01-15 | Γ£ô | |
-| Hebrew | `he` | 2022-10-01 | | |
-| Hindi | `hi` | 2022-10-01 | Γ£ô | |
-| Hungarian* | `hu` | 2019-10-01 | | |
-| Italian | `it` | 2021-01-15 | Γ£ô | |
-| Japanese | `ja` | 2021-01-15 | Γ£ô | |
-| Korean | `ko` | 2021-01-15 | | |
-| Norwegian (Bokmål)* | `no` | 2019-10-01 | | `nb` also accepted |
-| Polish* | `pl` | 2019-10-01 | | |
-| Portuguese (Brazil) | `pt-BR` | 2021-01-15 | Γ£ô | |
-| Portuguese (Portugal) | `pt-PT` | 2021-01-15 | | `pt` also accepted |
-| Russian* | `ru` | 2019-10-01 | | |
-| Spanish | `es` | 2020-04-01 | Γ£ô | |
-| Swedish* | `sv` | 2019-10-01 | | |
-| Turkish* | `tr` | 2019-10-01 | Γ£ô | |
+|Language |Language code|Supports resolution|Notes |
+|--|--|--|--|
+|Arabic |`ar` | | |
+|Chinese-Simplified |`zh-hans` |✓ |`zh` also accepted|
+|Chinese-Traditional |`zh-hant` | | |
+|Czech |`cs` | | |
+|Danish |`da` | | |
+|Dutch |`nl` |✓ | |
+|English |`en` |✓ | |
+|Finnish |`fi` | | |
+|French |`fr` |✓ | |
+|German |`de` |✓ | |
+|Hebrew |`he` | | |
+|Hindi |`hi` |✓ | |
+|Hungarian |`hu` | | |
+|Italian |`it` |✓ | |
+|Japanese |`ja` |✓ | |
+|Korean |`ko` | | |
+|Norwegian (Bokmål) |`no` | |`nb` also accepted|
+|Polish |`pl` | | |
+|Portuguese (Brazil) |`pt-BR` |✓ | |
+|Portuguese (Portugal)|`pt-PT` | |`pt` also accepted|
+|Russian |`ru` | | |
+|Spanish |`es` |✓ | |
+|Swedish |`sv` | | |
+|Turkish |`tr` |✓ | |
+ ## Next steps
communication-services Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights.md
The **SMS** tab displays the operations and results for SMS usage through an Azu
:::image type="content" source="media\workbooks\sms.png" alt-text="SMS tab"::: The **Email** tab displays delivery status, email size, and email count:
-[Screenshot displays email count, size and email delivery status level that illustrate email insights]
## Editing dashboards
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/logging-and-diagnostics.md
Communication Services offers the following types of logs that you can enable:
| SdkType | The SDK type used in the request. | | PlatformType | The platform type used in the request. | | Method | The method used in the request. |
+|NumberType| The type of number the SMS message is sent from. It can be either **LongCodeNumber** or **ShortCodeNumber**. |
### Authentication operational logs
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
Title: Service limits for Azure Communication Services description: Learn how to-+ -+ Last updated 11/01/2021
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Title: Azure Communication Services Call Recording overview description: Provides an overview of the Call Recording feature and APIs.-+ -+ Last updated 06/30/2021
communication-services Quick Create Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/quick-create-identity.md
Title: Quickstart - Quickly create Azure Communication Services identities for testing description: Learn how to use the Identities & Access Tokens tool in the Azure portal to use with samples and for troubleshooting.-+ -+ Last updated 07/19/2021
communication-services Call Recording Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/call-recording-sample.md
Title: Azure Communication Services Call Recording API quickstart
description: Provides a quickstart sample for the Call Recording APIs. -+ -+ Last updated 06/30/2021
communication-services Download Recording File Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/download-recording-file-sample.md
- Title: Record and download calls with Event Grid - An Azure Communication Services quickstart-
-description: In this quickstart, you'll learn how to record and download calls using Event Grid.
---- Previously updated : 06/30/2021------
-# Record and download calls with Event Grid
--
-Get started with Azure Communication Services by recording your Communication Services calls using Azure Event Grid.
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Communication Services resource. [Create a Communication Services resource](../create-communication-resource.md?pivots=platform-azp&tabs=windows).-- The [`Microsoft.Azure.EventGrid`](https://www.nuget.org/packages/Microsoft.Azure.EventGrid/) NuGet package.-
-## Create a webhook and subscribe to the recording events
-We'll use *webhooks* and *events* to facilitate call recording and media file downloads.
-
-First, we'll create a webhook. Your Communication Services resource will use Event Grid to notify this webhook when the `recording` event is triggered, and then again when recorded media is ready to be downloaded.
-
-You can write your own custom webhook to receive these event notifications. It's important for this webhook to respond to inbound messages with the validation code to successfully subscribe the webhook to the event service.
-
-```csharp
-[HttpPost]
-public async Task<ActionResult> PostAsync([FromBody] object request)
- {
- //Deserializing the request
- var eventGridEvent = JsonConvert.DeserializeObject<EventGridEvent[]>(request.ToString())
- .FirstOrDefault();
- var data = eventGridEvent.Data as JObject;
-
- // Validate whether EventType is of "Microsoft.EventGrid.SubscriptionValidationEvent"
- if (string.Equals(eventGridEvent.EventType, EventTypes.EventGridSubscriptionValidationEvent, StringComparison.OrdinalIgnoreCase))
- {
- var eventData = data.ToObject<SubscriptionValidationEventData>();
- var responseData = new SubscriptionValidationResponseData
- {
- ValidationResponse = eventData.ValidationCode
- };
- if (responseData.ValidationResponse != null)
- {
- return Ok(responseData);
- }
- }
-
- // Implement your logic here.
- ...
- ...
- }
-```
-
-The above code depends on the `Microsoft.Azure.EventGrid` NuGet package. To learn more about Event Grid endpoint validation, visit the [endpoint validation documentation](../../../event-grid/receive-events.md#endpoint-validation)
-
-We'll then subscribe this webhook to the `recording` event:
-
-1. Select the `Events` blade from your Azure Communication Services resource.
-2. Select `Event Subscription` as shown below.
-![Screenshot showing event grid UI](./media/call-recording/image1-event-grid.png)
-3. Configure the event subscription and select `Call Recording File Status Update` as the `Event Type`. Select `Webhook` as the `Endpoint type`.
-![Create Event Subscription](./media/call-recording/image2-create-event-subscription.png)
-4. Input your webhook's URL into `Subscriber Endpoint`.
-![Subscribe to Event](./media/call-recording/image3-subscribe-to-event.png)
-
-Your webhook will now be notified whenever your Communication Services resource is used to record a call.
-
-## Notification schema
-When the recording is available to download, your Communication Services resource will emit a notification with the following event schema. The document IDs for the recording can be fetched from the `documentId` fields of each `recordingChunk`.
-
-```json
-{
- "id": string, // Unique guid for event
- "topic": string, // Azure Communication Services resource id
- "subject": string, // /recording/call/{call-id}
- "data": {
- "recordingStorageInfo": {
- "recordingChunks": [
- {
- "documentId": string, // Document id for retrieving from AMS storage
- "index": int, // Index providing ordering for this chunk in the entire recording
-          "endReason": string, // Reason for chunk ending: "SessionEnded", "ChunkMaximumSizeExceeded", etc.
- }
- ]
- },
- "recordingStartTime": string, // ISO 8601 date time for the start of the recording
- "recordingDurationMs": int, // Duration of recording in milliseconds
-    "sessionEndReason": string // Reason for call ending: "CallEnded", "InitiatorLeft", etc.
- },
- "eventType": string, // "Microsoft.Communication.RecordingFileStatusUpdated"
- "dataVersion": string, // "1.0"
- "metadataVersion": string, // "1"
- "eventTime": string // ISO 8601 date time for when the event was created
-}
-
-```
-
-## Download the recorded media files
-
-Once we get the document ID for the file we want to download, we'll call the below Azure Communication Services APIs to download the recorded media and metadata using HMAC authentication.
-
-The maximum recording file size is 1.5GB. When this file size is exceeded, the recorder will automatically split recorded media into multiple files.
-
-The client should be able to download all media files with a single request. If there's an issue, the client can retry with a range header to avoid redownloading segments that have already been downloaded.
-
-To download recorded media:
-- Method: `GET` -- URL: https://contoso.communication.azure.com/recording/download/{documentId}?api-version=2021-04-15-preview1-
-To download recorded media metadata:
-- Method: `GET` -- URL: https://contoso.communication.azure.com/recording/download/{documentId}/metadata?api-version=2021-04-15-preview1--
-### Authentication
-To download recorded media and metadata, use HMAC authentication to authenticate the request against Azure Communication Services APIs.
-
-Create an `HttpClient` and add the necessary headers using the `HmacAuthenticationUtils` provided below:
-
-```csharp
- var client = new HttpClient();
-
- // Set Http Method
- var method = HttpMethod.Get;
- StringContent content = null;
-
- // Build request
- var request = new HttpRequestMessage
- {
- Method = method, // Http GET method
- RequestUri = new Uri(<Download_Recording_Url>), // Download recording Url
- Content = content // content if required for POST methods
- };
-
- // Question: Why do we need to pass String.Empty to CreateContentHash() method?
- // Answer: In HMAC authentication, the hash of the content is one of the parameters used to generate the HMAC token.
- // In our case our recording download APIs are GET methods and do not have any content/body to be passed in the request.
- // However in this case we still need the SHA256 hash for the empty content and hence we pass an empty string.
--
- string serializedPayload = string.Empty;
-
- // Hash the content of the request.
- var contentHashed = HmacAuthenticationUtils.CreateContentHash(serializedPayload);
-
- // Add HMAC headers.
-    HmacAuthenticationUtils.AddHmacHeaders(request, contentHashed, accessKey);
-
- // Make a request to the Azure Communication Services APIs mentioned above
- var response = await client.SendAsync(request).ConfigureAwait(false);
-```
-
-#### HmacAuthenticationUtils
-The below utilities can be used to manage your HMAC workflow.
-
-**Create content hash**
-
-```csharp
-public static string CreateContentHash(string content)
-{
- var alg = SHA256.Create();
-
- using (var memoryStream = new MemoryStream())
- using (var contentHashStream = new CryptoStream(memoryStream, alg, CryptoStreamMode.Write))
- {
- using (var swEncrypt = new StreamWriter(contentHashStream))
- {
- if (content != null)
- {
- swEncrypt.Write(content);
- }
- }
- }
-
- return Convert.ToBase64String(alg.Hash);
-}
-```
-
-**Add HMAC headers**
-
-```csharp
-public static void AddHmacHeaders(HttpRequestMessage requestMessage, string contentHash, string accessKey)
-{
- var utcNowString = DateTimeOffset.UtcNow.ToString("r", CultureInfo.InvariantCulture);
- var uri = requestMessage.RequestUri;
- var host = uri.Authority;
- var pathAndQuery = uri.PathAndQuery;
-
- var stringToSign = $"{requestMessage.Method}\n{pathAndQuery}\n{utcNowString};{host};{contentHash}";
- var hmac = new HMACSHA256(Convert.FromBase64String(accessKey));
- var hash = hmac.ComputeHash(Encoding.ASCII.GetBytes(stringToSign));
- var signature = Convert.ToBase64String(hash);
- var authorization = $"HMAC-SHA256 SignedHeaders=date;host;x-ms-content-sha256&Signature={signature}";
-
- requestMessage.Headers.Add("x-ms-content-sha256", contentHash);
- requestMessage.Headers.Add("Date", utcNowString);
- requestMessage.Headers.Add("Authorization", authorization);
-}
-```
-
-## Clean up resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md?pivots=platform-azp&tabs=windows#clean-up-resources).
--
-## Next steps
-For more information, see the following articles:
--- Check out our [web calling sample](../../samples/web-calling-sample.md)-- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web)-- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
confidential-ledger Authentication Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/authentication-azure-ad.md
To do so, the client performs a two-step process:
Azure confidential ledger then executes the request on behalf of the security principal for which Azure AD issued the access token. All authorization checks are performed using this identity. In most cases, the recommendation is to use one of Azure confidential ledger SDKs to access the service programmatically, as they remove much of the hassle of implementing the
-flow above (and much more). See, for example, the [Python client library](https://pypi.org/project/azure-confidentialledger/) and [.NET client library](/dotnet/api/overview/azure/storage.confidentialledger-readme-pre).
+flow above (and much more). See, for example, the [Python client library](https://pypi.org/project/azure-confidentialledger/) and [.NET client library](/dotnet/api/azure.security.confidentialledger).
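For example, a minimal .NET sketch of letting the client library drive the token flow might look like the following; every identifier shown is a placeholder, not a value from this article:

```csharp
using System;
using Azure.Identity;
using Azure.Security.ConfidentialLedger;

// All identifiers below are placeholders for your own tenant, app registration, and ledger.
var credential = new ClientSecretCredential(
    tenantId: "<tenant-id>",
    clientId: "<application-id>",          // the Application (client) ID of the registered app
    clientSecret: "<authentication-key>"); // the shared secret created for that app

// The client library acquires and refreshes Azure AD access tokens for you
// and attaches them to every request.
var ledgerClient = new ConfidentialLedgerClient(
    new Uri("https://<your-ledger-name>.confidential-ledger.azure.com"),
    credential);

// Subsequent calls made through ledgerClient are authorized as this service principal.
```

Swapping `ClientSecretCredential` for `DefaultAzureCredential` lets the same code pick up managed identities or developer sign-in without changes.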
The main authenticating scenarios are:
For detailed steps on registering an Azure confidential ledger application with
At the end of registration, the application owner gets the following values: -- An **Application ID** (also known as the AAD Client ID or appID)
+- An **Application ID** (also known as the Azure Active Directory Client ID or appID)
- An **authentication key** (also known as the shared secret). The application must present both these values to Azure Active Directory to get a token.
This flow is called the [OAuth2 token exchange flow](https://tools.ietf.org/html/
- [Integrating applications with Azure Active Directory](../active-directory/develop/quickstart-register-app.md) - [Use portal to create an Azure AD application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md) - [Create an Azure service principal with the Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli).-- [Authenticating Azure confidential ledger nodes](authenticate-ledger-nodes.md)
+- [Authenticating Azure confidential ledger nodes](authenticate-ledger-nodes.md)
confidential-ledger Quickstart Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-net.md
Get started with the Azure confidential ledger client library for .NET. [Azure c
Azure confidential ledger client library resources:
-[API reference documentation](/dotnet/api/overview/azure/security.confidentialledger-readme-pre) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/confidentialledger/Azure.Security.ConfidentialLedger) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Security.ConfidentialLedger/1.0.0)
+[API reference documentation](/dotnet/api/overview/azure/security.confidentialledger-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/confidentialledger/Azure.Security.ConfidentialLedger) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Security.ConfidentialLedger/1.0.0)
## Prerequisites
dotnet add package Azure.Identity
## Object model
-The Azure confidential ledger client library for .NET allows you to create an immutable ledger entry in the service. The [Code examples](#code-examples) section shows how to create a write to the ledger and retrieve the transaction id.
+The Azure confidential ledger client library for .NET allows you to create an immutable ledger entry in the service. The [Code examples](#code-examples) section shows how to create a write to the ledger and retrieve the transaction ID.
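As a minimal sketch of that flow (the ledger URI below is a placeholder, and the `PostLedgerEntry` call shape is an assumption based on the 1.0.0 protocol methods rather than the article's own sample):

```csharp
using System;
using Azure;
using Azure.Core;
using Azure.Identity;
using Azure.Security.ConfidentialLedger;

// The ledger URI is a placeholder.
var ledgerClient = new ConfidentialLedgerClient(
    new Uri("https://<your-ledger-name>.confidential-ledger.azure.com"),
    new DefaultAzureCredential());

// Append an entry and wait until it's durably committed.
// PostLedgerEntry is assumed here from the 1.0.0 protocol-method surface.
Operation postOperation = ledgerClient.PostLedgerEntry(
    WaitUntil.Completed,
    RequestContent.Create(BinaryData.FromObjectAsJson(new { contents = "Hello world!" })));

// The operation id doubles as the transaction ID of the new ledger entry.
Console.WriteLine($"Transaction ID: {postOperation.Id}");
```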
## Code examples
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
The following configurations aren't restored after the point-in-time recovery:
* Consistency settings. By default, the account is restored with session consistency.
* Regions.
* Stored procedures, triggers, UDFs.
+* Role-based access control assignments. These will need to be re-assigned.
You can add these configurations to the restored account after the restore is completed.
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
To migrate an Azure Cosmos DB account from using IP firewall rules to using virt
After an Azure Cosmos DB account is configured for a service endpoint for a subnet, each request from that subnet is sent differently to Azure Cosmos DB. The requests are sent with virtual network and subnet source information instead of a source public IP address. These requests will no longer match an IP filter configured on the Azure Cosmos DB account, which is why the following steps are necessary to avoid downtime.
-Before proceeding, enable the Azure Cosmos DB service endpoint on the virtual network and subnet using the step shown above in "Enable the service endpoint for an existing subnet of a virtual network".
- 1. Get virtual network and subnet information: ```powershell
Before proceeding, enable the Azure Cosmos DB service endpoint on the virtual ne
1. Repeat the previous steps for all Azure Cosmos DB accounts accessed from the subnet.
+1. Enable the Azure Cosmos DB service endpoint on the virtual network and subnet using the step shown in the [Enable the service endpoint for an existing subnet of a virtual network](#configure-using-powershell) section of this article.
+ 1. Remove the IP firewall rule for the subnet from the Azure Cosmos DB account's Firewall rules. ## Frequently asked questions
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
The container copy job will run in the write region. If there are accounts confi
The account's write region may change in the rare scenario of a region outage or due to manual failover. In such a scenario, incomplete container copy jobs created within the account would fail. You would need to recreate these failed jobs. Recreated jobs would then run in the new (current) write region.
-### Why is a new database *_datatransferstate* created in the account when I run container copy jobs? Am I being charged for this database?
-* *_datatransferstate* is a database that is created while running container copy jobs. This database is used by the platform to store the state and progress of the copy job.
+### Why is a new database *__datatransferstate* created in the account when I run container copy jobs? Am I being charged for this database?
+* *__datatransferstate* is a database that is created while running container copy jobs. This database is used by the platform to store the state and progress of the copy job.
* The database uses manual provisioned throughput of 800 RUs. You'll be charged for this database.
-* Deleting this database will remove the container copy job history from the account. It can be safely deleted once all the jobs in the account have completed, if you no longer need the job history. The platform will not clean up the *_datatransferstate* database automatically.
+* Deleting this database will remove the container copy job history from the account. It can be safely deleted once all the jobs in the account have completed, if you no longer need the job history. The platform will not clean up the *__datatransferstate* database automatically.
## Supported regions
Make sure the target container is created before running the job as specified in
* Error - Shared throughput database creation is not supported for serverless accounts Job creation on serverless accounts may fail with the error *"Shared throughput database creation is not supported for serverless accounts"*.
-As a work-around, create a database called *_datatransferstate* manually within the account and try creating the container copy job again.
+As a work-around, create a database called *__datatransferstate* manually within the account and try creating the container copy job again.
``` ERROR: (BadRequest) Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx; Reason: (Shared throughput database creation is not supported for serverless accounts.
cosmos-db Linux Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/linux-emulator.md
Use the following steps to run the emulator on Linux:
|Name |Default |Description |
||||
-| Ports: `-p` | | Currently, only ports 8081 and 10251-10255 are needed by the emulator endpoint. |
+| Ports: `-p` | | Currently, only ports `8081` and `10250-10255` are needed by the emulator endpoint. |
| `AZURE_COSMOS_EMULATOR_PARTITION_COUNT` | 10 | Controls the total number of physical partitions, which in turn controls the number of containers that can be created and exist at a given point in time. We recommend starting small (for example, 3) to improve the emulator startup time. |
| Memory: `-m` | | 3 GB or more of memory is required. |
| Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores. At least four cores are recommended. |
This section provides tips to troubleshoot errors when using the Linux emulator.
- Verify that the specific emulator container is in a running state. -- Verify that no other applications are using emulator ports: 8081 and 10250-10255.
+- Verify that no other applications are using emulator ports: `8081` and `10250-10255`.
-- Verify that the container port 8081, is mapped correctly and accessible from an environment outside of the container.
+- Verify that the container port `8081`, is mapped correctly and accessible from an environment outside of the container.
```bash netstat -lt
When reporting an issue with the Linux emulator, provide as much information as
- Description of the workload - Sample of the database/collection and item used - Include the console output from starting the Docker container for the emulator in attached mode-- Send all of the above to [Azure Cosmos DB team](mailto:cdbportalfeedback@microsoft.com).
+- Post feedback on our [Azure Cosmos DB Q&A forums](/answers/topics/azure-cosmos-db.html).
## Next steps
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md
Previously updated : 06/01/2022 Last updated : 10/20/2022
In this step, you'll query the document endpoint for the API for NoSQL account.
## Grant access to your Azure Cosmos DB account
-In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity. For this solution, you'll use the [Azure Cosmos DB Built-in Data Reader](how-to-setup-rbac.md#built-in-role-definitions) role.
+In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity for control-plane access. For data-plane access, you'll create a new custom role with access to read metadata.
> [!TIP]
-> When you assign roles, assign only the needed access. If your service requires only reading data, then assign the **Cosmos DB Built-in Data Reader** role to the managed identity. For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
+> For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article.
1. Use ``az cosmosdb show`` with the **query** parameter set to ``id``. Store the result in a shell variable named ``scope``.
In this step, you'll assign a role to the function app's system-assigned managed
## Programmatically access the Azure Cosmos DB keys
-We now have a function app that has a system-assigned managed identity with the **Cosmos DB Built-in Data Reader** role. The following function app will query the Azure Cosmos DB account for a list of databases.
+We now have a function app that has a system-assigned managed identity with the custom role. The following function app will query the Azure Cosmos DB account for a list of databases.
1. Create a local function project with the ``--dotnet`` parameter in a folder named ``csmsfunc``. Change your shell's directory
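As a rough sketch of where these steps lead (not the article's full function code; the endpoint below is a placeholder), the function body that lists databases with the managed identity might look like this:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Microsoft.Azure.Cosmos;

public static class DatabaseLister
{
    public static async Task ListDatabasesAsync()
    {
        // Placeholder endpoint; use the document endpoint captured earlier.
        string endpoint = "https://<your-account>.documents.azure.com:443/";

        // In Azure, DefaultAzureCredential resolves to the function app's
        // system-assigned managed identity.
        using CosmosClient client = new CosmosClient(endpoint, new DefaultAzureCredential());

        // Enumerating databases needs metadata read permission on the data plane,
        // which the custom role described above grants.
        FeedIterator<DatabaseProperties> iterator = client.GetDatabaseQueryIterator<DatabaseProperties>();
        while (iterator.HasMoreResults)
        {
            foreach (DatabaseProperties database in await iterator.ReadNextAsync())
            {
                Console.WriteLine(database.Id);
            }
        }
    }
}
```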
cosmos-db Connect Using Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-using-mongoose.md
After you create the database, you'll use the name in the `COSMOSDB_DBNAME` envi
3. Install the necessary packages using one of the ```npm install``` options:
- * Mongoose: ```npm install mongoose@5 --save```
+ * **Mongoose**: ```npm install mongoose@5.13.15 --save```
- > [!Note]
- > The Mongoose example connection below is based on Mongoose 5+, which has changed since earlier versions.
+ > [!IMPORTANT]
+ > The Mongoose example connection below is based on Mongoose 5+, which has changed since earlier versions. Azure Cosmos DB for MongoDB is compatible with up to version `5.13.15` of Mongoose. For more information, please see the [issue discussion](https://github.com/Automattic/mongoose/issues/11072) in the Mongoose GitHub repository.
- * Dotenv (if you'd like to load your secrets from an .env file): ```npm install dotenv --save```
+ * **Dotenv** *(if you'd like to load your secrets from an .env file)*: ```npm install dotenv --save```
>[!Note] > The ```--save``` flag adds the dependency to the package.json file.
cosmos-db Tutorial Develop Nodejs Part 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-5.md
Mongoose is an object data modeling (ODM) library for MongoDB and Node.js. You c
1. Install the mongoose npm module, which is an API that's used to talk to MongoDB. ```bash
- npm i mongoose --save
+ npm install mongoose@5.13.15 --save
```
+ > [!IMPORTANT]
+ > Azure Cosmos DB for MongoDB is compatible with up to version `5.13.15` of Mongoose. For more information, please see the [issue discussion](https://github.com/Automattic/mongoose/issues/11072) in the Mongoose GitHub repository.
+ 1. In the **server** folder, create a file named **mongo.js**. You'll add the connection details of your Azure Cosmos DB account to this file. 1. Copy the following code into the **mongo.js** file. The code provides the following functionality:
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabl
To learn how to query using this newly enabled feature, visit [advanced queries](advanced-queries.md). ## Next steps
+* For a reference of the log and metric data, see [monitoring Azure Cosmos DB data reference](monitor-reference.md#resource-logs).
+ * For more information on how to query resource-specific tables see [troubleshooting using resource-specific tables](monitor-logs-basic-queries.md#resource-specific-queries). * For more information on how to query AzureDiagnostics tables see [troubleshooting using AzureDiagnostics tables](monitor-logs-basic-queries.md#azure-diagnostics-queries).
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practice-dotnet.md
Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure C
For workloads that have heavy create payloads, set the `EnableContentResponseOnWrite` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance.
+> [!IMPORTANT]
+> Setting `EnableContentResponseOnWrite` to `false` will also disable the response from a trigger operation.
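
For example, a minimal sketch of disabling the content response on a write; the item shape and the `/customerId` partition key path are assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class WriteWithoutContentResponse
{
    // Assumes a container partitioned on "/customerId"; adjust for your schema.
    public static async Task CreateOrderAsync(Container container)
    {
        var item = new { id = Guid.NewGuid().ToString(), customerId = "c-001", total = 42.0 };

        var response = await container.CreateItemAsync(
            item,
            new PartitionKey(item.customerId),
            new ItemRequestOptions { EnableContentResponseOnWrite = false });

        // The created document isn't echoed back, but headers such as the RU charge still are.
        Console.WriteLine($"Status: {response.StatusCode}, charge: {response.RequestCharge} RU");
        // response.Resource is null because the content response was disabled.
    }
}
```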
+ ## Next steps For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md).
cosmos-db How To Manage Conflicts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-conflicts.md
udp_collection = self.try_create_document_collection(
## Create a custom conflict resolution policy using a stored procedure
-These samples show how to set up a container with a custom conflict resolution policy with a stored procedure to resolve the conflict. These conflicts don't show up in the conflict feed unless there's an error in your stored procedure. After the policy is created with the container, you need to create the stored procedure. The .NET SDK sample below shows an example. This policy is supported on NoSQL Api only.
+These samples show how to set up a container with a custom conflict resolution policy. This policy uses the logic in a stored procedure to resolve the conflict. If a stored procedure is designated to resolve conflicts, conflicts won't show up in the conflict feed unless there's an error in the designated stored procedure.
+
+After the policy is created with the container, you need to create the stored procedure. The .NET SDK sample below shows an example of this workflow. This policy is supported in the API for NoSQL only.
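As a minimal sketch of that workflow with the .NET SDK v3 (the database, container, and stored procedure names below are placeholders, not the article's sample):

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class CustomConflictPolicySetup
{
    // Database, container, and stored procedure names here are placeholders.
    public static async Task<Container> CreateContainerWithResolverAsync(Database database)
    {
        var containerProperties = new ContainerProperties(id: "resolvedByStoredProc", partitionKeyPath: "/pk")
        {
            ConflictResolutionPolicy = new ConflictResolutionPolicy
            {
                Mode = ConflictResolutionMode.Custom,
                // Full path to the stored procedure that the service invokes to resolve conflicts.
                ResolutionProcedure = "dbs/myDatabase/colls/resolvedByStoredProc/sprocs/resolver"
            }
        };

        ContainerResponse response = await database.CreateContainerIfNotExistsAsync(containerProperties);

        // The "resolver" stored procedure still has to be created on the container afterwards.
        return response.Container;
    }
}
```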
### Sample custom conflict resolution stored procedure
After your container is created, you must create the `resolver` stored procedure
## Create a custom conflict resolution policy
-These samples show how to set up a container with a custom conflict resolution policy. These conflicts show up in the conflict feed.
+These samples show how to set up a container with a custom conflict resolution policy. With this implementation, each conflict will show up in the conflict feed. It's up to you to handle the conflicts individually from the conflict feed.
### <a id="create-custom-conflict-resolution-policy-dotnet"></a>.NET SDK
manual_collection = client.CreateContainer(database['_self'], collection)
## Read from conflict feed
-These samples show how to read from a container's conflict feed. Conflicts show up in the conflict feed only if they weren't resolved automatically or if using a custom conflict policy.
+These samples show how to read from a container's conflict feed. Conflicts may show up in the conflict feed only for the following reasons:
+
+- The conflict was not resolved automatically
+- The conflict caused an error with the designated stored procedure
+- The conflict resolution policy is set to **custom** and does not designate a stored procedure to handle conflicts
### <a id="read-from-conflict-feed-dotnet"></a>.NET SDK
cosmos-db Pagination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/pagination.md
Here are some examples for processing results from queries with multiple pages:
## Continuation tokens
-In the .NET SDK and Java SDK you can optionally use continuation tokens as a bookmark for your query's progress. Azure Cosmos DB query executions are stateless at the server side and can be resumed at any time using the continuation token. For the Python SDK and Node.js SDK, it's supported for single partition queries, and the PK must be specified in the options object because it's not sufficient to have it in the query itself.
+In the .NET SDK and Java SDK you can optionally use continuation tokens as a bookmark for your query's progress. Azure Cosmos DB query executions are stateless at the server side and can be resumed at any time using the continuation token. For the Python SDK, continuation tokens are only supported for single partition queries. The partition key must be specified in the options object because it's not sufficient to have it in the query itself.
Here are some examples for using continuation tokens:
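In addition, a minimal .NET sketch of the pattern (assuming an existing `Container` instance) could look like this:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class PagedQuery
{
    // Runs a single page of a query and returns the continuation token to resume from later.
    public static async Task<string> ReadOnePageAsync(Container container, string continuationToken)
    {
        FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(
            new QueryDefinition("SELECT * FROM c"),
            continuationToken: continuationToken,            // null starts from the beginning
            requestOptions: new QueryRequestOptions { MaxItemCount = 100 });

        if (!iterator.HasMoreResults)
        {
            return null;
        }

        FeedResponse<dynamic> page = await iterator.ReadNextAsync();
        foreach (var item in page)
        {
            Console.WriteLine(item);
        }

        // Persist this token in your own store to resume the query later, even from another process.
        return page.ContinuationToken;
    }
}
```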
cosmos-db Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-run-queries.md
SELECT date_trunc('hour', created_at) AS hour,
sum((payload->>'distinct_size')::int) AS num_commits FROM github_events WHERE event_type = 'PushEvent' AND
- payload @> '{"ref":"refs/heads/main"}'
+ payload @> '{"ref":"refs/heads/master"}'
GROUP BY hour ORDER BY hour; ```
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
Previously updated : 08/02/2022 Last updated : 10/19/2022 # PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
The versions of each extension installed in a cluster sometimes differ based on
### Citus extension > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> ||||||
-> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.6 | 11.0.4 |
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.6 | 11.0.4 | 11.1.3 |
### Data types extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> ||||||
-> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 |
-> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 |
-> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.16 | 2.16 | 2.16 | 2.16 |
-> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 |
-> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 |
-> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 |
-> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.4.0 |
-> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 |
+> | [citext](https://www.postgresql.org/docs/current/static/citext.html) | Provides a case-insensitive character string type. | 1.5 | 1.6 | 1.6 | 1.6 | 1.6 |
+> | [cube](https://www.postgresql.org/docs/current/static/cube.html) | Provides a data type for multidimensional cubes. | 1.4 | 1.4 | 1.4 | 1.5 | 1.5 |
+> | [hll](https://github.com/citusdata/postgresql-hll) | Provides a HyperLogLog data structure. | 2.16 | 2.16 | 2.16 | 2.16 | 2.16 |
+> | [hstore](https://www.postgresql.org/docs/current/static/hstore.html) | Provides a data type for storing sets of key-value pairs. | 1.5 | 1.6 | 1.7 | 1.8 | 1.8 |
+> | [isn](https://www.postgresql.org/docs/current/static/isn.html) | Provides data types for international product numbering standards. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 |
+> | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 | 1.4 |
+> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.4.0 | 1.4.0 |
+> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 | 2.5.0 |
### Full-text search extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> ||||||
-> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [dict\_int](https://www.postgresql.org/docs/current/static/dict-int.html) | Provides a text search dictionary template for integers. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [dict\_xsyn](https://www.postgresql.org/docs/current/dict-xsyn.html) | Text search dictionary template for extended synonym processing. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [unaccent](https://www.postgresql.org/docs/current/static/unaccent.html) | A text search dictionary that removes accents (diacritic signs) from lexemes. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### Functions extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> ||||||
-> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 |
-> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 |
-> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.2 |
-> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 |
-> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 |
-> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | |
-> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [autoinc](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.7) | Functions for autoincrementing fields. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [earthdistance](https://www.postgresql.org/docs/current/static/earthdistance.html) | Provides a means to calculate great-circle distances on the surface of the Earth. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [fuzzystrmatch](https://www.postgresql.org/docs/current/static/fuzzystrmatch.html) | Provides several functions to determine similarities and distance between strings. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [insert\_username](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.8) | Functions for tracking who changed a table. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | 1.5 |
+> | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.2 | 4.7.0 |
+> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | 1.0 |
+> | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | 1.6 |
+> | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [refint](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.5) | Functions for implementing referential integrity (obsolete). | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tablefunc](https://www.postgresql.org/docs/current/static/tablefunc.html) | Provides functions that manipulate whole tables, including crosstab. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tcn](https://www.postgresql.org/docs/current/tcn.html) | Triggered change notifications. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [timetravel](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.6) | Functions for implementing time travel. | 1.0 | | | | |
+> | [uuid-ossp](https://www.postgresql.org/docs/current/static/uuid-ossp.html) | Generates universally unique identifiers (UUIDs). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### Index types extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> ||||||
-> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 |
+> | [bloom](https://www.postgresql.org/docs/current/bloom.html) | Bloom access method - signature file-based index. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [btree\_gin](https://www.postgresql.org/docs/current/static/btree-gin.html) | Provides sample GIN operator classes that implement B-tree-like behavior for certain data types. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [btree\_gist](https://www.postgresql.org/docs/current/static/btree-gist.html) | Provides GiST index operator classes that implement B-tree. | 1.5 | 1.5 | 1.5 | 1.6 | 1.7 |
### Language extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> ||||||
-> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [plpgsql](https://www.postgresql.org/docs/current/static/plpgsql.html) | PL/pgSQL loadable procedural language. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
### Miscellaneous extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> ||||||
-> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 |
-> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 |
-> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 |
-> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 |
-> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.4 | 1.4 | 1.4 | 1.4 |
-> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 |
-> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 |
-> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 |
-> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 |
-> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 |
-> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 |
+> | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 | 1.3 |
+> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 |
+> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.10 |
+> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.4 | 1.4 | 1.4 | 1.4 | 1.4 |
+> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 | 1.10 |
+> | [pg\_visibility](https://www.postgresql.org/docs/current/pgvisibility.html) | Examine the visibility map (VM) and page-level visibility information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgrowlocks](https://www.postgresql.org/docs/current/static/pgrowlocks.html) | Provides a means for showing row-level locking information. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [pgstattuple](https://www.postgresql.org/docs/current/static/pgstattuple.html) | Provides a means for showing tuple-level statistics. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [postgres\_fdw](https://www.postgresql.org/docs/current/static/postgres-fdw.html) | Foreign-data wrapper used to access data stored in external PostgreSQL servers. See the "dblink and postgres_fdw" section for information about this extension.| 1.0 | 1.0 | 1.0 | 1.1 | 1.1 |
+> | [sslinfo](https://www.postgresql.org/docs/current/sslinfo.html) | Information about TLS/SSL certificates. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
+> | [tsm\_system\_rows](https://www.postgresql.org/docs/current/tsm-system-rows.html) | TABLESAMPLE method, which accepts number of rows as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [tsm\_system\_time](https://www.postgresql.org/docs/current/tsm-system-time.html) | TABLESAMPLE method, which accepts time in milliseconds as a limit. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+> | [xml2](https://www.postgresql.org/docs/current/xml2.html) | XPath querying and XSLT. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
### PostGIS extensions > [!div class="mx-tableFixed"]
-> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** |
+> | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** |
> ||||||
> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
-> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
-> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
-> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 |
+> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 |
## pg_stat_statements
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
Previously updated : 10/14/2022 Last updated : 10/20/2022 # Supported database versions in Azure Cosmos DB for PostgreSQL
customizable during creation. Azure Cosmos DB for PostgreSQL currently supports
following major [PostgreSQL versions](https://www.postgresql.org/docs/release/):
+### PostgreSQL version 15
+
+The current minor release is 15.0. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/15.0/) to
+learn more about improvements and fixes in this minor release.
+ ### PostgreSQL version 14 The current minor release is 14.5. Refer to the [PostgreSQL
policy](https://www.postgresql.org/support/versioning/).
| Version | What's New | Supported since | Retirement date (Azure) |
| - | - | - | - |
-| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | May 7, 2019 | November 9, 2023 |
-| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Apr 6, 2021 | November 14, 2024
-| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | Apr 6, 2021 | November 13, 2025
-| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | Oct 1, 2021 | November 12, 2026
+| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | May 7, 2019 | Nov 9, 2023 |
+| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Apr 6, 2021 | Nov 14, 2024 |
+| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | Apr 6, 2021 | Nov 13, 2025 |
+| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | Oct 1, 2021 | Nov 12, 2026 |
+| [PostgreSQL 15](https://www.postgresql.org/about/news/postgresql-15-released-2526/) | [Features](https://www.postgresql.org/docs/15/release-15.html) | Oct 20, 2022 | Nov 11, 2027 |
### Retired PostgreSQL engine versions not supported in Azure Cosmos DB for PostgreSQL
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
In addition to the built-in roles, users may also create [custom roles](../role-
> [!TIP] > Custom roles that need to access data stored within Azure Cosmos DB or use Data Explorer in the Azure portal must have `Microsoft.DocumentDB/databaseAccounts/listKeys/*` action.
+> [!NOTE]
+> Custom role assignments may not always be visible in the Azure portal.
+ ## <a id="prevent-sdk-changes"></a>Preventing changes from the Azure Cosmos DB SDKs The Azure Cosmos DB resource provider can be locked down to prevent any changes to resources from a client connecting using the account keys (that is applications connecting via the Azure Cosmos DB SDK). This feature may be desirable for users who want higher degrees of control and governance for production environments. Preventing changes from the SDK also enables features such as resource locks and diagnostic logs for control plane operations. The clients connecting from Azure Cosmos DB SDK will be prevented from changing any property for the Azure Cosmos DB accounts, databases, containers, and throughput. The operations involving reading and writing data to Azure Cosmos DB containers themselves are not impacted.
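As a rough illustration (the endpoint and key are placeholders, and the account is assumed to already be locked down as described), a key-authenticated client's attempt at a metadata change would be rejected while data operations keep working:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class LockedDownAccountDemo
{
    // Endpoint and key are placeholders; the account is assumed to have
    // key-based metadata write access disabled.
    public static async Task TryCreateDatabaseAsync()
    {
        using CosmosClient client = new CosmosClient(
            "https://<your-account>.documents.azure.com:443/",
            "<account-key>");

        try
        {
            // A metadata change made with the account key...
            await client.CreateDatabaseIfNotExistsAsync("newDatabase");
        }
        catch (CosmosException ex)
        {
            // ...is rejected when the account is locked down, while item reads
            // and writes against existing containers keep working.
            Console.WriteLine($"Blocked: {ex.StatusCode} - {ex.Message}");
        }
    }
}
```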
cosmos-db Dotnet Standard Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/dotnet-standard-sdk.md
- Title: Azure Cosmos DB for Table .NET Standard SDK & Resources
-description: Learn all about the Azure Cosmos DB for Table and the .NET Standard SDK including release dates, retirement dates, and changes made between each version.
----- Previously updated : 11/03/2021--
-# Azure Cosmos DB Table .NET Standard API: Download and release notes
-> [!div class="op_single_selector"]
->
-> * [.NET](dotnet-sdk.md)
-> * [.NET Standard](dotnet-standard-sdk.md)
-> * [Java](java-sdk.md)
-> * [Node.js](nodejs-sdk.md)
-> * [Python](python-sdk.md)
-
-| | Links |
-|||
-|**SDK download**|[NuGet](https://www.nuget.org/packages/Azure.Data.Tables/)|
-|**Sample**|[Azure Cosmos DB for Table .NET Sample](https://github.com/Azure-Samples/azure-cosmos-table-dotnet-core-getting-started)|
-|**Quickstart**|[Quickstart](quickstart-dotnet.md)|
-|**Tutorial**|[Tutorial](tutorial-develop-table-dotnet.md)|
-|**Current supported framework**|[Microsoft .NET Standard 2.0](https://www.nuget.org/packages/NETStandard.Library)|
-|**Report Issue**|[Report Issue](https://github.com/Azure/azure-cosmos-table-dotnet/issues)|
-
-## Release notes for 2.0.0 series
-2.0.0 series takes the dependency on [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/), with performance improvements and namespace consolidation to Azure Cosmos DB endpoint.
-
-### <a name="2.0.0-preview"></a>2.0.0-preview
-* initial preview of 2.0.0 Table SDK that takes the dependency on [Microsoft.Azure.Cosmos](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/), with performance improvements and namespace consolidation to Azure Cosmos DB endpoint. The public API remains the same.
-
-## Release notes for 1.0.0 series
-1.0.0 series takes the dependency on [Microsoft.Azure.DocumentDB.Core](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB.Core/).
-
-### <a name="1.0.8"></a>1.0.8
-* Add support to set the TTL property when the endpoint is an Azure Cosmos DB endpoint
-* Honor the retry policy upon timeout and task-canceled exceptions
-* Fix an intermittent task-canceled exception seen in ASP.NET applications
-* Fix Azure Table storage retrieval when using the secondary-endpoint-only location mode
-* Update the `Microsoft.Azure.DocumentDB.Core` dependency version to 2.11.2, which fixes an intermittent null reference exception
-* Update the `Odata.Core` dependency version to 7.6.4, which fixes a compatibility conflict with Azure Shell
-
-### <a name="1.0.7"></a>1.0.7
-* Performance improvement by setting Table SDK default trace level to SourceLevels.Off, which can be opted in via app.config
-
-### <a name="1.0.5"></a>1.0.5
-* Introduce new config under TableClientConfiguration to use Rest Executor to communicate with Azure Cosmos DB for Table
-
-### <a name="1.0.5-preview"></a>1.0.5-preview
-* Bug fixes
-
-### <a name="1.0.4"></a>1.0.4
-* Bug fixes
-* Provide HttpClientTimeout option for RestExecutorConfiguration.
-
-### <a name="1.0.4-preview"></a>1.0.4-preview
-* Bug fixes
-* Provide HttpClientTimeout option for RestExecutorConfiguration.
-
-### <a name="1.0.1"></a>1.0.1
-* Bug fixes
-
-### <a name="1.0.0"></a>1.0.0
-* General availability release
-
-### <a name="0.11.0-preview"></a>0.11.0-preview
-
-* Changes were made to how CloudTableClient can be configured. It now takes a TableClientConfiguration object during construction. TableClientConfiguration provides different properties to configure the client behavior depending on whether the target endpoint is Azure Cosmos DB for Table or Azure Storage API for Table.
-* Added support to TableQuery to return results in sorted order on a custom column. This feature is only supported on Azure Cosmos DB Table endpoints.
-* Added support to expose RequestCharges on various result types. This feature is only supported on Azure Cosmos DB Table endpoints.
-
-### <a name="0.10.1-preview"></a>0.10.1-preview
-* Add support for SAS tokens, and for TablePermissions, ServiceProperties, and ServiceStats operations against Azure Storage Table endpoints.
- > [!NOTE]
- > Some functionalities in previous Azure Storage Table SDKs are not yet supported, such as client-side encryption.
-
-### <a name="0.10.0-preview"></a>0.10.0-preview
-* Add support for core CRUD, batch, and query operations against Azure Storage Table endpoints.
- > [!NOTE]
- > Some functionalities in previous Azure Storage Table SDKs are not yet supported, such as client-side encryption.
-
-### <a name="0.9.1-preview"></a>0.9.1-preview
-* Azure Cosmos DB Table .NET Standard SDK is a cross-platform .NET library that provides efficient access to the Table data model on Azure Cosmos DB. This initial release supports the full set of Table and Entity CRUD + Query functionalities with similar APIs as the [Azure Cosmos DB Table SDK For .NET Framework](dotnet-sdk.md).
- > [!NOTE]
- > Azure Storage Table endpoints are not yet supported in the 0.9.1-preview version.
-
-## Release and Retirement dates
-Microsoft provides notification at least **12 months** in advance of retiring an SDK to smooth the transition to a newer, supported version.
-
-This cross-platform .NET Standard library [Microsoft.Azure.Cosmos.Table](https://www.nuget.org/packages/Microsoft.Azure.Cosmos.Table) will replace the .NET Framework library [Microsoft.Azure.CosmosDB.Table](https://www.nuget.org/packages/Microsoft.Azure.CosmosDB.Table).
-
-### 2.0.0 series
-| Version | Release Date | Retirement Date |
-| | | |
-| [2.0.0-preview](#2.0.0-preview) |August 22, 2019 | |
-
-### 1.0.0 series
-| Version | Release Date | Retirement Date |
-| | | |
-| [1.0.5](#1.0.5) |September 13, 2019 | |
-| [1.0.5-preview](#1.0.5-preview) |August 20, 2019 | |
-| [1.0.4](#1.0.4) |August 12, 2019 | |
-| [1.0.4-preview](#1.0.4-preview) |July 26, 2019 | |
-| 1.0.2-preview |May 2, 2019 | |
-| [1.0.1](#1.0.1) |April 19, 2019 | |
-| [1.0.0](#1.0.0) |March 13, 2019 | |
-| [0.11.0-preview](#0.11.0-preview) |March 5, 2019 | |
-| [0.10.1-preview](#0.10.1-preview) |January 22, 2019 | |
-| [0.10.0-preview](#0.10.0-preview) |December 18, 2018 | |
-| [0.9.1-preview](#0.9.1-preview) |October 18, 2018 | |
--
-## FAQ
--
-## See also
-To learn more about the Azure Cosmos DB for Table, see [Introduction to Azure Cosmos DB for Table](introduction.md).
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/introduction.md
Create an Azure Cosmos DB account in the [Azure portal](https://portal.azure.com
Here are a few pointers to get you started: * [Build a .NET application by using the API for Table](quickstart-dotnet.md)
-* [Develop with the API for Table in .NET](tutorial-develop-table-dotnet.md)
* [Query table data by using the API for Table](tutorial-query.md) * [Learn how to set up Azure Cosmos DB global distribution by using the API for Table](tutorial-global-distribution.md)
-* [Azure Cosmos DB Table .NET Standard SDK](dotnet-standard-sdk.md)
-* [Azure Cosmos DB Table .NET SDK](dotnet-sdk.md)
-* [Azure Cosmos DB Table Java SDK](java-sdk.md)
-* [Azure Cosmos DB Table Node.js SDK](nodejs-sdk.md)
-* [Azure Cosmos DB Table SDK for Python](python-sdk.md)
+* [Azure Cosmos DB Table .NET SDK](/dotnet/api/overview/azure/data.tables-readme)
+* [Azure Cosmos DB Table Java SDK](/java/api/overview/azure/data-tables-readme)
+* [Azure Cosmos DB Table Node.js SDK](/javascript/api/overview/azure/data-tables-readme)
+* [Azure Cosmos DB Table SDK for Python](/python/api/azure-data-tables/azure.data.tables)
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-query.md
The queries in this article use the following sample `People` table:
See [Querying Tables and Entities](/rest/api/storageservices/fileservices/querying-tables-and-entities) for details on how to query by using the API for Table.
-For more information on the premium capabilities that Azure Cosmos DB offers, see [Azure Cosmos DB for Table](introduction.md) and [Develop with the API for Table in .NET](tutorial-develop-table-dotnet.md).
+For more information on the premium capabilities that Azure Cosmos DB offers, see [Azure Cosmos DB for Table](introduction.md) and [Develop with the API for Table in .NET](quickstart-dotnet.md).
## Prerequisites
-For these queries to work, you must have an Azure Cosmos DB account and have entity data in the container. Don't have any of those? Complete the [five-minute quickstart](quickstart-dotnet.md) or the [developer tutorial](tutorial-develop-table-dotnet.md) to create an account and populate your database.
+For these queries to work, you must have an Azure Cosmos DB account and have entity data in the container. Don't have any of those? Complete the [five-minute quickstart](quickstart-dotnet.md) to create an account and populate your database.
## Query on PartitionKey and RowKey
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
Title: Buy an Azure reservation description: Learn about important points to help you buy an Azure reservation. -+ Previously updated : 09/07/2022 Last updated : 10/20/2022
Depending on how you pay for your Azure subscription, email reservation notifica
- Cancellation - Scope change
-For customers with EA subscriptions:
+Notifications are sent to the following users:
-- Notifications are sent only to the EA notification contacts.-- Users added to a reservation using Azure RBAC (IAM) permission don't receive any email notifications.
+- Customers with EA subscriptions
+ - Notifications are sent to the EA notification contacts, EA admin, reservation owners, and the reservation administrator.
+- Customers with Microsoft Customer Agreement (Azure Plan)
+ - Notifications are sent to the reservation owners and the reservation administrator.
+- Cloud Solution Provider and new commerce partners
+ - Emails are sent to the partner notification contact.
+- Individual subscription customers with pay-as-you-go rates
+ - Emails are sent to users who are set up as account administrators, reservation owners, and the reservation administrator.
-For customers with individual subscriptions:
--- The purchaser receives a purchase notification.-- At the time of purchase, the subscription billing account owner receives a purchase notification.-- The account owner receives all other notifications. ## Next steps
cost-management-billing Reservation Renew https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-renew.md
Title: Automatically renew Azure reservations description: Learn how you can automatically renew Azure reservations to continue getting reservation discounts. -+ Previously updated : 08/29/2022 Last updated : 10/20/2022
Renewal notification emails are sent 30 days before expiration and again on the
Emails are sent to different people depending on your purchase method: -- EA customers - Emails are sent to the notification contacts set on the EA portal or Enterprise Administrators who are automatically enrolled to receive usage notifications.-- Individual subscription customers with pay-as-you-go rates - Emails are sent to users who are set up as account administrators.-- Cloud Solution Provider customers - Emails are sent to the partner notification contact. This notification isn't currently supported for Microsoft Customer Agreement subscriptions (CSP Azure Plan subscription).-
-Renewal notifications are not sent to any Microsoft Customer Agreement (Azure Plan) users.
+- Customers with EA subscriptions
+ - Notifications are sent to the EA notification contacts, EA admin, reservation owners, and the reservation administrator.
+- Customers with Microsoft Customer Agreement (Azure Plan)
+ - Notifications are sent to the reservation owners and the reservation administrator.
+- Cloud Solution Provider and new commerce partners
+ - Emails are sent to the partner notification contact.
+- Individual subscription customers with pay-as-you-go rates
+ - Emails are sent to users who are set up as account administrators, reservation owners, and the reservation administrator.
## Next steps+ - To learn more about Azure Reservations, see [What are Azure Reservations?](save-compute-costs-reservations.md)
cost-management-billing Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/discount-application.md
Previously updated : 10/14/2022 Last updated : 10/20/2022 # How savings plan discount is applied
-Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan can help you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. The savings can significantly reduce your resource costs by up to 66% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
+Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan can help you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. The savings can significantly reduce your resource costs by up to 65% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
Each hour with savings plan, your eligible compute usage is discounted until you reach your commitment amount; subsequent usage after you reach your commitment amount is priced at pay-as-you-go rates. To be eligible for a savings plan benefit, the usage must be generated by a resource within the savings plan's scope. Each hour's benefit is _use-it-or-lose-it_, and can't be rolled over to another hour.
cost-management-billing Savings Plan Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md
Previously updated : 10/12/2022 Last updated : 10/20/2022 # What are Azure savings plans for compute?
-Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan helps you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. A savings plan can significantly reduce your resource costs by up to 66% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
+Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan helps you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. A savings plan can significantly reduce your resource costs by up to 65% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
Each hour with savings plan, your compute usage is discounted until you reach your commitment amount; subsequent usage afterward is priced at pay-as-you-go rates. Savings plan commitments are priced in USD for Microsoft Customer Agreement and Microsoft Partner Agreement customers, and in local currency for Enterprise Agreement customers. Usage from compute services such as VMs, dedicated hosts, container instances, Azure premium functions, and Azure app services is eligible for savings plan discounts.
data-factory Concepts Data Flow Debug Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-debug-mode.md
With debug on, the Data Preview tab will light-up on the bottom panel. Without d
:::image type="content" source="media/data-flow/datapreview.png" alt-text="Data preview":::
+You can sort columns in data preview and rearrange columns using drag and drop. There's also an export button at the top of the data preview panel that you can use to export up to 1,000 rows of preview data to a CSV file for offline data exploration.
+ > [!NOTE] > File sources only limit the rows that you see, not the rows being read. For very large datasets, it is recommended that you take a small portion of that file and use it for your testing. You can select a temporary file in Debug Settings for each source that is a file dataset type.
data-factory Connector Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-hana.md
Previously updated : 09/09/2021 Last updated : 10/20/2022 # Copy data from SAP HANA using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-table.md
Previously updated : 09/09/2021 Last updated : 10/20/2022 # Copy data from an SAP table using Azure Data Factory or Azure Synapse Analytics
data-factory Data Flow Conversion Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conversion-functions.md
Previously updated : 08/03/2022 Last updated : 10/19/2022 # Conversion functions in mapping data flow
Conversion functions are used to convert data and test for data types
| Conversion function | Task | |-|-|
+| [ascii](data-flow-expressions-usage.md#ascii) | Returns the numeric value of the input character. If the input string has more than one character, the numeric value of the first character is returned|
+| [char](data-flow-expressions-usage.md#char) | Returns the ASCII character represented by the input number. If the number is greater than 256, the result is equivalent to char(number % 256)|
+| [decode](data-flow-expressions-usage.md#decode) | Decodes the encoded input data into a string based on the given charset. A second (optional) argument can be used to specify which charset to use - 'US-ASCII', 'ISO-8859-1', 'UTF-8' (default), 'UTF-16BE', 'UTF-16LE', 'UTF-16'|
+| [encode](data-flow-expressions-usage.md#encode) | Encodes the input string data into binary based on a charset. A second (optional) argument can be used to specify which charset to use - 'US-ASCII', 'ISO-8859-1', 'UTF-8' (default), 'UTF-16BE', 'UTF-16LE', 'UTF-16'|
| [isBitSet](data-flow-expressions-usage.md#isBitSet) | Checks if a bit position is set in this bitset| | [setBitSet](data-flow-expressions-usage.md#setBitSet) | Sets bit positions in this bitset| | [isBoolean](data-flow-expressions-usage.md#isBoolean) | Checks if the string value is a boolean value according to the rules of ``toBoolean()``|
data-factory Data Flow Expressions Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expressions-usage.md
Previously updated : 08/03/2022 Last updated : 10/19/2022 # Data transformation expression usage in mapping data flow
Creates an array of items. All items should be of the same type. If no items are
* ``'Washington'`` ___
-<a name="assertErrorMessages" ></a>
+<a name="ascii" ></a>
-### <code>assertErrorMessages</code>
-<code><b>assertErrorMessages() => map</b></code><br/><br/>
-Returns a map of all error messages for the row with assert ID as the key.
-
-Examples
-* ``assertErrorMessages() => ['assert1': 'This row failed on assert1.', 'assert2': 'This row failed on assert2.']. In this example, at(assertErrorMessages(), 'assert1') would return 'This row failed on assert1.'``
+### <code>ascii</code>
+<code><b>ascii(<i>&lt;Input&gt;</i> : string) => number</b></code><br/><br/>
+Returns the numeric value of the input character. If the input string has more than one character, the numeric value of the first character is returned
+* ``ascii('A') -> 65``
+* ``ascii('a') -> 97``
___ - <a name="asin" ></a> ### <code>asin</code>
Calculates an inverse sine value.
* ``asin(0) -> 0.0`` ___
+<a name="assertErrorMessages" ></a>
+
+### <code>assertErrorMessages</code>
+<code><b>assertErrorMessages() => map</b></code><br/><br/>
+Returns a map of all error messages for the row with assert ID as the key.
+
+Examples
+* ``assertErrorMessages() => ['assert1': 'This row failed on assert1.', 'assert2': 'This row failed on assert2.']. In this example, at(assertErrorMessages(), 'assert1') would return 'This row failed on assert1.'``
+
+___
<a name="associate" ></a>
Returns the smallest integer not smaller than the number.
* ``ceil(-0.1) -> 0`` ___
+<a name="char" ></a>
+
+### <code>char</code>
+<code><b>char(<i>&lt;Input&gt;</i> : number) => string</b></code><br/><br/>
+Returns the ASCII character represented by the input number. If the number is greater than 256, the result is equivalent to char(number % 256)
+* ``char(65) -> 'A'``
+* ``char(97) -> 'a'``
+___
<a name="coalesce" ></a>
Duration in milliseconds for number of days.
* ``days(2) -> 172800000L`` ___
+<a name="decode" ></a>
+
+### <code>decode</code>
+<code><b>decode(<i>&lt;Input&gt;</i> : any, <i>&lt;Charset&gt;</i> : string) => binary</b></code><br/><br/>
+Decodes the encoded input data into a string based on the given charset. A second (optional) argument can be used to specify which charset to use - 'US-ASCII', 'ISO-8859-1', 'UTF-8' (default), 'UTF-16BE', 'UTF-16LE', 'UTF-16'
+* ``decode(array(toByte(97),toByte(98),toByte(99)), 'US-ASCII') -> abc``
+___
+ <a name="degrees" ></a>
___
## E
+<a name="encode" ></a>
+
+### <code>encode</code>
+<code><b>encode(<i>&lt;Input&gt;</i> : string, <i>&lt;Charset&gt;</i> : string) => binary</b></code><br/><br/>
+Encodes the input string data into binary based on a charset. A second (optional) argument can be used to specify which charset to use - 'US-ASCII', 'ISO-8859-1', 'UTF-8' (default), 'UTF-16BE', 'UTF-16LE', 'UTF-16'
+* ``encode('abc', 'US-ASCII') -> array(toByte(97),toByte(98),toByte(99))``
+___
+
<a name="endsWith" ></a> ### <code>endsWith</code>
databox Data Box Disk Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-limits.md
For the latest information on Azure storage service limits and best practices fo
## Data copy and upload caveats -- Do not copy data directly into the disks. Copy data to pre-created *BlockBlob*,*PageBlob*, and *AzureFile* folders.
+- Do not copy data directly into the disks. Copy data to pre-created *BlockBlob*, *PageBlob*, and *AzureFile* folders.
- A folder under the *BlockBlob* and *PageBlob* is a container. For instance, containers are created as *BlockBlob/container* and *PageBlob/container*. - If a folder has the same name as an existing container, the folder's contents are merged with the container's contents. Files or blobs that aren't already in the cloud are added to the container. If a file or blob has the same name as a file or blob that's already in the container, the existing file or blob is overwritten. - Every file written into *BlockBlob* and *PageBlob* shares is uploaded as a block blob and page blob respectively. - The hierarchy of files is maintained while uploading to the cloud for both blobs and Azure Files. For example, you copied a file at this path: `<container folder>\A\B\C.txt`. This file is uploaded to the same path in cloud. - Any empty directory hierarchy (without any files) created under *BlockBlob* and *PageBlob* folders is not uploaded. - If you don't have long paths enabled on the client, and any path and file name in your data copy exceeds 256 characters, the Data Box Split Copy Tool (DataBoxDiskSplitCopy.exe) or the Data Box Disk Validation tool (DataBoxDiskValidation.cmd) will report failures. To avoid this kind of failure, [enable long paths on your Windows client](/windows/win32/fileio/maximum-file-path-limitation?tabs=cmd#enable-long-paths-in-windows-10-version-1607-and-later).-- To improve performance during data uploads, we recommend that you [enable large file shares on the storage account and increase share capacity to 100 TiB](../../articles/storage/files/storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account). Large file shares are only supported for storage accounts with locally redundant storage (LRS).
+- To improve performance during data uploads, we recommend that you [enable large file shares on the storage account and increase share capacity to 100 TiB](../../articles/storage/files/storage-how-to-create-file-share.md#enable-large-file-shares-on-an-existing-account). Large file shares are only supported for storage accounts with locally redundant storage (LRS).
- If there are any errors when uploading data to Azure, an error log is created in the target storage account. The path to this error log is available in the portal when the upload is complete and you can review the log to take corrective action. Do not delete data from the source without verifying the uploaded data. - File metadata and NTFS permissions are not preserved when the data is uploaded to Azure Files. For example, the *Last modified* attribute of the files will not be kept when the data is copied. - If you specified managed disks in the order, review the following additional considerations:
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
Title: Create custom security policies in Microsoft Defender for Cloud
+ Title: Create custom Azure security policies in Microsoft Defender for Cloud
description: Azure custom policy definitions monitored by Microsoft Defender for Cloud.
Last updated 07/20/2022
zone_pivot_groups: manage-asc-initiatives
-# Create custom security initiatives and policies
+# Create custom Azure security initiatives and policies
To help secure your systems and environment, Microsoft Defender for Cloud generates security recommendations. These recommendations are based on industry best practices, which are incorporated into the generic, default security policy supplied to all customers. They can also come from Defender for Cloud's knowledge of industry and regulatory standards.
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Learn more about:
### View vulnerabilities for running images in Azure Container Registry (ACR)
-Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
+Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Running container images should have vulnerability findings resolved](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
To provide findings for the recommendation, Defender for Cloud collects the inventory of your running containers that are collected by the Defender agent installed on your AKS clusters. Defender for Cloud correlates that inventory with the vulnerability assessment scan of images that are stored in ACR. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Defender for Cloud offers many enhanced security features that can help protect
- [How do I enable Defender for Cloud's enhanced security for my subscription?](#how-do-i-enable-defender-for-clouds-enhanced-security-for-my-subscription) - [Can I enable Microsoft Defender for Servers on a subset of servers?](#can-i-enable-microsoft-defender-for-servers-on-a-subset-of-servers) - [If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)-- [My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?](#my-subscription-has-microsoft-defender-for-servers-enabled-do-i-pay-for-not-running-servers)
+- [My subscription has Microsoft Defender for Servers enabled, which machines do I pay for?](#my-subscription-has-microsoft-defender-for-servers-enabled-which-machines-do-i-pay-for)
- [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed) - [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice) - [If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)
To request your discount, [contact Defender for Cloud's support team](https://po
The discount will be effective starting from the approval date, and won't take place retroactively.
-### My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?
+### My subscription has Microsoft Defender for Servers enabled, which machines do I pay for?
-No. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, you won't be charged for any machines that are in a deallocated power state while they're in that state. Machines are billed according to their power state as shown in the following table:
+When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, all machines in that subscription (including machines that are part of PaaS services and reside in this subscription) are billed according to their power state as shown in the following table:
| State | Description | Instance usage billed | |--|--|--|
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md
To stream alerts into **ArcSight**, **SumoLogic**, **Syslog servers**, **LogRhyt
| Tool | Hosted in Azure | Description | |:|:| :|
- | SumoLogic | No | Instructions for setting up SumoLogic to consume data from an event hub are available at [Collect Logs for the Azure Audit App from Event Hubs](https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Azure-Audit/02Collect-Logs-for-Azure-Audit-from-Event-Hub). |
+ | SumoLogic | No | Instructions for setting up SumoLogic to consume data from an event hub are available at [Collect Logs for the Azure Audit App from Event Hubs](https://help.sumologic.com/docs/send-data/collect-from-other-data-sources/azure-monitoring/collect-logs-azure-monitor/). |
| ArcSight | No | The ArcSight Azure Event Hubs smart connector is available as part of [the ArcSight smart connector collection](https://community.microfocus.com/cyberres/arcsight/f/arcsight-product-announcements/163662/announcing-general-availability-of-arcsight-smart-connectors-7-10-0-8114-0). | | Syslog server | No | If you want to stream Azure Monitor data directly to a syslog server, you can use a [solution based on an Azure function](https://github.com/miguelangelopereira/azuremonitor2syslog/). | LogRhythm | No| Instructions to set up LogRhythm to collect logs from an event hub are available [here](https://logrhythm.com/six-tips-for-securing-your-azure-cloud-environment/).
defender-for-cloud How To Manage Aws Assessments Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-aws-assessments-standards.md
+
+ Title: Manage AWS assessments and standards
+
+description: Learn how to create custom security assessments and standards for your AWS environment.
+ Last updated : 10/20/2022++
+# Manage AWS assessments and standards
+
+Security standards contain comprehensive sets of security recommendations to help secure your cloud environments. Security teams can use the readily available standards such as AWS CIS 1.2.0, AWS Foundational Security Best Practices, and AWS PCI DSS 3.2.1, or create custom standards and assessments to meet specific internal requirements.
+
+There are three types of resources that are needed to create and manage custom assessments:
+
+- Assessment:
+  - Assessment details such as name, description, severity, and remediation logic
+  - Assessment logic in KQL
+  - The standard it belongs to
+- Standard: defines a set of assessments
+- Standard assignment: defines the scope that the standard evaluates. For example, specific AWS accounts.
+
+You can either use the built-in regulatory compliance standards or create your own custom standards and assessments.
+
+## Assign a built-in compliance standard to your AWS account
+
+**To assign a built-in compliance standard to your AWS account**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant AWS account.
+
+1. Select **Standards** > **Add** > **Standard**.
+
+    :::image type="content" source="media/how-to-manage-assessments-standards/aws-add-standard.png" alt-text="Screenshot that shows where to navigate to add an AWS standard." lightbox="media/how-to-manage-assessments-standards/aws-add-standard-zoom.png":::
+
+1. Select a built-in standard from the drop-down menu.
+
+1. Select **Save**.
+
+## Create a new custom standard for your AWS account
+
+**To create a new custom standard for your AWS account**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant AWS account.
+
+1. Select **Standards** > **Add** > **Standard**.
+
+1. Select **New standard**.
+
+ :::image type="content" source="media/how-to-manage-assessments-standards/new-aws-standard.png" alt-text="Screenshot that shows you where to select a new AWS standard." lightbox="media/how-to-manage-assessments-standards/new-aws-standard.png":::
+
+1. Enter a name and description, and select which assessments you want to add.
+
+1. Select **Save**.
+
+## Assign a built-in assessment to your AWS account
+
+**To assign a built-in assessment to your AWS account**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant AWS account.
+
+1. Select **Standards** > **Add** > **Assessment**.
+
+    :::image type="content" source="media/how-to-manage-assessments-standards/aws-assessment.png" alt-text="Screenshot that shows where to navigate to select an AWS assessment." lightbox="media/how-to-manage-assessments-standards/aws-assessment.png":::
+
+1. Select **Existing assessment**.
+
+1. Select all relevant assessments from the drop-down menu.
+
+1. Select the standards from the drop-down menu.
+
+1. Select **Save**.
+
+## Create a new custom assessment for your AWS account
+
+**To create a new custom assessment for your AWS account**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant AWS account.
+
+1. Select **Standards** > **Add** > **Assessment**.
+
+1. Select **New assessment (preview)**.
+
+    :::image type="content" source="media/how-to-manage-assessments-standards/new-aws-assessment.png" alt-text="Screenshot of the new assessment screen for your AWS account." lightbox="media/how-to-manage-assessments-standards/new-aws-assessment.png":::
+
+1. Enter a name and severity, and select an assessment from the drop-down menu.
+
+1. Enter a KQL query that defines the assessment logic.
+
+ If youΓÇÖd like to create a new query, select the ΓÇÿ[Azure Data Explorer](https://dataexplorer.azure.com/clusters/securitydatastoreus.centralus/databases/DiscoveryMockDataAws)ΓÇÖ link. The explorer will contain mock data on all of the supported native APIs. The data will appear in the same structure as contracted in the API.
+    If you'd like to create a new query, select the '[Azure Data Explorer](https://dataexplorer.azure.com/clusters/securitydatastoreus.centralus/databases/DiscoveryMockDataAws)' link. The explorer will contain mock data on all of the supported native APIs. The data will appear in the same structure as contracted in the API.
+    :::image type="content" source="media/how-to-manage-assessments-standards/azure-data-explorer.png" alt-text="Screenshot that shows where to select the Azure Data Explorer link." lightbox="media/how-to-manage-assessments-standards/azure-data-explorer.png":::
+
+ See the [how to build a query](#how-to-build-a-query) section for more examples.
+
+1. Select the standards to add to this assessment.
+
+1. Select **Save**.
+
+## How to build a query
+
+The last row of the query should return all the original columns (don't use 'project', 'project-away'). End the query with an iff statement that defines the healthy or unhealthy conditions: `| extend HealthStatus = iff([boolean-logic-here], 'UNHEALTHY','HEALTHY')`.
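+
+For example, a minimal query that follows this pattern might look like the following sketch. It reuses the `EC2_Instance` table and the `Record.State.Name.Value` property from the samples later in this article; the specific check is illustrative only, so adapt it to the resource type you're assessing.
+
+```kusto
+// Minimal sketch: keep all the original columns and finish with a HealthStatus column.
+EC2_Instance
+| extend State = tolower(tostring(Record.State.Name.Value))
+| extend HealthStatus = iff(State != 'terminated', 'HEALTHY', 'UNHEALTHY')
+```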
+
+### Sample KQL queries
+
+When building a KQL query, you should use the following table structure:
+
+```kusto
+- TimeStamp
+ 2021-10-07T10:30:21.403732Z
+ - SdksInfo
+ {
+ "AWSSDK.EC2": "3.7.5.2"
+ }
+
+ - RecordProviderInfo
+ {
+ "CloudName": "AWS",
+ "CspmDiscoveryCloudRoleArn": "arn:aws:iam::123456789123:role/CSPMMonitoring",
+ "Type": "MultiCloudDiscoveryServiceDataCollector",
+ "HierarchyIdentifier": "123456789123",
+ "ConnectorId": "b3113210-63f9-43c5-a6a7-f14a2a5b3cd0"
+ }
+ - RecordOrganizationInfo
+ {
+ "Type": "MyOrganization",
+ "TenantId": "bda8bc53-d9f8-4248-b9a9-3a6c7fe0b92f",
+ "SubscriptionId": "69444886-de6b-40c5-8b43-065f739fffb9",
+ "ResourceGroupName": "MyResourceGroupName"
+ }
+
+ - CorrelationId
+ 4f5e50e1d92c400caf507036a1237c72
+ - RecordRegionalInfo
+ {
+ "Type": "MultiCloudRegion",
+ "RegionUniqueName": "eu-west-2",
+ "RegionDisplayName": "EU West (London)",
+ "IsGlobalForRecord": false
+ }
+
+ - RecordIdentifierInfo
+ {
+ "Type": "MultiCloudDiscoveryServiceDataCollector",
+ "RecordNativeCloudUniqueIdentifier": "arn:aws:ec2:eu-west-2:123456789123:elastic-ip/eipalloc-1234abcd5678efef9",
+ "RecordAzureUniqueIdentifier": "/subscriptions/69444886-de6b-40c5-8b43-065f739fffb9/resourcegroups/MyResourceGroupName/providers/Microsoft.Security/securityconnectors/b3113210-63f9-43c5-a6a7-f14a2a5b3cd0/securityentitydata/aws-ec2-elastic-ip-eipalloc-1234abcd5678efef9-eu-west-2",
+ "RecordIdentifier": "eipalloc-1234abcd5678efef9-eu-west-2",
+ "ResourceProvider": "EC2",
+ "ResourceType": "elastic-ip"
+ }
+ - Record
+ {
+ "AllocationId": "eipalloc-1234abcd5678efef9",
+ "AssociationId": "eipassoc-234abcd5678efef90",
+ "CarrierIp": null,
+ "CustomerOwnedIp": null,
+ "CustomerOwnedIpv4Pool": null,
+ "Domain": {
+ "Value": "vpc"
+ },
+ "InstanceId": "i-0a8fcc00493c4625d",
+ "NetworkBorderGroup": "eu-west-2",
+ "NetworkInterfaceId": "eni-34abcd5678efef901",
+ "NetworkInterfaceOwnerId": "123456789123",
+ "PrivateIpAddress": "172.31.21.88",
+ "PublicIp": "19.218.211.431",
+ "PublicIpv4Pool": "amazon",
+ "Tags": [
+ {
+ "Value": "arn:aws:cloudformation:eu-west-2:123456789123:stack/awseb-e-sjuh4tkr7a-stack/4ff15da0-2512-11ec-ab59-023b28e97f64",
+ "Key": "aws:cloudformation:stack-id"
+ },
+ {
+ "Value": "e-sjuh4tkr7a",
+ "Key": "elasticbeanstalk:environment-id"
+ },
+ {
+ "Value": "AWSEBEIP",
+ "Key": "aws:cloudformation:logical-id"
+ },
+ {
+ "Value": "awseb-e-sjuh4tkr7a-stack",
+ "Key": "aws:cloudformation:stack-name"
+ },
+ {
+ "Value": "Mebrennetest3-env",
+ "Key": "elasticbeanstalk:environment-name"
+ },
+ {
+ "Value": "Mebrennetest3-env",
+ "Key": "Name"
+ }
+ ]
+ }
+```
+
+> [!NOTE]
+> The `Record` field contains the data structure as it is returned from the AWS API. Use this field to define conditions that determine whether the resource is healthy or unhealthy.
+>
+> You can access internal properties of the `Record` field using dot notation. For example: `| extend EncryptionType = Record.Encryption.Type`.
+
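+As an illustration of dot notation against the sample elastic IP record above, the following hypothetical query reads a few of its properties. The `EC2_ElasticIP` table name and the healthy/unhealthy condition are assumptions made for this example, not a built-in assessment.
+
+```kusto
+// Illustrative only: the table name and the condition are assumptions for this example.
+EC2_ElasticIP
+| extend DomainType = tostring(Record.Domain.Value)        // "vpc" in the sample record
+| extend AttachedInstance = tostring(Record.InstanceId)    // "i-0a8fcc00493c4625d" in the sample record
+| extend HealthStatus = iff(isnotempty(AttachedInstance), 'HEALTHY', 'UNHEALTHY')
+```
+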
+**Stopped EC2 instances should be removed after a specified time period**
+
+```kusto
+EC2_Instance
+| extend State = tolower(tostring(Record.State.Name.Value))
+| extend StoppedTime = todatetime(tostring(Record.StateTransitionReason))
+| extend HealthStatus = iff(not(State == 'stopped' and StoppedTime < ago(30d)), 'HEALTHY', 'UNHEALTHY')
+```
+
+**EC2 subnets should not automatically assign public IP addresses**
+
+
+```kusto
+EC2_Subnet
+| extend MapPublicIpOnLaunch = tolower(tostring(Record.MapPublicIpOnLaunch))
+| extend HealthStatus = iff(MapPublicIpOnLaunch == 'false' ,'HEALTHY', 'UNHEALTHY')
+```
+
+**EC2 instances should not use multiple ENIs**
+
+```kusto
+EC2_Instance
+| extend NetworkInterfaces = parse_json(Record)['NetworkInterfaces']
+| extend NetworkInterfaceCount = array_length(parse_json(NetworkInterfaces))
+| extend HealthStatus = iff(NetworkInterfaceCount == 1 ,'HEALTHY', 'UNHEALTHY')
+```
+
+You can use the following links to learn more about Kusto queries:
+- [KQL quick reference](/azure/data-explorer/kql-quick-reference)
+- [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/)
+- [Must Learn KQL](https://azurecloudai.blog/2021/11/17/must-learn-kql-part-1-tools-and-resources/)
+
+## Next steps
+
+In this article, you learned how to manage your assessments and standards in Defender for Cloud.
+
+> [!div class="nextstepaction"]
+> [Find recommendations that can improve your security posture](review-security-recommendations.md)
defender-for-cloud How To Manage Gcp Assessments Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-gcp-assessments-standards.md
+
+ Title: Manage GCP assessments and standards
+
+description: Learn how to create custom security assessments and standards for your GCP environment.
+ Last updated : 10/18/2022++
+# Manage GCP assessments and standards
+
+Security standards contain comprehensive sets of security recommendations to help secure your cloud environments. Security teams can use the readily available regulatory standards such as GCP CIS 1.1.0 and GCP CIS 1.2.0, or create custom standards and assessments to meet specific internal requirements.
+
+There are three types of resources that are needed to create and manage custom assessments:
+
+- Assessment:
+  - Assessment details such as name, description, severity, and remediation logic
+  - Assessment logic in KQL
+  - The standard it belongs to
+- Standard: defines a set of assessments
+- Standard assignment: defines the scope that the standard evaluates. For example, specific GCP projects.
+
+You can either use the built-in compliance standards or create your own custom standards and assessments.
+
+## Assign a built-in compliance standard to your GCP project
+
+**To assign a built-in compliance standard to your GCP project**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant GCP project.
+
+1. Select **Standards** > **Add** > **Standard**.
+
+    :::image type="content" source="media/how-to-manage-assessments-standards/gcp-standard.png" alt-text="Screenshot that shows where to navigate to add a GCP standard." lightbox="media/how-to-manage-assessments-standards/gcp-standard-zoom.png":::
+
+1. Select a built-in standard from the drop-down menu.
+
+ :::image type="content" source="media/how-to-manage-assessments-standards/drop-down-menu.png" alt-text="Screenshot that shows you the standard options you can choose from the drop-down menu." lightbox="media/how-to-manage-assessments-standards/drop-down-menu.png":::
+
+1. Select **Save**.
+
+## Create a new custom standard for your GCP project
+
+**To create a new custom standard for your GCP project**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant GCP project.
+
+1. Select **Standards** > **Add** > **Standard**.
+
+1. Select **New standard**.
+
+1. Enter a name and description, and select which assessments you want to add.
+
+1. Select **Save**.
+
+## Assign a built-in assessment to your GCP project
+
+**To assign a built-in assessment to your GCP project**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant GCP project.
+
+1. Select **Standards** > **Add** > **Assessment**.
+
+    :::image type="content" source="media/how-to-manage-assessments-standards/gcp-assessment.png" alt-text="Screenshot that shows where to navigate to select a GCP assessment." lightbox="media/how-to-manage-assessments-standards/gcp-assessment.png":::
+
+1. Select **Existing assessment**.
+
+1. Select all relevant assessments from the drop-down menu.
+
+1. Select the standards from the drop-down menu.
+
+1. Select **Save**.
+
+## Create a new custom assessment for your GCP project
+
+**To create a new custom assessment for your GCP project**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Select the relevant GCP project.
+
+1. Select **Standards** > **Add** > **Assessment**.
+
+1. Select **New assessment (preview)**.
+
+ :::image type="content" source="media/how-to-manage-assessments-standards/new-assessment.png" alt-text="Screenshot of the new assessment screen for a GCP project." lightbox="media/how-to-manage-assessments-standards/new-assessment.png":::
+
+1. In the general section, enter a name and severity.
+
+1. In the query section, select an assessment template from the drop-down menu, or use the following query schema:
+
+ For example:
+
+ **Ensure that Cloud Storage buckets have uniform bucket-level access enabled**
+
+ ```kusto
+ let UnhealthyBuckets = Storage_Bucket
+    | extend RetentionPolicy = Record.retentionPolicy
+    | where isnull(RetentionPolicy) or isnull(RetentionPolicy.isLocked) or tobool(RetentionPolicy.isLocked)==false
+    | project BucketName = RecordIdentifierInfo.CloudNativeResourceName; Logging_LogSink
+    | extend Destination = split(Record.destination,'/')[0]
+    | where Destination == 'storage.googleapis.com'
+    | extend LogBucketName = split(Record.destination,'/')[1]
+    | extend HealthStatus = iff(LogBucketName in(UnhealthyBuckets), 'UNHEALTHY', 'HEALTHY')
+ ```
+
+ See the [how to build a query](#how-to-build-a-query) section for more examples.
+
+1. Select **Save**.
+
+## How to build a query
+
+The last row of the query should return all the original columns (don't use 'project', 'project-away'). End the query with an iff statement that defines the healthy or unhealthy conditions: `| extend HealthStatus = iff([boolean-logic-here], 'UNHEALTHY','HEALTHY')`.
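+
+For example, a minimal query that follows this pattern might look like the following sketch. The `Storage_Bucket` table name comes from the samples below, but the `Record.iamConfiguration.uniformBucketLevelAccess.enabled` property path is an assumption based on the GCP Storage API response and is shown for illustration only.
+
+```kusto
+// Minimal sketch: keep all the original columns and finish with a HealthStatus column.
+// The property path is an assumption based on the GCP Storage API response.
+Storage_Bucket
+| extend UniformAccess = tostring(Record.iamConfiguration.uniformBucketLevelAccess.enabled)
+| extend HealthStatus = iff(UniformAccess =~ 'true', 'HEALTHY', 'UNHEALTHY')
+```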
+
+### Sample KQL queries
+
+**Ensure that Cloud Storage buckets have uniform bucket-level access enabled**
+
+```kusto
+let UnhealthyBuckets = Storage_Bucket
+| extend RetentionPolicy = Record.retentionPolicy
+| where isnull(RetentionPolicy) or isnull(RetentionPolicy.isLocked) or tobool(RetentionPolicy.isLocked)==false
+| project BucketName = RecordIdentifierInfo.CloudNativeResourceName; Logging_LogSink
+| extend Destination = split(Record.destination,'/')[0]
+| where Destination == 'storage.googleapis.com'
+| extend LogBucketName = split(Record.destination,'/')[1]
+| extend HealthStatus = iff(LogBucketName in(UnhealthyBuckets), 'UNHEALTHY', 'HEALTHY')
+```
+
+**Ensure VM disks for critical VMs are encrypted**
+
+```kusto
+Compute_Disk
+| extend DiskEncryptionKey = Record.diskEncryptionKey
+| extend IsVmNotEncrypted = isempty(tostring(DiskEncryptionKey.sha256))
+| extend HealthStatus = iff(IsVmNotEncrypted, 'UNHEALTHY', 'HEALTHY')
+```
+
+**Ensure Compute instances are launched with Shielded VM enabled**
+
+```kusto
+Compute_Instance
+| extend InstanceName = tostring(Record.id)
+| extend ShieldedVmExist = tostring(Record.shieldedInstanceConfig.enableIntegrityMonitoring) =~ 'true' and tostring(Record.shieldedInstanceConfig.enableVtpm) =~ 'true'
+| extend HealthStatus = iff(ShieldedVmExist, 'HEALTHY', 'UNHEALTHY')
+```
+
+You can use the following links to learn more about Kusto queries:
+- [KQL quick reference](/azure/data-explorer/kql-quick-reference)
+- [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/)
+- [Must Learn KQL](https://azurecloudai.blog/2021/11/17/must-learn-kql-part-1-tools-and-resources/)
+
+## Next steps
+
+In this article, you learned how to manage your assessments and standards in Defender for Cloud.
+
+> [!div class="nextstepaction"]
+> [Find recommendations that can improve your security posture](review-security-recommendations.md)
defender-for-cloud Iac Vulnerabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/iac-vulnerabilities.md
# Discover misconfigurations in Infrastructure as Code (IaC)
-Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps extension, extra support is located in the YAML configuration that can be used to run a specific tool, or several of the tools. For example, setting up the action or extension to run Infrastructure as Code (IaC) scanning only. This can help reduce pipeline run time.
+Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps extension, you can configure the YAML configuration file to run a single tool or multiple tools. For example, you can set up the action or extension to run Infrastructure as Code (IaC) scanning tools only. This can help reduce pipeline run time.
## Prerequisites -- [Configure Microsoft Security DevOps GitHub action](github-action.md).-- [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+- Configure Microsoft Security DevOps for GitHub and/or Azure DevOps based on your source code management system:
+ - [Microsoft Security DevOps GitHub action](github-action.md)
+ - [Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+- Ensure you have an IaC template in your repository.
-## View the results of the IaC scan in GitHub
+## Configure IaC scanning and view the results in GitHub
1. Sign in to [GitHub](https://www.github.com).
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
    :::image type="content" source="media/tutorial-iac-vulnerabilities/commit-change.png" alt-text="Screenshot that shows where to select commit change on the GitHub page.":::
-1. (Optional) Skip this step if you already have an IaC template in your repository.
+1. (Optional) Add an IaC template to your repository. Skip if you already have an IaC template in your repository.
- Follow this link to [Install an IaC template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-linux).
+ For example, [commit an IaC template to deploy a basic Linux web application](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-linux) to your repository.
1. Select `azuredeploy.json`.
- :::image type="content" source="media/tutorial-iac-vulnerabilities/deploy-json.png" alt-text="Screenshot that shows where the deploy.json file is located.":::
+ :::image type="content" source="media/tutorial-iac-vulnerabilities/deploy-json.png" alt-text="Screenshot that shows where the azuredeploy.json file is located.":::
1. Select **Raw** 1. Copy all the information in the file.
- ```Bash
+ ```json
{ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
:::image type="content" source="media/tutorial-iac-vulnerabilities/file-added.png" alt-text="Screenshot that shows that the new file you created has been added to your repository.":::
-1. Select **Actions**.
-1. Select the workflow to see the results.
+1. Confirm the Microsoft Security DevOps scan completed:
+ 1. Select **Actions**.
+ 2. Select the workflow to see the results.
-1. Navigate in the results to the scan results section.
+1. Navigate to **Security** > **Code scanning alerts** to view the results of the scan (filter by tool as needed to see just the IaC findings).
-1. Navigate to **Security** > **Code scanning alerts** to view the results of the scan.
-
-## View the results of the IaC scan in Azure DevOps
+## Configure IaC scanning and view the results in Azure DevOps
**To view the results of the IaC scan in Azure DevOps**
-1. Sign in to [Azure DevOps](https://dev.azure.com/)
+1. Sign in to [Azure DevOps](https://dev.azure.com/).
+
+1. Select the desired project.
-1. Navigate to **Pipeline**.
+1. Select **Pipeline**.
-1. Locate the pipeline with MSDO Azure DevOps Extension is configured.
+1. Select the pipeline where the Microsoft Security DevOps Azure DevOps Extension is configured.
-1. Select **Edit**.
+1. **Edit** the pipeline configuration YAML file, adding the following lines:
1. Add the following lines to the YAML file
Once you have set up the Microsoft Security DevOps GitHub action or Azure DevOps
1. Select **Save**.
-1. Select **Save** to commit directly to the main branch or Create a new branch for this commit
+1. (Optional) Add an IaC template to your repository. Skip if you already have an IaC template in your repository.
+
+1. Select **Save** to commit directly to the main branch or Create a new branch for this commit.
1. Select **Pipeline** > **`Your created pipeline`** to view the results of the IaC scan. 1. Select any result to see the details.
-## Remediate PowerShell based rules:
+## View details and remediation information on IaC rules included with Microsoft Security DevOps
+
+### PowerShell-based rules
The following information covers the PowerShell-based rules included through our integration with [PSRule for Azure](https://aka.ms/ps-rule-azure/rules). The tool will only evaluate the rules under the [Security pillar](https://azure.github.io/PSRule.Rules.Azure/en/rules/module/#security) unless the option `--include-non-security-rules` is used. > [!NOTE]
-> Severity levels are scaled from 1 to 3. Where 1 = High, 2 = Medium, 3 = Low.
+> PowerShell-based rules are included by our integration with [PSRule for Azure](https://aka.ms/ps-rule-azure/rules). The tool will evaluate all rules under the [Security pillar](https://azure.github.io/PSRule.Rules.Azure/en/rules/module/#security).
### JSON-Based Rules:
+JSON-based rules for ARM templates and Bicep files are provided by [Template-Analyzer](https://github.com/Azure/template-analyzer#template-best-practice-analyzer-bpa). The following sections provide details on Template Analyzer's rules and remediation guidance.
+
+> [!NOTE]
+> Severity levels are scaled from 1 to 3. Where 1 = High, 2 = Medium, 3 = Low.
+ #### TA-000001: Diagnostic logs in App Services should be enabled Audits the enabling of diagnostic logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised.
Audits the enabling of diagnostic logs on the app. This enables you to recreate
#### TA-000002: Remote debugging should be turned off for API Apps
-Remote debugging requires inbound ports to be opened on an API app. These ports become easy targets for compromise from various internet based attacks. If you no longer need to use remote debugging, it should be turned off.
+Remote debugging requires inbound ports to be opened on an API app. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
Remote debugging requires inbound ports to be opened on an API app. These ports
Enable FTPS enforcement for enhanced security.
-**Recommendation**: To [enforce FTPS](../app-service/deploy-ftp.md?tabs=portal#enforce-ftps), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled.
+**Recommendation**: To [enforce FTPS](../app-service/deploy-ftp.md?tabs=portal#enforce-ftps) in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *ftpsState* property, setting its value to `"FtpsOnly"` or `"Disabled"` if you don't need FTPS enabled.
**Severity level**: 1
Enable FTPS enforcement for enhanced security.
API apps should require HTTPS to ensure connections are made to the expected server and data in transit is protected from network layer eavesdropping attacks.
-**Recommendation**: To [use HTTPS to ensure, server/service authentication and protect data in transit from network layer eavesdropping attacks](../app-service/configure-ssl-bindings.md#enforce-https), in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`.
+**Recommendation**: To [use HTTPS to ensure server/service authentication and protect data in transit from network layer eavesdropping attacks](../app-service/configure-ssl-bindings.md#enforce-https) in the [Microsoft.Web/Sites resource properties](/azure/templates/microsoft.web/sites?tabs=json#siteproperties-object), add (or update) the *httpsOnly* property, setting its value to `true`.
**Severity level**: 2
API apps should require HTTPS to ensure connections are made to the expected ser
API apps should require the latest TLS version.
-**Recommendation**: To [enforce the latest TLS version](../app-service/configure-ssl-bindings.md#enforce-tls-versions), in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
+**Recommendation**: To [enforce the latest TLS version](../app-service/configure-ssl-bindings.md#enforce-tls-versions) in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), add (or update) the *minTlsVersion* property, setting its value to `1.2`.
**Severity level**: 1
For enhanced authentication security, use a managed identity. On Azure, managed
#### TA-000008: Remote debugging should be turned off for Function Apps
-Remote debugging requires inbound ports to be opened on a function app. These ports become easy targets for compromise from various internet based attacks. If you no longer need to use remote debugging, it should be turned off.
+Remote debugging requires inbound ports to be opened on a function app. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
For enhanced authentication security, use a managed identity. On Azure, managed
#### TA-000014: Remote debugging should be turned off for Web Applications
-Remote debugging requires inbound ports to be opened on a web application. These ports become easy targets for compromise from various internet based attacks. If you no longer need to use remote debugging, it should be turned off.
+Remote debugging requires inbound ports to be opened on a web application. These ports become easy targets for compromise from various internet-based attacks. If you no longer need to use remote debugging, it should be turned off.
**Recommendation**: To disable remote debugging, in the [Microsoft.Web/sites/config resource properties](/azure/templates/microsoft.web/sites/config-web?tabs=json#SiteConfig), remove the *remoteDebuggingEnabled* property or update its value to `false`.
Set the data retention for your SQL Server's auditing to storage account destina
#### TA-000029: Azure API Management APIs should use encrypted protocols only
-Set the protocols property to only include HTTPs.
+Set the protocols property to only include HTTPS.
**Recommendation**: To use encrypted protocols only, add (or update) the *protocols* property in the [Microsoft.ApiManagement/service/apis resource properties](/azure/templates/microsoft.apimanagement/service/apis?tabs=json), to only include HTTPS. Allowing any additional protocols (for example, HTTP, WS) is insecure.
Set the protocols property to only include HTTPs.
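For example, a Microsoft.ApiManagement/service/apis resource restricted to HTTPS might be sketched as follows; the service name, API name, display name, and path are placeholders.

```
{
  // Only HTTPS is listed; adding "http" or "ws" would fail this check.
  "type": "Microsoft.ApiManagement/service/apis",
  "apiVersion": "2021-08-01",
  "name": "contoso-apim/orders-api",
  "properties": {
    "displayName": "Orders API",
    "path": "orders",
    "protocols": [ "https" ]
  }
}
```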
- Learn more about the [Template Best Practice Analyzer](https://github.com/Azure/template-analyzer).
-In this tutorial you learned how to configure the Microsoft Security DevOps GitHub Action and Azure DevOps Extension to scan for only Infrastructure as Code misconfigurations.
+In this tutorial you learned how to configure the Microsoft Security DevOps GitHub Action and Azure DevOps Extension to scan for Infrastructure as Code (IaC) security misconfigurations and how to view the results.
## Next steps
Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
Learn how to [connect your GitHub](quickstart-onboard-github.md) to Defender for Cloud.
-Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
+Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
defender-for-cloud Implement Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md
description: This article explains how to respond to recommendations in Microsof
Previously updated : 11/09/2021 Last updated : 10/20/2022 # Implement security recommendations in Microsoft Defender for Cloud
To simplify remediation and improve your environment's security (and increase yo
**Fix** helps you quickly remediate a recommendation on multiple resources.
-> [!TIP]
-> The **Fix** feature is only available for specific recommendations. To find recommendations that have an available fix, use the **Response actions** filter for the list of recommendations:
->
-> :::image type="content" source="media/implement-security-recommendations/quick-fix-filter.png" alt-text="Use the filters above the recommendations list to find recommendations that have the Fix option.":::
- To implement a **Fix**: 1. From the list of recommendations that have the **Fix** action icon :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::, select a recommendation. :::image type="content" source="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png" alt-text="Recommendations list highlighting recommendations with Fix action" lightbox="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png":::
-1. From the **Unhealthy resources** tab, select the resources that you want to implement the recommendation on, and select **Remediate**.
+1. From the **Unhealthy resources** tab, select the resources that you want to implement the recommendation on, and select **Fix**.
> [!NOTE] > Some of the listed resources might be disabled, because you don't have the appropriate permissions to modify them.
To implement a **Fix**:
![Quick fix.](./media/implement-security-recommendations/microsoft-defender-for-cloud-quick-fix-view.png) > [!NOTE]
- > The implications are listed in the grey box in the **Remediate resources** window that opens after clicking **Remediate**. They list what changes happen when proceeding with the **Fix**.
+ > The implications are listed in the grey box in the **Fixing resources** window that opens after clicking **Fix**. They list what changes happen when proceeding with the **Fix**.
1. Insert the relevant parameters if necessary, and approve the remediation.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Defender for DevOps allows you to gain visibility into and manage your connected
Security teams can configure pull request annotations to help developers address secret scanning findings in Azure DevOps directly on their pull requests.
-You can configure the Microsoft Security DevOps tools on Azure DevOps pipelines and GitHub workflows to enable the following security scans:
+You can configure the Microsoft Security DevOps tools on Azure Pipelines and GitHub workflows to enable the following security scans:
| Name | Language | License | |--|--|--| | [Bandit](https://github.com/PyCQA/bandit) | python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/main/LICENSE) | | [BinSkim](https://github.com/Microsoft/binskim) | Binary – Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) | | [ESlint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) |
-| [CredScan](https://secdevtools.azurewebsites.net/helpcredscan.html) (Azure DevOps Only) | Credential Scanner (aka CredScan) is a tool developed and maintained by Microsoft to identify credential leaks such as those in source code and configuration files common types: default passwords, SQL connection strings, Certificates with private keys| Not Open Source |
+| [CredScan](https://secdevtools.azurewebsites.net/helpcredscan.html) (Azure DevOps Only) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks, such as those in source code and configuration files. Common types include default passwords, SQL connection strings, and certificates with private keys.| Not Open Source |
| [Template Analyzer](https://github.com/Azure/template-analyzer) | ARM template, Bicep file | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) | | [Terrascan](https://github.com/tenable/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) | | [Trivy](https://github.com/aquasecurity/trivy) | Container images, file systems, git repositories | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
The new release contains the following capabilities:
> When you exempt an account, it won't be shown as unhealthy and also won't cause a subscription to appear unhealthy. |Recommendation| Assessment key|
- |-|-|
- |MFA should be enabled on accounts with owner permissions on your subscription|94290b00-4d0c-d7b4-7cea-064a9554e681|
- |MFA should be enabled on accounts with read permissions on your subscription|151e82c5-5341-a74b-1eb0-bc38d2c84bb5|
- |MFA should be enabled on accounts with write permissions on your subscription|57e98606-6b1e-6193-0e3d-fe621387c16b|
- |External accounts with owner permissions should be removed from your subscription|c3b6ae71-f1f0-31b4-e6c1-d5951285d03d|
- |External accounts with read permissions should be removed from your subscription|a8c6a4ad-d51e-88fe-2979-d3ee3c864f8b|
- |External accounts with write permissions should be removed from your subscription|04e7147b-0deb-9796-2e5c-0336343ceb3d|
+ |--|--|
+ |Accounts with owner permissions on Azure resources should be MFA enabled|6240402e-f77c-46fa-9060-a7ce53997754|
+ |Accounts with write permissions on Azure resources should be MFA enabled|c0cb17b2-0607-48a7-b0e0-903ed22de39b|
+ |Accounts with read permissions on Azure resources should be MFA enabled|dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c|
+ |Guest accounts with owner permissions on Azure resources should be removed|20606e75-05c4-48c0-9d97-add6daa2109a|
+ |Guest accounts with write permissions on Azure resources should be removed|0354476c-a12a-4fcc-a79d-f0ab7ffffdbb|
+ |Guest accounts with read permissions on Azure resources should be removed|fde1c0c9-0fd2-4ecc-87b5-98956cbc1095|
+ |Blocked accounts with owner permissions on Azure resources should be removed|050ac097-3dda-4d24-ab6d-82568e7a50cf|
+ |Blocked accounts with read and write permissions on Azure resources should be removed| 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
The recommendations, although in preview, will appear next to the recommendations that are currently in GA.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 09/20/2022 Last updated : 10/20/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | October 2022 |
-
-### Multiple changes to identity recommendations
-
-**Estimated date for change:** October 2022
-
-Defender for Cloud includes multiple recommendations for improving the management of users and accounts. In October, we'll be making the changes outlined below.
-
-#### New recommendations in preview
-
-The new release will bring the following capabilities:
--- **Extended evaluation scope** – Improved coverage to identity accounts without MFA and external accounts on Azure resources (instead of subscriptions only) allowing security admins to view role assignments per account.--- **Improved freshness interval** - Currently, the identity recommendations have a freshness interval of 24 hours. This update will reduce that interval to 12 hours.--- **Account exemption capability** - Defender for Cloud has many features you can use to customize your experience and ensure that your secure score reflects your organization's security priorities. For example, you can [exempt resources and recommendations from your secure score](exempt-resource.md).-
- This update will allow you to exempt specific accounts from evaluation with the six recommendations listed in the following table.
-
- Typically, you'd exempt emergency "break glass" accounts from MFA recommendations, because such accounts are often deliberately excluded from an organization's MFA requirements. Alternatively, you might have external accounts that you'd like to permit access to but which don't have MFA enabled.
-
- > [!TIP]
- > When you exempt an account, it won't be shown as unhealthy and also won't cause a subscription to appear unhealthy.
-
- |Recommendation| Assessment key|
- |--|--|
- |Accounts with owner permissions on Azure resources should be MFA enabled|6240402e-f77c-46fa-9060-a7ce53997754|
- |Accounts with write permissions on Azure resources should be MFA enabled|c0cb17b2-0607-48a7-b0e0-903ed22de39b|
- |Accounts with read permissions on Azure resources should be MFA enabled|dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c|
- |Guest accounts with owner permissions on Azure resources should be removed|20606e75-05c4-48c0-9d97-add6daa2109a|
- |Guest accounts with write permissions on Azure resources should be removed|0354476c-a12a-4fcc-a79d-f0ab7ffffdbb|
- |Guest accounts with read permissions on Azure resources should be removed|fde1c0c9-0fd2-4ecc-87b5-98956cbc1095|
- |Blocked accounts with owner permissions on Azure resources should be removed|050ac097-3dda-4d24-ab6d-82568e7a50cf|
- |Blocked accounts with read and write permissions on Azure resources should be removed| 1ff0b4c9-ed56-4de6-be9c-d7ab39645926 |
+| None | None |
## Next steps
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Enter the following parameters:
| Date and time | Date and time that the syslog server machine received the information. | | Priority | User. Alert | | Hostname | Sensor IP address |
-| Protocol | TCP or UDP |
-| Message | Sensor: The sensor name.<br /> Alert: The title of the alert.<br /> Type: The type of the alert. Can be **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**.<br /> Severity: The severity of the alert. Can be **Warning**, **Minor**, **Major**, or **Critical**.<br /> Source: The source device name.<br /> Source IP: The source device IP address.<br /> Destination: The destination device name.<br /> Destination IP: The IP address of the destination device.<br /> Message: The message of the alert.<br /> Alert group: The alert group associated with the alert. |
+| Message | CyberX platform name: The sensor name.<br /> Microsoft Defender for IoT Alert: The title of the alert.<br /> Type: The type of the alert. Can be **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**.<br /> Severity: The severity of the alert. Can be **Warning**, **Minor**, **Major**, or **Critical**.<br /> Source: The source device name.<br /> Source IP: The source device IP address.<br /> Protocol (Optional): The detected source protocol.<br /> Address (Optional): Source protocol address.<br /> Destination: The destination device name.<br /> Destination IP: The IP address of the destination device.<br /> Protocol (Optional): The detected destination protocol.<br /> Address (Optional): The destination protocol address.<br /> Message: The message of the alert.<br /> Alert group: The alert group associated with the alert.<br /> UUID (Optional): The UUID of the alert. |
| Syslog object output | Description | |--|--|
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Delete all sensors that are associated with the subscription prior to removing t
## Move existing sensors to a different subscription
-Business considerations may require that you apply your existing IoT sensors to a different subscription than the one youΓÇÖre currently using. To do this, you'll need to onboard a new plan and register the sensors under the new subscription, and then remove them from the old subscription. This process may include some downtime, and historic data isn't migrated.
+Business considerations may require that you apply your existing IoT sensors to a different subscription than the one you're currently using. To do this, you'll need to onboard a new plan to the new subscription, register the sensors under the new subscription, and then remove them from the previous subscription.
+
+Billing changes will take effect one hour after cancellation of the previous subscription, and will be reflected on the next month's bill. Devices will be synchronized from the sensor to the new subscription automatically. Manual edits made in the portal will not be migrated. New alerts created by the sensor will be created under the new subscription, and existing alerts in the old subscription can be closed in bulk.
**To switch to a new subscription**:
-1. Onboard a new plan to the new subscription you want to use. For more information, see:
+**For OT sensors**:
+
+1. In the Azure portal, [onboard a new plan for OT networks](#onboard-a-defender-for-iot-plan-for-ot-networks) to the new subscription you want to use.
+
+1. Create a new activation file by [following the steps to onboard an OT sensor](onboard-sensors.md#onboard-ot-sensors).
+ - Replicate site and sensor hierarchy as is.
+ - For sensors monitoring overlapping network segments, create the activation file under the same zone. Identical devices that are detected by more than one sensor in a zone will be merged into one device.
+
+1. [Upload a new activation file](how-to-manage-individual-sensors.md#upload-new-activation-files) for your sensors under the new subscription.
+
+1. Delete the sensor identities from the previous subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
- [Onboard a plan for OT networks](#onboard-a-defender-for-iot-plan-for-ot-networks) in the Azure portal
+1. If relevant, [cancel the Defender for IoT plan](#cancel-a-defender-for-iot-plan-from-a-subscription) from the previous subscription.
- [Onboard a plan for Enterprise IoT networks](#onboard-a-defender-for-iot-plan-for-enterprise-iot-networks) in Defender for Endpoint
+**For Enterprise IoT sensors**:
-1. Onboard your sensors again under the new subscription. For OT sensors, [upload a new activation](how-to-manage-individual-sensors.md#upload-new-activation-files) file for your sensors.
+1. In Defender for Endpoint, [onboard a new plan for Enterprise IoT networks](#onboard-a-defender-for-iot-plan-for-enterprise-iot-networks) to the new subscription you want to use.
-1. Delete the sensor identities from the legacy subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
+1. In the Azure portal, [follow the steps to register an Enterprise IoT sensor](tutorial-getting-started-eiot-sensor.md#register-an-enterprise-iot-sensor) under the new subscription.
-1. If relevant, [cancel the Defender for IoT plan](#cancel-a-defender-for-iot-plan-from-a-subscription) from the legacy subscription.
+1. Log into your sensor and run the activation command you saved when registering the sensor under the new subscription.
+
+1. Delete the sensor identities from the previous subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
+
+1. If relevant, [cancel the Defender for IoT plan](#cancel-a-defender-for-iot-plan-from-a-subscription) from the previous subscription.
+
+> [!NOTE]
+> If the previous subscription was connected to Microsoft Sentinel, you will need to connect the new subscription to Microsoft Sentinel and remove the old subscription. For more information, see [Connect Microsoft Defender for IoT with Microsoft Sentinel](/azure/sentinel/iot-solution).
## Next steps
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Use the following tables to ensure that required firewalls are open on your work
| Protocol | Transport | In/Out | Port | Purpose | Source | Destination | |--|--|--|--|--|--|--|
-| HTTPS | TCP | Out | 443 | Access to Azure | Sensor |**For OT sensor versions 22.x**: Download the list from the **Sites and sensors** page in the Azure portal. Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More options > Download endpoint details**. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).<br><br>**For OT sensor versions 10.x**: `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net`|
-| HTTPS | TCP | Out | 443 | Remote sensor updates from the Azure portal | Sensor| `download.microsoft.com`|
+| HTTPS | TCP | Out | 443 | Access to Azure | Sensor |OT network sensors connect to Azure to provide alert and device data and sensor health messages, access threat intelligence packages, and more. Connected Azure services include IoT Hub, Blob Storage, Event Hubs, and the Microsoft Download Center.<br><br>**For OT sensor versions 22.x**: Download the list from the **Sites and sensors** page in the Azure portal. Select an OT sensor with software versions 22.x or higher, or a site with one or more supported sensor versions. Then, select **More options > Download endpoint details**. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).<br><br>**For OT sensor versions 10.x**: `*.azure-devices.net`<br> `*.blob.core.windows.net`<br> `*.servicebus.windows.net`<br> `download.microsoft.com`|
+ ### Sensor access to the on-premises management console
event-grid Authenticate With Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authenticate-with-active-directory.md
Following are the prerequisites to authenticate to Event Grid.
- Install the SDK on your application. - [Java](/java/api/overview/azure/messaging-eventgrid-readme#include-the-package)
- - [.NET](/dotnet/api/overview/azure/messaging.eventgrid-readme-pre#install-the-package)
+ - [.NET](/dotnet/api/overview/azure/messaging.eventgrid-readme#install-the-package)
- [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid#install-the-azureeventgrid-package) - [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-eventgrid#install-the-package) - Install the Azure Identity client library. The Event Grid SDK depends on the Azure Identity client library for authentication.
event-hubs Compare Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/compare-tiers.md
Title: Compare Azure Event Hubs tiers description: This article compares supported tiers of Azure Event Hubs. Previously updated : 07/20/2021 Last updated : 10/19/2022 # Compare Azure Event Hubs tiers
frontdoor How To Configure Rule Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-rule-set.md
This article shows how to create a Rule Set and your first set of rules using th
> [!NOTE] > * To delete a condition or action from a rule, use the trash can on the right-hand side of the specific condition or action. > * To create a rule that applies to all incoming traffic, do not specify any conditions.
- > * To stop evaluating remaining rules if a specific rule is met, check **Stop evaluating remaining rule**. If this option is checked and all remaining rules in the Rule Set will not be executed regardless if the matching conditions were met.
+ > * To stop evaluating remaining rules if a specific rule is met, check **Stop evaluating remaining rule**. If this option is checked, the remaining rules in the Rule Set won't be executed, regardless of whether the matching conditions were met.
> * All paths in Rules Engine are case sensitive. > * Header names should adhere to [RFC 7230](https://datatracker.ietf.org/doc/html/rfc7230#section-3.2.6).
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
Title: Understand how effects work description: Azure Policy definitions have various effects that determine how compliance is managed and reported. Previously updated : 09/23/2022 Last updated : 10/20/2022
This effect is useful for testing situations or for when the policy definition h
effect. This flexibility makes it possible to disable a single assignment instead of disabling all of that policy's assignments.
-An alternative to the Disabled effect is **enforcementMode**, which is set on the policy assignment.
-When **enforcementMode** is _Disabled_, resources are still evaluated. Logging, such as Activity
+> [!NOTE]
+> Policy definitions that use the **Disabled** effect have the default compliance state **Compliant** after assignment.
+
+An alternative to the **Disabled** effect is **enforcementMode**, which is set on the policy assignment.
+When **enforcementMode** is **Disabled**, resources are still evaluated. Logging, such as Activity
logs, and the policy effect don't occur. For more information, see [policy assignment - enforcement mode](./assignment-structure.md#enforcement-mode).
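As an illustrative sketch (the assignment name and the policy definition ID parameter are placeholders), a policy assignment with enforcement disabled sets `enforcementMode` to `DoNotEnforce`, which is how the Disabled mode is expressed in the assignment JSON:

```
{
  // 'DoNotEnforce' is the JSON value for the Disabled enforcement mode.
  "type": "Microsoft.Authorization/policyAssignments",
  "apiVersion": "2021-06-01",
  "name": "audit-only-assignment",
  "properties": {
    "displayName": "Evaluate resources without enforcing the effect",
    "policyDefinitionId": "[parameters('policyDefinitionId')]",
    "enforcementMode": "DoNotEnforce"
  }
}
```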
hdinsight Benefits Of Migrating To Hdinsight 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/benefits-of-migrating-to-hdinsight-40.md
Hive metastore operation takes much time and thus slow down Hive compilation. In
## Troubleshooting guide
-[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](./interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
+[HDInsight 3.6 to 4.0 troubleshooting guide for Hive workloads](/azure/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40) provides answers to common issues faced when migrating Hive workloads from HDInsight 3.6 to HDInsight 4.0.
## References
https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/release/
## Further reading
-* [HDInsight 4.0 Announcement](./hdinsight-version-release.md)
-* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0.md)
+* [HDInsight 4.0 Announcement](/azure/hdinsight/hdinsight-version-release)
+* [HDInsight 4.0 deep dive](https://azure.microsoft.com/blog/deep-dive-into-azure-hdinsight-4-0/)
hdinsight Apache Hadoop Linux Create Cluster Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md
keywords: hadoop getting started,hadoop linux,hadoop quickstart,hive getting sta
Previously updated : 09/15/2022 Last updated : 10/20/2022 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Azure portal and run a Hive job
In this section, you create a Hadoop cluster in HDInsight using the Azure portal
1. From the **Review + create** tab, verify the values you selected in the earlier steps.
- :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-review-create-hadoop.png" alt-text="HDInsight Linux get started cluster summary" border="true":::
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-review-create-hadoop.png" alt-text="Screenshot showing HDInsight Linux get started cluster summary." border="true":::
1. Select **Create**. It takes about 20 minutes to create a cluster. Once the cluster is created, you see the cluster overview page in the Azure portal.
- :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/cluster-settings-overview.png" alt-text="HDInsight Linux get started cluster settings" border="true":::
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/cluster-settings-overview.png" alt-text="Screenshot showing HDInsight Linux get started cluster settings." border="true":::
## Run Apache Hive queries
In this section, you create a Hadoop cluster in HDInsight using the Azure portal
1. To open Ambari, from the previous screenshot, select **Cluster Dashboard**. You can also browse to `https://ClusterName.azurehdinsight.net` where `ClusterName` is the cluster you created in the previous section.
- :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-linux-get-started-open-cluster-dashboard.png" alt-text="HDInsight Linux get started cluster dashboard" border="true":::
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-linux-get-started-open-cluster-dashboard.png" alt-text="Screenshot showing HDInsight Linux get started cluster dashboard." border="true":::
2. Enter the Hadoop username and password that you specified while creating the cluster. The default username is **admin**.
hdinsight Hdinsight Overview Before You Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-before-you-start.md
HDInsight has two options to configure the databases in the clusters.
During cluster creation, the default configuration uses an internal database. Once the cluster is created, the customer can't change the database type. Hence, it's recommended to create and use an external database. You can create custom databases for Ambari, Hive, and Ranger.
-For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](./hdinsight-custom-ambari-db.md)
+For more information, see how to [Set up HDInsight clusters with a custom Ambari DB](/azure/hdinsight/hdinsight-custom-ambari-db)
## Keep your clusters up to date
As part of the best practices, we recommend you keep your clusters updated on re
HDInsight releases happen every 30 to 60 days. It's always good to move to the latest release as early as possible. The recommended maximum duration for cluster upgrades is less than six months.
-For more information, see how to [Migrate HDInsight cluster to a newer version](./hdinsight-upgrade-cluster.md)
+For more information, see how to [Migrate HDInsight cluster to a newer version](/azure/hdinsight/hdinsight-upgrade-cluster)
## Next steps * [Create Apache Hadoop cluster in HDInsight](./hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md) * [Create Apache Spark cluster - Portal](./spark/apache-spark-jupyter-spark-sql-use-portal.md)
-* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
+* [Enterprise security in Azure HDInsight](./domain-joined/hdinsight-security-overview.md)
hdinsight Apache Hive Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-migrate-workloads.md
Previously updated : 07/18/2022 Last updated : 10/20/2022 # Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0
Migration of Hive tables to a new Storage Account needs to be done as a separate
This step uses the [`Hive Schema Tool`](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool) from HDInsight 4.0 to upgrade the metastore schema. > [!WARNING]
-> This step is not reversible. Run this only on a copy of the metastore.
+> This step isn't reversible. Run this only on a copy of the metastore.
1. Create a temporary HDInsight 4.0 cluster to access the 4.0 Hive `schematool`. You can use the [default Hive metastore](../hdinsight-use-external-metadata-stores.md#default-metastore) for this step.
This step uses the [`Hive Schema Tool`](https://cwiki.apache.org/confluence/disp
> [!NOTE] > This utility uses client `beeline` to execute SQL scripts in `/usr/hdp/$STACK_VERSION/hive/scripts/metastore/upgrade/mssql/upgrade-*.mssql.sql`. >
- > SQL Syntax in these scripts is not necessarily compatible to other client tools. For example, [SSMS](/sql/ssms/download-sql-server-management-studio-ssms) and [Query Editor on Azure Portal](/azure/azure-sql/database/connect-query-portal) require keyword `GO` after each command.
+ > SQL Syntax in these scripts isn't necessarily compatible with other client tools. For example, [SSMS](/sql/ssms/download-sql-server-management-studio-ssms) and [Query Editor on Azure Portal](/azure/azure-sql/database/connect-query-portal) require keyword `GO` after each command.
> > If any script fails due to resource capacity or transaction timeouts, scale up the SQL Database.
Create a new HDInsight 4.0 cluster, [selecting the upgraded Hive metastore](../h
* The new cluster doesn't require having the same default filesystem.
-* If the metastore contains tables residing in multiple Storage Accounts, you need to add those Storage Accounts to the new cluster to access those tables. See [add additional Storage Accounts to HDInsight](../hdinsight-hadoop-add-storage.md).
+* If the metastore contains tables residing in multiple Storage Accounts, you need to add those Storage Accounts to the new cluster to access those tables. See [add extra Storage Accounts to HDInsight](../hdinsight-hadoop-add-storage.md).
* If Hive jobs fail due to storage inaccessibility, verify that the table location is in a Storage Account added to the cluster.
sudo su - hive
STACK_VERSION=$(hdp-select status hive-server2 | awk '{ print $3; }') /usr/hdp/$STACK_VERSION/hive/bin/hive --config /etc/hive/conf --service strictmanagedmigration --hiveconf hive.strict.managed.tables=true -m automatic --modifyManagedTables ```
+### 6. Class not found error with `MultiDelimitSerDe`
+
+**Problem**
+
+In certain situations when running a Hive query, you might receive `java.lang.ClassNotFoundException` stating the `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe` class isn't found. This error occurs when a customer migrates from HDInsight 3.6 to HDInsight 4.0. The SerDe class `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe`, which is part of `hive-contrib-1.2.1000.2.6.5.3033-1.jar` in HDInsight 3.6, has been removed. HDInsight 4.0 uses the `org.apache.hadoop.hive.serde2.MultiDelimitSerDe` class instead, which is part of the `hive-exec` JAR. The `hive-exec` JAR is loaded into HiveServer2 by default when the service starts.
+
+**STEPS TO TROUBLESHOOT**
+
+1. Check whether any JAR under the Hive libraries folder (`/usr/hdp/current/hive/lib` in HDInsight) contains this class.
+1. Check for the class `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe` and `org.apache.hadoop.hive.serde2.MultiDelimitSerDe` as mentioned in the solution.
+
+**Solution**
+
+1. Although a JAR file is a binary file, you can still use the `grep` command with the `-Hrni` switches, as shown below, to search for a particular class name.
+ ```
+ grep -Hrni "org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe" /usr/hdp/current/hive/lib
+ ```
+1. If `grep` doesn't find the class, it returns no output. If it finds the class in a JAR file, it returns the name of the matching file.
+
+1. Here's an example taken from an HDInsight 4.x cluster:
+
+ ```
+ sshuser@hn0-alters:~$ grep -Hrni "org.apache.hadoop.hive.serde2.MultiDelimitSerDe" /usr/hdp/4.1.9.7/hive/lib/
+ Binary file /usr/hdp/4.1.9.7/hive/lib/hive-exec-3.1.0.4.1-SNAPSHOT.jar matches
+ ```
+1. From the above output, we can confirm that no JAR contains the class `org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe` and that the hive-exec JAR contains `org.apache.hadoop.hive.serde2.MultiDelimitSerDe`.
+1. Try to create the table with the row format SerDe `ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'`.
+1. This command will fix the issue. If you've already created the table, you can update it to use the new SerDe class with the following commands
+ ```
+ Hive => ALTER TABLE TABLE_NAME SET SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'
+ Backend DB => UPDATE SERDES SET SLIB='org.apache.hadoop.hive.serde2.MultiDelimitSerDe' where SLIB='org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe';
+ ```
+The UPDATE command updates the details manually in the backend DB, and the ALTER command alters the table with the new SerDe class from Beeline or Hive.
## Secure Hive across HDInsight versions HDInsight optionally integrates with Azure Active Directory using HDInsight Enterprise Security Package (ESP). ESP uses Kerberos and Apache Ranger to manage the permissions of specific resources within the cluster. Ranger policies deployed against Hive in HDInsight 3.6 can be migrated to HDInsight 4.0 with the following steps: 1. Navigate to the Ranger Service Manager panel in your HDInsight 3.6 cluster.
-2. Navigate to the policy named **HIVE** and export the policy to a json file.
-3. Make sure that all users referred to in the exported policy json exist in the new cluster. If a user is referred to in the policy json but doesn't exist in the new cluster, either add the user to the new cluster or remove the reference from the policy.
-4. Navigate to the **Ranger Service Manager** panel in your HDInsight 4.0 cluster.
-5. Navigate to the policy named **HIVE** and import the ranger policy json from step 2.
+1. Navigate to the policy named **HIVE** and export the policy to a json file.
+1. Make sure that all users referred to in the exported policy json exist in the new cluster. If a user is referred to in the policy json but doesn't exist in the new cluster, either add the user to the new cluster or remove the reference from the policy.
+1. Navigate to the **Ranger Service Manager** panel in your HDInsight 4.0 cluster.
+1. Navigate to the policy named **HIVE** and import the ranger policy json from step 2.
## Hive changes in HDInsight 4.0 that may require application changes
-* See [Additional configuration using Hive Warehouse Connector](./apache-hive-warehouse-connector.md) for sharing the metastore between Spark and Hive for ACID tables.
+* See [Extra configuration using Hive Warehouse Connector](./apache-hive-warehouse-connector.md) for sharing the metastore between Spark and Hive for ACID tables.
* HDInsight 4.0 uses [Storage Based Authorization](https://cwiki.apache.org/confluence/display/Hive/Storage+Based+Authorization+in+the+Metastore+Server). If you modify file permissions or create folders as a different user than Hive, you'll likely hit Hive errors based on storage permissions. To fix, grant `rw-` access to the user. See [HDFS Permissions Guide](https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html).
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Currently, the allowed actions for a given role are applied *globally* on the AP
## Service limits
-* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You'll need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000. In addition, we support [autoscaling of RUs](autoscale-azure-api-fhir.md).
+* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 100,000 RUs in the portal for Azure API for FHIR. You'll need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 100,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000. In addition, we support [autoscaling of RUs](autoscale-azure-api-fhir.md).
* **Bundle size** - Each bundle is limited to 500 items.
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
Title: Choosing a method of deployment for MedTech service in Azure - Azure Health Data Services
+ Title: Choosing a method of deployment for the MedTech service in Azure - Azure Health Data Services
description: In this article, you'll learn how to choose a method to deploy the MedTech service in Azure. Previously updated : 10/10/2022 Last updated : 10/20/2022
The different deployment methods are:
## Azure ARM Quickstart template with Deploy to Azure button
-Using a Quickstart template with Azure portal is the easiest and fastest deployment method because it automates most of your configuration with the touch of a **Deploy to Azure** button. This button automatically generates the following configurations and resources: managed identity RBAC roles, a provisioned workspace and namespace, an Event Hubs instance, a Fast Healthcare Interoperability Resources (FHIR&#174;) service instance, and a MedTech service instance. All you need to add are post-deployment device mapping, destination mapping, and a shared access policy key. This method simplifies your deployment, but does not allow for much customization.
+Using a Quickstart template with Azure portal is the easiest and fastest deployment method because it automates most of your configuration with the touch of a **Deploy to Azure** button. This button automatically generates the following configurations and resources: managed identity RBAC roles, a provisioned workspace and namespace, an Event Hubs instance, a Fast Healthcare Interoperability Resources (FHIR&#174;) service instance, and a MedTech service instance. All you need to add are post-deployment device mapping, destination mapping, and a shared access policy key. This method simplifies your deployment, but doesn't allow for much customization.
-For more information about the Quickstart template and the Deploy to Azure button, see [Deploy the MedTech service with a QuickStart template](deploy-02-new-button.md).
+For more information about the Quickstart template and the Deploy to Azure button, see [Deploy the MedTech service with a Quickstart template](deploy-02-new-button.md).
## Azure PowerShell and Azure CLI automation Azure provides Azure PowerShell and Azure CLI to speed up your configurations when used in enterprise environments. Deploying MedTech service with Azure PowerShell or Azure CLI can be useful for adding automation so that you can scale your deployment for a large number of developers. This method is more detailed but provides extra speed and efficiency because it allows you to automate your deployment.
-For more information about Using an ARM template with Azure PowerShell and Azure CLI, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](/azure/healthcare-apis/iot/deploy-08-new-ps-cli).
+For more information about Using an ARM template with Azure PowerShell and Azure CLI, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](deploy-08-new-ps-cli.md).
## Manual deployment
-The manual deployment method uses Azure portal to implement each deployment task individually. There are no shortcuts. Because you will be able to see all the details of how to complete the sequence of each task, this procedure can be beneficial if you need to customize or troubleshoot your deployment process. This is the most complex method, but it provides valuable technical information and developmental options that will enable you to fine-tune your deployment very precisely.
+The manual deployment method uses Azure portal to implement each deployment task individually. There are no shortcuts. Because you'll be able to see all the details of how to complete the sequence of each task, this procedure can be beneficial if you need to customize or troubleshoot your deployment process. This is the most complex method, but it provides valuable technical information and developmental options that will enable you to fine-tune your deployment precisely.
-For more information about manual deployment with portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](/azure/healthcare-apis/iot/deploy-03-new-manual).
+For more information about manual deployment with portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](deploy-03-new-manual.md).
## Deployment architecture overview
The following data-flow diagram outlines the basic steps of MedTech service depl
:::image type="content" source="media/iot-get-started/get-started-with-iot.png" alt-text="Diagram showing MedTech service architecture overview." lightbox="media/iot-get-started/get-started-with-iot.png":::
-There are six different steps of the MedTech service PaaS. Only the first four apply to deployment. All the methods of deployment will implement each of these four steps. However, the QuickStart template method will automatically implement part of step 1 and all of step 2. The other two methods will have to implement all of the steps individually. Here is a summary of each of the four deployment steps:
+There are six different steps of the MedTech service PaaS. Only the first four apply to deployment. All the methods of deployment will implement each of these four steps. However, the QuickStart template method will automatically implement part of step 1 and all of step 2. The other two methods will have to implement all of the steps individually. Here's a summary of each of the four deployment steps:
### Step 1: Prerequisites - Have an Azure subscription-- Create RBAC roles contributor and user access administrator or owner. This feature is automatically done in the QuickStart template method with the Deploy to Azure button, but it is not included in manual or PowerShell/CLI method and need to be implemented individually.
+- Create RBAC roles contributor and user access administrator or owner. This feature is automatically done in the QuickStart template method with the Deploy to Azure button, but it isn't included in the manual or PowerShell/CLI methods and needs to be implemented individually.
### Step 2: Provision
-The QuickStart template method with the Deploy to Azure button automatically provides all these steps, but they are not included in the manual or the PowerShell/CLI method and must be completed individually.
+The QuickStart template method with the Deploy to Azure button automatically provides all these steps, but they aren't included in the manual or the PowerShell/CLI method and must be completed individually.
- Create a resource group and workspace for Event Hubs, FHIR, and MedTech services. - Provision an Event Hubs instance to a namespace.
For information about granting access to the FHIR service, see [Granting access
In this article, you learned about the different types of deployment for MedTech service. To learn more about MedTech service, see
->[!div class="nextstepaction"]
->[What is MedTech service?](/rest/api/healthcareapis/iot-connectors).
+> [!div class="nextstepaction"]
+> [What is MedTech service?](iot-connector-overview.md).
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
Title: Get started with the MedTech service in Azure Health Data Services description: This document describes how to get you started with the MedTech service in Azure Health Data Services.-+ Previously updated : 08/30/2022- Last updated : 10/19/2022+
This article will show you how to get started with the Azure MedTech service in
The following diagram outlines the basic architectural path that enables the MedTech service to receive data from a medical device and send it to the FHIR service. This diagram shows how the six-step implementation process is divided into three key development stages: deployment, post-deployment, and data processing.
-[![Diagram showing MedTech service architectural overview.](media/iot-get-started/get-started-with-iot.png)](media/iot-get-started/get-started-with-iot.png#lightbox)
### Deployment
In order to begin deployment, you need to determine if you have: an Azure subscr
- If you don't have an Azure subscription, see [Subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/). -- You must have the appropriate RBAC roles for the subscription resources you want to use. The roles required for a user to complete the provisioning would be Contributor AND User Access Administrator OR Owner. The Contributor role allows the user to provision resources, and the User Access Administrator role allows the user to grant access so resources can send data between them. The Owner role can perform both. For more information, see [Azure role-based access control](/azure/cloud-adoption-framework/ready/considerations/roles).
+- You must have the appropriate RBAC roles for the subscription resources you want to use. The roles required for a user to complete the provisioning would be Contributor AND User Access Administrator OR Owner. The Contributor role allows the user to provision resources, and the User Access Administrator role allows the user to grant access so resources can send data between them. The Owner role can perform both. For more information, see [Azure role-based access control (RBAC)](/azure/cloud-adoption-framework/ready/considerations/roles).
## Step 2: Provision services for deployment
-After obtaining the required prerequisites, the next phase of deployment is to create a workspace and provision instances of the Event Hubs service, FHIR service, and MedTech service. You must also give the Event Hubs permission to read data from your device and give the MedTech service permission to read and write to the FHIR service. There are four parts of this provisioning process.
+After you obtain the required prerequisites, the next phase of deployment is to create a workspace and provision instances of the Event Hubs service, FHIR service, and MedTech service. You must also give the Event Hubs permission to read data from your device and give the MedTech service permission to read and write to the FHIR service. There are four parts of this provisioning process.
### Create a resource group and workspace
The MedTech service persists the data to the FHIR store using the system-managed
## Step 3: Configure MedTech for deployment
-After you have fulfilled the prerequisites and provisioned your services, the next phase of deployment is to configure MedTech services to ingest data, set up device mappings, and set up destination mappings. These configuration settings will ensure that the data can be translated from your device to Observations in the FHIR service. There are four parts in this configuration process.
+After you've fulfilled the prerequisites and provisioned your services, the next phase of deployment is to configure MedTech services to ingest data, set up device mappings, and set up destination mappings. These configuration settings will ensure that the data can be translated from your device to Observations in the FHIR service. There are four parts in this configuration process.
### Configuring MedTech service to ingest data
-MedTech service must be configured to ingest data it will receive from an event hub. First you must begin the official deployment process at the Azure portal. For more information about deploying MedTech service using the Azure portal, see [Overview of how to manually deploy the MedTech service using the Azure portal
-](deploy-03-new-manual.md) and [Prerequisites for manually deploying the MedTech service using the Azure portal](deploy-03-new-manual.md).
+MedTech service must be configured to ingest data it will receive from an event hub. First you must begin the official deployment process at the Azure portal. For more information about deploying MedTech service using the Azure portal, see [Overview of how to manually deploy the MedTech service using the Azure portal](deploy-03-new-manual.md) and [Prerequisites for manually deploying the MedTech service using the Azure portal](deploy-03-new-manual.md).
Once you've started using the portal and added MedTech service to your workspace, you must then configure MedTech service to ingest data from an event hub. For more information about configuring MedTech service to ingest data, see [Configure the MedTech service to ingest data](deploy-05-new-config.md). ### Configuring device mappings
-You must configure MedTech to map it to the device you want to receive data from. Each device has unique settings that MedTech service must use. For more information on how to use Device mappings, see [How to use Device mappings](./how-to-use-device-mappings.md).
+You must configure MedTech to map it to the device you want to receive data from. Each device has unique settings that MedTech service must use. For more information on how to use Device mappings, see [How to use Device mappings](how-to-use-device-mappings.md).
-- Azure Health Data Services provides an open source tool you can use called [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/main/tools/data-mapper) that will help you map your device's data structure to a form that MedTech can use. For more information on device content mapping, see [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/main/docs/Configuration.md#device-content-mapping).
+- Azure Health Data Services provides an open source tool you can use called [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/main/tools/data-mapper). The IoMT Connector Data Mapper will help you map your device's data structure to a form that MedTech can use. For more information on device content mapping, see [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/main/docs/Configuration.md#device-content-mapping).
-- When you are deploying MedTech service, you must set specific device mapping properties. For more information on device mapping properties, see [Configure the Device mapping properties](deploy-05-new-config.md).
+- When you're deploying MedTech service, you must set specific device mapping properties. For more information on device mapping properties, see [Configure the Device mapping properties](deploy-05-new-config.md).
### Configuring destination mappings Once your device's data is properly mapped to your device's data format, you must then map it to an Observation in the FHIR service. For an overview of FHIR destination mappings, see [How to use the FHIR destination mappings](how-to-use-fhir-mappings.md). For step-by-step destination property mapping, see [Configure destination properties](deploy-05-new-config.md).
-).
### Create and deploy the MedTech service
-If you have completed the prerequisites, provisioning, and configuration, you are now ready to deploy the MedTech service. Create and deploy your MedTech service by following the procedures at [Create your MedTech service](deploy-06-new-deploy.md).
+If you've completed the prerequisites, provisioning, and configuration, you're now ready to deploy the MedTech service. Create and deploy your MedTech service by following the procedures at [Create your MedTech service](deploy-06-new-deploy.md).
## Step 4: Connect to required services (post deployment)
For more information about application roles, see [Authentication and Authorizat
## Step 5: Send the data for processing
-When MedTech service is deployed and connected to the Event Hubs and FHIR services, it is ready to process data from a device and translate it into a FHIR service Observation. There are three parts of the sending process.
+When MedTech service is deployed and connected to the Event Hubs and FHIR services, it's ready to process data from a device and translate it into a FHIR service Observation. There are three parts of the sending process.
### Data sent from Device to Event Hubs
-The data is sent to an Event Hub instance so that it can wait until MedTech service is ready to receive it. The data transfer needs to be asynchronous because it is sent over the Internet and delivery times cannot be precisely measured. Normally the data won't sit on an event hub longer than 24 hours.
+The data is sent to an Event Hubs instance so that it can wait until MedTech service is ready to receive it. The data transfer needs to be asynchronous because it's sent over the Internet and delivery times can't be precisely measured. Normally the data won't sit on an event hub longer than 24 hours.
For more information about Event Hubs, see [Event Hubs](../../event-hubs/event-hubs-about.md).
MedTech processes the data in five steps:
- Transform - Persist
-If the processing was successful and you did not get any error messages, your device data is now a FHIR service [Observation](http://hl7.org/fhir/observation.html) resource.
+If the processing was successful and you didn't get any error messages, your device data is now a FHIR service [Observation](http://hl7.org/fhir/observation.html) resource.
-For more details on the data flow through MedTech, see [MedTech service data flow](iot-data-flow.md).
+For more information on the data flow through MedTech, see [MedTech service data flow](iot-data-flow.md).
## Step 6: Verify the processed data
-You can verify that the data was processed correctly by checking to see if there is now a new Observation resource in the FHIR service. If the data isn't mapped or if the mapping isn't authored properly, the data will be skipped. If there are any problems, check the [device mapping](how-to-use-device-mappings.md) or the [FHIR destination mapping](how-to-use-fhir-mappings.md).
+You can verify that the data was processed correctly by checking to see if there's now a new Observation resource in the FHIR service. If the data isn't mapped or if the mapping isn't authored properly, the data will be skipped. If there are any problems, check the [device mapping](how-to-use-device-mappings.md) or the [FHIR destination mapping](how-to-use-fhir-mappings.md).
### Metrics
-You can verify that the data is correctly persisted into the FHIR service by using the [MedTech service metrics](how-to-display-metrics.md) in the Azure portal.
+You can verify that the data is correctly persisted into the FHIR service by using the [MedTech service metrics](how-to-configure-metrics.md) in the Azure portal.
## Next steps
This article only described the basic steps needed to get started using MedTech service. For information about deploying MedTech service in the workspace, see
->[!div class="nextstepaction"]
->[Deploy the MedTech service in the Azure portal](deploy-iot-connector-in-azure.md)
+> [!div class="nextstepaction"]
+> [Deploy the MedTech service in the Azure portal](deploy-iot-connector-in-azure.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
+
+ Title: Configure the MedTech service metrics - Azure Health Data Services
+description: This article explains how to configure MedTech service metrics.
+ Last updated : 10/12/2022
+# How to configure the MedTech service metrics
+
+In this article, you'll learn how to configure the [MedTech service](iot-connector-overview.md) metrics in the Azure portal. You'll also learn how to pin the MedTech service metrics tile to an Azure portal dashboard for later viewing.
+
+The MedTech service metrics can be used to help determine the health and performance of your MedTech service, and they can be useful for troubleshooting and for spotting patterns and trends in your MedTech service.
+
+## Metric types for the MedTech service
+
+This table shows the available MedTech service metrics and the information that each metric captures and displays within the Azure portal (a programmatic way to list these metric definitions is sketched after the table):
+
+|Metric category|Metric name|Metric description|
+|--|--|--|
+|Availability|IotConnector Health Status|The overall health of the MedTech service.|
+|Errors|Total Error Count|The total number of errors.|
+|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](iot-data-flow.md#group) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|Average Normalize Stage Latency|The average latency of the normalize stage. The [normalize stage](iot-data-flow.md#normalize) performs normalization on raw incoming messages.|
+|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](iot-data-flow.md#persist) by the MedTech service.|
+|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](iot-data-flow.md#ingest) (for example, the device events) from the configured source event hub.|
+|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](iot-data-flow.md#transform) of the MedTech service.|
+|Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.|
+|Traffic|Number of Normalized Messages|The number of normalized messages.|
+
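As a minimal, hedged sketch (the resource ID below is a placeholder, and the azure-monitor-query package is assumed), the same metric definitions can be listed programmatically rather than read from the portal:

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder: the full Azure resource ID of your MedTech service.
MEDTECH_RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.HealthcareApis/workspaces/<workspace>/iotconnectors/<medtech-service>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# List every metric definition exposed by the MedTech service resource,
# including the metric name accepted by the metrics query API.
for definition in client.list_metric_definitions(MEDTECH_RESOURCE_ID):
    print(definition.name, "-", definition.unit)
```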
+## Configure the MedTech service metrics
+
+1. Within your Azure Health Data Services workspace, select **MedTech service** under **Services**.
+
+ :::image type="content" source="media\iot-metrics-display\workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service within the workspace." lightbox="media\iot-metrics-display\workspace-displayed-with-connectors-button.png":::
+
+2. Select the MedTech service that you would like to display metrics for. For this example, we'll select a MedTech service named **mt-azuredocsdemo**. You'll be selecting a MedTech service within your own Azure Health Data Services workspace.
+
+ :::image type="content" source="media\iot-metrics-display\select-medtech-service.png" alt-text="Screenshot of select the MedTech service you would like to display metrics for." lightbox="media\iot-metrics-display\select-medtech-service.png":::
+
+3. Select **Metrics** within the MedTech service page.
+
+ :::image type="content" source="media\iot-metrics-display\select-metrics-under-monitoring.png" alt-text="Screenshot of select the Metrics option within your MedTech service." lightbox="media\iot-metrics-display\select-metrics-under-monitoring.png":::
+
+4. The MedTech service metrics page will open, allowing you to use the drop-down menus to view and select the metrics that are available for the MedTech service.
+
+ :::image type="content" source="media\iot-metrics-display\select-metrics-to-display.png" alt-text="Screenshot the MedTech service metrics page with drop-down menus." lightbox="media\iot-metrics-display\select-metrics-to-display.png":::
+
+5. Select the metrics combinations that you want to display for your MedTech service. For this example, we'll be choosing the following selections (a programmatic sketch of the same query appears after this list):
+
+ * **Scope** = Your MedTech service name (**Default**)
+ * **Metric Namespace** = Standard metrics (**Default**)
+ * **Metric** = The MedTech service metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
+ * **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**.
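The selections above can also be expressed programmatically. This is a hedged sketch only, using the azure-monitor-query package with a placeholder resource ID; note that the metric name passed to the API may differ from the portal display name, so confirm it against the metric definitions for your resource:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Placeholder: the full Azure resource ID of your MedTech service.
MEDTECH_RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.HealthcareApis/workspaces/<workspace>/iotconnectors/<medtech-service>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Query the last 24 hours of the incoming-messages metric with a Count aggregation.
# "Number of Incoming Messages" is the portal display name; if the query rejects it,
# use the name returned by client.list_metric_definitions() for this resource.
result = client.query_resource(
    MEDTECH_RESOURCE_ID,
    metric_names=["Number of Incoming Messages"],
    timespan=timedelta(hours=24),
    aggregations=[MetricAggregationType.COUNT],
)

for metric in result.metrics:
    for time_series in metric.timeseries:
        for point in time_series.data:
            print(point.timestamp, point.count)
```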
+
+6. You can now see your MedTech service metrics for **Number of Incoming Messages** displayed on the MedTech service metrics page.
+
+ :::image type="content" source="media\iot-metrics-display\select-metrics-being-displayed.png" alt-text="Screenshot of select metrics to display." lightbox="media\iot-metrics-display\select-metrics-being-displayed.png":::
+
+7. You can add more metrics for your MedTech service by selecting **Add metric**.
+
+ :::image type="content" source="media\iot-metrics-display\select-add-metric.png" alt-text="Screenshot of select Add metric to add more MedTech service metrics." lightbox="media\iot-metrics-display\select-add-metric.png":::
+
+8. Then select the metrics that you would like to add to your MedTech service.
+
+ :::image type="content" source="media\iot-metrics-display\select-more-metrics.png" alt-text="Screenshot of select more metrics to add to your MedTech service." lightbox="media\iot-metrics-display\select-more-metrics.png":::
+
+ > [!IMPORTANT]
+ > If you leave the MedTech service metrics page, the metrics settings for your MedTech service are lost and will have to be recreated. If you would like to save your MedTech service metrics for future viewing, you can pin them to an Azure portal dashboard as a tile.
+ >
+ > To learn how to create an Azure portal dashboard and pin tiles, see [How to create an Azure portal dashboard and pin tiles](how-to-configure-metrics.md#how-to-create-an-azure-portal-dashboard-and-pin-tiles).
+
+ > [!TIP]
+ > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started)
+
+## How to create an Azure portal dashboard and pin tiles
+
+To learn how to create an Azure portal dashboard and pin tiles, see [Create a dashboard in the Azure portal](/azure/azure-portal/azure-portal-dashboards)
+
+## Next steps
+
+To learn how to enable the MedTech service diagnostic settings to export logs and metrics to another location (for example: an Azure storage account) for audit, backup, or troubleshooting, see
+
+> [!div class="nextstepaction"]
+> [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
+
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-display-metrics.md
- Title: Display the MedTech service metrics - Azure Health Data Services
-description: This article explains how to display MedTech service metrics.
- Previously updated : 08/09/2022
-# How to display and configure the MedTech service metrics
-
-In this article, you'll learn how to display and configure the [MedTech service](iot-connector-overview.md) metrics in the Azure portal. You'll also learn how to pin the MedTech service metrics tile to an Azure portal dashboard for later viewing.
-
-The MedTech service metrics can be used to help determine the health and performance of your MedTech service and can be useful with troubleshooting and seeing patterns and/or trends with your MedTech service.
-
-## Metric types for the MedTech service
-
-This table shows the available MedTech service metrics and the information that the metrics are capturing and displaying within the Azure portal:
-
-Metric category|Metric name|Metric description|
-|--|--|--|
-|Availability|IotConnector Health Status|The overall health of the MedTech service.|
-|Errors|Total Error Count|The total number of errors.|
-|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](iot-data-flow.md#group) performs buffering, aggregating, and grouping on normalized messages.|
-|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](iot-data-flow.md#normalize) performs normalization on raw incoming messages.|
-|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](iot-data-flow.md#persist) by the MedTech service.|
-|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](iot-data-flow.md#ingest) (for example, the device events) from the configured source event hub.|
-|Traffic|Number of Measurements|The number of normalized value readings received by the FHIR [transformation stage](iot-data-flow.md#transform) of the MedTech service.|
-|Traffic|Number of Message Groups|The number of groups that have messages aggregated in the designated time window.|
-|Traffic|Number of Normalized Messages|The number of normalized messages.|
-
-## Display and configure the MedTech service metrics
-
-1. Within your Azure Health Data Services workspace, select **MedTech service** under **Services**.
-
- :::image type="content" source="media\iot-metrics-display\iot-workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service within the workspace." lightbox="media\iot-metrics-display\iot-workspace-displayed-with-connectors-button.png":::
-
-2. Select the MedTech service that you would like to display metrics for. For this example, we'll select a MedTech service named **mt-azuredocsdemo**. You'll select your own MedTech service.
-
- :::image type="content" source="media\iot-metrics-display\iot-connector-select.png" alt-text="Screenshot of select the MedTech service you would like to display metrics for." lightbox="media\iot-metrics-display\iot-connector-select.png":::
-
-3. Select **Metrics** within the MedTech service page.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-metrics.png" alt-text="Screenshot of select the Metrics option within your MedTech service." lightbox="media\iot-metrics-display\iot-select-metrics.png":::
-
-4. The MedTech service metrics page will open allowing you to use the drop-down menus to view and select the metrics that are available for the MedTech service.
-
- :::image type="content" source="media\iot-metrics-display\iot-metrics-opening-page.png" alt-text="Screenshot the MedTech service metrics page with drop-down menus." lightbox="media\iot-metrics-display\iot-metrics-opening-page.png":::
-
-5. Select the metrics combinations that you want to display for your MedTech service. For this example, we'll be choosing the following selections:
-
- * **Scope** = Your MedTech service name (**Default**)
- * **Metric Namespace** = Standard metrics (**Default**)
- * **Metric** = The MedTech service metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
- * **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**.
-
-6. You can now see your MedTech service metrics for **Number of Incoming Messages** displayed on the MedTech service metrics page.
-
- :::image type="content" source="media\iot-metrics-display\iot-metrics-select-options.png" alt-text="Screenshot of select metrics to display." lightbox="media\iot-metrics-display\iot-metrics-select-options.png":::
-
-7. You can add more metrics by selecting **Add metric**.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-add-metric.png" alt-text="Screenshot of select Add metric to add more MedTech service metrics." lightbox="media\iot-metrics-display\iot-select-add-metric.png":::
-
-8. Then select the metrics that you would like to add to your MedTech service.
-
- :::image type="content" source="media\iot-metrics-display\iot-metrics-select-more-metrics.png" alt-text="Screenshot of select more metrics to add to your MedTech service." lightbox="media\iot-metrics-display\iot-metrics-select-more-metrics.png":::
-
- > [!TIP]
- >
- > To learn more about advanced metrics display and sharing options, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md)
-
- > [!IMPORTANT]
- >
- > If you leave the MedTech service metrics page, the metrics settings for your MedTech service are lost and will have to be recreated. If you would like to save your MedTech service metrics for future viewing, you can pin them to an Azure dashboard as a tile.
-
-## How to pin the MedTech service metrics tile to an Azure portal dashboard
-
-1. To pin the MedTech service metrics tile to an Azure portal dashboard, select the **Pin to dashboard** option.
-
- :::image type="content" source="media\iot-metrics-display\iot-metrics-select-add-pin-to-dashboard.png" alt-text="Screenshot of select the Pin to dashboard option." lightbox="media\iot-metrics-display\iot-metrics-select-add-pin-to-dashboard.png":::
-
-2. Select the dashboard you would like to display your MedTech service metrics to by using the drop-down menu. For this example, we'll use a private dashboard named **Azuredocsdemo_Dashboard**. Select **Pin** to add your MedTech service metrics tile to the dashboard.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-pin-to-dashboard.png" alt-text="Screenshot of select dashboard and Pin options to complete the dashboard pinning process." lightbox="media\iot-metrics-display\iot-select-pin-to-dashboard.png":::
-
-3. You'll receive a confirmation that your MedTech service metrics tile was successfully added to your selected Azure portal dashboard.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-pinned-successful.png" alt-text="Screenshot of metrics tile successfully pinned to dashboard." lightbox="media\iot-metrics-display\iot-select-dashboard-pinned-successful.png":::
-
-4. Once you've received a successful confirmation, select the **Dashboard** option.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-with-metrics-tile.png" alt-text="Screenshot of select the Dashboard option." lightbox="media\iot-metrics-display\iot-select-dashboard-with-metrics-tile.png":::
-
-5. Use the drop-down menu to select the dashboard that you pinned your MedTech service metrics tile. For this example, the dashboard is named **Azuredocsdemo_Dashboard**.
-
- :::image type="content" source="media\iot-metrics-display\iot-select-dashboard-with-metrics-pin.png" alt-text="Screenshot of selecting dashboard with pinned MedTech service metrics tile." lightbox="media\iot-metrics-display\iot-select-dashboard-with-metrics-pin.png":::
-
-6. The dashboard will display the MedTech service metrics tile that you created in the previous steps.
-
- :::image type="content" source="media\iot-metrics-display\iot-metrics-display-dashboard-with-metrics-pin.png" alt-text="Screenshot of dashboard with pinned MedTech service metrics tile." lightbox="media\iot-metrics-display\iot-metrics-display-dashboard-with-metrics-pin.png":::
-
-## Next steps
-
-To learn how to configure the diagnostic settings and export the MedTech service metrics to another location (for example: an Azure storage account), see
-
-> [!div class="nextstepaction"]
-> [How to configure diagnostic settings for exporting the MedTech service metrics](iot-metrics-diagnostics-export.md)
-
-To learn about the MedTech service frequently asked questions (FAQs), see
-
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](iot-connector-faqs.md)
-
-(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
+
+ Title: How to enable the MedTech service diagnostic settings - Azure Health Data Services
+description: This article explains how to configure the MedTech service diagnostic settings.
+ Last updated : 10/13/2022
+# How to enable diagnostic settings for the MedTech service
+
+In this article, you'll learn how to enable the diagnostic settings for the MedTech service to export logs to different destinations (for example: to [Azure storage](/azure/storage/) or an [Azure event hub](/azure/event-hubs/)) for audit, analysis, or backup.
+
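The steps in this article use the Azure portal. As a rough, hedged sketch only (the resource IDs, setting name, and category names below are placeholders and assumptions), a comparable diagnostic setting could be created with the azure-mgmt-monitor Python package:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

# Placeholders: replace with your own subscription, MedTech service, and storage account IDs.
SUBSCRIPTION_ID = "<subscription-id>"
MEDTECH_RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.HealthcareApis/workspaces/<workspace>/iotconnectors/<medtech-service>"
)
STORAGE_ACCOUNT_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create (or update) a diagnostic setting that sends all logs and metrics to a storage account.
# The log and metric category names are assumptions; confirm them against the options
# shown in the Azure portal for your MedTech service.
client.diagnostic_settings.create_or_update(
    resource_uri=MEDTECH_RESOURCE_ID,
    name="MedTech_service_All_Logs_and_Metrics",
    parameters={
        "storage_account_id": STORAGE_ACCOUNT_ID,
        "logs": [{"category_group": "allLogs", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    },
)
```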
+## Create a diagnostic setting for the MedTech service
+
+1. To enable diagnostic settings for your MedTech service, select **MedTech service** in your workspace under **Services**.
+
+ :::image type="content" source="media/iot-diagnostic-settings/select-medtech-service-in-workspace.png" alt-text="Screenshot of select the MedTech service within workspace." lightbox="media/iot-diagnostic-settings/select-medtech-service-in-workspace.png":::
+
+2. Select the MedTech service that you want to enable a diagnostic setting for. For this example, we'll be using a MedTech service named **mt-azuredocsdemo**. You'll be selecting a MedTech service within your own Azure Health Data Services workspace.
+
+ :::image type="content" source="media/iot-diagnostic-settings/select-medtech-service.png" alt-text="Screenshot of select the MedTech service for exporting metrics." lightbox="media/iot-diagnostic-settings/select-medtech-service.png":::
+
+3. Select the **Diagnostic settings** option under **Monitoring**.
+
+ :::image type="content" source="media/iot-diagnostic-settings/select-diagnostic-settings.png" alt-text="Screenshot of select the Diagnostic settings." lightbox="media/iot-diagnostic-settings/select-diagnostic-settings.png":::
+
+4. Select the **+ Add diagnostic setting** option.
+
+ :::image type="content" source="media/iot-diagnostic-settings/add-diagnostic-settings.png" alt-text="Screenshot of select the + Add diagnostic setting." lightbox="media/iot-diagnostic-settings/add-diagnostic-settings.png":::
+
+5. The **+ Add diagnostic setting** page will open, requiring configuration inputs from you.
+
+ 1. Enter a display name in the **Diagnostic setting name** box. For this example, we'll name it **MedTech_service_All_Logs_and_Metrics**. You'll enter a display name of your own choosing.
+
+ 2. Under **Logs**, select the **AllLogs** option.
+
+ 3. Under **Metrics**, select the **AllMetrics** option.
+
+ > [!Note]
+ > To view a complete list of MedTech service metrics associated with **AllMetrics**, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors).
+
+ 4. Under **Destination details**, select the destination you want to use for your exported MedTech service metrics. In this example, we've selected an Azure storage account named **azuredocsdemostorage**. You'll select a destination of your own choosing.
+
+ > [!Important]
+ > Each **Destination details** selection requires that certain resources (for example, an existing Azure storage account) be created and available before the selection ca